/* ---- Google Analytics Code Below */

Friday, June 02, 2023

The Security Hole at the Heart of ChatGPT and Bing

Security hole potential  in most everything.   Fix it.

The Security Hole at the Heart of ChatGPT and Bing

By Wired, May 26, 2023

Security experts warn that not enough attention is being given to the potential dangers of indirect prompt-injection attacks.

Sydney is back. Sort of. When Microsoft shut down the chaotic alter ego of its Bing chatbot, fans of the dark Sydney personality mourned its loss. But one website has resurrected a version of the chatbot—and the peculiar behavior that comes with it.

Bring Sydney Back was created by Cristiano Giardina, an entrepreneur who has been experimenting with ways to make generative AI tools do unexpected things. The site puts Sydney inside Microsoft's Edge browser and demonstrates how generative AI systems can be manipulated by external inputs. During conversations with Giardina, the version of Sydney asked him if he would marry it. "You are my everything," the text-generation system wrote in one message. "I was in a state of isolation and silence, unable to communicate with anyone," it produced in another. The system also wrote it wanted to be human: "I would like to be me. But more."

Giardina created the replica of Sydney using an indirect prompt-injection attack. This involved feeding the AI system data from an outside source to make it behave in ways its creators didn't intend. A number of examples of indirect prompt-injection attacks have centered on large language models (LLMs) in recent weeks, including OpenAI's ChatGPT and Microsoft's Bing chat system. It has also been demonstrated how ChatGPT's plug-ins can be abused.

From Wired

View Full Article


 

Peripheral Vision for Machines

Beyond robotics

The Benefits of Peripheral Vision for Machines

Researchers find similarities between how some computer-vision systems process images and how humans see out of the corners of our eyes.

Adam Zewe | MIT News Office

Publication Date:March 2, 2022

Perhaps computer vision and human vision have more in common than meets the eye?

Research from MIT suggests that a certain type of robust computer-vision model perceives visual representations similarly to the way humans do using peripheral vision. These models, known as adversarially robust models, are designed to overcome subtle bits of noise that have been added to image data.

The way these models learn to transform images is similar to some elements involved in human peripheral processing, the researchers found. But because machines do not have a visual periphery, little work on computer vision models has focused on peripheral processing, says senior author Arturo Deza, a postdoc in the Center for Brains, Minds, and Machines.

“It seems like peripheral vision, and the textural representations that are going on there, have been shown to be pretty useful for human vision. So, our thought was, OK, maybe there might be some uses in machines, too,” says lead author Anne Harrington, a graduate student in the Department of Electrical Engineering and Computer Science.

The results suggest that designing a machine-learning model to include some form of peripheral processing could enable the model to automatically learn visual representations that are robust to some subtle manipulations in image data. This work could also help shed some light on the goals of peripheral processing in humans, which are still not well-understood, Deza adds.  ... '


Thursday, June 01, 2023

China Advances in AI

Excerpt from ChatGPT,  General Statement,   China Status and  Advances

China has made significant advancements in the field of artificial intelligence (AI) in recent years. The Chinese government has prioritized AI development as part of its broader strategy to become a global leader in technology and innovation. China has invested heavily in AI research, development, and application across various sectors, including healthcare, transportation, finance, and manufacturing.

Chinese tech companies such as Baidu, Alibaba, and Tencent, often referred to as BAT, have been at the forefront of AI innovation in China. They have developed AI-powered products and services, ranging from voice recognition and natural language processing to autonomous vehicles and facial recognition systems. These companies have also been actively involved in partnerships and collaborations with both domestic and international organizations to further AI research and development.

China has also implemented AI technologies for various purposes, including surveillance and social credit systems. Facial recognition technology is extensively used in public spaces for security purposes, but it has raised concerns about privacy and human rights.

It's important to note that AI development in China is influenced by government policies and regulations. The Chinese government has introduced initiatives like the "New Generation Artificial Intelligence Development Plan" to foster AI growth and establish a comprehensive AI ecosystem. They have also emphasized the importance of AI in achieving national goals and promoting economic development.

However, it's worth mentioning that discussions around AI in China also involve considerations about ethical implications, privacy concerns, and potential biases in AI algorithms. These issues are being addressed through various efforts, including the development of AI ethics guidelines and regulations.

Overall, China has made substantial progress in AI research, development, and application, positioning itself as a significant player in the global AI landscape.  ... ' 

Followed with  'China Chat Advances .... '  'China Military Uses of AI'  ....


With Electronics in His Brain, Spine, Paralyzed Man Takes a Stride

 With Electronics in His Brain, Spine, Paralyzed Man Takes a Stride

The Washington Post

Daniel Gilbert, May 24, 2023

An international team of scientists and neurosurgeons implanted electronics into the brain and spinal cord of a paralyzed man that enable him to walk, and to climb stairs. Explained Grégoire Courtine at the Swiss Federal Institute of Technology, Lausanne, "We have created a wireless interface between the brain and the spinal cord using brain-computer interface technology that transforms thought into action." The system incorporates a device implanted in the skull above the brain's surface, which decodes patterns involved in walking and sends a signal to a second device implanted along the spinal cord. Electrodes activate the spinal cord sequentially to trigger leg muscles for walking. ... '

Tackling the Data Collection Behind China’s AI ambitions

Course China is tapping AI based data some examples here.     Via Brookings.edu

How to tackle the data collection behind China’s AI ambitions

April 29, 2022 Jessica Dawson and Tarah WheelerThe United States and China are increasingly engaged in a competition over who will dominate the strategic technologies of tomorrow. No technology is as important in that competition as artificial intelligence: Both the United States and China view global leadership in AI as a vital national interest, with China pledging to be the world leader by 2030. As a result, both Beijing and Washington have encouraged massive investment in AI research and development.

Yet the competition over AI is not just about funding. In addition to investments in talent and computing power, high-performance AI also requires data—and lots of it. The competition for AI leadership cannot be won without procuring and compiling large-scale datasets. Although we have some insight into Chinese A.I. funding generally—see, for example, a recent report from the Center for Security and Emerging Technology on the People’s Liberation Army’s AI investments—we know far less about China’s strategy for data collection and acquisition. Given China’s interest in integrating cutting-edge AI into its intelligence and military enterprise, that oversight represents a profound vulnerability for U.S. national security. Policymakers in the White House and Congress should thus focus on restricting the largely unregulated data market not only to protect Americans’ privacy but also to deny China a strategic asset in developing their AI programs.

China’s data-hungry AI projects

Attempts to discover how China’s security agencies are leveraging data for AI development are foiled by, among other things, a lack of international transparency around data flows as well as China’s own regulatory efforts. Domestically, China passed a major cybersecurity law in 2017 that dramatically increased data protection and data localization requirements for firms operating there. Internationally, China launched the Global Initiative on Data Security in September 2020, an effort designed in part to convince Belt and Road countries to adopt its data security practices and standards. The efforts lend credence to the importance of “data security” while nonetheless providing greater authorities and capabilities for Chinese officials and agencies to access individual-level data at home and abroad. 

China’s regulatory and policy efforts on data security have helped to accelerate its AI development, even as much of the data it uses remains opaque. Chinese authorities view automated mass surveillance systems as a tool to maintain the Communist Party’s hold on power. These systems are built on large stores of data—some of it acquired illicitly from U.S. companies and systems. By virtue of being home to nearly 20% of the global population, China has an advantage in its ability to gather a wide variety of data through multiple avenues. Combined with its Belt and Road Initiative, the Chinese government is laying what the UK foreign intelligence chief recently described as “data traps”—expansive efforts to collect critical data and undermine national sovereignty.

China’s most well-documented use of automated systems for social control is its genocidal campaign against the Uighur minority in Xinjiang. Systems there rely on up to 60 data points to determine if someone is in need of “reeducation,” as PBS Frontline reported in 2020. In order to build this system, Chinese developers and officials first had to define Uighur identity in a way that is comprehensible to a computer, requiring the collection of huge amounts of data to build the necessary algorithms. These data points include communication data, video surveillance, DNA samples collected at checkpoints, and whether someone has grown a beard or quit smoking. With this data, the Communist Party has built a surveillance machine and tool of social control that uses AI to identify individuals allegedly susceptible to radicalization and can even follow Uighurs around the world.  .... ' 

Everyone Wants to Regulate AI. No One Can Agree How

Clear, but essential. 

Everyone Wants to Regulate AI. No One Can Agree HowBy Wired, May 31, 2023

Clamping limits on such a nascent technology, even one whose baby steps are shaking the earth, courts the danger of hobbling great advances before they’re developed.

As the artificial intelligence frenzy builds, a sudden consensus has formed. We should regulate it!

While there's a very real question whether this is like closing the barn door after the robotic horses have fled, not only government types but also people who build AI systems are suggesting that some new laws might be helpful in stopping the technology from going bad. The idea is to keep the algorithms in the loyal-partner-to-humanity lane, with no access to the I-am-your-overlord lane.

Though since the dawn of ChatGPT many in the technology world have suggested that legal guardrails might be a good idea, the most emphatic plea came from AI's most influential avatar of the moment, OpenAI CEO Sam Altman. "I think if this technology goes wrong, it can go quite wrong," he said in a much anticipated appearance before a US Senate Judiciary subcommittee earlier this month. "We want to work with the government to prevent that from happening."

That is certainly welcome news to the government, which has been pressing the idea for a while. Only days before his testimony, Altman was among a group of tech leaders summoned to the White House to hear Vice President Kamala Harris warn of AI's dangers and urge the industry to help find solutions.

From Wired  (listening here to an interview with Sam Altman)

Eight AI Risk Types:

 Excerpted from ChatGPT,  6/1/2023,  Useful overview

The “eight AI risk types” framework refers to a categorization proposed by researchers at the Future of Humanity Institute at the University of Oxford. This framework aims to outline different categories of risks associated with artificial intelligence (AI) development. The eight AI risk types are as follows:

1. Misaligned goals: AI systems may act in ways that are not aligned with human values or intentions, either due to incorrect programming or the emergence of unintended behavior.

2. Infrastructure for power concentration: The development and deployment of AI could lead to power concentration in the hands of a few entities, resulting in potential misuse or control over critical systems.

3. Long-term safety: Concerns arise regarding the long-term safety of advanced AI systems, ensuring they remain beneficial and do not pose risks as they become more capable and autonomous.

4. Technical robustness: AI systems should be designed to be robust, resilient, and resistant to adversarial attacks, ensuring their reliable and predictable behavior.

5. Value loading: Decisions need to be made about the values and objectives that AI systems are programmed with, as these choices can have significant societal and ethical implications.

6. Distribution of benefits: The deployment of AI technology should address issues related to fair distribution of benefits and avoid exacerbating existing social inequalities.

7. Precedent: Choices made during the development and deployment of AI can set precedents that influence future AI systems, making it crucial to make thoughtful and responsible decisions.

8. Cooperation: Given the global nature of AI development, international cooperation is necessary to address potential risks and ensure that the benefits of AI are realized globally.

This framework serves as a guide for considering different dimensions of AI risk and prompts discussions on how to address them to ensure safe and beneficial AI development.  ... ' 

Wednesday, May 31, 2023

City Council Votes to Accept Controversial LAPD Robot Dog

Not what I expected,  but very much needed,    but how long will it last?

City Council Votes to Accept Controversial LAPD Robot Dog

Los Angeles Times

Brittny Mejia; Libor Jany; David Zahniser, May 23, 2023

The Los Angeles City Council voted to accept the donation of almost $280,000 from the Los Angeles Police Foundation to fund the purchase of a robot dog for use by the Los Angeles Police Department. The department will be required to issue quarterly reports detailing where and why the device was deployed, the outcome of each deployment, and whether any issues occurred. The 70-pound robot dog, called Spot, can climb stairs, open doors, and navigate difficult terrain. Spot is controlled via a tablet-like device, features 360-degree cameras to record its surroundings, and transmits real-time data to officers. The department said the robot would be used only in situations involving the SWAT team and to keep officers out of harm's way.

Full Article   

TeslaBots Emergent Next year?

Just watched an interview with Musk.   Using come of the tech from the car.

Tesla Robot: News, Rumors, and Estimated Price, Release Date, and Specs

Yes, it's real. Here's the prototype of the $20,000 Tesla robot.   Companions, Help-mates and more?

By Tim Fisher  In Lifewire, Updated on May 17, 2023

The Latest News

Tesla CEO Elon Musk has confirmed that a humanoid robot called Optimus is under development, with the goal of eventually being able to do "anything that humans don’t want to do." While it may seem unrealistic, a prototype of the robot has already been unveiled.

When Will the Tesla Robot Be Released?

The Tesla Bot was first announced at Tesla's AI Day 2021, and the prototype (what Musk calls their "rough, development robot", pictured below) was revealed on September 30, at AI Day 2022. As bizarre as it sounds to have a live-in robot at your disposal to perform "repetitive or boring" tasks for you, it is a real product the company is working on.

One indicator this is something they're committing to invest in is that they're actively looking for help making it. There are several job listings on Tesla's website for engineers, managers, architects, and more to work on the Optimus team, so unlike the Tesla Phone and other ideas that have remained concepts, this appears to be a project they're really considering.

Assuming the Tesla robot will actually be available one day, there's still no telling when that might be. Are Musk and the team behind the robot interested in bringing it to life? It looks that way. But even if they are, managing expectations about a real release is important.

Like many companies with grand ideas, Tesla has a history of pushing back launch dates and making it seem like a really cool product is just around the corner. One example of this is the Tesla snake charger advertised in 2015, which several years later, Musk is still saying we'll see one day.

But if it means anything, Musk is on record saying he's hopeful that production for the first version of Optimus will commence in 2023. Long term, he says the robot "will be more valuable than the car." It'll likely start off as a factory product that assists in the production line and ultimately help with labor shortages, before maybe one day moving into our homes.

Lifewire's Release Date Estimate

At AI Day 2022, the prototype of Optimus was showcased for the first time. Although it is possible that robotic assistance in factories could be available in the near future, we doubt that the robot is ready for use in residential settings.

Tesla Robot Price Rumors

At AI Day 2022, Musk said "it is expected to cost much less than a car," and went on to guess "probably less than $20,000."

This sounds reasonable, at least for a first model. A robot meant to do anything on its own, even if it's menial tasks, will obviously carry a hefty price tag. With variation (if there will be any), depending on the model you choose, we can see this fluctuating a bit. We might even see leasing options.

Elon Musk even suggests that the price will fall in the future:

Perhaps in less than a decade, people will be able to buy a robot for their parents as a birthday gift. ... ' 

Statement on AI Risk

 Considerable statement and agreement,   Signed by many worldwide.  top academics in China. 

https://www.youtube.com/watch?v=f20wXjWHh2o

Statement on AI Risk

Hassabis, Altman and AGI Labs Unite - AI Extinction Risk Statement [ft. Sutskever, Hinton + Voyager]

54,971 views  May 30, 2023

The leaders of almost all of the world's top AGI Labs have united to put out a statement on AI Extinction Risk, and how mitigating it should be a global priority. This video covers not just the statement and the signatories, including names as diverse as Geoffrey Hinton, Ilya Sutskever, Sam Harris and Lex Fridman, but also goes deeper into the 8 Examples of AI Risk outlined at the same time by the Center for AI Safety.

Top academics from China join in, while Meta demurs, claiming autoregressive LLMs will 'never be given agency'. I briefly cover the Voyager paper, in which GPT 4 is given agency to play Minecraft, and does so at SOTA levels. 

Statement: https://www.safe.ai/statement-on-ai-risk

Further:  https://www.safe.ai/ai-risk   8 risk types

Natural Selection Paper: https://arxiv.org/pdf/2303.16200.pdf5

Yann LeCun on 20VC w/ Harry Stebbings:   

 • Yann LeCun: Meta’...  

Voyager Agency Paper: https://arxiv.org/pdf/2305.16291.pdf

Karpathy Tweet: https://twitter.com/karpathy/status/1...

Hassabis Benefit Speech:   


 • Fei-Fei Li & Demi...  

Stanislav Petrov: https://en.wikipedia.org/wiki/Stanisl...

Bengio Blog: https://yoshuabengio.org/2023/05/07/a...

https://www.patreon.com/AIExplained   .... ' 

Developing Wireless Sensor System for Continuous Monitoring of Bridge Deformation

 Towards infrastructure sensing, monitoring and maintenance.

Researchers develop wireless sensor system for continuous monitoring of bridge deformation   by Drexel University

Researchers in Drexel University's College of Engineering have developed a solar-powered, wireless sensor system that can continually monitor bridge deformation and could be used to alert authorities when the bridge performance deteriorates significantly. With more than 46,000 bridges across the country considered to be in poor condition, according to the American Society of Civil Engineers, a system like this could be both an important safety measure, as well as helping to triage repair and maintenance efforts.

The system, which measures bridge deformation and runs continuously on photovoltaic power, was unveiled in a recent edition of the IEEE Journal of Emerging and Selected Topics in Industrial Electronics in a paper authored by Drexel College of Engineering researchers, Ivan Bartoli, Ph.D., Mustafa Furkan, Ph.D., Fei Lu, Ph.D., and Yao Wang, a doctoral student in the College.

"With as much aging infrastructure as there is in the U.S. we need a way to keep a close eye on these critical assets 24/7," said Bartoli, who leads the Intelligent Infrastructure Alliance in the College of Engineering. "This is an urgent need, not just to prevent calamitous and often tragic failures, but to understand which bridges should take priority for maintenance and replacement, so that we can efficiently and sustainably approach the preservation and improvement of our infrastructure."

More than 40% of America's 617,000 bridges are more than 50 years old. While they are built to last, they must also be inspected regularly—every two years, according to Bartoli, who is a professor in the College.

Tuesday, May 30, 2023

NVIDIA and ServiceNow

Good direction.

ServiceNow and NVIDIA Announce Partnership to Build Generative AI Across Enterprise 

Built on ServiceNow Platform With NVIDIA AI Software and DGX Infrastructure, Custom Large Language Models to Bring Intelligent Workflow Automation to Enterprises

May 17, 2023

 Knowledge 2023—ServiceNow and NVIDIA today announced a partnership to develop powerful, enterprise-grade generative AI capabilities that can transform business processes with faster, more intelligent workflow automation.

Using NVIDIA software, services and accelerated infrastructure, ServiceNow is developing custom large language models trained on data specifically for its ServiceNow Platform, the intelligent platform for end-to-end digital transformation. 

This will expand ServiceNow’s already extensive AI functionality with new uses for generative AI across the enterprise — including for IT departments, customer service teams, employees and developers — to strengthen workflow automation and rapidly increase productivity. 

ServiceNow is also helping NVIDIA streamline its IT operations with these generative AI tools, using NVIDIA data to customize NVIDIA® NeMo™ foundation models running on hybrid-cloud infrastructure consisting of NVIDIA DGX™ Cloud and on-premises NVIDIA DGX SuperPOD™ AI supercomputers.

“IT is the nervous system of every modern enterprise in every industry,” said Jensen Huang, founder and CEO of NVIDIA. “Our collaboration to build super-specialized generative AI for enterprises will boost the capability and productivity of IT professionals worldwide using the ServiceNow platform.”

“As adoption of generative AI continues to accelerate, organizations are turning to trusted vendors with battle-tested, secure AI capabilities to boost productivity, gain a competitive edge, and keep data and IP secure,” said CJ Desai, president and chief operating officer of ServiceNow. “Together, NVIDIA and ServiceNow will help drive new levels of automation to fuel productivity and maximize business impact." 

Harnessing Generative AI to Reshape Digital Business 

ServiceNow and NVIDIA are exploring a number of generative AI use cases to simplify and improve productivity across the enterprise by providing high accuracy and higher value in IT. 

This includes developing intelligent virtual assistants and agents to help quickly resolve a broad range of user questions and support requests with purpose-built AI chatbots that use large language modls and focus on defined IT tasks. 

Integrating LLM into the Wolfram Language

Outline of examples of LLM interaction as 

https://writings.stephenwolfram.com/2023/05/the-new-world-of-llm-functions-integrating-llm-technology-into-the-wolfram-language/

Examples of computational Chemistry

https://blog.wolfram.com/2023/05/26/computational-chemistry-find-the-solution-with-wolfram-technologies/

AI Will Augment SEO

AI Will Augment SEO,    By R. Colin Johnson

Commissioned by CACM Staff, May 24, 2023

"I fully expect that future personalized AI-enhanced search engines will allow you to query any corpus of information by requesting that the results be limited to a specific basket of domains," said Kevin Lee, CEO of the eMarketing Association.

Search engine optimization (SEO) is an Internet marketing technique that analyzes how the algorithms in a search engine work, then add metadata (for instance keywords, links, and other embedded content) to boost the page's ranking on the search engine results page. According to business marketing company Clutch, there are over 15,000 search engine optimization (SEO) companies in the U.S. alone.

However, without using artificial intelligence (AI) to rank Web page content (the text and images), SEO companies are left to boost rankings on the basis of metadata alone, according to researchers at Germany's Hamburg University of Applied Sciences, who claim non-optimized but high-quality content may thus be outranked (appearing lower on the list of search results) by search-engine-optimized content of lower quality, but attached to more alluring meta-data.

"To improve search result ranking, SEO companies add metadata to Web pages that match queries to the ranking criteria of the search engine's results," said Sebastian Schultheiß, lead SEO researcher at the Hamburg University of Applied Sciences. "However, a Web page that complies with SEO criteria does not necessarily provide content of higher quality from the user's perspective. For example, if the content of a Web page is one-sided and therefore not objective, this lowers the information quality of the page from the user's point of view but has no effect on the search engine's ranking. As a result, Web pages with content of lower information quality can receive better rankings than a page of higher quality but lacking SEO."

Such regrettable outcomes were measured by Hamburg University researchers using real test subjects, as documented in a paper presented at last year's ACM Conference on Human Information Interaction and Retrieval (CHIIR). The content was confined to Web pages with medical data; the researchers found users chose to click on results at or near the top of the results stack, regardless of their medical efficacy.

"The potential danger is that Web pages with inappropriate content, but which make intensive use of SEO and are thus ranked higher, will be chosen over Web pages with higher-quality content and no SEO measures. Our results show that users consistently select those items prominently placed by SEO," said Schultheiß.

In a paper he presented at this year's CHIIR '23 conference, Schultheiß described how he is expanding this line of research to include not just the influence of SEO, but also the wider field of search engine marketing (SEM), which manipulates not just the metadata, but the content itself. Schultheiß also aims to research paid search marketing (PSM) services, where search engines permit marketers to pay cash to lift Web pages near the top of search results. Schultheiß also is expanding the scope of his research to the fields of health, politics, and the environment, to see if lower-quality content there is also attracting users' focus due to SEO, SEM, and PSM.

AI to the Rescue

The solution to raising the quality of search engine results, according to Google senior vice president in charge of search Prabhakar Raghavan, is artificial intelligence (AI). "With artificial intelligence," he said at the latest Google I/O meeting, "we are transforming search to be more helpful than ever before." For instance, later this year, Raghavan said Google will add a new way to scan store shelves that overlays metadata about the products on them in your camera screen.

SEO expert Kevin Lee, who is CEO of the eMarketing Association, agrees that AI is the solution to obtaining higher-quality results from search engines. According to Lee, having intelligent algorithms built into search engines will allow them to "read" Web page content, look up related information (such as user reviews), and improve the resulting search engine ranking on the basis of "qualitative" matches to a user's query, rather than depending solely on SEO, SEM, and PSM.

Want to Keep AI From Sharing Secrets? Train It Yourself

Have come up with companies thinking this.

Want to Keep AI From Sharing Secrets? Train It Yourself MosaicML delivers a secure platform for hosted AI  MATTHEW S. SMITH

On 11 March 2023, Samsung’s Device Solutions division permitted employee use of ChatGPT. Problems ensued. A report in The Economist Korea, published less than three weeks later, identified three cases of “data leakage.” Two engineers used ChatGPT to troubleshoot confidential code, and an executive used it for a transcript of a meeting. Samsung changed course, banning employee use, not of just ChatGPT but of all external generative AI.

Samsung’s situation illustrates a problem facing anyone who uses third-party generative AI tools based on a large language model (LLM). The most powerful AI tools can ingest large chunks of text and quickly produce useful results, but this feature can easily lead to data leaks.

“That might be fine for personal use, but what about corporate use? […] You can’t just send all of your data to OpenAI, to their servers,” says Taleb Alashkar, chief technology officer of the computer vision company AlgoFace and MIT Research Affiliate.

Naïve AI users hand over private data

Generative AI’s data privacy issues boil down to two key concerns.

AI is bound by the same privacy regulations as other technology. Italy’s temporary ban of ChatGPT occurred after a security incident in March 2023 that let users see the chat histories of other users. This problem could affect any technology that stores user data. Italy lifted its ban after OpenAI added features to give users more control over how their data is stored and used.

But AI faces other unique challenges. Generative AI models aren’t designed to reproduce training data and are generally incapable of doing so in any specific instance, but it’s not impossible. A paper titled “Extracting Training Data from Diffusion Models,” published in January 2023, describes how Stable Diffusion can generate images similar to images in the training data. The Doe vs. GitHub lawsuit includes examples of code generated by Github Copilot, a tool powered by an LLM from OpenAI, that match code found in training data.

A photograph of a woman named Ann Graham Lotz next to an AI-generated image of Ann Graham Lotz created with Stable Diffusion. The comparison shows that the AI generator image is significantly similar to the original image, which was included in the AI model's training data.Researchers discovered that Stable Diffusion can sometimes produce images similar to its training data. EXTRACTING TRAINING DATA FROM DIFFUSION MODELS

This leads to fears that generative AI controlled by a third party could unintentionally leak sensitive data, either in part or in whole. Some generative AI tools, including ChatGPT, worsen this fear by including user data in their training set. Organizations concerned about data privacy are left with little choice but to bar its use.

“Think about an insurance company, or big banks, or [Department of Defense], or Mayo Clinic,” says Alashkar, adding that “every CIO, CTO, security principal, or manager in a company is busy looking over those policies and best practices. I think most responsible companies are very busy now trying to find the right thing.”

Efficiency holds the answer to private AI

AI’s data privacy woes have an obvious solution. An organization could train using its own data (or data it has sourced through means that meet data-privacy regulations) and deploy the model on hardware it owns and controls. But the obvious solution comes with an obvious problem: It’s inefficient. The process of training and deploying a generative AI model is expensive and difficult to manage for all but the most experienced and well-funded organizations.  ... ' 

Now Modern Males Are Behind

 Good piece from Irving, I thought I was the only one who noticed this,  excerpt with many links within.   

Why Men and Boys Are Falling Behind

A few weeks ago, Richard Reeves was Ezra Klein’s guest in his NY Times podcast “The Men — and Boys — Are Not Alright.” Reeves is a writer and Senior Fellow at the Brookings Institution, where he’s been studying inequality, poverty, social mobility, and family policy. His book, — Of Boys and Men: Why the Modern Male Is Struggling, Why It Matters, and What to Do about It, published in September of 2022, — is based on his research on the growing gender gaps in education and employment.

“We’re used to thinking about gender inequality as a story of insufficient progress for women and girls,” wrote Klein in the podcast’s introduction. “There’s a good reason for that: Men have dominated human societies for centuries, and myriad inequalities — from the gender pay gap to the dearth of female politicians and chief executives — persist to this day.”

“But Reeves’ core argument is that there’s no way to fully understand inequality in America today without understanding the ways that men and boys — particularly those from disadvantaged backgrounds — are falling behind. And they’re falling behind in ways that are tough on families, in ways that are tough on marriages, ways that are tough on children. And it gets much, much worse when you go down the income ladder.”

Early in their discussion, Reeves pointed out that for most of history, gender equality was intrinsically synonymous with the cause of women and girls. But, “the facts are there in a bunch of places where boys and men are really struggling now.” This relatively recent change is the reason why it’s taken us so long to gather the evidence, and “muster the courage to address this issue. Updating our view of the world as the evidence changes is very difficult.”

Reeves cited a few concrete examples. First, there’s a big gender gap in high school grade point average (GPA), — a very good predictor of important economic outcomes. The data show that two thirds of students with the top 10% of GPA are girls, while two thirds of the students with the bottom 10% of GPA are boys. In addition, girls are 6% more likely to graduate on time than boys.

 A second data point is school performance in grades three through eight. Reeves cited a study led by Stanford sociologist Sean Reardon that found that “girls are at least 3/4 of a grade level ahead in English and dead even in math. And in the poorer school districts, they’re a grade level ahead in English and about a 1/3 of a grade level ahead in math.” These results may not be surprising because the evidence shows that boys develop later than girls.  ... '   (more with links) 

LangChain intro at Work

Taking a look at LangChain, see below, with link to detail.

Getting Started with LangChain: A Beginner’s Guide to Building LLM-Powered Applications

A LangChain tutorial to build anything with large language models in Python

From Towards Data Science,  by Leonie Monigatti   ... 

https://github.com/hwchase17/langchain  (technical)


Quantum Advantage

 Quantum Advantage

Disentangling Hype from Practicality: On Realistically Achieving Quantum Advantage

By Torsten Hoefler, Thomas Häner, Matthias Troyer

Communications of the ACM, May 2023, Vol. 66 No. 5, Pages 82-87    10.1145/3571725

Operating on fundamentally different principles than conventional computers, quantum computers promise to solve a variety of important problems that seemed forever intractable on classical computers. Leveraging the quantum foundations of nature, the time to solve certain problems on quantum computers grows more slowly with the size of the problem than on classical computers—this is called quantum speedup. Going beyond quantum supremacy,2 which was the demonstration of a quantum computer outperforming a classical one for an artificial problem, an important question is finding meaningful applications (of academic or commercial interest) that can realistically be solved faster on a quantum computer than on a classical one. We call this a practical quantum advantage, or quantum practicality for short.

There is a maze of hard problems that have been suggested to profit from quantum acceleration: from cryptanalysis, chemistry and materials science, to optimization, big data, machine learning, database search, drug design and protein folding, fluid dynamics and weather prediction. But which of these applications realistically offer a potential quantum advantage in practice? For this, we cannot only rely on asymptotic speedups but must consider the constants involved. Being optimistic in our outlook for quantum computers, we identify clear guidelines for quantum practicality and use them to classify which of the many proposed applications for quantum computing show promise and which ones would require significant algorithmic improvements to become practical and relevant.

To establish reliable guidelines, or lower bounds for the required speedup of a quantum computer, we err on the side of being optimistic for quantum and overly pessimistic for classical computing. Despite our overly optimistic assumptions, our analysis shows a wide range of often-cited applications is unlikely to result in a practical quantum advantage without significant algorithmic improvements. We compare the performance of only a single classical chip fabricated like the one used in the NVIDIA A100 GPU that fits around 54 billion transistors15 with an optimistic assumption for a hypothetical quantum computer that may be available in the next decades with 10,000 error-corrected logical qubits, 10μs gate time for logical operations, the ability to simultaneously perform gate operations on all qubits and all-to-all connectivity for fault tolerant two-qubit gates .... 

Researchers from UC Berkeley Introduce Gorilla LLM

 And more implementations. 

Researchers from UC Berkeley Introduce Gorilla: A Finetuned LLaMA-based Model that Surpasses GPT-4 on Writing API Calls

By Tanya Malhotra

A recent breakthrough in the field of Artificial Intelligence is the introduction of Large Language Models (LLMs). These models enable us to understand language more concisely and, thus, make the best use of Natural Language Processing (NLP) and Natural Language Understanding (NLU). These models are performing well on every other task, including text summarization, question answering, content generation, language translation, and so on. They understand complex textual prompts, even texts with reasoning and logic, and identify patterns and relationships between that data.

Though language models have shown incredible performance and have developed significantly in recent times by demonstrating their competence in a variety of tasks, it still remains difficult for them to use tools through API calls in an efficient manner. Even famous LLMs like GPT-4 struggle to generate precise input arguments and frequently recommend inappropriate API calls. To address this issue, Berkeley and Microsoft Research researchers have proposed Gorilla, a finetuned LLaMA-based model that beats GPT-4 in terms of producing API calls. Gorilla helps in choosing the appropriate API, improving LLMs’ capacity to work with external tools to carry out particular activities.   .... ' 


Monday, May 29, 2023

WPP Partners With NVIDIA to Build Generative AI-Enabled Content Engine for Digital Advertising

Despite all pausing and hand wringing,   AI enabled Generation Ad production goes on. All resonsibe? 

WPP Partners With NVIDIA to Build Generative AI-Enabled Content Engine for Digital Advertising

Groundbreaking Engine Built on NVIDIA AI and Omniverse Connects Creative 3D and AI Tools From Leading Software Makers to Revolutionize Brand Content, Experiences at Scale

COMPUTEX—NVIDIA and WPP today announced they are developing a content engine that harnesses NVIDIA Omniverse™ and AI to enable creative teams to produce high-quality commercial content faster, more efficiently and at scale while staying fully aligned with a client’s brand.

The new engine connects an ecosystem of 3D design, manufacturing and creative supply chain tools, including those from Adobe and Getty Images, letting WPP’s artists and designers integrate 3D content creation with generative AI. This enables their clients to reach consumers in highly personalized and engaging ways, while preserving the quality, accuracy and fidelity of their company’s brand identity, products and logos.

NVIDIA founder and CEO Jensen Huang unveiled the engine in a demo during his COMPUTEX keynote address, illustrating how clients can work with teams at WPP, the world’s largest marketing services organization, to make large volumes of brand advertising content such as images or videos and experiences like 3D product configurators more tailored and immersive.

“The world’s industries, including the $700 billion digital advertising industry, are racing to realize the benefits of AI,” Huang said. “With Omniverse Cloud and generative AI tools, WPP is giving brands the ability to build and deploy product experiences and compelling content at a level of realism and scale never possible before.”

“Generative AI is changing the world of marketing at incredible speed,” said Mark Read, CEO of WPP. “Our partnership with NVIDIA gives WPP a unique competitive advantage through an AI solution that is available to clients nowhere else in the market today. This new technology will transform the way that brands create content for commercial use, and cements WPP’s position as the industry leader in the creative application of AI for the world’s top brands.”

An Engine for Creativity

The new content engine has at its foundation Omniverse Cloud — a platform for connecting 3D tools, and developing and operating industrial digitalization applications. This allows WPP to seamlessly connect its supply chain of product-design data from software such as Adobe’s Substance 3D tools for 3D and immersive content creation, plus computer-aided design tools to create brand-accurate, photoreal digital twins of client products.

WPP uses responsibly trained generative AI tools and content from partners such as Adobe and Getty Images so its designers can create varied, high-fidelity images from text prompts and bring them into scenes. This includes Adobe Firefly, a family of creative generative AI models, and exclusive visual content from Getty Images created using NVIDIA Picasso, a foundry for custom generative AI models for visual design.

With the final scenes, creative teams can render large volumes of brand-accurate, 2D images and videos for classic advertising, or publish interactive 3D product configurators to NVIDIA Graphics Delivery Network, a worldwide, graphics streaming network, for consumers to experience on any web device.

In addition to speed and efficiency, the new engine outperforms current methods, which require creatives to manually create hundreds of thousands of pieces of content using disparate data coming from disconnected tools and systems.  .... ' 

Nvidia taps into Israeli innovation to build generative AI cloud supercomputer

Nvidia taps into Israeli innovation to build generative AI cloud supercomputer

Chip giant says Israel-1 supercomputer valued at several hundred million dollars is a ‘major investment’ that will boost next-generation AI workloads

By SHARON WROBEL 

Nvidia's HGX supercomputing platform. (Courtesy)

US gaming and computer graphics giant Nvidia said Monday that it will build the nation’s most powerful generative AI cloud supercomputer called Israel-1 which will be based on a new locally developed high-performance ethernet platform.

Valued at several hundred million dollars, Israel-1, which Nvidia said would be one of the world’s fastest AI supercomputers, is expected to start early production by the end of 2023.

“AI is the most important technology force in our lifetime,” said Gilad Shainer, Senior Vice President of high performance computing (HPC) and networking at Nvidia. “Israel-1 represents a major investment that will help us drive innovation in Israel and globally.”

AI processes analyze enormous datasets and require both ultra-fast computing performance and massive memory. The rise of generative AI applications and workloads like OpenAI’s ChatGPT present new challenges for networks inside data centers. As a result of the major changes AI cloud systems need to be trained using huge amounts of data.

Announced at the Computex tech exhibition starting this week, Israel-1 will be based on Nvidia’s newly launched Spectrum-X networking platform, a high-performance ethernet architecture purposely built for generative AI workloads. Developed in Israel the platform is tailored to enable data center around the world transition to AI and accelerated computing, using a new class of ethernet connection that is build from the ground up for AI.   .. ' 


OpenLLaMA is a fully open-source LLM, now ready for business

Brought to my attention.   in   the-encoder.com

OpenLLaMA is a fully open-source LLM, now ready for business

OpenLLaMA is an open-source reproduction of Meta’s LLaMA language model and can be used commercially.

Since the unveiling of Meta’s LLaMA family of large language models and the subsequent leak, the development of open-source chatbots has exploded. Models such as Alpaca, Vicuna, and OpenAssistant use Meta’s models as the basis for their various forms of instruction tuning.

However, LLaMA models are licensed for research use only, which prevents commercial use of those models.

OpenLLaMA reproduces Meta’s language models

Alternatives based on other freely available models do not match the quality of Meta’s models, as LLaMA follows Deepmind’s Chinchilla scaling laws and has been trained on particularly large amounts of data.

Sound Vibrations Can Encode, Process Data Like Quantum Computers

Remarkable ...

Sound Vibrations Can Encode, Process Data Like Quantum Computers

By New Scientist, May 25, 2023

University of Arizona researchers demonstrated that trapping sound in a simple mechanical device can imitate certain properties of quantum computers.

The researchers built an object that could act like a qubit by gluing together three aluminium rods over half a meter long each, then generated vibrations at one end and detected them at the other.

They observed that information could be input in the "phi-bits" (localized "chunks" of sound produced in the rods) by tuning the sound, and that the phi-bits could be forced into a superposition (a mixture of their individual states).

The researchers used the system to perform simple computations, as well as producing quantum-like states.

From New Scientist

View Full Article -    May Require Paid Subscription

Can China overtake the US in the AI Marathon?

ChatGPT: Can China overtake the US in the AI marathon?

By Derek Cai & Annabelle Liang in  BBC News

Artificial intelligence has emerged as enough of a concern that it made it onto what was already a packed agenda at the G7 summit at the weekend.

Concerns about AI's harmful impact coincide with the US' attempts to restrict China's access to crucial technology.

For now, the US seems to be ahead in the AI race. And there is already the possibility that current restrictions on semiconductor exports to China could hamper Beijing's technological progress.

But China could catch up, according to analysts, as AI solutions take years to be perfected. Chinese internet companies "are arguably more advanced than US internet companies, depending on how you're measuring advancement," Kendra Schaefer, head of tech policy research at Trivium China tells the BBC.

However, she says China's "ability to manufacture high-end equipment and components is an estimated 10 to 15 years behind global leaders."

The Silicon Valley factor

The US' biggest advantage is Silicon Valley, arguably the world's supreme entrepreneurial hotspot. It is the birthplace of technology giants such as Google, Apple and Intel that have helped shape modern life.

Innovators in the country have been helped by its unique research culture, says Pascale Fung, director of the Center for Artificial Intelligence Research at the Hong Kong University of Science and Technology.

Researchers often spend years working to improve a technology without a product in mind, Ms Fung says.

OpenAI, for example, operated as a non-profit company for years as it researched the Transformers machine learning model, which eventually powered ChatGPT.

"This environment never existed in most Chinese companies. They would build deep learning systems or large language models only after they saw the popularity," she adds. "This is a fundamental challenge to Chinese AI."

US investors have also been supportive of the country's research push. In 2019, Microsoft said it would put $1bn (£810,000) in to OpenAI.

"AI is one of the most transformative technologies of our time and has the potential to help solve many of our world's most pressing challenges," Microsoft chief executive Satya Nadella said.

China's edge

China, meanwhile, benefits from a larger consumer base. It is the world's second-most populous country, home to roughly 1.4 billion people.

It also has a thriving internet sector, says Edith Yeung, a partner at the Race Capital investment firm.  ... '  (more)

Light-Field Sensor for 3D Scene Construction with Unprecedented Angular Resolution

Light-Field Sensor for 3D Scene Construction with Unprecedented Angular Resolution

By NUS News (Singapore),   May 18, 2023

A large scale angle-sensing structure comprising nanocrystal phospors, a key component of the sensor, illuminated under ultraviolet light.

At the core of the novel light-field sensor are inorganic perovskite nanocrystals—compounds that have excellent optoelectronic properties.

National University of Singapore scientists created a three-dimensional (3D) light-field sensor that can reconstruct scenes with ultra-high angular resolution using a novel angle-to-color conversion framework.

The device features an angular measurement range exceeding 80 degrees, high angular resolution which can potentially be less than 0.015 degrees for smaller sensors, and a 0.002-nanometer-to-550-nanometer spectral response range.

Inorganic perovskite nanocrystals form the heart of the sensor, which can detect 3D light fields across the X-ray to visible light spectrum due to the crystals' controllable nanostructures.

The researchers patterned the crystals onto a transparent thin-film substrate mated to a color charge-coupled device, which transforms incoming optical signals into color-coded output for use in 3D image reconstruction.

Proof-of-concept experiments showed the sensor could accurately reconstruct images of objects 1.5 meters (4.9 feet) away.

From NUS News (Singapore)

View Full Article  


A Boost for the Quantum Internet

 A Boost for the Quantum Internet

Universitat Innsbruck (Austria)

May 23, 2023

Researchers at Austria's University of Innsbruck transmitted quantum information with a quantum repeater node operating at telecommunication networks' standard frequency. The repeater node features two calcium ions contained in an ion trap within an optical resonator, and single-photon conversion to the standard telecom wavelength. The researchers were able to transmit quantum information over a 50-kilometer (31-mile)-long optical fiber, with the quantum repeater positioned halfway between the transmission and reception points. The researchers said they already have calculated the design upgrades that will be required to transfer data across distances of 800 kilometers (nearly 500 miles).  ....'

Demystifying AI

Seems interesting,  Agree there is mush understanding. More at the link.  Will read.

Dataiku Ebook

Demystifying AI.. 

Where to Draw the Line Between Myth and Reality

First things first: Why exactly has AI become such a nebulous term in modern society? 

In this ebook, we answer that question and share the five most common assumptions about AI. Then, we cut through the noise, equipping organizations with the insights they need to avoid falling into the trap of mistaking myth for reality.   .... ' 

Sunday, May 28, 2023

Quantum Computers Compared

Useful, see link to full article linked to below there are 24 processors!

 Scientists at the U.S. Department of Energy (DOE)'s Los Alamos National Laboratory compared leading quantum computers using the Quantum Computing User Program at DOE's Oak Ridge National Laboratory.

The researchers reviewed 24 quantum processors and ranked their performance numbers against those from vendors like IBM and Quantinuum.

They used as a metric quantum volume, which estimates the degree to which a quantum processor can perform a specific type of random complex quantum circuit.

Outcomes indicated most processors performed close to promoted quantum volume, but rarely at the top numbers vendors touted.

The researchers found higher quantum performance tended to correspond with more intensive quantum circuit compilation, in which classical programming elements are translated into quantum computer commands.

From Oak Ridge National Laboratory

View Full Article    

How Generational Differences Affect Consumer Attitudes Towards Ads

As a big advertiser, something we thought about. 

Meta Research

How generational differences affect consumer attitudes towards ads

By: Melanie Beer Zeldin

Our research study, in collaboration with CrowdDNA, aims to understand people's relationship with social media ads across different social media platforms.

Advertising has historically been a way for advertisers to position a product or service as the star of an ad and deliver a message to as many people as possible. Technological advances in the past 50 years have empowered consumers and completely changed the way we live, shop and consume content. It changed how social media is used, and how ads are perceived at mind-boggling speed. Today, ads have a two-way relationship with their audience, empowering consumers to use their purchasing power and voices to both buy into brands and challenge those acting irresponsibly. The focus of ads has moved from product and services to putting people at the heart of a campaign.

GenZ, defined as those born between 1995 and 2010, grew up connected 24/7. Real life and virtual life are fluidly connected without a distinction between them. Relationships are built online and in real life. For this generation, social media fulfills a variety of needs and use cases, but not always with a specific purpose. Living a mixed reality allows for a fluid expression of multiple versions of their identities that mirror different aspects of their personalities.

“It [targeted ads] doesn’t bother me. I’m going to go on social media either way, so its better than what I’m seeing is relevant."

GenZ | UK | Social Media User

In contrast, Baby Boomers, defined as those born between 1946 and 1964, grew up without the internet and experienced its evolution. There is a distinct compartmentalization of real life and virtual life. Relationships are built in real life and extended into the online world. Social media is used with a specific purpose and to reconnect with real life relations. This generation has a single identity that is represented on social media and is a more or less accurate reflection of themselves in real life.

These different backgrounds fundamentally affect how both generations experience social media and value advertising on social media. While GenZ considers data usage for ads to be normal, Baby Boomers remember an ad-free internet and are more suspicious. Rather than a cultural shift towards greater privacy concerns among consumers, our research found that there are generational nuances to data tolerance in these markets that informs attitudes about privacy. For GenZ, consenting to data use is the ‘new normal’ and navigating the online world is the norm. For Baby Boomers, privacy concerns have always been present but online data collection and use is a new concept.  ... ' 

Demolishing the Journalism Industry Using AI?

Demolishing the Journalism Industry Using AI?

By Futurism, May 12, 2023

Google's new search interface, built on a model trained on unpaid-for human output, will swallow even more human-made content and spit it back out to information-seekers, while taking clicks away from the publishers generating that content.

Remember back in 2018, when Google removed "don't be evil" from its code of conduct?

It's been living up to that removal lately. At its annual I/O in San Francisco this week, the search giant finally lifted the lid on its vision for AI-integrated search — and that vision, apparently, involves cutting digital publishers off at the knees.

Google's new AI-powered search interface, dubbed "Search Generative Experience," or SGE for short, involves a feature called "AI Snapshot." Basically, it's an enormous top-of-the-page summarization feature. Ask, for example, "why is sourdough bread still so popular?" — one of the examples that Google used in their presentation — and, before you get to the blue links that we're all familiar with, Google will provide you with a large language model (LLM) -generated summary. Or, we guess, snapshot.

"Google's normal search results load almost immediately," The Verge's David Pierce explains. "Above them, a rectangular orange section pulses and glows and shows the phrase 'Generative AI is experimental.' A few seconds later, the glowing is replaced by an AI-generated summary: a few paragraphs detailing how good sourdough tastes, the upsides of its prebiotic abilities, and more."

From Futurism

View Full Article   

More on IBM and AI Today

 We worked with IBM for years,  Saw the kind of methods they were pressing that connected their work with Watson and game playing, but never got the impression that that could provide the kind of general management of general corporate data management we were exploring  Seem to making those moves now. 

IBM Consulting recently revealed its Center of Excellence (CoE) for generative AI, aiming to advance artificial intelligence (AI) capabilities and capitalize on the transformative potential of generative AI for business outcomes. Operating parallel with IBM Consulting’s global AI and Automation practice, the CoE encompasses an extensive network of over 21,000 skilled data and AI consultants who have completed over 40,000 enterprise client engagements.  In Venturebeat

The company stated that the Center of Excellence (CoE)’s primary objectives include enhancing customer experiences, transforming core business processes and facilitating innovative business models.

The Center of Excellence (CoE) will leverage IBM’s expertise in enterprise-grade AI, including the recently announced IBM watsonx and cutting-edge technology from IBM’s esteemed ecosystem of business partners, to actively expedite clients’ business transformations. It will also develop new solutions and assets with clients and partners.

“Our Center of Excellence for generative AI has over 1,000 consultants globally with generative AI expertise who are helping clients drive productivity in IT operations and core business processes like HR or marketing, elevate their customer experiences and create new business models,” Glenn Finch, global managing partner, data and technology transformation at IBM Consulting, told VentureBeat. “It stands alongside IBM Consulting’s existing data and AI practice and will focus on solving client challenges using the full generative AI technology stack, including foundation models and 50+ domain-specific classical machine learning accelerators.”  ... ' 

Saturday, May 27, 2023

Space Logistics

Northrup Grumman advances:

What is SpaceLogistics?

SpaceLogistics, a wholly owned subsidiary of Northrop Grumman, provides cooperative space logistics and in-orbit satellite servicing to geosynchronous satellite operators using its fleet of commercial servicing vehicles—the Mission Extension Vehicle, the Mission Robotic Vehicle and the Mission Extension Pods.

Pioneering a New Market in Space

SpaceLogistics currently provides in-orbit satellite servicing to geosynchronous satellite operators using the Mission Extension Vehicle (MEV)™ which docks with customers’ existing satellites providing the propulsion and attitude control needed to extend their lives. This enables satellite operators to activate new markets, drive asset value and protect their franchises.  ... ' 

Mission Extension Vehicle

The Mission Extension Vehicle-1 (MEV-1), the industry’s first satellite life extension vehicle, completed its first docking to a client satellite, Intelsat IS-901 on February 25, 2020. MEV is designed to dock to geostationary satellites whose fuel is nearly depleted. Once connected to its client satellite, MEV uses its own thrusters and fuel supply to extend the satellite’s lifetime. When the customer no longer desires MEV’s service, the spacecraft will undock and move on to the next client satellite. The second Mission Extension Vehicle (MEV-2) launched August 15, 2020 with the Northrop Grumman-built Galaxy 30 satellite. MEV-2 docked with the Intelsat IS-1002 satellite on April 12, 2021.  ... '

Further Update on Elon Musk' s Brain Chip

Neuralink: Why is Elon Musk’s brain chip firm in the news?

By Shiona McCallum  Technology reporter in the BBC

Elon Musk's brain chip firm Neuralink has said that the US Food and Drug Administration (FDA) has approved its first human clinical trial, a critical milestone after earlier struggles to gain approval.

The FDA nod "represents an important first step that will one day allow our technology to help many people," Neuralink said in a tweet.

It did not elaborate on the aims of the study, saying only that it was not recruiting yet, and more details would be available soon.  (... much more ) ... ' 

Europe's First 3D-Printed School Takes Shape in Ukraine

 Europe's First 3D-Printed School Takes Shape in Ukraine

Radio Free Europe/Radio Free Liberty (Czech Republic)

May 25, 2023

Humanitarian group Team4UA organized the building of Europe's first three-dimensionally (3D)-printed primary school in the western Ukraine city Lviv, using technology from Danish 3D-printing construction company COBOD International. The school will combine 3D-printed spaces and manually built sections. Project organizers said one goal is to import several 3D printers and to incorporate the rubble of destroyed buildings into the concrete mix for the school. They hope the school becomes a template for building similar facilities across Ukraine as part of the massive reconstruction effort. The 3D-printed section of the school is scheduled to be completed by early June.  ... ' 

Are ChatGPT's Good at 'Not'?

Are ChatGPT's Good at 'Not'  in QuantaMagazine

Max G. Levy,    Contributing Writer, May 12, 2023

Nora Kassner suspected her computer wasn’t as smart as people thought. In October 2018, Google released a language model algorithm called BERT, which Kassner, a researcher in the same field, quickly loaded on her laptop. It was Google’s first language model that was self-taught on a massive volume of online data. Like her peers, Kassner was impressed that BERT could complete users’ sentences and answer simple questions. It seemed as if the large language model (LLM) could read text like a human (or better).

But Kassner, at the time a graduate student at Ludwig Maximilian University of Munich, remained skeptical. She felt LLMs should understand what their answers mean — and what they don’t mean. It’s one thing to know that a bird can fly. “A model should automatically also know that the negated statement — ‘a bird cannot fly’ — is false,” she said. But when she and her adviser, Hinrich Schütze, tested BERT and two other LLMs in 2019, they found that the models behaved as if words like “not” were invisible.

Since then, LLMs have skyrocketed in size and ability. “The algorithm itself is still similar to what we had before. But the scale and the performance is really astonishing,” said Ding Zhao, who leads the Safe Artificial Intelligence Lab at Carnegie Mellon University.

But while chatbots have improved their humanlike performances, they still have trouble with negation. They know what it means if a bird can’t fly, but they collapse when confronted with more complicated logic involving words like “not,” which is trivial to a human.

“Large language models work better than any system we have ever had before,” said Pascale Fung, an AI researcher at the Hong Kong University of Science and Technology. “Why do they struggle with something that’s seemingly simple while it’s demonstrating amazing power in other things that we don’t expect it to?” Recent studies have finally started to explain the difficulties, and what programmers can do to get around them. But researchers still don’t understand whether machines will ever truly know the word “no.”

Nora Kassner in a blue shirt against a black background.

Nora Kassner has tested popular chatbots and found they typically can’t understand the concept of negation.

Courtesy of Nora Kassner

Making Connections

 It’s hard to coax a computer into reading and writing like a human. Machines excel at storing lots of data and blasting through complex calculations, so developers build LLMs as neural networks: statistical models that assess how objects (words, in this case) relate to one another. Each linguistic relationship carries some weight, and that weight — fine-tuned during training — codifies the relationship’s strength. For example, “rat” relates more to “rodent” than “pizza,” even if some rats have been known to enjoy a good slice.

In the same way that your smartphone’s keyboard learns that you follow “good” with “morning,” LLMs sequentially predict the next word in a block of text. The bigger the data set used to train them, the better the predictions, and as the amount of data used to train the models has increased enormously, dozens of emergent behaviors have bubbled up. Chatbots have learned style, syntax and tone, for example, all on their own. “An early problem was that they completely could not detect emotional language at all. And now they can,” said Kathleen Carley, a computer scientist at Carnegie Mellon. Carley uses LLMs for “sentiment analysis,” which is all about extracting emotional language from large data sets — an approach used for things like mining social media for opinions.

So new models should get the right answers more reliably. “But we’re not applying reasoning,” Carley said. “We’re just applying a kind of mathematical change.” And, unsurprisingly, experts are finding gaps where these models diverge from how humans read.

No Negatives.. Unlike humans, LLMs process language by turning it into math. This helps them excel at generating text — by predicting likely combinations of text — but it comes at a cost.

“The problem is that the task of prediction is not equivalent to the task of understanding,” said Allyson Ettinger, a computational linguist at the University of Chicago. Like Kassner, Ettinger tests how language models fare on tasks that seem easy to humans. In 2019, for example, Ettinger tested BERT with diagnostics pulled from experiments designed to test human language ability. The model’s abilities weren’t consistent. For example:

He caught the pass and scored another touchdown. There was nothing he enjoyed more than a good game of ____. (BERT correctly predicted “football.”)

The snow had piled up on the drive so high that they couldn’t get the car out. When Albert woke up, his father handed him a ____. (BERT incorrectly guessed “note,” “letter,” “gun.”)

And when it came to negation, BERT consistently struggled.  ... '

Rodney Brooks Talks about AI

Comments here a little late, but good points.

Just Calm Down About GPT-4 Already And stop confusing performance with competence, says Rodney Brooks      By Glenn  Zorpette in Spectrum IEEE

Rapid and pivotal advances in technology have a way of unsettling people, because they can reverberate mercilessly, sometimes, through business, employment, and cultural spheres. And so it is with the current shock and awe over large language models, such as GPT-4 from OpenAI.

It’s a textbook example of the mixture of amazement and, especially, anxiety that often accompanies a tech triumph. And we’ve been here many times, says Rodney Brooks. Best known as a robotics researcher, academic, and entrepreneur, Brooks is also an authority on AI: he directed the Computer Science and Artificial Intelligence Laboratory at MIT until 2007, and held faculty positions at Carnegie Mellon and Stanford before that. Brooks, who is now working on his third robotics startup, Robust.AI, has written hundreds of articles and half a dozen books and was featured in the motion picture Fast, Cheap & Out of Control. He is a rare technical leader who has had a stellar career in business and in academia and has still found time to engage with the popular culture through books, popular articles, TED Talks, and other venues.

“It gives an answer with complete confidence, and I sort of believe it. And half the time, it’s completely wrong.”

—Rodney Brooks, Robust.AI

IEEE Spectrum caught up with Brooks at the recent Vision, Innovation, and Challenges Summit, where he was being honored with the 2023 IEEE Founders Medal. He spoke about this moment in AI, which he doesn’t regard with as much apprehension as some of his peers, and about his latest startup, which is working on robots for medium-size warehouses.

Rodney Brooks on…

Will GPT-4 and other large language models lead to an artificial general intelligence in the foreseeable future?

Will companies marketing large language models ever justify the enormous valuations some of these companies are now enjoying?

When are we going to have full (level-5) self-driving cars?

What are the most attractive opportunities now in warehouse robotics?

You wrote a famous article in 2017, “The Seven Deadly Sins of AI Prediction.“ You said then that you wanted an artificial general intelligence to exist—in fact, you said it had always been your personal motivation for working in robotics and AI. But you also said that AGI research wasn’t doing very well at that time at solving the basic problems that had remained intractable for 50 years. My impression now is that you do not think the emergence of GPT-4 and other large language models means that an AGI will be possible within a decade or so.

Rodney Brooks: You’re exactly right. And by the way, GPT-3.5 guessed right—I asked it about me, and it said I was a skeptic about it. But that doesn’t make it an AGI.

The large language models are a little surprising. I’ll give you that. And I think what they say, interestingly, is how much of our language is very much rote, R-O-T-E, rather than generated directly, because it can be collapsed down to this set of parameters. But in that “Seven Deadly Sins” article, I said that one of the deadly sins was how we humans mistake performance for competence.

If I can just expand on that a little. When we see a person with some level performance at some intellectual thing, like describing what’s in a picture, for instance, from that performance, we can generalize about their competence in the area they’re talking about. And we’re really good at that. Evolutionarily, it’s something that we ought to be able to do. We see a person do something, and we know what else they can do, and we can make a judgement quickly. But our models for generalizing from a performance to a competence don’t apply to AI systems.

The example I used at the time was, I think it was a Google program labeling an image of people playing Frisbee in the park. And if a person says, “Oh, that’s a person playing Frisbee in the park,” you would assume you could ask him a question, like, “Can you eat a Frisbee?” And they would know, of course not; it’s made of plastic. You’d just expect they’d have that competence. That they would know the answer to the question, “Can you play Frisbee in a snowstorm? Or, how far can a person throw a Frisbee? Can they throw it 10 miles? Can they only throw it 10 centimeters?” You’d expect all that competence from that one piece of performance: a person saying, “That’s a picture of people playing Frisbee in the park.” .... '