Showing posts with label AGI. Show all posts

Saturday, June 04, 2022

Is Gato Approaching an AGI?

Saw this mentioned. Gato/DeepMind pushes us towards AGI (Artificial General Intelligence). It does not seem to be close as yet, but is it the right direction? Still think no, not even close, but surely some experiments are under way. Key will be preloading contextual knowledge to provide what I would call intelligent insight. If and when so, we will hear of it quickly. What is a "Generalist AI"?

Is Gato a true AGI? In Venturebeat.

Artificial general intelligence (AGI) is back in the news thanks to the recent introduction of Gato from DeepMind. As much as anything, AGI invokes images of Skynet (of Terminator lore), which was originally designed as threat analysis software for the military but quickly came to see humanity as the enemy. While fictional, this should give us pause, especially as militaries around the world are pursuing AI-based weapons. 

However, Gato does not appear to raise any of these concerns. The deep learning transformer model is described as a “generalist agent” and purports to perform 604 distinct and mostly mundane tasks with varying modalities, observations and action specifications. It has been referred to as the Swiss Army Knife of AI models. It is clearly much more general than other AI systems developed thus far and in that regard appears to be a step towards AGI.   .... ' 

Wednesday, April 06, 2022

Web3 as a New Economic System

Another good piece by Irving Wladawsky-Berger, here explaining Web3, with lots of useful links.

Via Irving Wladawsky-Berger's excellent Blog: 

A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.

What Is Web3, and Could It Usher a New Economic System?

Bitcoin was introduced in 2008 with the release of Bitcoin: A Peer-to-Peer Electronic Cash System, which explained how to design a decentralized cryptocurrency and digital payment system without the need for central banks or trusted intermediaries. Blockchain, the digital ledger for managing and certifying the validity of bitcoin transactions, was introduced in the same paper.

Over the years blockchain has transcended its original objectives, and has evolved in two major directions. One continues to focus on blockchain as the underlying platform for bitcoin, but it’s also become the platform for the large number of cryptocurrencies, digital tokens, and other cryptoassets that have since been created. The other is focused on the use of blockchain as a trusted distributed data base for private and public sector applications involving multiple institutions, such as supply chains, financial services and healthcare. The cryptocurrency camp is based on public permissionless blockchains, which anyone can join and require some kind of proof-of-work or proof-of-stake systems. The multi-institution camp is based primarily on private permissioned blockchains where participation is restricted to the institutions transacting with each other.  .... '
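Both camps Wladawsky-Berger describes rest on the same underlying structure: a chain of blocks, each committing to its predecessor by hash, so that tampering with history invalidates everything after it. A deliberately minimal sketch (toy code, not any real blockchain; the proof-of-work difficulty here is illustrative only):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine_block(prev_hash: str, data: str, difficulty: int = 2) -> dict:
    """Toy proof-of-work: find a nonce so the hash starts with `difficulty` zeros."""
    nonce = 0
    while True:
        block = {"prev": prev_hash, "data": data, "nonce": nonce}
        if block_hash(block).startswith("0" * difficulty):
            return block
        nonce += 1

def chain_is_valid(chain: list) -> bool:
    """Each block must reference the hash of the block before it."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

genesis = mine_block("0" * 64, "genesis")
chain = [genesis, mine_block(block_hash(genesis), "alice pays bob 5")]

print(chain_is_valid(chain))            # True
chain[0]["data"] = "alice pays bob 500" # tamper with history
print(chain_is_valid(chain))            # False
```

The two camps differ mainly in who may run this loop: in a permissionless chain anyone can mine, while a permissioned chain restricts block creation to vetted institutions, often replacing proof-of-work entirely.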


Sunday, April 03, 2022

A Hybrid AI Wins at Bridge. And Explains Win. A more Human Intelligence?

Most impressive: the win includes many kinds of human-like interaction in a complex, multi-player gaming setting. Are we closer, or is this simply luck?

A Hybrid AI Just Beat Eight World Champions at Bridge—and Explained How It Did It

By Jason Dorrier, Apr 03, 2022, in Singularity Hub

Champion bridge player Sharon Osberg once wrote, “Playing bridge is like running a business. It’s about hunting, chasing, nuance, deception, reward, danger, cooperation and, on a good day, victory.”

While it’s little surprise chess fell to number-crunching supercomputers long ago, you’d expect humans to maintain a more unassailable advantage in bridge, a game of incomplete information, cooperation, and sly communication. Over millennia, our brains have evolved to read subtle facial cues and body language. We’ve assembled sprawling societies dependent on the competition and cooperation of millions. Surely such skills are beyond the reach of machines?

For now, yes. But perhaps not forever. In recent years, the most advanced AI has begun encroaching on some of our most proudly held territory: the ability to navigate an uncertain world where information is limited, the game is infinitely nuanced, and no one succeeds alone.

Last week, French startup NukkAI took another step when its NooK bridge-playing AI outplayed eight bridge world champions in a competition held in Paris.

The game was simplified, and NooK didn’t exactly go head-to-head with the human players—more on that below—but the algorithm’s performance was otherwise spectacular. Notably, NooK is a kind of hybrid algorithm, combining symbolic (or rule-based) AI with today’s dominant deep learning approach. Also, in contrast to its purely deep learning peers, NooK is more transparent and can explain its actions.

“What we’ve seen represents a fundamentally important advance in the state of artificial intelligence systems,” Stephen Muggleton, a machine learning professor at Imperial College London, told The Guardian. In other words, not too bad for a cold, calculating computer.   .... ' 

Friday, March 25, 2022

Book: The Age of AI and Our Human Future: Drones as Early Interactions?

Finished the below book; an excellent, useful read, especially in the later chapters as it relates to the human future. It was written before the Ukraine conflict, which would have been a good example. Note my recent posts on drone use. I note that during the era of AI development we also touched on such concerns, but it is clear we are much closer to needing to globally understand an age of AI now.   - FAD 

The Age of AI and Our Human Future   by Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher

Artificial Intelligence (AI) is transforming human society in fundamental and profound ways. Not since the Age of Reason have we changed how we approach security, economics, order, and even knowledge itself. In The Age of AI, three deep and accomplished thinkers come together to consider what AI will mean for us all.

An AI learned to win at chess by making moves that human grand masters had never conceived. Another AI discovered a new antibiotic by analyzing molecular properties human scientists did not understand. Now, AI-powered jets are defeating experienced human pilots in simulated dogfights. AI is coming online in searching, streaming, medicine, education, and many other fields and, in so doing, transforming how humans are experiencing reality.

The Age of AI is an essential road map to our present and our future; an era unlike any that has come before.  ...   

Saturday, October 09, 2021

A Challenge: Are we Close to AGI?

Artificial General Intelligence. Quite an interesting note ... we started in the 80s. How far are we now?

Artificial General Intelligence: Are We Close, and Does it Even Make Sense to Try?   By MIT Technology Review, August 25, 2021

The idea of artificial general intelligence as we know it today starts with a dot-com blowout on Broadway. 

Twenty years ago—before Shane Legg clicked with neuroscience postgrad Demis Hassabis over a shared fascination with intelligence; before the pair hooked up with Hassabis's childhood friend Mustafa Suleyman, a progressive activist, to spin that fascination into a company called DeepMind; before Google bought that company for more than half a billion dollars four years later—Legg worked at a startup in New York called Webmind, set up by AI researcher Ben Goertzel. Today the two men represent two very different branches of the future of artificial intelligence, but their roots reach back to common ground.

Even for the heady days of the dot-com bubble, Webmind's goals were ambitious. Goertzel wanted to create a digital baby brain and release it onto the internet, where he believed it would grow up to become fully self-aware and far smarter than humans. "We are on the verge of a transition equal in magnitude to the advent of intelligence, or the emergence of language," he told the Christian Science Monitor in 1998.

Webmind tried to bankroll itself by building a tool for predicting the behavior of financial markets on the side, but the bigger dream never came off. After burning through $20 million, Webmind was evicted from its offices at the southern tip of Manhattan and stopped paying its staff. It filed for bankruptcy in 2001 ...

Part of the problem is that artificial general intelligence is a catchall for the hopes and fears surrounding an entire technology .... 

From MIT Technology Review


Sunday, November 15, 2020

Are We at the Narrow Edge of General AI?

And what is even the definition of 'General AI'? Say, a means to repeatedly solve a non-trivial problem in a business or scientific domain, with varying data and context. One that could be 'intelligent' enough to explain its method, ethics, and risks to groups of humans to assure them of the net value of implementation. One that can also learn when given new data. Also one that could measure and report on its net value over time. It's not to say that narrow AI is not valuable. It is, but it's not general. We are not close yet, and not close to a transition point either.

Are We at the edge of general AI?
We’re entering the AI twilight zone between narrow and general AI
Gary Grossman, Edelman    @garyg02

With recent advances, the tech industry is leaving the confines of narrow artificial intelligence (AI) and entering a twilight zone, an ill-defined area between narrow and general AI.

To date, all the capabilities attributed to machine learning and AI have been in the category of narrow AI. No matter how sophisticated – from insurance rating to fraud detection to manufacturing quality control and aerial dogfights or even aiding with nuclear fission research – each algorithm has only been able to meet a single purpose. This means a couple of things: 1) an algorithm designed to do one thing (say, identify objects) cannot be used for anything else (play a video game, for example), and 2) anything one algorithm “learns” cannot be effectively transferred to another algorithm designed to fulfill a different specific purpose. For example, AlphaGO, the algorithm that outperformed the human world champion at the game of Go, cannot play other games, despite those games being much simpler.  ... " 

Saturday, June 06, 2020

It's about Common Sense

Have repeated this many times; it was clear in the late 80s, when we built systems that could solve a problem but not implement it among decision makers. This holds for general AI, as well as for the installation of any system that interacts with humans, whether directly or indirectly through its results. Looking for more out of the Allen Institute for AI. Here, a short introduction to the problem, again:

ACM NEWS
Giving AI Common Sense  By Bennie Mols

Senior Research Manager Yejin Choi says the Allen Institute for AI is teaching neural networks representations of common-sense knowledge and reasoning.

At the Allen Institute for AI:  https://allenai.org/ in Seattle, computer scientist Yejin Choi is leading project Mosaic, which aims to teach machines common-sense knowledge and reasoning, one of the hardest and longest-standing challenges in the field of artificial intelligence (AI).

Choi, senior research manager, leads the project, which started in 2018 and recently delivered its first results. Choi is also an associate professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington in Seattle.

What is your definition of common sense?

Common sense is about the basic level of practical knowledge and reasoning that concerns everyday situations and events. This is knowledge that is commonly shared among most people. Most 10-year-old kids possess it, but it is very hard for machines. For example: don't leave the door of the fridge open too long, because the food will go bad. Or: if I drop my mug of coffee on the floor, the floor will get wet and the mug might break.

Common sense knowledge is not just about the physical world, but also about the social world. "If Kate smiles, she is probably happy."  .... '
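The examples Choi gives are the sort of if-then knowledge that classic symbolic systems encode explicitly, while Mosaic aims to have neural networks learn it. A toy forward-chaining sketch, with entirely made-up rule names, just to show the shape of the problem:

```python
# Toy common-sense rules as (premises, conclusion) pairs -- illustrative only.
RULES = [
    ({"fridge_door_open_long"}, "food_warms"),
    ({"food_warms"}, "food_goes_bad"),
    ({"mug_dropped"}, "floor_wet"),
    ({"mug_dropped"}, "mug_may_break"),
    ({"kate_smiles"}, "kate_probably_happy"),
]

def infer(facts: set) -> set:
    """Forward-chain: keep applying rules until no new conclusions appear."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

print(infer({"fridge_door_open_long"}))
# includes 'food_warms' and 'food_goes_bad'
```

The inference loop is the easy part; the hard part, which motivates projects like Mosaic, is acquiring millions of such rules and knowing when they actually apply.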

Sunday, May 31, 2020

Microsoft Builds Supercomputer for OpenAI

Some useful hints here about what is being contemplated. And, of course, big companies like MS, Google, Amazon, Apple, IBM .... have access to huge amounts of data to work with, and exposure to rich problem types too. So expect new things from them. Note the statement that we are nowhere near 'AGI' (Artificial General Intelligence) yet; I have been asked about that several times lately.

Microsoft Just Built a World-Class Supercomputer Exclusively for OpenAI  By Jason Dorrier in SingularityHub

Last year, Microsoft announced a billion-dollar investment in OpenAI, an organization whose mission is to create artificial general intelligence and make it safe for humanity. No Terminator-like dystopias here. No deranged machines making humans into paperclips. Just computers with general intelligence helping us solve our biggest problems.

A year on, we have the first results of that partnership. At this year’s Microsoft Build 2020, a developer conference showcasing Microsoft’s latest and greatest, the company said they’d completed a supercomputer exclusively for OpenAI’s machine learning research. But this is no run-of-the-mill supercomputer. It’s a beast of a machine. The company said it has 285,000 CPU cores, 10,000 GPUs, and 400 gigabits per second of network connectivity for each GPU server.

Stacked against the fastest supercomputers on the planet, Microsoft says it’d rank fifth.

The company didn’t release performance data, and the computer hasn’t been publicly benchmarked and included on the widely followed Top500 list of supercomputers. But even absent official rankings, it’s likely safe to say it’s a world-class machine.

“As we’ve learned more and more about what we need and the different limits of all the components that make up a supercomputer, we were really able to say, ‘If we could design our dream system, what would it look like?’” said OpenAI CEO Sam Altman. “And then Microsoft was able to build it.”

What will OpenAI do with this dream-machine? The company is building ever bigger narrow AI algorithms—we’re nowhere near AGI yet—and they need a lot of computing power to do it.  ... "

Monday, May 18, 2020

What is Artificial General Intelligence (AGI)

A good non-technical look at AGI, why it's not solved yet, while the narrow kind continues to spread. I would further suggest that the narrow kind is also context limited in many ways, often in unexpected ways. Impact regulation is also increasing, often motivated by unintended consequences of applying intelligence. There is still much to be done to approach general intelligence (AGI).

What is artificial general intelligence (general AI/AGI)?   By Ben Dickson in TechTalks

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

From ancient mythology to modern science fiction, humans have been dreaming of creating artificial intelligence for millennia. But the endeavor of synthesizing intelligence only began in earnest in the late 1950s, when a dozen scientists gathered in Dartmouth College, NH, for a two-month workshop to create machines that could “use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

The workshop marked the official beginning of AI history. But the two-month effort—and many others that followed—only proved that human intelligence is very complicated, and the complexity becomes more evident as you try to replicate it.

That is why, despite six decades of research and development, we still don’t have AI that rivals the cognitive abilities of a human child, let alone one that can think like an adult. What we do have, however, is a field of science that is split into two different categories: artificial narrow intelligence (ANI), what we have today, and artificial general intelligence (AGI), what we hope to achieve.  .... " 

Monday, March 16, 2020

Learning New Ways to Continually Learn

Not quite what I think of when I think of AGI (Artificial General Intelligence). But sequences of useful tasks/learning can be seen as what humans do, provided they pay attention to both existing context and the changes in context introduced by the 'intelligence'.

OpenAI’s Jeff Clune on deep learning’s Achilles’ heel and a faster path to AGI
 By Khari Johnson in Venturebeat

Neural networks learn differently from people. If a human comes back to a sport after years away, they might be rusty but they will still remember much of what they learned decades ago. A typical neural network, on the other hand, will forget the last thing it was trained to do. Virtually all neural networks today suffer from this “catastrophic forgetting.”

It’s the Achilles’ heel of machine learning, OpenAI research scientist Jeff Clune told VentureBeat, because it prevents machine learning systems from “continual learning,” the ability to remember previous tasks. But some systems can be taught to remember.

Before joining OpenAI last month to lead its multi-agent team, Clune worked with researchers from Uber AI Labs and the University of Vermont. This week, they collectively shared ANML (a neuromodulated meta-learning algorithm), which is able to learn 600 sequential tasks with minimal catastrophic forgetting.

“This is relatively unheard-of in machine learning. To my knowledge, it’s the longest sequence of tasks that AI has been able to do, and at the end of it, it’s still pretty good at all the tasks that it saw,” Clune said. “I think that these sorts of advances will be used in almost every situation where we use AI. It will just make AI better.”

Clune helped cofound Uber AI Labs in 2017, following the acquisition of Geometric Intelligence, and is one of seven coauthors of a paper called “Learning to Continually Learn” published Monday on arXiv.   https://arxiv.org/abs/2002.09571   ...... "
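The "catastrophic forgetting" Clune describes can be demonstrated with even a one-parameter model trained by gradient descent: fitting a second task overwrites the weight that solved the first. A deliberately minimal pure-Python illustration (not the ANML setup from the paper):

```python
import random

def train(w, task_fn, steps=200, lr=0.05):
    """SGD on squared error for a one-weight model y = w * x."""
    for _ in range(steps):
        x = random.uniform(-1, 1)
        err = w * x - task_fn(x)
        w -= lr * 2 * err * x  # gradient of (w*x - y)^2 with respect to w
    return w

def loss(w, task_fn, xs=(-1.0, -0.5, 0.5, 1.0)):
    """Mean squared error on a few fixed probe points."""
    return sum((w * x - task_fn(x)) ** 2 for x in xs) / len(xs)

random.seed(0)
task_a = lambda x: 2.0 * x   # task A: fit y = 2x
task_b = lambda x: -2.0 * x  # task B: fit y = -2x

w = train(0.0, task_a)
print(round(loss(w, task_a), 4))  # near 0: task A learned

w = train(w, task_b)              # continue training the same model on task B
print(round(loss(w, task_b), 4))  # near 0: task B learned
print(round(loss(w, task_a), 4))  # large: task A has been 'forgotten'
```

Real networks have millions of weights rather than one, but the mechanism is the same: the parameters that encoded the old task are repurposed for the new one, which is the failure mode ANML's neuromodulation is designed to mitigate.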

Sunday, February 02, 2020

Can an AI System Reinvent Physics?

The idea has been kicked around a bit. I see Gary Marcus is one of the authors; have much enjoyed his book on the current limitations of AI. See my review at the tag below. So the answer is: if we include a rather narrow range of predictions, based on lots of data, we might get some predictive things that look like 'laws'. But will they be useful broadly, testable in broader contexts?

Are Neural Networks About to Reinvent Physics?
The revolution of machine learning has been greatly exaggerated.
By   Gary Marcus and Ernest Davis in Nautil.us

Can AI teach itself the laws of physics? Will classical computers soon be replaced by deep neural networks? Sure looks like it, if you’ve been following the news, which lately has been filled with headlines like, “A neural net solves the three-body problem 100 million times faster: Machine learning provides an entirely new way to tackle one of the classic problems of applied mathematics,” and “Who needs Copernicus if you have machine learning?”. The latter was described by another journalist, in an article called “AI Teaches Itself Laws of Physics,” as a “monumental moment in both AI and physics,” which “could be critical in solving quantum mechanics problems.”

The trouble is, the authors have given no compelling reason to think that they could actually do this.

None of these claims is even close to being true. All derive from just two recent studies that use machine learning to explore different aspects of planetary motion. Both papers represent interesting attempts to do new things, but neither warrant the excitement. The exaggerated claims made in both papers, and the resulting hype surrounding these, are symptoms of a tendency among science journalists—and sometimes scientists themselves—to overstate the significance of new advances in AI and machine learning.

As always, when one sees large claims made for an AI system, the first question to ask is, “What does the system actually do?”  .... "

Saturday, February 01, 2020

Looking at the Google Meena Chatbot

Just pointed to this; the claims are considerable. A very good conversational model would be a big step forward. Current assistants do poorly except for the simplest requests. I want assistants to not only consider the context of a question, but to also expand that context during conversation. That's a big step towards 'general intelligence'. Will be examining this. Comments out there from people that have? More to follow.

Just how big a deal is Google’s new Meena chatbot model?
By Ronald Ashri, Greenshot labs,  @Ronald_IStos in Venturebeat

Technology behemoths like Google and Facebook have got us used to, even fatigued by, their never-ending string of impressive announcements of progress in the AI field. Nevertheless, when Google announced that it has built a “conversational agent that can chat about… anything,” even the most jaded amongst us had to pay attention.

Since I work in the field, helping organizations build conversational solutions, I was particularly intrigued. One of the biggest challenges for bots is to handle the infinite possible phrases that a user might say and respond appropriately. A bot that can chat about anything seems like just the thing we would need to solve this challenge. So the question becomes, exactly what impact Google’s new bot, called Meena, will have on organizations looking to deploy conversational AI applications. Have we found the holy grail? Will our bots finally stop saying “I’m sorry, I didn’t quite understand that”? Well, the short answer is that no, we are not quite there yet. Nevertheless, Meena is incredibly impressive and represents a fascinating attempt to solve the problem. In the next few paragraphs, I will summarize what Google did and how this might impact conversational AI in the days, months, and years to come.   .... " 

Friday, January 03, 2020

Foresight Institute AGI Strategy Meeting

Was reminded of the Foresight Institute, which we actively followed regarding nanotech developments since the 90s. Just got their yearly update, of interest. More details at the link; reviewing now:

We're excited to share the report of our 2019 AGI Strategy Meeting: Toward Cooperation with you!

The meeting gathers representatives of major AI and AI safety organizations with policy strategists and other relevant actors with the goal of fostering cooperation amongst global AGI-relevant actors but also more directly amongst participants to contribute to a flourishing long-term community. 

While discussions followed the Chatham House Rule, a high-level summary of the sessions and action items is available in the report. 

The 2017 meeting in this series focused on drafting policy scenarios for different AI time frames, and was followed by the 2018 meeting that focused on increasing coordination among AGI-relevant actors, especially the US and China. The 2019 meeting expanded on this topic by mapping concrete strategies toward cooperation.

We welcome questions, comments, and feedback!  ... " 

Wednesday, December 04, 2019

AI to Hit the Wall?

Depends on your expectations. If you want 'General, Human AI (AGI)' you could expect some long delays, but there will be advances along the way. We have waited since the 80s and made very powerful things along the way. The current 'magic' solutions do seem remarkable, but quite constrained; will they show us fundamentally new paths?

Facebook's Head of AI Says the Field Will Soon ‘Hit the Wall’ in Wired
Jerome Pesenti is encouraged by progress in artificial intelligence, but sees the limits of the current approach to deep learning.

Jerome Pesenti leads the development of artificial intelligence at one of the world’s most influential—and controversial—companies. As VP of artificial intelligence at Facebook, he oversees hundreds of scientists and engineers whose work shapes the company’s direction and its impact on the wider world.

AI is fundamentally important to Facebook. Algorithms that learn to grab and hold our attention help make the platform and its sister products, Instagram and WhatsApp, stickier and more addictive. And, despite some notable AI flops, like the personal assistant M, Facebook continues to use AI to build new features and products, from Instagram filters to augmented reality apps. .... "

Monday, July 23, 2018

Defining AI to Tune Expectations

Correct, the expectations are still much too high for the term AI, and too confused for related analytics. Narrow applications are here (again), but now can do more different things. AGI is much further away, with estimates of its creation ranging from 5 to 500 years.

We Need to Fine-Tune Our Definition of Artificial Intelligence
By Thomas Hornigold in SingularityHub

" ... Narrow or Weak AI versus broadly applicable, human-like AI: Artificial General Intelligence (AGI). Needing to understand the difference and implications of both. We learned this very well. We worked the weak first. The weak side is often like anything you would use a computer for, but with a 'cognitive' element added. AGI meanwhile adds broader capabilities, and in recent years things like assistants have moved us in that direction, but still at their core much more primitive than human. ... "