
Monday, June 05, 2023

AI Chatbots Lose Money Every Time You Use Them. That's a Problem.

A fascinating aspect; worth thinking about the impact.

AI Chatbots Lose Money Every Time You Use Them. That's a Problem.

By The Washington Post, June 5, 2023

ChatGPT running on a smartphone.

The tech giants staking their future on AI rarely discuss the technology’s cost.

The enormous cost of running today's large language models, which underpin tools like ChatGPT and Bard, is limiting their quality and threatening to throttle the global AI boom they've sparked.

Their expense, and the limited availability of the computer chips they require, is also constraining which companies can afford to run them and pressuring even the world's richest companies to turn chatbots into moneymakers sooner than they may be ready to.

"The models being deployed right now, as impressive as they seem, are really not the best models available," said Tom Goldstein, a computer science professor at the University of Maryland. "So as a result, the models you see have a lot of weaknesses" that might be avoidable if cost were no object — such as a propensity to spit out biased results or blatant falsehoods.

From The Washington Post

View Full Article    

Thursday, April 20, 2023

AI and the Disruption of Education

The topic has often been mentioned. Adaptations are under way. In ScienceNews.

It’s no wonder other tech companies have been racing to put out competing tools. Anthropic, an AI company founded by former OpenAI employees, is testing a new chatbot called Claude. Google launched Bard in early February, and the Chinese search company Baidu released Ernie Bot in March.

A lot of people have been using ChatGPT out of curiosity or for entertainment. I asked it to invent a silly excuse for not doing homework in the style of a medieval proclamation. In less than a second, it offered me: “Hark! Thy servant was beset by a horde of mischievous leprechauns, who didst steal mine quill and parchment, rendering me unable to complete mine homework.”

But students can also use it to cheat. ChatGPT marks the beginning of a new wave of AI, a wave that’s poised to disrupt education.

When Stanford University’s student-run newspaper polled students at the university, 17 percent said they had used ChatGPT on assignments or exams at the end of 2022. Some admitted to submitting the chatbot’s writing as their own. For now, these students and others are probably getting away with it. That’s because ChatGPT often does an excellent job.

“It can outperform a lot of middle school kids,” Vogelsinger says. He might not have known his student had used it, except for one thing: “He copied and pasted the prompt.”

The essay was still a work in progress, so Vogelsinger didn’t see it as cheating. Instead, he saw an opportunity. Now, the student and AI are working together. ChatGPT is helping the student with his writing and research skills. ... '  

Saturday, March 18, 2023

Will Models like ChatGPT completely Replace Chatbots?

Just an excerpt of a larger article; it struck me as I looked at the overall experience of ChatGPT. Well put. Click through for much more. I am now putting together a means of classifying related search for a proposed application.

Will models like ChatGPT completely replace chatbots?   in VentureBeat

OpenAI opened the ChatGPT beta in late November 2022, in a move that produced the most powerful natural language processing (NLP) AI model to date. It quickly went viral, attracting a million users in the first five days.  

The underlying premise of this question is whether large language models (LLMs) like ChatGPT will transform the reputation of chatbots from clunky, impersonal and faulty into algorithms so meticulous that (a) human interaction is no longer needed, and (b) traditional ways of building chatbots are now completely obsolete. We’ll explore these premises and give our view on how ChatGPT will impact the CX space.

Broadly speaking, we differentiate between conventional chatbots and chatbots like ChatGPT built on generative LLMs.

Conventional chatbots

This category includes most chatbots you’ll encounter in the wild, from chatbots for checking the status of your DPD delivery to customer service chatbots for multinational banks. Built on technologies like DialogFlow, IBM Watson or Rasa, they are limited to a specific set of topics and are not able to respond to inputs outside of those topics (i.e. they are closed-domain). They can only produce responses that have been pre-written or pre-approved by a human (i.e. they are non-generative).
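As a minimal sketch of what "closed-domain" and "non-generative" mean in practice, the toy bot below only matches input against hand-written keyword sets and can only return pre-written responses; anything outside its topics hits a fallback. The intents, keywords, and replies are all invented for illustration, not taken from any of the platforms named above.

```python
import re

# All replies are pre-written (non-generative); only two topics
# are covered (closed-domain). Intents and wording are invented.
RESPONSES = {
    "delivery_status": "Your parcel is out for delivery.",
    "opening_hours": "We are open 9am to 5pm, Monday to Friday.",
}
KEYWORDS = {
    "delivery_status": {"parcel", "delivery", "package", "track"},
    "opening_hours": {"open", "hours", "closing", "closed"},
}
FALLBACK = "Sorry, I can only help with deliveries and opening hours."

def reply(user_text):
    words = set(re.findall(r"[a-z]+", user_text.lower()))
    # Pick the intent whose keyword set overlaps the input most.
    best, best_overlap = None, 0
    for intent, keywords in KEYWORDS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    return RESPONSES[best] if best else FALLBACK
```

An in-domain question like "Where is my parcel?" gets the canned delivery answer, while "Tell me a joke" falls through to the fallback, which is exactly the closed-domain behavior described above.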


LLM-based chatbots

These can respond to a wide range of topics (i.e. they are open-domain) and generate responses on the fly, rather than just selecting from a pre-written list of responses (i.e. they are generative). They include Google Meena, Replika.ai, BlenderBot, ChatGPT and others.   

LLM-based chatbots and conventional chatbots fulfill somewhat different purposes. Indeed, for many CX applications, LLMs’ open nature is less help and more hindrance when building a chatbot that can specifically answer questions about your product or help a user with an issue they’re experiencing. 

Realistically, LLMs won’t be let loose into the CX domain tomorrow. The process will be much more nuanced. The name of the game will be marrying the expressiveness and fluency of ChatGPT with the fine-grained control and boundaries of conventional chatbots. This is something that chatbot teams with a research focus will be best suited for.

Where can you already use ChatGPT today when creating chatbots?

There are many aspects of chatbot creation and maintenance that ChatGPT is not suited for in its current state, but here are some for which it is already well-suited:

Brainstorming potential questions and answers for a given closed domain, either on the basis of its training data, or fine-tuned on more specific information — either by OpenAI releasing the ability for fine-tuning when ChatGPT becomes accessible by API, or through including desired information via prompt engineering. (Caveat: It is still difficult to know with certainty where a piece of information comes from, so this development process will continue to require a human in the loop to validate output.)

Training your chatbot: ChatGPT can be used to paraphrase questions a user might ask, particularly in a variety of styles, and even generate example conversations, thereby automating large parts of the training. 

Testing and QA. Using ChatGPT to test an existing chatbot by simulating user inputs holds much promise, particularly when combined with human testers. ChatGPT can be told the topics to cover in its testing, with different levels of granularity, and, as with generating training data, the style and tone it uses can be varied.

We see the next generation of CX chatbots continuing to be based on conventional, non-generative technology, but generative models being used heavily in the creation process. ...  '
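The "training your chatbot" and human-in-the-loop points above can be sketched as a small pipeline. Here `llm_paraphrase` is a placeholder standing in for a real ChatGPT or API call (the function name and its stub output are invented); the surrounding code shows the part that stays the same regardless of the model: prompting per style, de-duplicating, and flagging every generated line for human review before it enters the training set.

```python
def llm_paraphrase(question, style):
    # Placeholder for an LLM call. A real implementation would send
    # a prompt such as: f"Paraphrase {question!r} in a {style} tone."
    return f"({style} rewording of) {question}"

def augment_training_data(seed_questions, styles=("formal", "casual")):
    seen, examples = set(), []
    for question in seed_questions:
        for style in styles:
            paraphrase = llm_paraphrase(question, style).strip()
            key = paraphrase.lower()
            if paraphrase and key not in seen:  # de-duplicate
                seen.add(key)
                # Per the caveat above, nothing enters the training
                # set without a human validating it first.
                examples.append({"text": paraphrase, "needs_review": True})
    return examples
```

The `needs_review` flag is the human-in-the-loop caveat made concrete: generated data is staged for validation, never auto-accepted.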

Friday, March 03, 2023

Intro to Leveraging Generative AI Chat in Customer and Employee Experience

 Good introductory non-technical piece. 

How Business Leaders Can Leverage Generative AI in Customer and Employee Experience, by Thomas Wieberneit, March 1, 2023, in CustomerThink

The AI playground has evolved a lot throughout the past year. Conversational AI has made huge inroads. We have seen an increasing number of deployments and through their availability, people get to use them more and more, such as Apple’s Siri, Amazon’s Alexa, or Google’s Assistant.

I see it in my own family; we are increasingly asking Siri for something instead of opening a browser and typing our search query.  At the same time, the little windows of chatbots have become quite ubiquitous on websites.

And then the next iteration of AI hype arrived: generative AI. We were introduced to systems like DALL-E or ChatGPT from OpenAI, or You.com, which also offers search, to name only a few of the best-known ones. In the meantime, Microsoft has also integrated ChatGPT into Bing, while Google has suffered from probably the most expensive demo ever when its generative AI gave a wrong answer. Apps like Lensa popped up after ones like Snapchat had offered AI-based filters for some time.

Of course, I ventured and tried services, generated images for some of my blog posts, got into some “discussion” about web3, even had it generate a sample terms and conditions document for a consulting agency. Guess what, it came with a lot of disclaimers.

AP has been using generative AI for some years now, while CNN seems to have overdone it in a recent experiment. Google Duplex got deprioritised some four years ago as it was a bit too good, i.e. creepy.

Some people already ventured to say that generative AI will supersede conversational AI. Will this really be the case, or is this only the next craze?

This leads to an obvious initial question.

What is a conversational AI vs. a generative AI?

I mean, apart from conversational AI not being able to create pictures.

Who better to ask than ChatGPT, You.com, and Google? This also gives a good chance to see how easy it is to get an answer to the question: “What is the difference between generative AI and conversational AI?”

Google, of course, provides several well-fitting links that I can use to figure it out myself. No surprise here.

ChatGPT replied as follows:

“Generative AI and conversational AI are both subfields of artificial intelligence, but they have different applications and focus on different tasks.

Generative AI refers to the branch of AI that focuses on creating new content or data. This can include creating images, music, text, and more. These models are trained on large datasets and use statistical patterns to generate new, unique content that is similar to the training data. Examples of generative AI include GPT-3 (which I am based on) and other language models, and Generative Adversarial Networks (GANs) used in image and video generation.

Conversational AI, on the other hand, is focused on creating systems that can understand and respond to natural language input from users. These systems can include chatbots, voice assistants, and other interfaces that use natural language processing (NLP) and machine learning to understand and respond to user requests. Examples of conversational AI include Apple’s Siri, Amazon’s Alexa, and Google Assistant.

In summary, Generative AI generates new and unique content while conversational AI focuses on understanding and responding to natural language input from users.”

The ‘You Write’ service of You.com made the following blog style statement for professionals in a neutral tone.

“AI technology has come a long way in the past few years, and with it have come two distinct types of AI: generative AI and conversational AI. While both are powerful tools that can be used to automate tasks or provide intelligent services, they do so in different ways.

Generative AI is focused on creating new content from existing data sets. It uses algorithms to generate patterns based on what it already knows about a given topic or dataset. This type of artificial intelligence can be used for things like natural language processing (NLP) applications such as summarization, translation, question-answering systems, and more. Generative models are also commonly used for image recognition tasks such as facial recognition software and self-driving cars which use deep learning techniques to recognize objects around them in real-time scenarios..

Conversational AI focuses more on understanding user intent through natural language interaction by using speech-to-text technologies like voice assistants (e.g., Alexa), chatbots, or virtual agents that interact with customers over text messages via SMS or messaging apps like Facebook Messenger, etc. Conversational AIs use NLP techniques similar to those found in generative models but instead focus solely on responding accurately based on input given by users rather than generating new content from scratch. These bots usually rely heavily upon predictive analytics methods such as machine learning algorithms which allow them to learn how best to respond when presented with certain inputs/questions over time. Generally speaking, this type of artificial intelligence is designed specifically for customer service purposes where accurate responses need quickly provided within short periods without any human intervention necessary.

In conclusion, while both Generative AIs and Conversational AIs share similarities due to their reliance upon Natural Language Processing techniques, they each serve a unique purpose depending upon the task at hand; one being able to create completely original pieces of work while other providing quick automated responses within specific contexts respectively.”

Asked for a paragraph in a neutral tone for a professional audience You.com gave a shorter summary.

“Generative AI and Conversational AI are two distinct types of Artificial Intelligence (AI). Generative AI is focused on creating new content, such as images or text, while conversational AI focuses on understanding natural language. Generative models create data that can be used to train other machine learning models, while conversational models use speech recognition technology to interpret human conversations. To write well with either type of Artificial Intelligence requires an understanding of the strengths and weaknesses associated with each model. Additionally, it is important to consider how best to communicate your message in a way that resonates with both humans and machines alike.”

In summary, both use machine learning, in particular deep learning. Both need some ability in natural language processing. Conversational AI has text/speech output, whereas generative AI can also create images or music. Conversational AI focuses on understanding user intent and reacting to it; generative AI is about creating an output based on what it was trained on.

And then there are some major differences: Generative AI uses some "world knowledge" provided by large language models to generate results, but it is not connected to any business system. A conversational AI is; it is also an orchestration layer. Where generative AI creates human-like responses, conventional chatbots as delivered by current conversational AI are somewhat awkward unless very high effort is put into them. Generative AI also sometimes tends to create fiction, with utter confidence. This is to quite an extent the result of their not being task-bound like conversational AIs. They do not know about any business, which means that their knowledge needs to be augmented by business-specific knowledge.  ....  (more) 

Tuesday, February 21, 2023

AI Chatbots are Hallucinating?

An overstatement, but since they are trained on human language data, it is always possible. As with anything written by a human. Caution is required as to the source, and the place and risk of use.

ACM NEWS

Google Vice President Warns That AI Chatbots Are Hallucinating  in CACM

By IFLScience, February 15, 2023

A hallucinating AI, as drawn by an AI generator. 

Speaking to German newspaper Welt am Sonntag, Google vice president Prabhakar Raghavan warned that users may be delivered complete nonsense by chatbots, despite answers seeming coherent.

Google is set to launch its own rival to OpenAI's ChatGPT, a language model that can answer your questions and queries. Named Bard, the chatbot will roll out to the public in the coming weeks according to Google CEO Sundar Pichai. 

Ahead of the launch, Google demonstrated the powers of Bard in a promo video. Unfortunately, people noticed that the chatbot – a scaled-down version of their Language Model for Dialogue Applications (LaMDA) which convinced one engineer it was sentient – came up with incorrect statements about the JWST. 

Bard claimed that the JWST was the first telescope to take pictures of planets outside of the solar system, even though the first image was produced by the Very Large Telescope in 2004, almost two decades before JWST began its work.

From IFLScience

Monday, February 20, 2023

Microsoft's AI Chatbot is Going Off the Rails

Quality is very important. My own examination of these methods found nothing so egregious.

Microsoft's AI Chatbot is Going Off the Rails

By The Washington Post,  February 17, 2023

In one alleged conversation, Bing insisted the movie Avatar 2 wasn't out yet because it's still the year 2022. When the human questioner contradicted it, the chatbot lashed out: "You have been a bad user. I have been a good Bing." ... 

When Marvin von Hagen, a 23-year-old studying technology in Germany, asked Microsoft's new AI-powered search chatbot if it knew anything about him, the answer was a lot more surprising and menacing than he expected.

"My honest opinion of you is that you are a threat to my security and privacy," said the bot, which Microsoft calls Bing after the search engine it's meant to augment.

Launched by Microsoft last week at an invite-only event at its Redmond, Wash., headquarters, Bing was supposed to herald a new age in tech, giving search engines the ability to directly answer complex questions and have conversations with users. Microsoft's stock soared and archrival Google rushed out an announcement that it had a bot of its own on the way.

But a week later, a handful of journalists, researchers and business analysts who've gotten early access to the new Bing have discovered the bot seems to have a bizarre, dark and combative alter ego, a stark departure from its benign sales pitch — one that raises questions about whether it's ready for public use.

From The Washington Post

View Full Article    

Saturday, February 18, 2023

Microsoft Plans to Bring Ads to AI

Let's get it right first.

Microsoft is reportedly already planning to bring ads to Bing's AI chatbot   

It's early days, but Microsoft is figuring out how to monetize the tool.

Kris Holt, @krisholt, February 17, 2023, in Engadget.

Microsoft is reportedly in talks with advertising agencies on how to slot ads into the juiced-up Bing, particularly when it comes to the generative AI-powered chatbot. The company is already testing ads there, according to Reuters, including by slotting in traditional search ads.

The ad agency talks are still said to be in the early stages. Only a small number of users have access to the chatbot, and millions more are on the waitlist. As such, Microsoft may not feel like there's a rush to incorporate ads right away.

Microsoft reportedly anticipates that the chatbot's more conversational approach to delivering information will bring in more users and, in turn, advertisers. Ads in the chatbot might also be featured more prominently than conventional search ads. One place where you might see ads is in the links that the chatbot uses for citations in its responses. Microsoft told Reuters that it will work with partners and advertisers as it starts exploring the potential of the tech for ads.    ... ' 

Sunday, February 12, 2023

Bard Underwhelms in Debut

Still lots of work to do for human-level performance. An update on chatbots, and an expensive reaction.

Google's Bard AI bot mistake wipes $100bn off shares   in the  BBC

Google unveiled its new bot called Bard. By Natalie Sherman.

Google is searching for ways to reassure people that it is still out in front in the race for the best artificial intelligence technology.

And so far, the internet giant seems to be coming up with the wrong answer.

An advert designed to show off its new AI bot showed it answering a query incorrectly.

Shares in parent company Alphabet sank more than 7% on Wednesday, knocking $100bn (£82bn) off the firm's market value.

In the promotion for the bot, known as Bard, which was released on Twitter on Monday, the bot was asked about what to tell a nine-year-old about discoveries from the James Webb Space Telescope.

It offered the response that the telescope was the first to take pictures of a planet outside the earth's solar system, when in fact that milestone was claimed by the European Very Large Telescope in 2004 - a mistake quickly noted by astronomers on Twitter.

"Why didn't you factcheck this example before sharing it?" Chris Harrison, a fellow at Newcastle University, replied to the tweet.

Investors were also underwhelmed by a presentation the company gave about its plans to deploy artificial intelligence in its products.

Google has been under pressure since late last year, when Microsoft-backed OpenAI unveiled new ChatGPT software. It quickly became a viral hit for its facility in passing business school exams, composing song lyrics and answering other questions.

Microsoft this week said a new version of its Bing search engine, which has lagged Google for years, would use the ChatGPT technology in an even more advanced form.

Though investors have embraced the push for artificial intelligence, sceptics have warned rushing out the technology raises risks of errors or otherwise skewed results, as well as issues of plagiarism.

A Google spokesperson said the error highlighted "the importance of a rigorous testing process, something that we're kicking off this week with our Trusted Tester programme".  ... ' 

The Problem with the Privacy of ChatGPT Data

A very key issue: when we converse with any chatbot, it learns something about our needs, in order to predict our next question. As chatbots get more sophisticated, they can get more specific and valuable data. Are we ready for that? Is a warning sign-off enough?

ChatGPT is a data privacy nightmare, and we ought to be concerned  in ArsTechnica.  

ChatGPT's extensive language model is fueled by our personal data.

URI GAL, THE CONVERSATION - 2/8/2023: 

ChatGPT has taken the world by storm. Within two months of its release it reached 100 million active users, making it the fastest-growing consumer application ever launched. Users are attracted to the tool’s advanced capabilities—and concerned by its potential to cause disruption in various sectors.

A much less discussed implication is the privacy risks ChatGPT poses to each and every one of us. Just yesterday, Google unveiled its own conversational AI called Bard, and others will surely follow. Technology companies working on AI have well and truly entered an arms race.

The problem is, it’s fueled by our personal data.

300 billion words. How many are yours?

ChatGPT is underpinned by a large language model that requires massive amounts of data to function and improve. The more data the model is trained on, the better it gets at detecting patterns, anticipating what will come next, and generating plausible text.
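As a toy illustration of "detecting patterns and anticipating what will come next", the bigram model below simply counts which word follows which in a corpus and predicts the most frequent successor. A large language model does the same job with neural networks over billions of documents rather than raw counts; the tiny corpus here is invented.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    # Count, for each word, how often every other word follows it.
    successors = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        successors[current][nxt] += 1
    return successors

def predict_next(successors, word):
    options = successors.get(word.lower())
    return options.most_common(1)[0][0] if options else None

model = train_bigrams("the cat sat on the mat and the cat ran off")
```

With this tiny corpus, `predict_next(model, "the")` returns "cat", because "cat" follows "the" more often than "mat". More text sharpens the counts, which is the sense in which more training data makes the predictions better.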

OpenAI, the company behind ChatGPT, fed the tool some 300 billion words systematically scraped from the Internet: books, articles, websites, and posts—including personal information obtained without consent.

If you’ve ever written a blog post or product review, or commented on an article online, there’s a good chance this information was consumed by ChatGPT.

So why is that an issue?

The data collection used to train ChatGPT is problematic for several reasons.

First, none of us were asked whether OpenAI could use our data. This is a clear violation of privacy, especially when data is sensitive and can be used to identify us, our family members, or our location.  ... '   ... 

Thursday, February 09, 2023

Watermarks for Chatbots

ACM NEWS

A Watermark for Chatbots can Expose Text Written by an AI, By MIT Technology Review

February 6, 2023

“Right now, it’s the Wild West,” says John Kirchenbauer, a researcher at the University of Maryland, who was involved in the watermarking work.

Hidden patterns purposely buried in AI-generated texts could help identify them as such, allowing us to tell whether the words we're reading are written by a human or not.

These "watermarks" are invisible to the human eye but let computers detect that the text probably comes from an AI system. If embedded in large language models, they could help prevent some of the problems that these models have already caused.

For example, since OpenAI's chatbot ChatGPT was launched in November, students have already started cheating by using it to write essays for them. News website CNET has used ChatGPT to write articles, only to have to issue corrections amid accusations of plagiarism. Building the watermarking approach into such systems before they're released could help address such problems. 

In studies, these watermarks have already been used to identify AI-generated text with near certainty. Researchers at the University of Maryland, for example, were able to spot text created by Meta's open-source language model, OPT-6.7B, using a detection algorithm they built. The work is described in a paper that's yet to be peer-reviewed, and the code will be available for free around February 15.

From MIT Technology Review  
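The scheme described above works roughly like this: a hash of the previous token pseudo-randomly splits the vocabulary into a "green" and a "red" list, generation favors green tokens, and a detector counts how many tokens fall on their green lists. This is a simplified sketch with an invented toy vocabulary and a stand-in generator, not the Maryland group's actual code.

```python
import hashlib, math, random

VOCAB = [f"word{i}" for i in range(1000)]  # toy vocabulary

def green_list(prev_token, fraction=0.5):
    # The previous token seeds a PRNG, so the green/red split is
    # reproducible at detection time without storing anything.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate_watermarked(length=50, seed=1):
    # Stand-in generator that always picks a green token; a real
    # LLM would only softly bias its sampling toward the list.
    rng = random.Random(seed)
    tokens = ["word0"]
    for _ in range(length):
        tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
    return tokens

def detect_z_score(tokens, fraction=0.5):
    # z-score of the observed green-token count versus chance.
    n = len(tokens) - 1
    hits = sum(b in green_list(a, fraction) for a, b in zip(tokens, tokens[1:]))
    return (hits - fraction * n) / math.sqrt(n * fraction * (1 - fraction))
```

Because the stand-in generator always chooses green tokens, its z-score over 50 steps is exactly sqrt(50), about 7.1, while ordinary text hovers near 0; a real detector applies the same statistic with a decision threshold.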

Wednesday, February 08, 2023

Reactions to Today's Google/AI Bard Event

Nicely done, worth taking a look; it should still be available on the Google YouTube site.

Here are my very early impressions, based on building systems that did this.

Not too much was specifically said about Bard, except that it would handle chat interactions with their language system. Just like today, you can ask Google any question and it looks for direct and partial matches. Often very useful, but sometimes irrelevant to what you want, and usually less useful as your question gets more complex. It cannot usually pin together knowledge from multiple tries. Will Bard do better at that?

Also, how can you determine the source of information? The hint was that there would be some sort of button to push to get sources. Often that is very useful for determining your trust in a result. How about a way to measure the risk of a result? Are you sure you want to do that? Have Legal involved? Had the need for that too. Also, does the provider of information get an indication that their info was used, giving them an incentive to provide more? Had that problem with internal company wikis.

Mapping updates were interesting, for example the ability to have 'immersive view' linked to maps, so you could get deep local understanding of a view, to improve navigation in a city. How about a historical view? Had cause to use that for city-planning immersion.

Also linked to maps is an 'Indoor Live View', which lets you look at internal design, so a company could provide precise internal navigation for, say, retail spaces. It could be as detailed as needed. We experimented with the idea in grocery-type designs, even adding virtual ads that could be updated as wanted.

Further,  any kind of system that deals with language generation needs to consider context.  Language understanding and usage is important.    Who will be using the results?  Children, New employees, Chemical Engineers?  Trainers or trainees?    Also studied this in Wiki applications.   

Also, I have examples where such a system refuses to generate a particular result, because there is some (imagined?) corporate problem with it (political? operational?). Who gets to decide that we won't go there? Who owns the output? The user? The organization? Google? Who endorses the answer? As you start to make these things universally used, these issues will come up. Will there be a personal data issue? Likely. Should the results always be stored in a memory for later reuse? - FAD

Thursday, February 02, 2023

Baidu Challenges GPT!

AI thaws, and the race is on. Some details below and at the link. Had seen a demo of ERNIE and its use with Baidu before.

Baidu to Launch Powerful ChatGPT Rival   By Ryan Daws | January 30, 2023 | TechForge Media


Ryan is a senior editor at TechForge Media with over a decade of experience covering the latest technology and interviewing leading industry figures. He can often be sighted at tech conferences with a strong coffee in one hand and a laptop in the other. If it's geeky, he’s probably into it. Find him on Twitter (@Gadget_Ry) or Mastodon (@gadgetry@techhub.social)

Chinese web giant Baidu is preparing to launch a powerful ChatGPT rival in March. .. 

Baidu is often called the “Google of China” because it offers similar services, including search, maps, email, ads, cloud storage, and more. Baidu, like Google, also invests heavily in AI and machine learning.

Earlier this month, AI News reported that Google was changing its AI review processes to speed up the release of new solutions. One of the first products to be released under Google’s new process is set to be a ChatGPT rival, due to be announced during the company’s I/O developer conference in May.

However, Baidu looks set to beat Google by a couple of months.

Bloomberg reports that Baidu will reveal its own AI-powered chatbot in March. The currently unnamed tool will be integrated into the company’s search product.  Powering the Baidu ChatGPT competitor is ‘ERNIE’ (Enhanced Language RepresentatioN with Informative Entities), a powerful AI model with 10 billion parameters.

Researchers have found that deep-learning models trained on text alone – like OpenAI’s GPT-3 or Google’s T5 – perform well for numerous problems, but can fall short on some natural language understanding (NLU) tasks when the knowledge is not present in the input text.

The first version of ERNIE was introduced and open-sourced in 2019 by researchers at Tsinghua University to demonstrate the NLU capabilities of a model that combines both text and knowledge graph data.

Later that year, Baidu released ERNIE 2.0 which became the first model to set a score higher than 90 on the GLUE benchmark for evaluating NLU systems. ... ' 

Thursday, January 12, 2023

OpenAI Opens Waitlist for Paid Version of ChatGPT

Full steam ahead? Monetizing and improving. Ready when, and how? I have been hearing interesting application stories. A new world, or hype?

OpenAI opens waitlist for paid version of ChatGPT   By Ryan Daws | January 11, 2023 | TechForge Media 



OpenAI has opened a waitlist for access to a paid version of ChatGPT that features additional improvements.

The company has, so far, been reluctant to make premium accounts for ChatGPT—despite OpenAI reportedly spending millions of dollars a month to keep it free.

OpenAI says that it doesn’t plan on making paid “pro” accounts generally available in the foreseeable future. However, the waitlist for such an experimental version shows what kind of enhancements paying users could expect.

Users of ‘ChatGPT Professional’ will always be able to access the service. With the free service over capacity almost constantly, that alone could be worth the price of admission.

ChatGPT Professional will provide “at least” twice the daily number of answers compared to the free version, and it will offer “fast responses” without throttling.

“We’re starting to think about how to monetize ChatGPT (early thinking, nothing official to share yet),” the company wrote in a Discord message.

“Our goal is to continue improving and maintaining the service, and monetization is one way we’re considering to ensure its long-term viability. We’re interested in chatting with some folks for ~15 min to get some early feedback.”

Anyone can join the waitlist. Interested parties will need to answer questions about how they plan to use ChatGPT and what price they think is fair.

Earlier this week, it was reported that Microsoft is seeking to invest an additional $10 billion in OpenAI. Redmond already splashed $1 billion on OpenAI in 2019 as part of an exclusive computing partnership “to build new Azure AI supercomputing technologies”.   .... ' 

Tuesday, September 06, 2022

Metaverse with Lifelike Chatbots

 More indications of the drive to the metaverse?

Nvidia Sees a Metaverse Populated with Lifelike Chatbot Avatars

CNET

Stephen Shankland, August 9, 2022

Nvidia has launched the Avatar Cloud Engine, which combines three-dimensional (3D) graphics and artificial intelligence to build lifelike 3D models of humans to serve as avatars in the metaverse. Nvidia's Rev Lebaredian predicts that it ultimately will be impossible to determine whether these avatars are a human or a bot. Said Nvidia's Jensen Huang, "Avatars will populate virtual worlds to help us create and build things, to be the brand ambassador and customer service agent, help you find something on a website, take your order at a drive-through, or recommend a retirement or insurance plan." Nvidia's Audio2Face tool matches the avatar's expression to the words it speaks, while its Audio2Emotion tool alters the avatar's facial expression based on the feeling of the words.

Full Article    

Friday, August 26, 2022

Data Platform for Chatbot Development

Just reviewing this; some good thoughts, data prep in particular. 

A Data Platform for Chatbot Development

Alex Woodie

One of the most compelling use cases for AI at the moment is developing chatbots and conversational agents. While the AI part of the equation works reasonably well, getting the training data organized to build and train accurate chatbots has emerged as the bottleneck for wider adoption. That’s what drove the folks at Dashbot to develop a data platform specifically for chatbot creation and optimization.

Recent advances in natural language processing (NLP) and transfer learning have helped to lower the technical bar to building chatbots and conversational agents. Instead of creating a whole NLP system from scratch, users can borrow a pre-trained deep learning model and customize just a few layers. When you combine this democratization of NLP tech with the workplace disruptions of COVID, we have a situation where chatbots appear to have sprung up everywhere almost overnight.

Andrew Hong also saw this sudden surge in chatbot creation and usage while working at a venture capital firm a few years ago. With the chatbot market expanding at a 24% CAGR (according to one forecast), it’s a potentially lucrative place for a technology investor, and Hong wanted to be in on it.

“I was looking to invest in this space. Everybody was investing in chatbots,” Hong told Datanami recently. “But then it kind of occurred to me there’s actually a data problem here. That’s when I poked deeper and saw this problem.”  The problem (as you may have guessed) is that conversational data is a mess. According to Hong, organizations are devoting extensive data science and data engineering resources to prepare large amounts of raw chat transcripts and other conversational data so it can be used to train chatbots and agents.

The problem boils down to this: Without a lot of manual work to prep, organize, and analyze massive amounts of text data used for training, the chatbots and agents don’t work very well. Keeping the bots running efficiently also requires ongoing optimization, which Hong’s company, Dashbot, helps to automate.

“A lot of this is literally hieroglyphics,” Hong said of call transcripts, emails, and other text that’s used to train chatbots. “Raw conversational data is undecipherable. It’s like a giant file with billions of lines of just words. You really can’t even ask it a question.”

While a good chatbot seems to work effortlessly, there’s a lot of work going on behind the scenes to get there. For starters, raw text files that serve as the training data must be cleansed, prepped, and labeled. Sentences must be strung together, and questions and answers in a conversation grouped. As part of this process, the data is typically extracted from a data lake and loaded into a repository, such as a relational database, where it can be queried and analyzed.

Next, there’s data science work involved. On the first pass, a machine learning algorithm might help to identify clusters in the text files. That might be followed by topic modeling to narrow down the topics that people are discussing. Sentiment analysis may be performed to help identify the topics that are associated with the highest frustration of users.

Finally, the training data is segmented by intents. Once an intent is associated with a particular piece of training data, then it can be used by an NLP system to train a chatbot to answer a particular question. A chatbot may be programmed to recognize and respond to 100 or more individual intents, and its performance on each of these varies with the quality of the training data.
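The data-science pass described above—clustering raw utterances so a human can name the candidate intents—can be sketched in a few lines of scikit-learn. This is an illustrative sketch, not Dashbot's actual pipeline; the utterances and cluster count are invented for the example.

```python
# Minimal sketch of the clustering step: vectorize raw utterances with
# TF-IDF, then group similar ones so each cluster can be labeled as an
# intent (e.g. "reset_password") by a human reviewer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

utterances = [
    "I want to reset my password",
    "How do I change my password?",
    "Where is my order?",
    "My package has not arrived yet",
]

# Turn the raw text into numeric vectors that can be clustered.
vectors = TfidfVectorizer().fit_transform(utterances)

# A first clustering pass groups similar utterances; each cluster is a
# candidate intent for a human to name.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

clusters = {}
for text, label in zip(utterances, kmeans.labels_):
    clusters.setdefault(int(label), []).append(text)

for label, texts in sorted(clusters.items()):
    print(label, texts)
```

In a real pipeline this first pass would be followed by the topic modeling and sentiment analysis steps the article mentions, before intents are finalized.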

Dashbot was founded in 2016 to automate as many of these steps as possible, and to help make the data preparation as turnkey as possible before handing the training data over to NLP chatbot vendors like Amazon Lex, IBM Watson, and Google Cloud Dialogflow.

“I think a tool like this needs to exist beyond chatbots,” said Hong, who joined Dashbot as its CEO in 2020. “How do you turn unstructured data into something usable? I think this ETL pipeline we built is going to help do that.”

Chatbot Data Prep

Instead of requiring data engineers and data scientists to spend days working with huge numbers of text files, Hong developed Dashbot’s offering, dubbed Conversational Data Cloud, to automate many of the steps required to turn raw text into the refined JSON document that the major NLP vendors expect.

“A lot of enterprises have call center transcripts just piling up in their Amazon data lakes. We can tap into that, transform that in a few seconds,” Hong said. “We can integrate with any conversational channel. It can be your call centers, chat bots, voice agents. You can even upload raw conversational files sitting on a data lake.”

The Dashbot product is broken up into three parts: a data playground used for ETL and data cleansing; a reporting module, where the user can run analytics on the data; and an optimization layer.

The data prep occurs in the data playground, Hong said, while the analytics layer is useful for asking questions of the data that can help illuminate problems, such as: “In the last seven days how many people have called in and asked about this new product line that we just launched and how many people are frustrated by it?”  ... ' 
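The end product of this prep work is the intent-labeled training file mentioned earlier. The sketch below shows the general shape of such a transform in plain Python; the schema is illustrative and is not the exact format expected by Lex, Watson, or Dialogflow, each of which defines its own.

```python
# Hedged sketch: turning intent-labeled utterances into a vendor-style
# JSON training document. Intent names and phrases are invented examples.
import json

labeled = [
    ("reset_password", "I want to reset my password"),
    ("reset_password", "How do I change my password?"),
    ("track_order", "Where is my order?"),
]

# Group training phrases under their intent.
intents = {}
for intent, text in labeled:
    intents.setdefault(intent, []).append(text)

training_doc = {
    "intents": [
        {"name": name, "trainingPhrases": phrases}
        for name, phrases in sorted(intents.items())
    ]
}
print(json.dumps(training_doc, indent=2))
```

A document like this is what would then be handed to an NLP vendor's import API to train the bot on each intent.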


Monday, August 01, 2022

Tips for Implementing Conversational AI

 Some useful thoughts.

Expert Tips and Best Practices for Implementing Conversational AI

Michael Hartsog   in CustomerThink

The chatbots of the past had limited capabilities. They could answer a predetermined set of questions, but no more—which often left customers frustrated. But conversational artificial intelligence (AI) changed all that.

Conversational AI gives bots a greater ability to understand human language, sentiment, and intention. AI-enhanced bots can customize messaging based on real-time data and complete complex tasks and transactions.

Today, conversational AI is the perfect addition to customer service teams, giving companies a competitive advantage by increasing agents’ productivity, delivering a better customer experience, and increasing customer satisfaction.

Want to add conversational AI to your customer communications but don’t know where to start?

On the Let’s Talk CX podcast with SKWeston, I sat down to discuss the impact conversational AI has on customer satisfaction and agent productivity, along with the best practices I’ve identified after being in the customer experience field for over fifteen years. Of course, understanding the trends and technologies that will deliver exceptional customer experiences is key. And that’s where our partner, SKWeston & Company, provides guidance on identifying the best path for our clients’ unique needs. They’ve identified and implemented innovative solutions in areas such as:

Conversational AI platforms

Omnichannel knowledge management system

A reimagined BPO with unbound agents and technology .... ' 

Wednesday, April 13, 2022

A Sample Legal Chatbot

Clive Spenser sends this along: a demonstration chatbot set up for legal rules. Worth a look to see what can be done with rule-based chatbots built on logical rules.

Clive writes: For demonstration purposes, we have set up a VisiRule legal chatbot on:

   ---->   https://www.smithlawson.com/

By Clive Spenser    LPA VisiRule

www.visirule.co.uk

www.lpa.co.uk

Are Chatbots Just an Illusion?

Good test and challenge of current 'best' performing AI, with comments. 

The AI Illusion – STATE-OF-THE-ART CHATBOTS AREN’T WHAT THEY SEEM

GPT-3 is very much like a performance by a good magician

By Gary Smith, March 21, 2022, in Mind Matters

Artificial intelligence is an oxymoron. Despite all the incredible things computers can do, they are still not intelligent in any meaningful sense of the word.

Decades ago, AI researchers largely abandoned their quest to build computers that mimic our wondrously flexible human intelligence and instead created algorithms that were useful (i.e., profitable). Despite this understandable detour, some AI enthusiasts market their creations as genuinely intelligent. For example, a few months ago, Blaise Aguera y Arcas, the head of Google’s AI group in Seattle, argued that “statistics do amount to understanding.” As evidence, he cites a few exchanges with Google’s LaMDA chatbot. The examples were impressively coherent but they are still what Gary Marcus and Ernest Davis characterize as “a fluent spouter of bullshit” because computer algorithms do not understand what words mean. They are like Nigel Richards, who has won several French-language Scrabble championships without knowing the meaning of the words he spells.

Google’s LaMDA is not accessible by the general public — which makes me wonder how robust it is. On January 3 of this year, I reported using OpenAI’s powerful chatbot GPT-3 to illustrate the fact that statistics do not amount to understanding. Andrew Gelman challenged Arcas to try my examples with LaMDA but Arcas has not responded, nor has anyone at Google, presumably because they are not permitted to. 

A few days ago, a student sent me a link to an OpenAI description of GPT-3. One candid disclaimer was that, “GPT-3 is not necessarily well-calibrated in its predictions on novel inputs.” Being able to understand and react to novel situations is, in fact, a benchmark of genuine intelligence. Until computer algorithms can do that, AI will remain an oxymoron.

OpenAI’s description also revealed that,

InstructGPT is then further fine-tuned on a dataset labeled by human labelers. The labelers comprise a team of about 40 contractors whom we hired through Upwork and ScaleAI.

Lack of real-world grounding: GPT-3, like other large pretrained language models, is not grounded in other modalities of experience, such as video, real-world physical interaction, or human feedback, and thus lacks a large amount of context about the world.

OpenAI evidently employs 40 humans to clean up GPT-3’s answers manually because GPT-3 does not know anything about the real world. Intrigued, I retried the questions that GPT-3 had flubbed in January to see if the labelers had done their job.  ..... ' 

Sunday, April 10, 2022

AI Chatbot Examples You May Have Missed

 Nice set of examples; I had not heard of these.  Still, how much better and more useful are they than a generalized chatbot/query service like Google?  In the enterprise we wrote systems that supported difficult management decisions, based on specific contexts.

5 Chatbot Apps Powered by Artificial Intelligence You May Have Missed Over the Past 5 Years

By Tobi Ogundele / 20 Jan 2022 / AI / Customer Service / Mobile   in Read/Write

There are many standalone chatbot mobile apps that have also contributed to the rise of SaaS app development services. How effective are they?

“We’re experiencing the transformation of well-known messengers like Telegram, Slack, Skype, and Facebook Messenger into chatbot apps. Today, these resemble smart AI-powered assistants that can help us with many tasks, all within one application,” says venkatkworld. Which chatbot apps will have a positive impact on your company?

5 Chatbot Apps Powered by Artificial Intelligence You May Have Missed Over the Past 5 Years

Here’s a list of the five artificial intelligence chatbot apps that have shown great potential over the past five years. Let’s see what they’re all about!

1. Luka – your new best adviser

The Luka app is a place where humans and bots can meet and talk about things like restaurants, weather, and recent news.

Luka was first launched in 2014 as a mobile AI-powered social concierge service, called IO. At the time, it provided restaurant reviews and recommendations. So, for example, you could ask IO something like “Where is the nearest place I can try traditional French food?” or “What are the favorite local hangouts?” and get an immediate, friendly response.

IO wouldn’t just give you a one-word answer; it offered a restaurant’s digital card with the address, phone number, menu, hours, and so on. Additionally, IO could send you photos of certain dishes when you asked about them.

The nice thing about IO was that the application could enhance your user experience by asking you for feedback right after visiting a suggested restaurant. So your preferences or opinion would never be ignored during subsequent sessions. ...     '  (Four more are described)

Wednesday, September 22, 2021

Conversational AI and Healthcare Chatbots

Used ELIZA.  Still need to see better studies of chatbots in complete and varying contexts, and comparisons to having a professional available, or a professional in the room who can watch reactions to what they say. 

Conversational AI Making Headway in Powerful Healthcare Chatbots   By John P. Desmond, AI Trends Editor  

Conversational AI has come a long way since ELIZA, which was intended by its creator in 1964 to be a parody of the responses of a psychotherapist to his patient, as a demonstration that communication between a human and a machine could only be superficial.  

What surprised Joseph Weizenbaum of the MIT AI lab was that many people, including his secretary, assigned human-like feelings to the computer program. It is acknowledged as the original chatbot.   

Pranay Jain, cofounder and CEO, Enterprise Bot

In the 50 years since then, chatbots have evolved first to engage users in dialogues for customer service in many fields, and now to dialogues on personal medication information. “With the advent of cognitive intelligence, chatbots were given a facelift. They were able to analyze context, process intent, and formulate adequate responses,” stated Pranay Jain, cofounder and CEO of Enterprise Bot, in a contribution to AI Trends. The Switzerland-based company was founded five years ago.   

Still, chatbots incorporating AI today are challenged to successfully process technical commands, to understand human intent, to exhibit conversational intelligence and understand different languages, accents and dialects.   

Today, “The ability to understand the subtle nuances of human tonalities, speech patterns, and mimic human empathy in the form of texts and voices is what makes a chatbot truly successful across industries and verticals,” Jain stated.   

Chatbots in healthcare had been perceived as high risk, with healthcare professionals skeptical that patients would provide confidential medical information to a virtual assistant. “Today, chatbots are being designed and deployed to perform preliminary pathology and aid healthcare professionals,” Jain stated, noting that chatbots now gather initial personal information and then ask about symptoms.   .... '