Showing posts with label Uncanny Valley. Show all posts

Tuesday, February 14, 2023

NVIDIA Event to Push Avatars

The piece makes a point of the 'uncanny valley' aspect of avatars.  Not sure that would be an issue at a conference, but for typical human interactions, perhaps.  Has the effect been softened by more people getting into gaming?  Still, I think many people feel that an avatar interaction is not genuine, and is less trustworthy.  Will it weaken security?  Can you sue an avatar?

Developer Conference March 20-23, 2023

Keynote, March 21

AI Fundamentals for Building Intelligent, Interactive Digital Humans [S51676]

What does it take to bring a 3D character to life? How can you create a convincing interactive avatar that can see, perceive, converse intelligently, and provide recommendations to enhance the user’s experience? Join NVIDIA as they discuss the foundational AI building blocks required to create a convincing, lifelike avatar. They’ll explore the technical and design challenges of designing, animating, and connecting intelligence to these 3D virtual characters and share the latest best practices and solutions for overcoming the challenge of the “uncanny valley.”  ... '

Friday, July 29, 2022

Considering the Uncanny Valley

An issue often considered; we faced it ourselves in building early interactions with consumers.

Crossing the Uncanny Valley,    By Logan Kugler

Communications of the ACM, August 2022, Vol. 65 No. 8, Pages 14-15    10.1145/3542817

In 1970, robotics expert Masahiro Mori first described the effect of the "uncanny valley," a concept that has had a massive impact on the field of robotics. The uncanny valley, or UV, effect, describes the positive and negative responses that human beings exhibit when they see human-like objects, specifically robots.

The UV effect theorizes that our empathy towards a robot increases the more it looks and moves like a human. However, at some point, the robot or avatar becomes too lifelike, while still being unfamiliar. This confuses the brain's visual processing systems. As a result, our sentiment about the robot plummets deeply into negative emotional territory.
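Mori's curve can be captured in a toy model: affinity rises with human likeness, then dips sharply just short of full realism before recovering. A minimal sketch follows; the Gaussian dip's location, width, and depth are purely illustrative assumptions, not values fitted to any data.

```python
import math

def affinity(likeness: float) -> float:
    """Toy affinity for a robot as a function of human likeness in [0, 1].

    Baseline affinity grows with likeness; a narrow Gaussian 'valley'
    centered near (but below) full realism pulls it sharply negative.
    Parameters are illustrative only.
    """
    rising = likeness
    valley = 1.5 * math.exp(-((likeness - 0.85) ** 2) / 0.005)
    return rising - valley

for x in (0.2, 0.5, 0.85, 1.0):
    print(f"likeness={x:.2f}  affinity={affinity(x):+.3f}")
```

Running this shows affinity climbing through moderate likeness, plunging negative near the valley at 0.85, and recovering at full realism, which is the qualitative shape Mori described.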

Yet where the uncanny valley really has an impact is on how humans engage with robots in modern times, an impact that has been proven to change how we see human-like automatons.

In a 2016 research paper in Cognition, Maya Mathur and David Reichling discussed their study of human reactions to robot faces and digitally composed faces. What they found was that the uncanny valley existed across these reactions. They even found that the uncanny valley effect influenced whether or not humans found the robots and digital avatars trustworthy.

"How the uncanny valley has already impacted the design and direction of robots is clear; it has slowed progress," says Karl MacDorman, a professor of human-computer interaction at Indiana University–Purdue University Indianapolis (IUPUI). "The uncanny valley has operated as a kind of dogma to keep robot designers from exploring a high degree of human likeness in human–robot interaction."

To MacDorman and others, the uncanny valley must be dealt with in order to accelerate the adoption of robots in social settings.

More Human, More Problems

For clues as to why, researchers Christine Looser and Thalia Wheatley, then both of Dartmouth College, in 2010 evaluated human responses to a range of simulated faces. The faces ranged in realism from fully human-like to fully doll-like. The researchers found participants stopped viewing a face as doll-like and considered it human when it was 65% or more human-like.

Companies that develop robots now consider findings like this and take active steps to stop the UV effect from impacting how the market receives their technology. One way they do that is by sidestepping the uncanny valley entirely, says Alex Diel, a researcher at Cardiff University's School of Psychology who studies the uncanny valley effect.

"Many companies avoid the uncanny valley altogether by using a mechanical, rather than a human-like, appearance and motion," says Diel. That means companies intentionally remove human-like features, like realistic faces or eyes, from robots—or engineer their movements to be clearly non-human.

One example of this approach is the Tesla Bot, a concept robot unveiled by the electric car manufacturer. While humanoid, the robot has been designed without a face, which makes certain the human brain's facial processing systems will not see it as a deviant version of a human face, says Diel.

Another way companies mitigate the effect of the uncanny valley is by designing robots to be cartoon-like, which helps them appear humanlike and appealing, without becoming too realistic. Diel points to Pepper, a congenial-looking robot manufactured by SoftBank Robotics, as a product that takes this route.

"Cuteness can't be overrated," says Sarah Weigelt, a neuropsychologist researching neural bases of visual perception at the Department of Rehabilitation Sciences of TU Dortmund University in Germany. "If something is cute, you do not fear it and want to interact with it."

If companies can't make a robot cute, they'll often make obvious that it's not a human in some other way. Some companies do this by changing skin tones to non-human colors, or leaving mechanical parts of a robot's body intentionally and clearly exposed, Weigelt says. This averts any confusion that this strange object could be human, sidestepping the UV effect.

While companies work hard to avoid falling into the valley, sometimes they try to pass through the valley and climb out the other side by making robots indistinguishable from humans. However, this presents its own set of problems, says MacDorman.   .... ' 

Tuesday, January 26, 2021

More Digital Humans Emerging

Recall Microsoft's patent of a particular person, just mentioned here.   Recall too the 'uncanny valley', and other disturbing side effects.   And the magic can collapse quickly if the conversation is not convincing.   But I expect this kind of play to be quite common in not too many years.    I have mentioned our own primitive attempts here to simulate conversational brand equity.    We learned much before the capabilities were ready.


'Are You Real?'   By Gregory Goth, Commissioned by CACM Staff, January 26, 2021

A Digital Employee developed by the company Amelia (formerly IPsoft).

The digital human is a chatbot partnered with a lifelike avatar to add a visually relatable element to interactions.

Tyler Beck, chief operating officer of Dothan, Ala.-based Five Star Credit Union, recently began evaluating artificial intelligence-based technologies for the $500-million institution serving portions of Alabama and Georgia. As his research progressed, he said he was contacted by a vendor's representative via LinkedIn.

"When I finally got to talk to the company, the lady wasn't on the call," Beck said, "and it hit me she wasn't real. She was artificial intelligence. And not only was the interaction I had with her powered through AI, her profile picture was created through AI. That did a lot to tell me it was on a whole different level than I realized it was at. The functionality and use cases for AI, especially in financial services, is going to be great."

Among the latest iterations of AI-based platforms is the "digital human," a chatbot partnered with a lifelike avatar intended to add a visually relatable element to an interaction. Though Beck and many other business executives are not quite ready to pull the trigger on installing virtual tellers or advisors quite yet, several vendors are quickly emerging. The ecosystem that will enable their technologies to assume prominence is still in its infancy, but is growing quickly.

Additionally, vendors are branding their products to be readily identified as more than disembodied chatbots. New York-based Amelia (formerly known as IPSoft), for example, has trademarked the phrase Digital Employee for its platform, while Austin, Texas-based UneeQ's World Wide Web domain name is digitalhumans.com.

"We are fairly bullish on it," said Jim Lundy, founder and CEO of technology consultancy Aragon Research. "We've seen them in action. We are still in the very early innings and I say that because a lot of the bots that have been deployed thus far are terrible, but for every nine that are bad or average, there is usually one that is amazing."   ... " 

Thursday, November 05, 2020

Disney Research's Lifelike Robotic Gaze

Will Disney push forward lifelike robotics and vision?  A very uncanny valley?

Disney Research Makes Robotic Gaze Interaction Eerily Lifelike

IEEE Spectrum Evan Ackerman

 A team of researchers from Disney Research, the California Institute of Technology, the University of Illinois at Urbana-Champaign, and Walt Disney Imagineering is imbuing animatronic robots with lifelike eye gaze. The system they are using decides where to gaze by first identifying a person to target using an RGB-D camera; if multiple people are visible, the system calculates a curiosity score for each, based on the amount of motion, and chooses the highest-scoring target. The robot will then display high-level gaze behavior (reading, glancing, engaging, or acknowledging) determined by score. An underlying subsumption architecture dictates lower-level motion behaviors like breathing, small head movements, eye blinking, and saccades. ... '

Monday, November 02, 2020

Robots Patrolling the Uncanny Valley

The notion of the 'uncanny valley', further and more widely examined.  Typically the idea has been applied to android robotics, that is robots or even visualizations that look much like humans, but are not.  That gives many people an uneasiness in interaction.   It can be expanded today to animals.  For example the robot dogs by Boston Dynamics, while obviously not dogs,  give some people a 'feeling of' large, potentially dangerous dogs.   A positive if you want them for guarding or patrolling.

In general we don't feel quite the same way about large humanoid robots, which could have the same capabilities but are not wrapped in the animal skin we have a fear reaction to.  In general, too, 'AI' as a concept does not generate this fear reaction.  Our own experimentation with characters like Mr. Clean showed you could add smiles and positive interaction to cancel the uncanny reaction.

Further in TechXplore:  Why robots and artificial intelligence creep us out,   by Amanda Bowman, Texas Tech University ... '

Thursday, September 10, 2020

Updating the Uncanny Valley

We did related work with advertising and marketing engagements, fascinating area of behavioral interaction. 

by Carol Clark, Emory University

Androids, or robots with human-like features, are often more appealing to people than those that resemble machines—but only up to a certain point. Many people experience an uneasy feeling in response to robots that are nearly lifelike, and yet somehow not quite "right." The feeling of affinity can plunge into one of repulsion as a robot's human likeness increases, a zone known as "the uncanny valley."

The journal Perception published new insights into the cognitive mechanisms underlying this phenomenon made by psychologists at Emory University.  ... '
Shensheng Wang et al., "The Uncanny Valley Phenomenon and the Temporal Dynamics of Face Animacy Perception," Perception (2020).  https://journals.sagepub.com/doi/10.1177/0301006620952611

Friday, November 22, 2019

Bots Work Better if they Impersonate

Interesting results, with shades of the implications of the 'uncanny valley' and the need to define 'success' in a strong context.   Our guard is already up when we 'meet' bots online.   Expectations are set depending upon this context.  The 'Prisoner's Dilemma' context is interesting to test, but is it a context humans often encounter?   We do now need to know much better how humans work with machines.

Bots Are More Successful If They Impersonate Humans
By Max Planck Institute for Human Development
November 21, 2019

Researchers at the Max Planck Institute for Human Development, along with colleagues in the U.S. and the United Arab Emirates, found that bots are more successful at certain human-machine interactions, but only if they are allowed to hide their non-human identity.

The researchers asked nearly 700 volunteers in an online cooperation game to interact with a human or an artificial partner. In the game, known as the prisoner's dilemma, players can either act in their own self-interest to exploit the other player, or act cooperatively with advantages for both sides. However, some of the participants interacting with another human were told they were playing with a bot, and vice versa.

The researchers found that bots impersonating humans were more successful in convincing their gaming partners to cooperate. However, as soon as they revealed their true identity, cooperation rates decreased.
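The game structure behind these results can be sketched briefly: a prisoner's dilemma with the canonical payoff ordering T > R > P > S, where defection dominates individually but mutual cooperation beats mutual defection. The numeric payoffs and cooperation rates below are illustrative assumptions, not the study's measured values.

```python
# (my_move, partner_move) -> my payoff, canonical T=5 > R=3 > P=1 > S=0
PAYOFFS = {
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def expected_payoff(my_move: str, partner_coop_rate: float) -> float:
    """My expected payoff against a partner who cooperates at the given rate."""
    return (partner_coop_rate * PAYOFFS[(my_move, "C")]
            + (1 - partner_coop_rate) * PAYOFFS[(my_move, "D")])

# Illustrative rates only: a partner believed to be human elicits more
# cooperation than one revealed as a bot, raising payoffs for both moves.
for label, rate in [("believed human", 0.6), ("revealed bot", 0.4)]:
    print(f"{label}: cooperate -> {expected_payoff('C', rate):.2f}, "
          f"defect -> {expected_payoff('D', rate):.2f}")
```

This makes the study's mechanism concrete: whatever move you choose, you earn more when your partner cooperates more often, so a bot that sustains the partner's cooperation by passing as human improves outcomes until the reveal.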

From Max Planck Institute for Human Development
View Full Article  .... "

Thursday, May 23, 2019

Designing Robots with Personality

We always attribute a bit of personality to our devices, but what amount and kind is useful for the best results, with minimal unintended consequences?

Character Engineer: Designing robots with a touch of personality.

So, Mark Palatucci EAS’00 wants to put a robot in every home.

That might sound familiar. After all, you may even already have one. But Palatucci, a cofounder of the San Francisco-based robotics company Anki, isn’t thinking about task-oriented automatons or self-directed vacuum cleaners. He’s not even thinking about smart speakers. He’s designing robots with “character”—enough to spark an emotional connection with their owners.

“People are much more willing to put a character in their home than they are just some smart cylinder or smart speaker that doesn’t have any emotion or character built around it,” he says. “It creates a sense of trust that a lot of other products don’t necessarily have.”

And if that trust leads to more engagement with the robot—whether it’s playing games with a robot called Cozmo, or getting Vector, another model, to take a picture when your hands are full—all the better.

Anki’s aim in building robots is to enable people to “build relationships with technology that feel a little more human.” Palatucci, who earned a computer science and engineering degree at Penn, is the company’s head of cloud artificial intelligence and machine learning. Their products have been getting notice. .... " 

Sunday, August 19, 2018

Robots Get Uncannily Realistic, but Why?

Perhaps to engage, sell, enforce, or just to prove it can be done.  Replace humans?  Mimic humans?  Put them in situations where humans should be present but are too expensive.   Even without claiming any intelligence as we see it.  Add voice and intelligence, make them as pretty as they need to be, keep eye contact, Turing-enable them too, and it will be hard to say what we are engaging with.  Will they be regulated because they are an implicit, even unethical, lie?   Welcome to the android.  Expect much more of this in coming decades.

Welcome to the uncanny valley: This robot head shows lifelike expressions
By Luke Dormehlin in Digital Trends  ...

The SEER robot is further described at SIGGRAPH, with its 'unnerving eye contact'.  I used to attend that conference, when it was far less entertaining.

Tuesday, July 10, 2018

Digital Persons Emerge Again

Recall our long look at using digital personas to represent brand equities, including integrating a chatbot to present useful information to the consumer.   Our consumers reacted well to this, but the interest faded.  That persona also included a personality and image that placed her in the 'Uncanny Valley'.

ANZ has birthed its first "digital person". Jamie, an AI invention styled as a 25-year-old New Zealand woman, started work on Tuesday morning.

Jamie's first job is to chat with customers on the 30 questions the bank gets asked most often by customers.  Though "she" is capable of learning, it's "moderated" learning, so customers trying to teach Jamie swear words, or bad habits, will fail.

But chatting with Jamie for a few minutes results in her dropping hints that there's something more to her than a series of rote answers.  Ask her about her weekend, and she may tell you she enjoys ice dance. ... " 

Wednesday, April 04, 2018

What People See in Robot Faces



A reminder of work we did using brand equity to represent assistants on the Web.   We assumed that a well-known ad character would be an ideal way to engage, with an otherwise chatbot-style interaction. This could have been good input to the project.  Much more at the links below.   I'd be glad to chat.

What People See in 157 Robot Faces 
in IEEE Spectrum   By Evan Ackerman

University of Washington in Seattle researchers conducted a study of robot faces--157 in all--across 76 dimensions, to determine the distinct ways people experience them. Among the study's insights was that robots whose faces were rated less-friendly lacked a mouth and pupils, but had eyelids; eyebrows also were deemed to be indicators of intelligence. Although no robot faces were rated as significantly more likable than the baseline face, the robot that possessed irises was the most liked by survey participants overall. "I think our work helps to elucidate what kind of effect certain design choices have, and while these results are by no means definitive, I think they do help prime our thinking about design consequences," says the University of Washington's Alisa Kalegina. The work was presented in March at the ACM/IEEE International Conference on Human Robot Interaction (HRI 2018) in Chicago, IL.  ... " 

Friday, December 29, 2017

Google Creates Human Voices

Have been hearing computer voices regularly for several years now with assistants, and note their ability to evoke a personality.   Previous to that had seen the testing of voice generation systems for warnings in a military system,  with no attempt to create any personality,  just a robotic voice.   This piece points to an article in Quartz with more detail.

Google's Voice-Generating Artificial Intelligence Now Indistinguishable From Humans

Of use to marketers, customer service reps - and politicians - the new capability will make it difficult, if not impossible, to distinguish between human and technologically generated 'voices.'

The larger question is whether, at this stage, consumers will care. JL  ... " 

Monday, November 27, 2017

AVA Does Customer Service

As part of a look and conversation regarding conversational agents for engagement, loyalty, and enhanced customer service at Autodesk.  Does such a solution have to look like a human?   Note below that Watson services are being tested to model customer sentiment.   Videos at the link.  Thanks to Walter Riker.

This Chatbot Is Trying Hard To Look And Feel Like Us    In FastCompany

Modeled on a real person and equipped with a virtual “nervous system,” Autodesk’s AVA is built to be a font of empathy, no matter how mean a customer gets.
Among the attributes credited for Apple’s famous customer loyalty is a network of stores where curious or frustrated consumers can meet the company face-to-face.

The 3D design software maker Autodesk is trying to achieve something similar online with a help service that allows people to interact with what sure looks like an actual human. The company says that next year it will introduce a new version of its Autodesk Virtual Agent (AVA) avatar, with an exceedingly lifelike face, voice, and set of “emotions” provided by a New Zealand AI and effects startup called Soul Machines. Born in February as a roughly sketched avatar on a chat interface, AVA’s CGI makeover will turn her into a hyper-detailed, 3D-rendered character–what Soul Machines calls a digital human. ... " 

Friday, June 23, 2017

Imaginary People

We are not far from generating very convincing faces of people.  Even making them dynamic.  And if you put these on humanoid robots?   Give them Generative AI?   In the Verge: 

" .... As we get better at making, faking, and manipulating human faces with machine learning, one thing is abundantly clear: things are going to get ~freaky~ fast.

Case in point: this online demo hosted (and, we presume, made) by web developer AlteredQualia. It combines two different research projects, both of which use neural networks. The first is DeepWarp, which alters where subjects in photographs are looking, and the second is a work in progress by Mike Tyka dubbed Portraits of Imaginary People. This does exactly what it says on the tin: feeding a generative neural network with a bunch of faces and getting it to create similar samples. .... "

Tuesday, June 13, 2017

Creating Caricature Stickers from Selfies

The Allo messaging capability from Google is now capable of creating caricature image stickers from selfies.  These are mostly for use within Allo, but you can snip them off and use them elsewhere.   What's most interesting is how this was done, and the care taken to make it work well and not insult anyone, even considering things like 'Uncanny Valley' effects.   Which hardly seems important for cartoon images, since they don't get close to images you would mistake for photos.

In general the results are flattering caricatures in the cartoon style.   And you can play with the images to get them to your liking. The article about how this was done is on the Google Research Blog.   Allo is free from Google, so anyone can try it.    It links to Google Assistant to make the messaging smart, in ways I am still trying to understand.  More on my examination of Allo.

Wednesday, April 26, 2017

Bots in the Uncanny Valley

The 'Uncanny Valley' is a term used in the physical representation of people in android form: as they get closer to reality, we get uneasy. Now it has been suggested that the way bots chat is also reaching that point.  I have conversed with bots where they seemed to have a personality.  Even the tone of Alexa when she says something like 'Good night' can invoke an oddly social but uncanny feeling.  But a longer conversation, which expects basic human performance, soon breaks that spell. So are we soon likely to combine image features and chat speech?  Will that be more or less disconcerting?

Chatbots Have Entered the Uncanny Valley 
The Atlantic,  Kaveh Waddell

The tendency for people to be repelled by increasingly humanoid and human-like robots may extend to chatbots and digital assistants as well. "The more human-like a system acts, the broader the expectations that people may have for it," says Carnegie Mellon University professor Justine Cassell. Modern chatbots use banter and humor, conversational speech, and parsing free-form questions and answers to coax users into engaging with them in more a human-like manner. "This creates a perception that if you say anything to this bot, it should respond to it," says Autodesk engineer Nikhil Mane. This makes for situations in which user requests exceed the bot's limitations, and the subsequent errors serve to remind users of the assistant's artificiality. Mane says a better approach for bots is to make users aware of their constraints, such as prompting  ... " 

Sunday, August 28, 2016

Exploring the Uncanny Valley of Bots

Have long been a student and practitioner of engineering how people react to intelligent machines. And here we mean by a depth beyond just posing and answering questions, but how people actually engage, trust and build some relationship with machines.

One aspect of this,  that came out of robotics, is the idea of an 'uncanny valley', where people are averse to machines that seem too human-like.  Can also be applied to bots, as described below.  From the CACM:

The Edge of the Uncanny By Gregory Mone 
Communications of the ACM, Vol. 59 No. 9, Pages 17-19

" ... Mitsuku is quick-witted, occasionally confusing, and strangely engaging. She is also a chatbot, built from the A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) platform originally developed by Richard Wallace in 1995. She conducts hundreds of thousands of conversations daily, according to Lauren Kunze, principal of Pandorabots, the Oakland, CA-based company behind the technology. "She doesn't really do anything," Kunze says. "She's not designed to assist you. She can tell you the weather or perform an Internet search, but she's really just there to talk to you, and she's wildly popular with teens. People say, 'I love you' and 'you're my best friend.'"

The appeal is not accidental. The designers of chatbots like Mitsuku and the engineers of physical social robots have made significant advances in their understanding of how to build more engaging machines. Yet there are still many challenges, one of which is the unpredictability of humans. "We just don't understand how people are going to react to physical or software robots," says University of Southern California computer scientist Yolanda Gil, chair of SIGAI, ACM's Special Interest Group on Artificial Intelligence. "This is one kind of technology where people continue to surprise us."

While there are no absolute guidelines for building effective social robots or engaging chatbots, a few common themes have emerged.

Uncanny Expectations

One frequently cited theory in social robotics is the Uncanny Valley, first described by Japanese roboticist Masahiro Mori in 1970. The Uncanny Valley contends there is a risk in building machines that are too human, that instead of attracting people, realistic androids can have a repulsive effect because of their "uncanny" resemblance to real humans. The reasons for the aversion are varied. Researchers have found evidence that highly capable androids bother people because they represent a threat to human uniqueness, or that on a subconscious level, they actually remind us of corpses. ... " 

Sunday, July 06, 2014

Holographs Bring Back Performers

In the CACM:  a technical but very readable piece on the history and use of the holographic illusion. I remember seeing these live in Disney World a long time ago; their improvement since has been considerable, bringing back the dead and enhancing the living.  We saw holographic packaging displays demonstrated.  Some augmented reality ideas embed holographics.   Engagement, or a scary uncanny valley?  Holographic Projection Systems Provide Eternal Life?

Wednesday, May 28, 2014

Taxing Enjoyable Work, and Uncanny Valley of Work Related Social Events

Taxing new things, like work satisfaction.  I am sure this is all being thought of today.  And how do we determine if something is fun, or just engineered to be?  And can we get refunds for things that are just not enjoyable to us?

Sunday, December 01, 2013

Uncanny Valleys of Human Interaction

The 'uncanny valley' describes the uneasy reaction people have to robots that look very similar to human beings but that they know are artificial.  A lengthy piece on the topic.  Of course a valley has a rise on the other side, and what if we ultimately don't know it's artificial?  Sounds arcane, but a very interesting view of our own built-in emotions.  Also in the WP, from which the example image on the right comes.