Showing posts with label Fear. Show all posts

Tuesday, July 04, 2023

Hope, Fear, and AI

Following up on this ...

Hope, Fear, and AI, by The Verge, June 29, 2023

Use of these new tools is still fairly limited, and experience with them skews decidedly toward younger users.

Credit: Diana Young/The Verge

AI is about to change the world — the problem is, no one's quite sure how. Some look at the past year's rapid progress and see opportunities to remove creative constraints, automate rote work, and discover new ways to learn and teach. Others see how this tech can disrupt our lives in more damaging ways: how it can generate misinformation, destroy or diminish jobs, and, if left unchecked, pose a serious threat to our safety.

Tech leaders, lawmakers, and researchers have all been weighing in on how we should handle this emerging tech. Some industry figures, like OpenAI CEO Sam Altman, want AI giants to steer regulation, shifting the focus to perceived future threats, including the "risk of extinction." Others, like EU politicians, are more concerned with current dangers and banning dangerous use cases (while holding back positive applications, say skeptics). Meanwhile, many small artists would just like a guarantee that they won't be replaced by machines.

To find out what people really think about AI and what they want from it, The Verge teamed up with Vox Media's Insights and Research team and the research consultancy firm The Circus to poll more than 2,000 US adults on their thoughts, feelings, and fears about AI. The results tell the story of an emerging, uncertain, and exciting technology — where many have yet to use it, many are fearful of its potential, and many still have great hopes for what it could someday do for them.

From The Verge

View Full Article   

Wednesday, January 25, 2023

Fear Can Inspire Remote Workers to Protect IT Resources

Obvious? Perhaps, but note the combination with stewardship ...

Fear Can Inspire Remote Workers to Protect IT Resources

Washington State University, Will Ferguson, January 11, 2023

A study by researchers at Washington State University (WSU), the University of North Texas, and Oklahoma State University found that remote workers are most motivated to protect their employer's IT security when they fear the consequences of a security breach and understand the seriousness of potential security threats. The study compared protection motivation theory, which involves encouraging secure behaviors using fear appeals and threat messages; stewardship theory, which involves motivating employee behavior through moral responsibility; and a combination of the two. In a survey of 339 workers, the researchers found an approach that focused on fear and threats was more effective than a stewardship-based approach, but that promoting the stewardship theory's sense of collectivism increased the efficacy of protection motivation-based methods. ... ' 

Monday, November 02, 2020

Robots Patrolling the Uncanny Valley

The notion of the 'uncanny valley', examined further and more widely. Typically the idea has been applied to android robotics: robots, or even visualizations, that look much like humans but are not, which gives many people an uneasiness in interaction. Today it can be extended to animals. For example, the robot dogs by Boston Dynamics, while obviously not dogs, give some people the 'feeling of' large, potentially dangerous dogs. A positive if you want them for guarding or patrolling.

In general we don't feel quite the same way about large humanoid robots, which could have the same capabilities but are not wrapped in the animal form we have a fear reaction to. In general, too, 'AI' as a concept does not generate this fear reaction. Our own experimentation with characters like Mr Clean showed you could add smiles and positive interaction to cancel the uncanny reaction.

Further in TechExplore:  Why robots and artificial intelligence creep us out   by Amanda Bowman, Texas Tech University ... ' 

Saturday, June 08, 2019

People First AI Strategy

A considerable article, a transcript of a conversation with Soumitra Dutta, professor of operations, technology, and information management at the Cornell SC Johnson College of Business, on how decisions need to be made when combining people and machines. I agree in principle, but how do we place people first? Which people, and how many? Can we crowdsource the agreement of people? And how transparent do we need to be to make sure this is actually happening?

Why We Need a People-first AI Strategy  in K@W

With more access to data and growing computing power, artificial intelligence (AI) is becoming increasingly powerful. But for it to be effective and meaningful, we must embrace people-first artificial intelligence strategies, according to Soumitra Dutta, professor of operations, technology, and information management at the Cornell SC Johnson College of Business. “There has to be a human agency-first kind of principle that lets people feel empowered about how to make decisions and how to use AI systems to support their decision-making,” notes Dutta. Knowledge@Wharton interviewed him at a recent conference on artificial intelligence and machine learning in the financial industry, organized in New York City by the SWIFT Institute in collaboration with Cornell’s SC Johnson College of Business.

In this conversation, Dutta discusses some myths around AI, what it means to have a people-first artificial intelligence strategy, why it is important, and how we can overcome the challenges in realizing this vision.

An edited transcript of the conversation follows: 

Knowledge@Wharton: What are some of the biggest myths about AI, especially as they relate to financial services?

Soumitra Dutta: AI, as we all know, is not new per se. It has been there for as long as modern computing has been around, and it has gone through ups and downs. What we are seeing right now is an increased sense of excitement or hype. Some people would argue it’s over-hyped. I think the key issue is distinguishing between hope and fear. Today, what you read about AI is largely focused around fear — fear of job losses, fear of what it means in terms of privacy, fear of what it means for the way humans exist in society. The challenge for us is to navigate the fear space and move into the hope space. By “hope,” I mean that AI, like any other technology, has negative side effects – but it also presents enormous positive benefits. Our collective challenge is to be able to move into the positive space and look at how AI can help empower people, help them become better individuals, better human beings, and how that can lead to a better society.

Knowledge@Wharton: How do you get to the “hope” space in a way that is based on reality and away from the myths and hype?

Dutta: We need to have what I term as a “people-first” AI strategy. We have to use technology, not because technology exists, but because it helps us to become better individuals. When organizations deploy AI inside their work processes or systems, we have to explicitly focus on putting people first.

This could mean a number of things. There will be some instances of jobs getting automated, so we have to make sure that we provide adequate support for re-skilling, for helping people transition across jobs, and making sure they don’t lose their livelihoods. That’s a very important basic condition. But more importantly, AI provides tools for predicting outcomes of various kinds, but the actual implementation is a combination of the outcome prediction plus judgment about the outcome prediction. The judgment component should largely be a human decision. We have to design processes and organizations such that this combination of people and AI lets people be in charge as much as possible.

There has to be a human agency-first kind of principle that lets people feel empowered about how to make decisions, how to use AI systems to make better decisions. They must not feel that their abilities are being questioned or undercut. It’s the combination of putting people and technology together effectively that will lead to good AI use in organizations.

“The key issue is distinguishing between hope and fear…. The big challenge for us is to navigate the fear space and move into the hope space.”  ..... " 

Tuesday, October 30, 2018

What's Creepy About AI

Our views and expectations of machines, even in a very general sense, are constantly changing. Experience from media, and experiences at work and at home, adjust our views; examples of the value provided all have influence. Here a poll looks at consumer thoughts.

AI is Creeping America Out, But It Doesn’t Have To

An Interactions/Harris Poll reveals what people find creepy about AI

FRANKLIN, MA – October 30, 2018 – With every click, download or voice command, AI has another data point to slip into its back pocket – ready and waiting to help inform business decisions, marketing strategies and campaign targeting. To date, companies have been experimenting with how to use this data to dazzle customers. From alerting people when their milk is low, to pre-selecting their online shopping carts, to helping them book a vacation, brands have cast the deciding vote on how and when consumer data should be used. But without insight from customers, they’ve been operating in the dark—blindly walking the line between helpful and creepy.

That’s why Intelligent Virtual Assistant leader Interactions commissioned The Harris Poll to conduct an online survey of 2,000+ American adults in August to figure out exactly where the “creepy” line is, and when AI crosses it. In the process, we identified consumer comfort level with AI utilizing personal information, and what tips the scale from helpful to creepy. Here are the top consumer concerns that crossed the creepy line:  ... '

Wednesday, October 25, 2017

Chronophobia: Fear of Future

Not sure I agree. Based on Pew, people will say they dislike the future, yet they continually pay, as much as they are able, to participate in that future. That will change only if they are not part of that future.

Chronophobia: Fear of the Future   from Pew Internet

The Pew Internet report, Automation in Everyday Life, is more about fear of automation than enthusiasm for it

Reading the recently released Pew Internet report, Automation in Everyday Life, I came away with several specific observations, like the growing concerns about joblessness in an increasingly automated world.

But more than anything else the report highlights a growing appreciation of something more fundamental: Americans’ concerns about emerging automation technologies demonstrate a deepening fear of the future, or chronophobia. ... " 

Monday, October 09, 2017

Pew Research: Automation in Everyday Life

An extensive report based on surveys. The pointer is to an overview, which links further to a full PDF report I am reading. It addresses fear about the future of technology. Nicely done so far.

Pew Research Center for Internet & Technology

Automation in Everyday Life
Americans express more worry than enthusiasm about coming developments in automation – from driverless vehicles to a world in which machines perform many jobs currently done by humans ... " 

By Aaron Smith and Monica Anderson