
Saturday, November 30, 2019

Our Relationship with AI

Wired launches a new podcast, Sleepwalkers, in the post below, with links to it. I will be listening to and following it, and posting comments here.

Rethinking Our Relationship With Artificial Intelligence
A podcast series examines AI and its influence on humans. 
Artificial intelligence now shapes our lives in profound ways, curating social media posts that drive us apart, determining who gets a loan or probation, and even helping choose our romantic partners.

This week, WIRED is launching a series based on Sleepwalkers, a podcast that examines the AI revolution.

The first episode, available here, examines how AI manipulates and exploits us. It asks what kind of a future are we letting the technology build and offers some ideas for what to do about it. Host Oz Woloshyn discusses the sway that AI has over us with several experts trying to understand technology’s influence and to unravel where we may be headed.

Tristan Harris, who once worked on technological persuasion at Google, now runs a think tank called the Center for Humane Technology, where he worries about AI’s power to seduce and manipulate us.

“We’ve basically got 2 billion humans completely jacked into an environment where every single thing on your phone wants your attention,” Harris says. “Their incentive is to calculate ‘what is the perfect, most seductive thing can I show you next?’”   ... " 

Robots Supporting Teachers in Class Sessions

Assistance is important, especially with respect to repetitive tasks that need strong context.

Robots can learn how to support teachers in class sessions
by University of Plymouth in Techexplore

Robots can take just three hours to successfully learn techniques which can be used to support teachers in a classroom environment, according to new research.

The study, published in Science Robotics, saw a robot being programmed to progressively learn autonomous behaviour from human demonstrations and guidance.

A human teacher controlled the robot, teaching it how to help young pupils in an educational activity, and it was then able to support the children in the same activity autonomously. The advice it subsequently provided was shown to be consistent with that offered by the teacher.

Researchers say the technique could have a number of benefits to teachers, as they face increasing demands on their time, and could be positive for pupils, with research previously showing that using robots alongside teachers in the classroom can have benefits for their education.

They also believe it holds considerable potential for a number of other sensitive applications of social robots, such as in eHealth and assistive robotics.

The study was coordinated by researchers at the University of Plymouth, which has a long history of developing social robots for a range of education and health settings, working with colleagues at the University of Lincoln and the University of the West of England.... " 

Fighting Cybercrime in the Cloud

Collecting evidence of cybercrime in the cloud.

Machine Learning Advances Tool to Fight Cybercrime in the Cloud
Purdue University News
By Chris Adam

Purdue University researchers used machine learning to develop a cloud forensic model that collects digital evidence associated with illegal activities in cloud storage applications. The system deploys deep learning models to classify child exploitation, illegal drug trafficking, and illegal firearms transactions uploaded to cloud storage applications, and to automatically report detection of any such illegal activities via a forensic evidence collection system. The researchers tested the system on more than 1,500 images, and found that the model accurately classified an image about 96% of the time. Said Purdue’s Fahad Salamh, "It is important to automate the process of digital forensic and incident response in order to cope with advanced technology and sophisticated hiding techniques and to reduce the mass storage of digital evidence on cases involving cloud storage applications." .... "
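
As a rough illustration of the classify-then-report pipeline described above (my own sketch, not Purdue's system), the flow might look like the following; the classifier call and evidence-store interface are hypothetical placeholders.

# Hypothetical sketch of a classify-and-report forensic pipeline.
# classify_image() and EvidenceStore are placeholders, not Purdue's actual code.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

ILLEGAL_CATEGORIES = {"child_exploitation", "drug_trafficking", "illegal_firearms"}

@dataclass
class Detection:
    file_path: str
    category: str
    confidence: float
    timestamp: str

def classify_image(path: str) -> tuple:
    """Placeholder for a deep learning classifier; returns (category, confidence)."""
    return ("benign", 0.99)   # a real model would run inference here

class EvidenceStore:
    """Minimal stand-in for a forensic evidence collection system."""
    def __init__(self):
        self.records: List[Detection] = []
    def report(self, det: Detection):
        self.records.append(det)  # a real system would hash, sign and preserve chain of custody

def scan_cloud_folder(paths, store, threshold=0.9):
    for p in paths:
        category, confidence = classify_image(p)
        if category in ILLEGAL_CATEGORIES and confidence >= threshold:
            store.report(Detection(p, category, confidence,
                                   datetime.now(timezone.utc).isoformat()))

store = EvidenceStore()
scan_cloud_folder(["upload_001.jpg", "upload_002.jpg"], store)
print(f"{len(store.records)} items flagged for forensic review")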

Designing Games

There is value in having game developers help design interactions with business problems.

The Sims’ Creator Shares His Secrets
“If I was going to teach somebody videogame design from scratch, where would I start?”

That’s how Will Wright, the mastermind behind gaming’s most indelible fantasies, has embarked on an epic quest of his own: to help budding game developers transform their ideas into reality.

The Sims creator has joined MasterClass, the popular education app where extraordinary talents (director Ron Howard, writer Margaret Atwood, chef Alice Waters) teach the finer points of their crafts.

“I felt a little…what’s the word…overestimated, or humbled” by being invited into the MasterClass fold, Wright tells us.

But he certainly has the stature. From the groundbreaking SimCity to the spacefaring Spore, Wright has long been one of the biggest names in game development. What fellow MasterClass teacher Martin Scorsese is to crime films, Wright is to simulation gaming.

Wright spent more than six months distilling his decades of experience into a series of videos. Through 21 lessons and a detailed workbook, he covers everything from the fundamentals of game design to the importance of prototyping and playtesting.

“I really want this class to be focused on aspiring or even very experienced designers,” Wright says. “For aspiring game designers, I think it would be a good starting point to get off on the right mind-set.” .... "

Predicting Humor for Engagement

Humor as engagement. Can it be delivered artificially, as a strong component of a story?

It's No Joke: AI Beats Humans at Making You Laugh | By Dina Gerdeman, in HBSWK

We all enjoy sharing jokes with friends, hoping a witty one might elicit a smile—or maybe even a belly laugh. Here’s one for you:

A lawyer opened the door of his BMW, when, suddenly, a car came along and hit the door, ripping it off completely. When the police arrived at the scene, the lawyer was complaining bitterly about the damage to his precious BMW.

"Officer, look what they've done to my Beeeeemer!” he whined.

"You lawyers are so materialistic, you make me sick!” retorted the officer. "You're so worried about your stupid BMW that you didn't even notice your left arm was ripped off!”

“Oh, my god,” replied the lawyer, finally noticing the bloody left shoulder where his arm once was. “Where's my Rolex?!”

Do you think your friends would find that joke amusing—well, maybe those who aren’t lawyers?

A research team led by Harvard Business School post-doctoral fellow Michael H. Yeomans put this laughing matter to the test. In a new study, he used that joke and 32 others to determine whether people or artificial intelligence (AI) could do a better job of predicting which jokes other people consider funny. ... "

Friday, November 29, 2019

Benchmarking a Big Quantum Computer

This article ultimately gets very technical, but it attracted me because the qubit count is starting to get interesting for real problems. The abstract and introduction are enough to give you a feel for the advances and their implications. I post it here ahead of a look I will take at the use of such systems for supply chain optimization.

Benchmarking an 11-qubit quantum computer

K. Wright, K. M. Beck, S. Debnath, J. M. Amini, Y. Nam, N. Grzesiak, J.-S. Chen, N. C. Pisenti, M. Chmielewski, C. Collins, K. M. Hudek, J. Mizrahi, J. D. Wong-Campos, S. Allen, J. Apisdorf, P. Solomon, M. Williams, A. M. Ducore, A. Blinov, S. M. Kreikemeier, V. Chaplin, M. Keesan, C. Monroe & J. Kim 

Nature Communications volume 10, Article number: 5464 (2019)  in Nature.com
  
Abstract
The field of quantum computing has grown from concept to demonstration devices over the past 20 years. Universal quantum computing offers efficiency in approaching problems of scientific and commercial interest, such as factoring large numbers, searching databases, simulating intractable models from quantum physics, and optimizing complex cost functions. Here, we present an 11-qubit fully-connected, programmable quantum computer in a trapped ion system composed of 13 171Yb+ ions. We demonstrate average single-qubit gate fidelities of 99.5%, average two-qubit-gate fidelities of 97.5%, and SPAM errors of 0.7%. To illustrate the capabilities of this universal platform and provide a basis for comparison with similarly-sized devices, we compile the Bernstein-Vazirani and Hidden Shift algorithms into our native gates and execute them on the hardware with average success rates of 78% and 35%, respectively. These algorithms serve as excellent benchmarks for any type of quantum hardware, and show that our system outperforms all other currently available hardware.  .... "   .... ' 
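
For readers unfamiliar with one of the benchmarks mentioned, the Bernstein-Vazirani algorithm recovers a hidden bit string from an oracle in a single quantum query. The tiny classical simulation below is my own illustration of what the benchmark computes, not the paper's code; classically each bit must be probed separately, which is exactly the advantage the quantum version removes.

# Classical illustration of what the Bernstein-Vazirani benchmark computes.
# On quantum hardware the hidden string is recovered with one oracle query;
# classically we need one query per bit. Illustrative only, not the paper's code.

def oracle(x: int, secret: int) -> int:
    """Returns the parity of the bitwise AND of x and the hidden string."""
    return bin(x & secret).count("1") % 2

def bernstein_vazirani_classical(oracle_fn, n_bits: int) -> int:
    """Recover the hidden string by querying each bit position once."""
    secret = 0
    for i in range(n_bits):
        if oracle_fn(1 << i):           # probe with a single bit set
            secret |= 1 << i
    return secret

hidden = 0b1011010  # a 7-bit hidden string, well within an 11-qubit machine's reach
recovered = bernstein_vazirani_classical(lambda x: oracle(x, hidden), 7)
print(bin(recovered))  # 0b1011010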

Novel Mobile Mapping Algorithm

As I read this, it too could have been used usefully for our forestry application, since we had large scale databases of measurements based on location contexts, like slopes and area history.

Mobile Mapping More Accurate With a Novel Algorithm
University of Twente (Netherlands)
K.W. Wesselink

A researcher at the University of Twente in the Netherlands developed an algorithm that improves the accuracy of surveyed mobile mapping imaging products. The algorithm can compensate for measurement errors introduced from erroneous satellite-based positioning, which usually occurs in urban areas. Mobile mapping includes all forms of geospatial data acquisition using a mobile platform carrying one or more sensor systems. The algorithm uses aerial images to make the acquired data more accurate. The algorithm recognizes objects from an overhead view, but also from a street view. Identified objects are used to establish thousands of links between the data sets, which enables a mathematical procedure to correct the mobile mapping data. .... "
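
A toy version of the correction idea, assuming the matched objects give paired positions (aerial taken as accurate, the mobile mapping positions offset by satellite-positioning error), might estimate a simple translation by least squares. The real algorithm handles far richer error models; the numbers below are made up.

# Toy illustration of correcting mobile mapping positions from matched objects.
# Assumes a pure translation error; the actual algorithm is far more involved.

def estimate_offset(aerial_pts, mobile_pts):
    """Least-squares estimate of a constant (dx, dy) offset from matched points."""
    n = len(aerial_pts)
    dx = sum(a[0] - m[0] for a, m in zip(aerial_pts, mobile_pts)) / n
    dy = sum(a[1] - m[1] for a, m in zip(aerial_pts, mobile_pts)) / n
    return dx, dy

aerial = [(100.0, 200.0), (150.0, 260.0), (180.0, 300.0)]   # reference positions from aerial images
mobile = [(101.8, 198.9), (151.7, 258.8), (181.9, 299.1)]   # GNSS-degraded mobile mapping positions
dx, dy = estimate_offset(aerial, mobile)
corrected = [(x + dx, y + dy) for x, y in mobile]
print(round(dx, 2), round(dy, 2))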

Preventing Theft By Drones

Brief, intriguing piece in Coindesk about a blockchain application of the tracking type. It seems a very simple solution involving altitude sensing and IoT. No indication anything is currently being built.

IBM Patents Blockchain to Stop Drones From Stealing Packages in Coindesk

Amazon, DHL and FedEx are building drones that deliver packages to your door. IBM, however, envisions a future where drones steal them instead. 

The computing giant won a patent on Nov. 12 for “Preventing anonymous theft by drones” with an Internet of Things (IoT) altimeter that triggers upon liftoff, tracking the package’s altitude and uploading the data to a blockchain platform.

The patent seeks to get ahead of two modern realities: people buy goods online, and people fly their own personal drones. That could be a problem, it says, if the trends combine to devious ends. 

“The confluence of the increase in drone use and the increase in online shopping provides a situation in which a drone may be used with nefarious intent to anonymously take a package that is left on a doorstep after delivery,” the patent description reads.

IBM’s solution is to outfit packages with an IoT sensor that only triggers if it detects a change in altitude “exceeding the threshold … expected when the object is lifted away by a drone.” Once it does, the sensor periodically updates the blockchain, and the intended recipient, with the package’s altitude. 

To be clear, there’s no indication that IBM actually plans to build an operational device. And if it does, it may well swap out a blockchain for some other “secure database,” according to the patent  ... " 
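
The triggering logic in the patent description is simple enough to sketch. In the toy version below the threshold value, the sensor readings and the ledger are all hypothetical, and the blockchain is stood in for by a plain append-only list.

# Sketch of the altitude-triggered tracking idea from the patent description.
# Threshold, sensor readings and the ledger are hypothetical placeholders.

import time

LIFT_THRESHOLD_M = 3.0   # assumed: higher than a person lifting the package off a doorstep

class AppendOnlyLedger:
    """Stand-in for a blockchain platform: records are only ever appended."""
    def __init__(self):
        self.entries = []
    def append(self, record: dict):
        self.entries.append(record)

def monitor_package(read_altitude, ledger, baseline, polls=10):
    """Start reporting once the altitude change exceeds the threshold, then log each reading."""
    triggered = False
    for _ in range(polls):
        altitude = read_altitude()
        if not triggered and altitude - baseline > LIFT_THRESHOLD_M:
            triggered = True   # liftoff detected: begin periodic updates
        if triggered:
            ledger.append({"t": time.time(), "altitude_m": altitude})

readings = iter([0.1, 0.2, 4.5, 12.0, 25.0, 30.0, 30.0, 30.0, 30.0, 30.0])
ledger = AppendOnlyLedger()
monitor_package(lambda: next(readings), ledger, baseline=0.0)
print(len(ledger.entries), "altitude updates recorded")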

AI Storytelling Companies

Not quite all the same thing, but interesting spins on the idea. I like the idea of a story that, like a contract, could adapt readily to an audience and context.

AI Storytelling Companies Usher in New Era of Characters, Relationships
November 28, 2019, in AITrends

AI storytelling tools are combining content creation, emotional intelligence, sentiment analysis, and video synthesis in a new category of emerging technology. By John P. Desmond, AI Trends Editor

A new wave of AI Storytelling tools is ushering in an era that includes created characters who have relationships, are in stories, and can adapt to react to how audiences play.  ..... "

Harvard Robobee Powered by Soft Muscles

Back to my interest in the tiny drone.

RoboBee Powered by Soft Muscles
Harvard University John A. Paulson School of Engineering and Applied Sciences
Leah Burrows

Researchers at Harvard University's John A. Paulson School of Engineering and Applied Science (SEAS) and Harvard’s Wyss Institute for Biologically Inspired Engineering have developed a resilient RoboBee powered by soft artificial muscles that can experience collisions without being damaged. The researchers built upon electrically-driven soft actuators, which are made using dielectric elastomers—soft materials with good insulating properties that deform when an electric field is applied. Said SEAS researcher Elizabeth Farrel Helbling, the robot’s ability to absorb impact without damage “would come in handy in potential applications such as flying through rubble for search and rescue missions.”

Thursday, November 28, 2019

Talk on Explainable AI

All analytics should be explainable in a business context. And the person responsible for that part of the business should also be able to explain how and why it works.

Voices in AI – Episode 101: A Conversation with Cindi Howson    By Byron Reese

On Episode 101 of Voices in AI, Byron speaks with Cindi Howson of ThoughtSpot about the direction of explainable AI and where we are going as an industry.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

 This is Voices in AI brought to you by GigaOm and I’m Byron Reese. Today my guest is Cindi Howson. She is the Chief Data Strategy Officer at ThoughtSpot. She holds a degree in English from the University of Maryland and an MBA from my alma mater, Rice University. Welcome to the show, Cindi. .... "

Wednesday, November 27, 2019

Berners-Lee Launches a Contract for the Web

Admirable, but will it work? As the inventor of the Web, he will be heard, but will the contract be accepted and followed?

Tim Berners-Lee Launches a 'Contract for the Web' to Govern Internet Giants, Governments
Computing
By Graeme Burton
November 25, 2019

Sir Tim Berners-Lee, recipient of the 2016 ACM A.M. Turing Award for his invention of the World Wide Web, has launched a global plan of action to govern the behavior of Internet giants and governments. The Contract for the Web, which says it aims "to make our online world safe and empowering for everyone," includes nine principles: three aimed at governments, three for companies, and three for individuals. Governments must ensure everyone can connect to the Internet, keep all of the Internet available all the time, and respect people's fundamental online privacy and data rights. Companies must provide affordable Internet access to everyone, respect and protect people's online privacy and personal data, and develop technologies that support the best in humanity and challenge the worst. Individuals must be creators and collaborators on the Web, build strong communities that respect civil discourse and human dignity, and fight to keep the Web open and a global public resource. .... " 

Google G Suite Adds AI in Beta

Always looking for ways to integrate smarts into enterprise business designs and tasks, so this is worth looking at. It looks like it's taking a step ahead of Amazon for Business, given G Suite's existing infrastructure.

Google adds AI smarts to G Suite with Google Assistant and Docs updates

Google’s AI assistant can now be accessed in beta, while Smart Compose has been extended to Google Docs as well as Gmail.   By Matthew Finnegan

Google’s AI expertise has long been a strength, and the company has been steadily adding machine-learning capabilities to its G Suite platform. The latest move in that effort allows users to access certain G Suite apps with Google Assistant and introduces Smart Compose text suggestions to Google Docs.

Google first announced it would bring its AI voice assistant to G Suite earlier this year; a beta is now under way that lets users manage their Google Calendar schedules with voice commands. This includes the ability to read out calendar entries, as well as create, cancel or reschedule events. G Suite admins can sign up for the beta here.

Users can also send emails to meeting participants, kick off voice and video calls hands-free using Google’s Hangouts Meet app, and interact with the Asus-built Google Hangouts Meet hardware in conference rooms to join and exit meetings and make phone calls..... "

From the Possible Minds Conference

From 'The Edge', a huge amount of information on AI and minds. Below is an intro; the full text and video are at the link.

The Possible Minds Conference

I am puzzled by the number of references to what AI “is” and what it “cannot do” when in fact the new AI is less than ten years old and is moving so fast that references to it in the present tense are dated almost before they are uttered. The statements that AI doesn’t know what it’s talking about or is not enjoying itself are trivial if they refer to the present and undefended if they refer to the medium-range future—say 30 years.  —Daniel Kahneman

INTRODUCTION   by Venki Ramakrishnan

The field of machine learning and AI is changing at such a rapid pace that we cannot foresee what new technical breakthroughs lie ahead, where the technology will lead us or the ways in which it will completely transform society. So it is appropriate to take a regular look at the landscape to see where we are, what lies ahead, where we should be going and, just as importantly, what we should be avoiding as a society. We want to bring a mix of people with deep expertise in the technology as well as broad thinkers from a variety of disciplines to make regular critical assessments of the state and future of AI. 

Venki Ramakrishnan, President of the Royal Society and Nobel Laureate in Chemistry, 2009, is Group Leader & Former Deputy Director, MRC Laboratory of Molecular Biology; Author, Gene Machine: The Race to Decipher the Secrets of the Ribosome.  

[ED. NOTE: In recent months, Edge has published the fifteen individual talks and discussions from its two-and-a-half-day Possible Minds Conference held in Morris, CT, an update from the field following on from the publication of the group-authored book Possible Minds: Twenty-Five Ways of Looking at AI. As a special event for the long Thanksgiving weekend, we are pleased to publish the complete conference—10 hours plus of audio and video, as well as a downloadable PDF of the 77,500-word manuscript. Enjoy.] 

John Brockman
Editor, Edge  ..... 

(At the link, free, are about 25 video presentations by well-known thinkers and practitioners in the space of AI and the understanding of human brains and thinking ....)

New Alexa Emotions

New complexity in voice expression is announced for Alexa. I need to see this in a useful context, but am intrigued by the possibilities.

Use New Alexa Emotions and Speaking Styles to Create a More Natural and Intuitive Voice Experience      By Catherine Gao

We’re excited to introduce two new Alexa capabilities that will help create a more natural and intuitive voice experience for your customers. Starting today, you can enable Alexa to respond with either a happy/excited or a disappointed/empathetic tone in the US. Emotional responses are particularly relevant to skills in the gaming and sports categories. Additionally, you can have Alexa respond in a speaking style that is more suited for a specific type of content, starting with news and music. Speaking styles are curated text-to-speech voices designed to create a more delightful customer experience for specific content. For example, the news speaking style makes Alexa’s voice sound similar to what you hear from TV news anchors and radio hosts. To learn more, check out our technical documentation for emotions here and speaking styles here.

How Alexa Emotions Work
Alexa emotions use Neural TTS (NTTS) technology, Amazon’s text-to-speech technology that enables more natural sounding speech. For example, you can have Alexa respond in a happy/excited tone when a customer answers a trivia question correctly or wins a game. Similarly, you can have Alexa respond in a disappointed/empathetic tone when a customer asks for the sports score and their favorite team has lost. Early customer feedback indicates that overall satisfaction with the voice experience increased by 30% when Alexa responded with emotions. Check out the following examples and compare them to the neutral tone:  .... ' 
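
Per the documentation referenced above, the emotions and speaking styles are applied through SSML tags in a skill's response. The snippet below builds such responses in Python; the tag names follow Amazon's announcement, but the helper functions themselves are just my own sketch, not Amazon sample code.

# Sketch: wrapping Alexa response text in emotion / speaking-style SSML tags.
# Tag names follow the Alexa documentation; the helpers are an illustration only.

def with_emotion(text: str, name: str = "excited", intensity: str = "medium") -> str:
    return f'<amazon:emotion name="{name}" intensity="{intensity}">{text}</amazon:emotion>'

def with_domain(text: str, name: str = "news") -> str:
    return f'<amazon:domain name="{name}">{text}</amazon:domain>'

def ssml(body: str) -> str:
    return f"<speak>{body}</speak>"

# A trivia skill might respond happily to a correct answer...
print(ssml(with_emotion("That's right! You got it!", "excited", "medium")))
# ...and read a headline in the news speaking style.
print(ssml(with_domain("Here is today's top story.")))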

Stopping RoboCalls

A very common annoyance for many of us.  A new means to address the problem.

How Your Phone Company Aims to Stop Robocalls, in IEEE Spectrum

The STIR/SHAKEN protocol will stop robocallers from exploiting a caller ID loophole to spoof phone numbers    By Jim McEachern and Eric Burger

Have you ever received a phone call from your own number? If so, you’ve experienced one of the favorite techniques of phone scammers.

Scammers can “spoof” numbers, making it seem as though the phone call in question is coming from a local number—which can include your own—thereby obscuring the call’s true origin. If you answer the call, you’ll most likely be treated to the sound of a robotic voice trying to trick you into parting with some money.

One of us (McEachern) is a principal technologist for the standards organization Alliance for Telecommunications Industry Solutions (ATIS), and the other (Burger) was until recently the chief technology officer for the U.S. Federal Communications Commission. But you don’t need us to tell you that robocalls are a pandemic. According to a report by the caller ID company Hiya, there were 85 billion robocalls globally in 2018.

RoboKiller, one company that has created an anti-spam-call app, estimates that Americans received 5.3 billion robocalls in April 2019 alone, or nearly 4,000 every second. And not only are scam calls annoying, they’re costly. In 2018, phone scams tricked Americans out of an estimated US $429 million. Sadly, these numbers are on an upward trend. ..... 
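
The core of STIR/SHAKEN is that the originating carrier cryptographically attests to the caller ID and the terminating carrier verifies the signature before completing the call. The sketch below mimics that flow with an HMAC-signed token; the real protocol uses PASSporT tokens (signed JWTs) and certificates rather than a shared key, so treat this purely as an illustration of the idea.

# Highly simplified illustration of the STIR/SHAKEN idea: the originating carrier
# signs an attestation of the caller ID; the terminating carrier verifies it.
# The real protocol uses PASSporT (signed JWTs) and certificates, not a shared HMAC key.

import hashlib, hmac, json, time

CARRIER_KEY = b"originating-carrier-secret"   # hypothetical key, for the sketch only

def attest_call(orig_number: str, dest_number: str, attestation: str = "A") -> dict:
    claims = {"orig": orig_number, "dest": dest_number,
              "attest": attestation, "iat": int(time.time())}
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(CARRIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": signature}

def verify_call(token: dict) -> bool:
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(CARRIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

token = attest_call("+15551230001", "+15559876543")
print(verify_call(token))                       # True: attestation intact
token["claims"]["orig"] = "+15550000000"        # a spoofer rewrites the caller ID...
print(verify_call(token))                       # False: signature no longer matches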

Towards Better Forecasting: Less Noise

Forecasting is key to any business decision. A short intro, then the entire podcast follows at the link.

Wharton’s Barbara Mellers and Ville Satopää from INSEAD discuss their research on the impact of noise on forecast accuracy.

From predicting the weather to possible election outcomes, forecasts have a wide range of applications. Research shows that many forces can interfere with the process of predicting outcomes accurately — among them are bias, information and noise. Barbara Mellers, a Wharton marketing professor and Penn Integrates Knowledge (PIK) professor at the University of Pennsylvania, and Ville Satopää, assistant professor of technology and operations management at INSEAD, examined these forces and found that noise was a much bigger factor than expected in the accuracy of predictions. The professors recently spoke with Knowledge@Wharton about their working paper, “Bias, Information, Noise: The BIN Model of Forecasting.” (Listen to the podcast at the top of this page.)

An edited transcript of the conversation follows.

Knowledge@Wharton: In your paper, you propose a model for determining why some forecasters and forecasting methods do better than others. You call it the BIN model, which stands for bias, information and noise. Can you explain how these three elements affect predictions?

Ville Satopää: Let me begin with information. This describes how much we know about the event that we’re predicting. In general, the more we know about it, the more accurately we can forecast. For instance, suppose someone asked me to predict the occurrence of a series of future political events. If I’m entirely ignorant about this, I don’t really follow politics, I don’t follow the news, I barely understand the questions you’re asking me, I would predict around 50% for these events.

On the other hand, suppose I follow the news and I’m interested in the topic. My predictions would be then more informed, and hence they would be not around 50% anymore. Instead, they would start to tilt in the direction of what would actually happen.

At the extreme case, we could think of me having some sort of a crystal ball that would allow me to see into the future. This would make me perfectly informed, and hence I would predict zero or 100% for each one of the events, depending on what I see in the crystal ball. This just illustrates how information can drive our predictions. It introduces variability into them that is useful because it is based on actual information. Because of that, it correlates with the outcome.   .... " 
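
A toy simulation helps make the bias/information/noise decomposition concrete. Below, simulated forecasters see a signal about each event that mixes an informative component, a systematic bias and random noise, and their probability forecasts are scored with the Brier score (lower is better). This is my own illustration, not the authors' BIN model.

# Toy illustration of bias / information / noise in forecasting, scored with the Brier score.
# An illustration only, not the BIN model from the working paper.

import math
import random
random.seed(0)

def simulate(n_events=10000, info=1.0, bias=0.0, noise=0.0):
    brier_total = 0.0
    for _ in range(n_events):
        outcome = 1 if random.random() < 0.5 else 0       # event happens or not
        signal = info * (1.0 if outcome else -1.0)        # informative component
        signal += bias + random.gauss(0.0, noise)         # systematic bias plus noise
        forecast = 1.0 / (1.0 + math.exp(-signal))        # squash the signal to a probability
        brier_total += (forecast - outcome) ** 2
    return brier_total / n_events

print("informed, clean :", round(simulate(info=1.0, noise=0.0), 3))
print("informed, noisy :", round(simulate(info=1.0, noise=2.0), 3))   # noise erodes the edge
print("ignorant        :", round(simulate(info=0.0, noise=0.0), 3))   # ~0.25, like always guessing 50%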

Tuesday, November 26, 2019

Google Announces Ambient Mode on Some Phone Devices

What they have called an 'intent-based' rather than 'app-based' design, for some phones now, and likely to come to all their devices. I would imagine this would work quite well, particularly for task-based, proactive, alert-, need- or goal-driven approaches. It might work well for team situations where there are shared goals and a need to coordinate efforts via an AI of some type. This could mean that the device proactively chooses tasks or data to be shown, providing an element of direction and management. I would consider the potential for workplace applications quite interesting, and even controversial.

" ... Ambient Mode brings a proactive Google Assistant experience to your Android phone. Arvind Chandrababu, product manager on the Assistant for Mobile team, shares how it easily shares the information that matters most to you.    https://www.youtube.com/watch?v=vaRzJo_19nY

From 9to5Google:

"... Ambient Mode lets you “turn your phone into a digital photo frame, control music, other smart home devices and more.” Enabled during charging, it is officially “available on select devices on Android O [8.0] and above. According to Ars Technica’s Ron Amadeo, devices from Sony and OEM Transsion will also receive Ambient Mode with wider availability next week. Lenovo already showed off two tablets with this experience. One way to think about this new mode is as a stock always-on display for OEMs that do not have their own custom implementations.  ... " 

Technology and the Future of Utilities

In the Cisco blog, a good overview of how tech is helping redesign and enable energy utilities, making them more efficient and secure.

Energy - Oil & Gas and Utilities
The Future of Utilities
By Wes Sylvester

As we power through our #FutureofPublicSector blog series, today we make a stop at the future of energy as we explore the energy sources of the future and how technology is enabling greater outcomes at each stage of the energy continuum.  

For centuries, society has relied on hydrocarbons like oil, coal and natural gas to generate energy. These fossil fuels are burned at power plants to create electricity which then moves through our communities using a complex grid of electricity substations and power lines. Higher voltage electricity can be moved more efficiently although lower voltage electricity is safer for everyday use. Transformers at substations work to adjust electricity voltages at different stages of the journey from the power plant to homes and businesses where they power our days. 

Sparking the Next Generation of Energy
Not only are these fossil fuels depleting, they generate air pollution and greenhouse gases that are directly contributing towards climate change. As a result, the entire energy landscape is evolving: from where energy comes from (generation), how it gets to communities (transmission) and how it gets to homes and businesses (distribution).

The future of energy is shifting in a definitively renewable direction as more countries, companies and consumers push for a move away from using fossil fuels to generate energy: driving the adoption of more natural resources like solar and wind power. To survive today and prepare for tomorrow, energy utility companies must focus on delivering a more reliable and efficient service, generated from more sustainable and renewable sources. 

Here’s a look at how technology is helping to solve these challenges in the electric utilities industry: 
.... " 

Will IOT Reinvent the Supply Chain?

We always thought so, starting with the use of the simplest IoT, RFID tagging. Though it may ultimately require new kinds of ledgers, contracts and models that allow advanced modeling of what is happening in the supply chain and prediction of the risk of what may happen. Some interesting and useful stats here.

Will IoT reinvent the supply chain? In RetailWire, by Tom Ryan, with further expert comment

While IoT is promising to reinvent the in-store experience with smart shelves, robots and other connected devices, the big early payoff appears to be back in the supply chain.

According to PWC’s “2019 Internet of Things Survey,” supply chain and asset management are retailers’ top priorities for active IoT projects. Almost half (49 percent) of retail respondents indicated they are already benefiting from using IoT solutions to improve their supply chain. Thirty-eight percent expect to see value within two years.

PWC writes, “IoT solutions can monitor and report the exact location, environment, and handling of shipments from a factory or fulfillment center to a retail store or customer destination. This capability offers retailers real-time insights into the handling of an order while it’s en route, while also spotlighting any delays.”  .... " 

P-Hacking as Malpractice

In the early days we used to have rooms full of statisticians and economists, and we were trained in the craft as well. Yet I don't ever remember the term 'P-value' being used in any serious context. I just reviewed a notebook we used then for technical standards, and the term is never mentioned. That has changed considerably. The following is a non-technical view of the topic, and how it has expanded and exploded. How do you specify 'malpractice'?

We're All 'P-Hacking' Now
By Christie Aschwanden, Ideas, 11.26.2019, in Wired
An insiders' term for scientific malpractice has worked its way into pop culture. Is that a good thing? ...  "

Taking AI and ML from Research to Production

Good, though too short, piece from O'Reilly. I would add that it's 90% the same as getting any kind of important IT into production.

Moving AI and ML from research into production

Dean Wampler discusses the challenges and opportunities businesses face when moving AI from discussions to production.

In this interview from O’Reilly Foo Camp 2019, Dean Wampler, vice president at Lightbend, talks about moving AI and machine learning into real-time production environments. .... " 

Alexa to be Seen in Much Smaller, Simpler Devices

I will be interested to see where this emerges: with voice, or delivering new kinds of skills?

Alexa is coming to low-spec devices like light switches and thermostats
A Cortex-M processor and 1MB of RAM is all you need.
By Steve Dent, @stevetdent in Engadget

Amazon's Alexa voice assistant has migrated to a lot of devices of late, including eyeglasses, ear buds and microwave ovens. Now, the company has revealed that it will run on devices with as little as 1MB of memory and a cheap Cortex-M processor. That means you can expect to see Alexa on all kinds of relatively dumb devices from lightbulbs to toys ..... "

And further in TechCrunch.

And reported from the AWS conference by VentureBeat:
Amazon brings Alexa to AWS IoT Core devices, by Kyle Wiggers

Supercomputers at the Exascale

Continued work on faster and better solutions to hard problems. I still think it's mostly about designing and delivering better software, in context and to scale.

Intel and Argonne National Lab on 'exascale' and their new Aurora supercomputer
The scale of supercomputing has grown almost too large to comprehend, with millions of compute units performing calculations at rates requiring, for the first time, the exa prefix — denoting quadrillions per second. How was this accomplished? With careful planning... and a lot of wires, say two people close to the project.

Having noted the news that Intel and Argonne National Lab were planning to take the wrapper off a new exascale computer called Aurora (one of several being built in the U.S.) earlier ... "

Deepfakes being Addressed

Continued look at the process of validation.

Tech Companies Step Up Fight Against 'Deepfakes'
The Wall Street Journal
By Betsy Morris

Companies such as Facebook, Twitter, and Google are working to slow the spread of maliciously doctored content, known as deepfakes, ahead of the 2020 election. The tools used to create deepfake content are improving so quickly that soon it will be difficult to detect deepfakes. Google recently issued an update to its policy prohibiting the use of deepfakes in political and other advertisements, and Twitter is considering identifying manipulated photos, videos, and audio shared on its platform. Meanwhile, Facebook, Microsoft, and Amazon are working with more than a half-dozen universities on a Deepfake Detection Challenge to accelerate research into new ways of detecting and preventing media manipulation to mislead others. Said Twitter’s Yoel Roth, “The risk is that these types of synthetic media and disinformation undermine the public trust and erode our ability to have productive conversations about critical issues.”  .... "

Ahold Testing Cashierless

More on Ahold's work on cashierless stores.

Ahold has a go at cashierless store format    By Dan Berthiaume in CSA

A major supermarket conglomerate is taking a page from Amazon’s grocery playbook.

Ahold Delhaize USA, which operates supermarket banners including Food Lion, Giant, and Stop & Shop, is piloting a new frictionless store environment. Called “Lunchbox” and developed by the company’s Retail Business Services subsidiary, the format enables customers to scan in, shop, and walk out without having to stop at any type of checkout terminal.

Currently being tested at Retail Business Services’ office in Quincy, Mass., Lunchbox is powered by a Retail Business Services proprietary app that admits shoppers to the store and charges them for purchases. Payment services such as PayPal, Venmo, Apple Pay, and Google Pay are integrated into the app’s wallet.   .... "

Monday, November 25, 2019

3D Printing 'Living' Materials

The implications here are fascinating; more details at the link.

3D Printing Technique Produces 'Living' 4D Materials
UNSW Newsroom
By Caroline Tang
November 19, 2019

Researchers at Australia's University of New South Wales (UNSW) Sydney and New Zealand's University of Auckland have combined three-dimensional (3D) and four-dimensional (4D) printing with a chemical process designed to create polymers to generate "living" resin. The researchers' controlled polymerization technique utilizes visible light to produce an environmentally friendly plastic or polymer. UNSW's Cyrille Boyer said, "Our new method ... allows us to control the architecture of the polymers and tune the mechanical properties of the materials prepared by our process ... [and] also gives us access to 4D printing and allows the material to be transformed or functionalized." UNSW's Nathaniel Corrigan added that the system can finely control the 3D-printed material's molecules, so it can reversibly change shape and its chemical/physical properties under certain conditions. The researchers said the technique could be used to generate self-repairing and reusable objects, as well as biomedicines.    .... " 

Very Fast PhotoGrammetry for Agriculture

Once again, very fast acquisition of accurate maps; in agriculture it could be used to quickly detect changes in plantings by location and predict changes. Our own need was to determine changes that would influence harvesting plans. Another use for drones.

Army Photogrammetry Technique Makes 3D Aerial Maps in Minutes   By Devin Coldewey in TechCrunch

Researchers at the U.S. Army's Geospatial Research Laboratory in Virginia have developed a highly efficient photogrammetric method that can turn aerial imagery into accurate three-dimensional (3D) surface maps in near-real time without any human oversight. The Army’s 101st Airborne Division tested the system by flying a drone over Fort Campbell in Kentucky. The system was able to map a mock city used for training exercises. "Whether it's for soldiers or farmers, this tech delivers usable terrain and intelligence products fast," said Quinton King, a manager at TechLink, the Defense Department's commercial tech transfer organization.   ... "

Automated Program Repair

Not too far off from machine programming, or programming from pre-set templates, which we often did in the enterprise. Another argument for the demise of hand programming. Much more at the link.

Automated Program Repair
By Claire Le Goues, Michael Pradel, Abhik Roychoudhury

Communications of the ACM, December 2019, Vol. 62 No. 12, Pages 56-65
10.1145/3318162

Alex is a software developer, a recent hire at the company of her dreams. She is finally ready to push a highly anticipated new feature to the shared code repository, an important milestone in her career as a developer. As is increasingly common in development practice, this kind of push triggers myriads of tests the code must pass before becoming visible to everyone in the company. Alex has heavily tested the new feature and is confident it will pass all the tests automatically triggered by the push. Unfortunately, Alex learns the build system rejected the commit. The continuous integration system reports failed tests associated with a software package developed by a different team entirely. Alex now must understand the problem and fix the feature manually.

What if, instead of simply informing Alex of the failing test, the build system also suggested one or two possible patches for the committed code? Although this general use case is still fictional, a growing community of researchers is working on new techniques for automated program repair that could make it a reality. A bibliography of automated program repair research has been composed by Monperrus. 

In essence, automated repair techniques try to automatically identify patches for a given bug, which can then be applied with little, or possibly even without, human intervention. This type of work is beginning to see adoption in certain, constrained, practical domains. Static bug finding tools increasingly provide "quick fix" suggestions to help developers address flagged bugs or bad code patterns, and Facebook recently announced a tool that automatically suggests fixes for bugs found via their automatic testing tool for Android applications.  .... " 
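
Many repair techniques follow a generate-and-validate loop: propose candidate patches, run the test suite against each, and keep the ones that make the failing tests pass without breaking the passing ones. The sketch below shows that loop in miniature on a toy bug; it is a generic illustration, not any of the specific systems surveyed in the article.

# Miniature generate-and-validate program repair loop on a toy bug.
# Generic illustration of the approach, not any specific repair system.

def buggy_max(a, b):
    return a if a < b else b        # bug: the comparison is inverted

TESTS = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

# Candidate "patches": alternative implementations a repair tool might synthesize.
CANDIDATES = {
    "swap_branches": lambda a, b: b if a < b else a,
    "use_builtin":   lambda a, b: max(a, b),
    "no_change":     buggy_max,
}

def passes_all_tests(fn):
    return all(fn(*args) == expected for args, expected in TESTS)

plausible = [name for name, fn in CANDIDATES.items() if passes_all_tests(fn)]
print("plausible patches:", plausible)    # ['swap_branches', 'use_builtin']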

Automotive Industry Warned Against Hacking

Expect hacking everywhere, especially where it can be effectively leveraged by finance or influence.

FBI warns automakers they’re being targeted by hackers  By Duncan Riley in SiliconAngle

The U.S. Federal Bureau of Investigation has sent a notice to automotive manufacturers warning them that they’re being targeted by hackers.

First reported Wednesday by CNN, the notice warned that hackers were known to be attempting to compromise auto industry computer systems using sophisticated techniques.

Previous attacks “have resulted in ransomware infections, data breaches leading to the exfiltration of personally identifiable information, and unauthorized access to enterprise networks,” the FBI claims. “The automotive industry likely will face a wide range of cyber threats and malicious activity in the near future as the vast amount of data collected by internet-connected vehicles and autonomous vehicles become a highly valued target for nation-state and financially motivated actors.”  .... '

Google's Explainable AI

Looks to be well done, in beta, and worth looking at:

Explainable AI (Beta)

Tools and frameworks to deploy interpretable and inclusive machine learning models.


Understand AI output and build trust
Explainable AI is a set of tools and frameworks to help you develop interpretable and inclusive machine learning models and deploy them with confidence. With it, you can understand feature attributions in AutoML Tables and AI Platform and visually investigate model behavior using the What-If Tool. It also further simplifies model governance through continuous evaluation of models managed using AI Platform.


Design interpretable and inclusive AI
Build interpretable and inclusive AI systems from the ground up with tools designed to help detect and resolve bias, drift, and other gaps in data and models. AI Explanations in AutoML Tables and AI Platform provide data scientists with the insight needed to improve data sets or model architecture and debug model performance. The What-If Tool lets you investigate model behavior at a glance.

Deploy AI with confidence
Grow end-user trust and improve transparency with human-interpretable explanations of machine learning models. When deploying a model on AutoML Tables or AI Platform, you get a prediction and a score in real time indicating how much a factor affected the final result. While explanations don’t reveal any fundamental relationships in your data sample or population, they do reflect the patterns the model found in the data.

Streamline model governance
Simplify your organization’s ability to manage and improve machine learning models with streamlined performance monitoring and training. Easily monitor the predictions your models make on AI Platform. The continuous evaluation feature lets you compare model predictions with ground truth labels to gain continual feedback and optimize model performance..... " 

Also covered in SiliconAngle.
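
Google's documentation covers its own attribution methods, so treat the following only as a rough flavor of what a feature attribution is: the generic permutation-importance sketch below shows how shuffling one feature reveals how much a model leaned on it. It is my own illustration and does not call the Google Cloud Explainable AI API.

# Generic feature-attribution flavor: permutation importance.
# Shuffling a feature and watching accuracy drop shows how much the model relied on it.
# Illustration of the general idea only; not the Google Cloud Explainable AI API.

import random
random.seed(1)

def model(row):                      # toy "model" that leans mostly on feature 0
    return 1 if 0.8 * row[0] + 0.2 * row[1] > 0.5 else 0

data = [[random.random(), random.random()] for _ in range(1000)]
labels = [model(row) for row in data]           # labels the model gets right by construction

def accuracy(rows):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

baseline = accuracy(data)
for feature in (0, 1):
    shuffled_col = [r[feature] for r in data]
    random.shuffle(shuffled_col)
    permuted = [r[:feature] + [v] + r[feature + 1:] for r, v in zip(data, shuffled_col)]
    print(f"feature {feature}: importance = {baseline - accuracy(permuted):.3f}")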

Sunday, November 24, 2019

Drones Do the Work of 500 Farmers

Our own work with pulp forests could also have used drones, as well as the idea of AI detecting patterns of growth for harvest planning, and for control of fires and remapping of harvesting schedules. The estimate of labor savings here is remarkable. I continue to work with agricultural examples.

Drones That Do the Work of 500 Farmers Are Transforming Palm Oil
Bloomberg    By Anuradha Raghu

Commercial flying drones are automating the harvesting and maintenance of palm oil farms in Malaysia and Indonesia. Drones can spot fires, collect data on whether crops have sufficient water and nutrients, and detect leakages in irrigation systems. William Tao at drone-based service provider Insight Robotics said just one drone can capture images of approximately 2,500 hectares (more than 6,000 acres) of oil palms daily, adding that many plantation owners use artificial intelligence to analyze the huge numbers of drone images they receive in a matter of hours, rather than weeks. The data captured by drones' cameras is helpful in determining the environmental impact of palm oil and palm farms.  ... " 

Waves Enterprise Blockchain

I have been examining a number of applications of smart contracts recently, including those more closely connected to specific goal-oriented and regulatory agreements and tasks, with strong identity and process security. Here is another example. See my 'smart contracts' tag below for more examples.

Waves Enterprise blockchain unveils major updates and hundreds of smart contracts per minute  By Kyt Dotson in SiliconAngle

Fast-transaction blockchain distributed ledger provider Waves Enterprise, an extension of Waves Platform AG technology, Thursday announced major upgrades to its network that add significant improvements, putting it into the same class as the corporate blockchain Hyperledger Fabric.

Waves Enterprise, now in full Version 1.0, offers what the company says is a powerful universal blockchain solution aimed at corporations and the public sector. Key features added in this update include containerized smart contracts, greatly improved performance, an updated application programming interface for developers and an improved user interface for users.

“There is now a transition to a new generation of IT systems and a new system of interaction between companies and even people,” said Alexander Ivanov, founder of Waves. “We are talking about the interaction between companies, which can be transferred to new tracks. This became possible with the advent of blockchain. This technology is designed to shape ecosystems in which participants trust each other.”

This is a major release for Waves Enterprise and deploys features that implement important new functions for enterprise clients. To start, Waves Enterprise has implemented an authorization service that will adhere to enterprise-level security using the oAuth 2.0 specification. Previously the Waves API was open, thus accessible publicly and available for anyone.   ... "

Smart Cities Based on Good Models

Here energy is the starting point, but other aspects of city resource use, transportation processes, risk analyses, and other goal-oriented and future predictive models should also be included, along with a base foundational descriptive model to work from.

Modeling Every Building in America Starts with Chattanooga
Oak Ridge Leadership Computing Facility
Rachel Harken

A team of researchers at the U.S. Department of Energy’s Oak Ridge National Laboratory (ORNL) has developed a building energy model that automatically extracts high-level building data from publicly available information. The team demonstrated its new approach by using the Oak Ridge Leadership Computing Facility's (OLCF's) Cray XK7 Titan supercomputer to model the energy usage of every building serviced by the Electric Power Board (EPB) of Chattanooga, TN. The supercomputer found the electric utility could save between $11 million and $35 million a year by adjusting electricity usage during critical peak times. Said ORNL’s Joshua New, "We're not just creating these models and doing what-if analyses in the blind. We have an error rate for every building on how closely we're matching that 15-minute energy use."  ... "

Alphabet X Everyday Robot

Early tasks will have these robots sorting trash. In practice a tough job, with vision and other sensory aspects. Learning also means adapting, so the hardest thing might be to adapt to a changing stream. Adapting is a form of maintenance, and every ML project I have worked on has underdesigned that part of the problem. So I am looking forward to more from this effort.

Alphabet X’s “Everyday Robot” project is making machines that learn as they go in Technology Review

The news: Alphabet X, the company’s early research and development division, has unveiled the Everyday Robot project, whose aim is to develop a “general-purpose learning robot.” The idea is to equip robots with cameras and complex machine-learning software, letting them observe the world around them and learn from it without needing to be taught every potential situation they may encounter.   ....  "

Impacts of RPA

Some good thoughts, also some obvious ones. In our own experience we first tried to model the actual process exactly and directly, but soon found it was not possible for many reasons. I also disagree that these have to be low-value or low-complexity tasks. ....

RPA Impacts Employee And Customer Experiences — And That’s A Big Deal   By Kate Leggett, Vice President, Principal Analyst

Customer service organizations use robotic process automation (RPA) as a tactical and short-term approach to digitize common agent tasks. There are two forms of RPA: unattended and attended. A task can start with an agent and be supported by attended automation, which can kick off unattended RPA to complete the process (a minimal sketch of that handoff follows the list below).

Customer service leaders use RPA to:

Standardize work to better serve customers. RPA automates agent tasks within rules-based processes such as launching apps, cutting and pasting from different apps, and basic computations. This makes agent actions more consistent and increases their throughput. Return on investment is easy to quantify, as brands know what every second of their agents’ time costs.

Uplevel employees’ confidence so that they can better nurture customers. RPA automates repetitive, low-value tasks that interfere with core agent activities: call wrap-up tasks, call notes, and data entry. RPA allows agents to focus on adding customer value, solving customer problems, and strengthening customer relationships.

Speed up agent work to improve customer experiences. RPA robots can perform tasks four to five times faster than agents, streamlining inquiry capture and resolution and improving handle times and service-level agreements.

Deliver actionable business insights to better align with customer expectations. RPA reduces manual errors, which translates to higher-quality data. RPA robots also interact with legacy systems to uncover data that was previously too labor-intensive to extract. This lets organizations mine broader and more reliable data sets to reveal new insights. .... "
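
As a minimal sketch of the attended-to-unattended handoff mentioned before the list above (with hypothetical task names and fields, not any specific RPA product), an attended step collected alongside an agent can enqueue a job that an unattended bot completes later:

# Minimal sketch of an attended step handing work to an unattended bot via a queue.
# Task names and fields are hypothetical, not from any specific RPA product.

from queue import Queue

unattended_jobs = Queue()

def attended_step(agent_name: str, case_id: str, notes: str):
    """Runs alongside the agent: captures the case and enqueues the follow-up work."""
    unattended_jobs.put({"case_id": case_id, "notes": notes, "raised_by": agent_name})

def unattended_bot():
    """Runs without a human: drains the queue and completes the rules-based steps."""
    while not unattended_jobs.empty():
        job = unattended_jobs.get()
        # ...update the CRM, send the confirmation email, log the wrap-up, etc.
        print(f"completed case {job['case_id']} raised by {job['raised_by']}")

attended_step("agent_042", "CASE-1001", "customer requested address change")
unattended_bot()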

Knowledge Graphs in Industry at Scale

How knowledge graphs are built and used in industry, emphasizing realistic scale.

Industry-Scale Knowledge Graphs: Lessons and Challenges
By Natasha Noy, Yuqing Gao, Anshu Jain, Anant Narayanan, Alan Patterson, Jamie Taylor
Communications of the ACM, August 2019, Vol. 62 No. 8, Pages 36-43  10.1145/3331166

Knowledge graphs are critical to many enterprises today: They provide the structured data and factual knowledge that drive many products and make them more intelligent and "magical."

In general, a knowledge graph describes objects of interest and connections between them. For example, a knowledge graph may have nodes for a movie, the actors in this movie, the director, and so on. Each node may have properties such as an actor's name and age. There may be nodes for multiple movies involving a particular actor. The user can then traverse the knowledge graph to collect information on all the movies in which the actor appeared or, if applicable, directed.

Many practical implementations impose constraints on the links in knowledge graphs by defining a schema or ontology. For example, a link from a movie to its director must connect an object of type Movie to an object of type Person. In some cases the links themselves might have their own properties: a link connecting an actor and a movie might have the name of the specific role the actor played. Similarly, a link connecting a politician with a specific role in government might have the time period during which the politician held that role.

Knowledge graphs and similar structures usually provide a shared substrate of knowledge within an organization, allowing different products and applications to use similar vocabulary and to reuse definitions and descriptions that others create. Furthermore, they usually provide a compact formal representation that developers can use to infer new facts and build up the knowledge—for example, using the graph connecting movies and actors to find out which actors frequently appear in movies together.

This article looks at the knowledge graphs of five diverse tech companies, comparing the similarities and differences in their respective experiences of building and using the graphs, and discussing the challenges that all knowledge-driven enterprises face today. The collection of knowledge graphs discussed here covers the breadth of applications, from search, to product descriptions, to social networks:

Both Microsoft's Bing knowledge graph and the Google Knowledge Graph support search and answering questions in search and during conversations. Starting with the descriptions and connections of people, places, things, and organizations, these graphs include general knowledge about the world.

Facebook has the world's largest social graph, which also includes information about music, movies, celebrities, and places that Facebook users care about.

The Product Knowledge Graph at eBay, currently under development, will encode semantic knowledge about products, entities, and the relationships between them and the external world.

The Knowledge Graph Framework for IBM's Watson Discovery offerings addresses two requirements: one focusing on the use case of discovering nonobvious information, the other on offering a "Build your own knowledge graph" framework. .... "
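
The movie example from the article is easy to make concrete: a knowledge graph can be stored as subject-predicate-object triples and traversed with simple queries. The sketch below is a toy, generic illustration, not the graph infrastructure of any of the companies discussed.

# Toy subject-predicate-object triple store with a simple traversal query.
# Generic illustration of the movie example, not any company's knowledge graph.

from collections import defaultdict

class TripleStore:
    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(set)
    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))
        self.by_subject[subject].add((predicate, obj))
    def objects(self, subject, predicate):
        return [o for p, o in self.by_subject[subject] if p == predicate]

kg = TripleStore()
kg.add("The Matrix", "type", "Movie")
kg.add("The Matrix", "directed_by", "Lana Wachowski")
kg.add("The Matrix", "stars", "Keanu Reeves")
kg.add("John Wick", "type", "Movie")
kg.add("John Wick", "stars", "Keanu Reeves")

# Traverse: all movies in which a given actor appears.
def movies_with(actor):
    return [s for s, p, o in kg.triples if p == "stars" and o == actor]

print(movies_with("Keanu Reeves"))   # ['The Matrix', 'John Wick'] (set order may vary)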

Saturday, November 23, 2019

RPA Usage in Banking

A good area to explore, since its methods and processes are better defined. Our own work in RPA-like AI worked best in financial areas. These were also the easiest to maintain over time, being more stable.

RPA in Banking – Use-cases, Benefits and Steps       By Mitul Makadia 

It is no secret that today, banks and other financial institutions have to evolve continually to provide the best customer experience to the users and remain competitive in the saturated financial sector. With massive counter competition from virtual banking solutions, banks are under immense pressure to boost efficiency and optimize the resources. The scarcity of skilled resources, a sudden surge in personnel costs, and the need to improve process efficiencies are some of the other challenges that the banking and financial sector face today. This has led to the adoption of Robotic Process Automation (RPA) in banking and finance.

In this blog, we are going to discuss various aspects of robotic process automation in financial services along with its benefits, opportunities, implementation strategy, and use cases.

Robotic Process Automation In Banking
Robotics in banking & finance is primarily defined as the use of a powerful robotic process automation software to –   .... " 

What's Ahead for Machine Programming

A long-time idea, and likely to emerge strongly. Hand coding will eventually be rare. A podcast on the topic is below.

Machine Programming: What Lies Ahead?

Podcast and transcript at the link
Intel’s Justin Gottschlich discusses how machine programming is at an inflection point.

Imagine software that creates its own software. That is what machine programming is all about. Like other fields of artificial intelligence, machine programming has been around since the 1950s, but it is now at an inflection point.

Machine programming potentially can redefine many industries, including software development, autonomous vehicles or financial services, according to Justin Gottschlich, head of machine programming research at Intel Labs. This newly formed research group at Intel focuses on the promise of machine programming, which is a fusion of machine learning, formal methods, programming languages, compilers and computer systems.

In a conversation with Knowledge@Wharton during a visit to Penn, Gottschlich discusses why he believes the historical way of programming is flawed, what is driving the growth of machine programming, the impact it can have and other related issues. He was a keynote speaker at the PRECISE Industry Day 2019 organized by the PRECISE Center at Penn Engineering. 


Following is an edited transcript of the conversation.

Knowledge@Wharton: Given the buzz around AI, a lot of people are familiar with machine learning. However, most of us don’t have a clue about what “machine programming” means. Could you explain the difference between the two?

Justin Gottschlich: At the highest level, machine learning can be considered a subset of artificial intelligence. There are many different types of machine learning techniques. One of the most prominent at present is called “deep neural networks.” This has contributed a lot towards the tremendous progress that we’re seeing over the last decade. Machine programming is about automating the development and maintenance of software. You can think of machine learning being a subset of machine programming. But in addition to using machine learning techniques, which are approximate types of solutions, in machine programming we also use other things like formal program synthesis techniques that provide mathematical guarantees to ensure precise software behavior. You can kind of think of these two points as a spectrum. You have approximate solutions at one end and precise solutions at the other end and in between there’s a fusion of several different ways that you can combine these. Every one of these things is part of the bigger landscape of machine programming.

Knowledge@Wharton: So machine programming is when you create software that can create more software?

Gottschlich: Right.

Knowledge@Wharton: How would that happen? Could you give an example?  ....  " 

The Certainty of Having to Deal with Uncertainty

Late to this article, nicely done.


Uncertainty    By Peter J. Denning, Ted G. Lewis
Communications of the ACM, December 2019, Vol. 62 No. 12, Pages 26-28
10.1145/3368093

In a famous episode in the "I Love Lucy" television series—"Job Switching," better known as the chocolate factory episode—Lucy and her best-friend coworker Ethel are tasked to wrap chocolates flowing by on a conveyor belt in front of them. Each time they get better at the task, the conveyor belt speeds up. Eventually they cannot keep up and the whole scene collapses into chaos.

The threshold between order and chaos seems thin. A small perturbation—such as a slight increase in the speed of Lucy's conveyor belt—can either do nothing or it can trigger an avalanche of disorder. The speed of events within an avalanche overwhelms us, sweeps away structures that preserve order, and robs our ability to function. Quite a number of disasters, natural or human-made, have an avalanche character—earthquakes, snow cascades, infrastructure collapse during a hurricane, or building collapse in a terror attack. Disaster-recovery planners would dearly love to predict the onset of these events so that people can safely flee and first responders can restore order with recovery resources standing in reserve.

Disruptive innovation is also a form of avalanche. Businesses hope their new products will "go viral" and sweep away competitors. Competitors want to anticipate market avalanches and side-step them. Leaders and planners would love to predict when an avalanche might occur and how extensive it might be.

In recent years complexity theory has given us a mathematics to deal with systems where avalanches are possible. Can this theory make the needed predictions where classical statistics cannot? Sadly, complexity theory cannot do this. The theory is very good at explaining avalanches after they have happened, but generally useless for predicting when they will occur.  .... " 
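
To make the avalanche idea concrete, here is a minimal sketch of the Bak-Tang-Wiesenfeld sandpile, a standard toy model of avalanche dynamics from the complexity literature (my illustration, not code from the article). Most added grains do nothing; occasionally one triggers a cascade whose size cannot be predicted in advance, which is exactly the point Denning and Lewis make.

# Toy sandpile (Bak-Tang-Wiesenfeld) illustrating avalanche dynamics.
# Adding one grain usually does nothing; occasionally it triggers a
# large, unpredictable cascade of topplings.
import random

N = 20                      # grid size
THRESHOLD = 4               # a cell topples when it holds 4 or more grains
grid = [[0] * N for _ in range(N)]

def drop_grain():
    """Drop one grain at a random cell and return the avalanche size."""
    r, c = random.randrange(N), random.randrange(N)
    grid[r][c] += 1
    topples = 0
    unstable = [(r, c)] if grid[r][c] >= THRESHOLD else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < THRESHOLD:
            continue
        grid[i][j] -= THRESHOLD
        topples += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:       # grains falling off the edge are lost
                grid[ni][nj] += 1
                if grid[ni][nj] >= THRESHOLD:
                    unstable.append((ni, nj))
    return topples

sizes = [drop_grain() for _ in range(20000)]
print("largest avalanche:", max(sizes), "topplings out of", len(sizes), "drops")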

On the State of AI and Machine Learning

Brought to my attention, a short non technical document on the topic.

The State of AI and Machine Learning    32 pages

From Figure Eight
Bridging the AI Gap Between Data Scientists and Line-of-Business Owners

01  Introduction
02  About the Survey
03  Why are Data Scientists Not 100% Satisfied in Their Jobs?
04  The Future is … Human? Machine? Cyborg?
05  Line of Business Budgets Suggest Growing Importance of AI Initiatives
06  Bridging the AI Gap
07  Crawl, Walk, Run with AI
08  Conclusion
09  References
...." 

Epidemics and use of Personal GPS Data

Recall our work with bioterrorism modeling, link below.  Not just a matter of disease epidemics.

During Epidemics, Access to GPS Data from Smartphones Can Be Crucial
Ecole Polytechnique Fédérale de Lausanne (Switzerland)
By Sandrine Perroud

Researchers at Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and the Massachusetts Institute of Technology have found that human mobility is a major factor in the spread of vector-borne diseases such as malaria and dengue. The researchers used mobile phone data and census models to effectively predict the spatial distribution of dengue cases in Singapore, based on data from actual reported cases in 2013 and 2014. The team also demonstrated that the types of data used in their study could be obtained without infringing on people's privacy. Said EPFL's Emanuele Massaro, "We need to think seriously about changing the law around accessing this kind of information – not just for scientific research, but for wider prevention and public health reasons." ... '
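
As a purely illustrative toy (not the EPFL/MIT model), the core idea of coupling mobility data with case data can be reduced to weighting each district's expected exposure by how many visits it receives from districts reporting cases. All numbers below are made up.

# Toy mobility-weighted exposure estimate (illustrative only).
# mobility[i][j]: fraction of district i's residents who visit district j.
# cases[i]: reported cases in district i.
mobility = [
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.3, 0.5],
]
cases = [40, 5, 0]

num_districts = len(cases)
exposure = [
    sum(mobility[i][j] * cases[i] for i in range(num_districts))
    for j in range(num_districts)
]
print(exposure)   # district 2 gains expected exposure despite reporting zero cases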

Friday, November 22, 2019

Locating Shooters Using Smartphone Videos

Seems like quite a step forward: basically triangulation from smartphone microphones. Examples and maps at the link. The system has been released as open source code for testing.

Carnegie Mellon System Locates Shooters Using Smartphone Videos
Carnegie Mellon University  By Byron Spice

Researchers at Carnegie Mellon University (CMU) have developed a system that can accurately locate a shooter based on video recordings from as few as three smartphones. The researchers tested the Video Event Reconstruction and Analysis (VERA) system using three video recordings from the 2017 mass shooting in Las Vegas that left 58 people dead and hundreds wounded; it correctly estimated the shooter's actual location. The system uses machine learning to match up video feeds and calculate the position of each camera based on what it is seeing; it also tracks the time delay between the sound of a shock wave caused by a bullet’s passage through the air, and the sound of the muzzle blast from the gun. Using video from three or more smartphones allows the direction from which the shots were fired to be triangulated. Said CMU’s Alexander Hauptmann, "When we began, we didn't think you could detect the crack with a smartphone because it's really short, but it turns out today's cellphone microphones are pretty good."  ... " 
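
VERA's pipeline is more involved (it also separates the bullet's shock wave from the muzzle blast), but the triangulation step can be illustrated with a generic time-difference-of-arrival least-squares fit over known microphone positions. The sketch below is hypothetical and not CMU's code; the positions and arrival times are simulated.

# Generic TDOA (time-difference-of-arrival) source localization sketch.
# Given microphone positions and relative arrival times of the muzzle blast,
# solve for the source position that best explains the time differences.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0                                         # m/s, approximate

mics = np.array([[0.0, 0.0], [120.0, 10.0], [60.0, 90.0], [200.0, 40.0]])   # known recorder positions (m)
true_source = np.array([400.0, 300.0])                          # hypothetical shooter location

# Simulate arrival times relative to the first microphone (noise-free).
dists = np.linalg.norm(mics - true_source, axis=1)
arrival_times = (dists - dists[0]) / SPEED_OF_SOUND

def residuals(p):
    d = np.linalg.norm(mics - p, axis=1)
    predicted = (d - d[0]) / SPEED_OF_SOUND
    return predicted - arrival_times

estimate = least_squares(residuals, x0=np.array([100.0, 100.0])).x
print("estimated source:", estimate)    # lands near (400, 300) in this noise-free toy setup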

Human Robot Interaction from Wired25

Will become increasingly important, and soon much more so beyond the factory floor.

Prof. Anca Dragan Talks About Human-Robot Interaction for WIRED

Prof. Anca Dragan gave a talk as part of the WIRED25 summit, explaining some of the challenges robots face when interacting with people. First, robots that share space with people, from autonomous cars to quadrotors to indoor mobile robots, need to anticipate what people plan on doing and make sure they can stay out of the way. This is already hard, because robots are not mind readers, and yet they need access to a rough simulator of us, humans, that they can use to help them decide how to act. The bar gets raised when it’s crowded, because then robots have to also understand how they can influence the actions that people take, like getting another driver to slow down and make space for a merging autonomous car. And what if the person decides to accelerate instead? Find out about the ways in which robots can negotiate these situations in the video below.
   ..... "  

Use the link for the video.

via BAIR (UC Berkeley Artificial Intelligence Research) blog

Amazon Debuts Dash Smart Shelf

If pushing the Dash Button was too hard for you, here is the next reordering play from Amazon: a smart shelf that automatically reorders when it holds less than a given weight of product. Unless you are very tidy about where things are put, this is more of an office thing, and even there I have seen many less organized offices. Amazon has thought ahead about temporary changes in stock, though perhaps with a narrower demand profile than expected. We did test weight-triggered solutions on store shelves to signal out-of-stocks in grocery. This further adds another Amazon Alexa presence to the office. In the link below, pictures of its suggested uses.

Amazon debuts the Dash Smart Shelf to gain a bigger presence in the office   By Maria Deutcher in SiliconAngle

After getting its Echo smart speakers into tens of millions of homes, Amazon.com Inc. is now hoping to establish a bigger presence in the workplace. 

The company on Thursday pulled back the curtains on the Dash Smart Shelf, an internet-connected scale for businesses that automatically reorders office supplies from Amazon when they're about to run out. Users can place common items like printer paper, pens and notepads atop the device. When the Dash Smart Shelf senses that the weight drops below a certain level, it will purchase a fresh batch of whatever office supplies it's holding.

There’s an alternate setting that companies can select to have the scale notify the office manager instead of placing orders on its own. To prevent accidental purchases, Amazon has designed the onboard software to detect if supplies are temporarily removed by a worker.

The Dash Smart Shelf comes in three sizes, ranging from 7 inches by 7 inches to 18 inches by 13 inches. All the versions are one inch tall and can be powered either via a wall plug or four AAA batteries.  .... "
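
The reorder logic described above, a weight threshold plus protection against temporary removals, can be sketched in a few lines. This is a hypothetical illustration; the names, threshold, and debounce window are mine, not Amazon's firmware.

# Hypothetical sketch of weight-threshold reordering with a debounce
# for temporarily removed items (not Amazon's actual firmware logic).
import time

REORDER_THRESHOLD_G = 200.0     # reorder when stock weight falls below this (grams)
DEBOUNCE_SECONDS = 3600         # weight must stay low this long before ordering
NOTIFY_ONLY = False             # alternate mode: alert the office manager instead

def monitor(read_weight, place_order, notify):
    """Poll the scale and reorder (or notify) once low weight persists."""
    low_since = None
    while True:
        weight = read_weight()
        if weight >= REORDER_THRESHOLD_G:
            low_since = None                      # stock is back; reset the timer
        elif low_since is None:
            low_since = time.time()               # stock just dropped below threshold
        elif time.time() - low_since >= DEBOUNCE_SECONDS:
            if NOTIFY_ONLY:
                notify("Supplies running low")
            else:
                place_order()
            low_since = None
        time.sleep(60)

The debounce is what keeps a worker briefly lifting the stack of paper from triggering an accidental purchase.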

Bots Work Better if they Impersonate

Interesting results, with shades of the implications of the 'uncanny valley' and the need to define 'success' in strong context. Our guard is already up now when we 'meet' bots online. Expectations are set depending on this context. The 'prisoner's dilemma' setting is interesting to test, but is it a context that humans often encounter? We do need to understand much better how humans work with machines.

Bots Are More Successful If They Impersonate Humans
By Max Planck Institute for Human Development
November 21, 2019

Researchers at the Max Planck Institute for Human Development, along with colleagues in the U.S. and the United Arab Emirates, found that bots are more successful at certain human-machine interactions, but only if they are allowed to hide their non-human identity.

The researchers asked nearly 700 volunteers in an online cooperation game to interact with a human or an artificial partner. In the game, known as the prisoner's dilemma, players can either act in their own self-interest to exploit the other player, or act cooperatively with advantages for both sides. However, some of the participants interacting with another human were told they were playing with a bot, and vice versa.

The researchers found that bots impersonating humans were more successful in convincing their gaming partners to cooperate. However, as soon as they revealed their true identity, cooperation rates decreased.

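For readers unfamiliar with the game used in the study, here is a minimal prisoner's dilemma sketch with the standard payoff matrix and an illustrative, made-up cooperation rate that drops once a bot partner is disclosed; the rates are not from the paper.

# Standard one-shot prisoner's dilemma payoffs plus an illustrative
# (made-up) cooperation rate that drops once a bot partner is disclosed.
import random

# (my_move, partner_move) -> my payoff; C = cooperate, D = defect
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play_round(p_cooperate_human, p_cooperate_bot):
    human = "C" if random.random() < p_cooperate_human else "D"
    bot = "C" if random.random() < p_cooperate_bot else "D"
    return PAYOFF[(human, bot)] + PAYOFF[(bot, human)]   # joint payoff

def average_joint_payoff(p_human, p_bot, rounds=100_000):
    return sum(play_round(p_human, p_bot) for _ in range(rounds)) / rounds

# Hypothetical rates: people cooperate less once told the partner is a bot.
print("bot undisclosed:", average_joint_payoff(0.7, 0.7))
print("bot disclosed:  ", average_joint_payoff(0.4, 0.7))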

AI Birdwatching

Seeing and imaging continue to be the areas where AI and machine learning make the most progress. Here is another tagging example. I am currently working on a horticultural example with plants.

This AI birdwatcher lets you 'see' through the eyes of a machine
by Robin A. Smith, Duke University

It can take years of birdwatching experience to tell one species from the next. But using an artificial intelligence technique called deep learning, Duke University researchers have trained a computer to identify up to 200 species of birds from just a photo.

The real innovation, however, is that the A.I. tool also shows its thinking, in a way that even someone who doesn't know a penguin from a puffin can understand.

The team trained their deep neural network—algorithms based on the way the brain works—by feeding it 11,788 photos of 200 bird species to learn from, ranging from swimming ducks to hovering hummingbirds.

The researchers never told the network "this is a beak" or "these are wing feathers." Given a photo of a mystery bird, the network is able to pick out important patterns in the image and hazard a guess by comparing those patterns to typical species traits it has seen before.

Along the way it spits out a series of heat maps that essentially say: "This isn't just any warbler. It's a hooded warbler, and here are the features—like its masked head and yellow belly—that give it away."

Duke computer science Ph.D. student Chaofan Chen and undergraduate Oscar Li led the research, along with other team members of the Prediction Analysis Lab directed by Duke professor Cynthia Rudin.

They found their neural network is able to identify the correct species up to 84% of the time—on par with some of its best-performing counterparts, which don't reveal how they are able to tell, say, one sparrow from the next.

Rudin says their project is about more than naming birds. It's about visualizing what deep neural networks are really seeing when they look at an image.  .... " 
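
The Duke model is a prototype-based network that can point to the image regions driving each decision; the sketch below is only a generic transfer-learning baseline for the same kind of 200-class bird task, using torchvision, and it assumes a folder-per-species image directory. It is my illustration, not the paper's code.

# Generic transfer-learning baseline for a 200-class bird classifier
# (illustrative; the Duke model adds interpretable prototype layers).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_SPECIES = 200
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Assumes an ImageFolder-style directory with one folder per species.
train_set = datasets.ImageFolder("birds/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)   # new classification head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # one epoch of fine-tuning
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()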

Thursday, November 21, 2019

Informs Magazine Nov/Dec 2019

Informs Magazine:  Nov/Dec 2019  

Featured articles:

Unchaining data scientists to make DataOps work
The three biggest trends in AI and ML right now
Survey: Scaling technology innovation can double revenue growth
Recounting 2019 Analytics Society Accomplishments
Preparing your data for analytics
New research could significantly reduce flight delays
Navigating the ‘office’ politics of analytics
CEOs say artificial intelligence tops disruptive technology
Beyond Cross Industry Standard Process for Data Mining
Aviation Safety: ‘Everything looks a little different’ after 737 Max
Applications for 2020 IAAA extended to December 2
2020 Syngenta Crop Challenge in Analytics

(Much more at the link)

Emergence of Ballistic Drones

With an example video. Policing, military?....  Inevitable.

Watch a 'transforming' drone blast out of a cannon
Ballistic drones could aid emergency response teams and space exploration.   By Christine Fisher, @cfisherwrites

Researchers launched a drone from a pneumatic baseball pitching machine strapped to a truck traveling 50 miles per hour. They hope this ballistic launch method might lead to drones that are better suited for emergency response and space exploration missions.   ... " 

Predicting Driving Personalities

Seen this mentioned before. It would also likely be used by insurance companies. Right now driving personality can only be 'predicted' from accident frequency. This is another example of predicting behavior; will it quickly be accused of predictive bias?

Predicting People's Driving Personalities  By MIT News

Self-driving cars are coming. But for all their fancy sensors and intricate data-crunching abilities, even the most cutting-edge cars lack something that (almost) every 16-year-old with a learner's permit has: social awareness.

While autonomous technologies have improved substantially, they still ultimately view the drivers around them as obstacles made up of ones and zeros, rather than human beings with specific intentions, motivations, and personalities.

But recently a team led by researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has been exploring whether self-driving cars can be programmed to classify the social personalities of other drivers, so that they can better predict what different cars will do — and, therefore, be able to drive more safely among them.

In "Social Behavior for Autonomous Vehicles," the scientists integrate tools from social psychology to classify driving behavior with respect to how selfish or selfless a particular driver is.

Specifically, they use something called social value orientation (SVO), which represents the degree to which someone is selfish ("egoistic") versus altruistic or cooperative ("prosocial"). The system then estimates drivers' SVOs to create real-time driving trajectories for self-driving cars.  .... "
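
In the SVO formulation, selfishness is summarized by an angle, and a driver's utility weights its own reward against the reward of others. Here is a minimal sketch of that weighting with hypothetical reward numbers; it illustrates the general SVO idea, not the CSAIL planner.

# Social value orientation (SVO) as an angle weighting self vs. other reward.
# 0 degrees = purely egoistic; larger angles weight others' outcomes more.
import math

def svo_utility(own_reward, others_reward, svo_degrees):
    phi = math.radians(svo_degrees)
    return math.cos(phi) * own_reward + math.sin(phi) * others_reward

# Example decision: should I yield to a merging car? (hypothetical reward numbers)
own_if_yield, others_if_yield = -1.0, 3.0      # I lose a little time, they gain a lot
own_if_block, others_if_block = 0.5, -2.0

for svo in (0, 30, 60):                         # egoistic -> increasingly prosocial
    yield_u = svo_utility(own_if_yield, others_if_yield, svo)
    block_u = svo_utility(own_if_block, others_if_block, svo)
    print(svo, "deg:", "yield" if yield_u > block_u else "block")

With these numbers the egoistic driver blocks and the prosocial drivers yield, which is the kind of distinction a self-driving car could use when predicting other drivers.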

Invest in Experiences

It is all about experience and what we can learn and use again.

Why Investing in Experiences Rather Than Things Is Good for Your Business
Put down your phone and start living.
By Levi King, Co-founder and CEO of Nav, in Inc.

I grew up on a farm in Idaho, raised by loving parents on a budget so tight that I sometimes went to school without socks on rather than wearing the pink hand-me-down ones from my sisters.

Money was never all that important to me until I began making a lot of it and had to figure out how to spend it. Should I take a trip to Hawaii with my wife Rachel and our six amazing daughters, or buy that new Tesla I've had my eye on? A solo adventure in Australia, where I can regroup and recharge and catch up on my reading, or a bigger and better TV?

Time and time again, experiences have proven far more valuable than things. With a little experimentation, I suspect you'll learn the same lesson when it comes to spending money on your business.  ... "

Identification from Single Strand of Hair

New identification techniques.

Scientists can now identify someone from a single strand of hair
By Eva Frederick in Science Magazine 

A new forensic technique could have criminals—and some prosecutors—tearing their hair out: Researchers have developed a method they say can identify a person from as little as 1 centimeter of a single strand of hair—and that is eight times more sensitive than similar protein analysis techniques. If the new method ever makes it into the courtroom, it could greatly expand the ability to identify the people at the scene of a crime.  .... " 

Space Captures 3D LED Models of Contents

A new kind of film capture or  cinema?

Room-Sized LED Egg Captures Amazing 3D Models of People Inside It
in TechCrunch
By Devin Coldewey

Google researchers have designed a prismatic light-emitting-diode (LED) "egg," to generate remarkable three-dimensional (3D) and relightable models of people within it. The egg uses volumetric capture, in which multiple cameras in a 360-degree arrangement render a photorealistic representation of a subject in motion, while also allowing the model to be realistically illuminated by virtual light sources. This enables the model's placement in any virtual environment where lighting can change. The egg contains 331 LED lights that can produce any hue, and that shift in a specific pattern during subject capture, creating a lighting-agnostic model. When placed in the virtual setting, the models reflect the setting's lighting, and not the lighting of the egg itself.  ... "


Wednesday, November 20, 2019

Types of Machine Learning

Once again an excellent piece by Jason Brownlee, an intro to a number of kinds of machine learning. Abstract intro below; more at the link, and do subscribe:

14 Different Types of Learning in Machine Learning
by Jason Brownlee on November 11, 2019 in Start Machine Learning

Machine learning is a large field of study that overlaps with and inherits ideas from many related fields such as artificial intelligence.

The focus of the field is learning, that is, acquiring skills or knowledge from experience. Most commonly, this means synthesizing useful concepts from historical data.

As such, there are many different types of learning that you may encounter as a practitioner in the field of machine learning: from whole fields of study to specific techniques.

In this post, you will discover a gentle introduction to the different types of learning that you may encounter in the field of machine learning.

After reading this post, you will know:

Fields of study, such as supervised, unsupervised, and reinforcement learning.
Hybrid types of learning, such as semi-supervised and self-supervised learning.
Broad techniques, such as active, online, and transfer learning.
Let’s get started.
Types of Learning
Given that the focus of the field of machine learning is “learning,” there are many types that you may encounter as a practitioner.

Some types of learning describe whole subfields of study comprised of many different types of algorithms such as “supervised learning.” Others describe powerful techniques that you can use on your projects, such as “transfer learning.”

There are perhaps 14 types of learning that you must be familiar with as a machine learning practitioner; they are:

Learning Problems

1. Supervised Learning
2. Unsupervised Learning
3. Reinforcement Learning

Hybrid Learning Problems

4. Semi-Supervised Learning
5. Self-Supervised Learning
6. Multi-Instance Learning

Statistical Inference

7. Inductive Learning
8. Deductive Inference
9. Transductive Learning

Learning Techniques

10. Multi-Task Learning
11. Active Learning
12. Online Learning
13. Transfer Learning
14. Ensemble Learning

In the following sections, we will take a closer look at each in turn.

Did I miss an important type of learning?
Let me know in the comments below.

Learning Problems ..... " 
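
As a small companion to the list above, the sketch below contrasts three of these settings (supervised, unsupervised, and semi-supervised) on one synthetic dataset with scikit-learn. It is my illustration, not code from Brownlee's post.

# Supervised vs. unsupervised vs. semi-supervised learning on one toy dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.semi_supervised import LabelSpreading

X, y = make_classification(n_samples=500, n_features=10, n_informative=5, random_state=0)

# 1. Supervised: every example has a label.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# 2. Unsupervised: no labels at all, just structure in X.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# 3. Semi-supervised: only 10% of labels are known (-1 marks the unlabeled rest).
y_partial = np.where(np.random.RandomState(0).rand(len(y)) < 0.1, y, -1)
semi = LabelSpreading().fit(X, y_partial)
print("semi-supervised accuracy:", (semi.transduction_ == y).mean())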

Improving Technical Expertise for Judges

I have now seen a number of attempts at applying AI-type pattern solutions to law, so judges need to understand the implications.

How to Improve Technical Expertise for Judges In AI-Related Litigation
By Brookings Institution

Judges are tackling emerging AI issues and creating case law that will impact the future course of technological innovation. Lawyers, as well as judges and their staff, use machine learning to improve case law searches for relevant legal authority to cite in briefs and decisions. Document production and technology-assisted reviews use AI to search for relevant documents to produce and to mine those documents for the information most important to a party's claims. Some scholars and practitioners are already using AI to predict the outcome of cases with algorithms trained on tens of thousands of prior cases.

It is vital to improve judges' ability to understand the technical issues in AI-related litigation. There are several things court systems and professional organizations should do to enhance the technical capabilities of judges. ... "

Creating Randomness

We may intuitively think that order is good and disorder is bad, but randomness is useful, for example in preparing samples of data to train learning systems. This mathematical article touches on the issue.

Mathematicians Calculate How Randomness Creeps In by Marcus Woo in Quanta Mag
The goal of a 15 puzzle is to put numbered tiles in order. Now mathematicians have solved the opposite problem — how to scramble one.

You’ve probably played a 15 puzzle. It’s that frustrating yet addictive game with 15 tiles and a single empty space in a 4-by-4 grid. The goal is to slide the tiles around and put them in numerical order or, in some versions, arrange them to form an image.

The game has become a staple of party-favor bags since it was introduced in the 1870s. It has also caught the attention of mathematicians, who’ve spent more than a century studying solutions to puzzles of different sizes and startling configurations.

Now, a new proof solves the 15 puzzle, but in reverse. The mathematicians Yang Chu and Robert Hough of Stony Brook University have identified the number of moves required to turn an ordered board into a random one.  ... "
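
The reverse problem the proof addresses, how many random slides it takes to scramble a board, is easy to simulate even if it is hard to analyze. The sketch below shuffles a 15 puzzle by repeatedly sliding a random neighboring tile into the blank; it is my illustration, not the paper's analysis.

# Scramble a 15 puzzle by random legal moves (simulation sketch only;
# the paper analyzes how many such moves are needed to approach random).
import random

N = 4
board = list(range(1, N * N)) + [0]          # 0 is the blank; start from the solved board

def neighbors(blank):
    """Yield board indices adjacent to the blank square."""
    r, c = divmod(blank, N)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < N and 0 <= nc < N:
            yield nr * N + nc

def scramble(board, moves):
    blank = board.index(0)
    for _ in range(moves):
        swap = random.choice(list(neighbors(blank)))
        board[blank], board[swap] = board[swap], board[blank]
        blank = swap
    return board

print(scramble(board, 10_000))               # an (approximately) random reachable position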

Ways Microsoft wants to Change Health Care

Brought to my attention by Ryan Doherty, whose company, Midmark, is always doing interesting things in the healthcare tech space. I have been following them for years.

The 4 big ways Microsoft wants to change health care
By Jackie Kimmell, Senior Analyst in Advisory.com

When you think of the Big Tech giants moving into health care, you probably think of Amazon, Google, Apple, and IBM—companies that have made clear they want to dramatically change the way America provides health care.

But a sleeping tech giant may be waiting in the wings. After a few tentative, not-entirely-successful stabs at health care in past years, Microsoft is once again aiming to take on the industry, securing high-profile deals with the likes of Humana, Novartis, UCLA Health, and Providence St. Joseph.

But will Microsoft be successful in its new strategy? And, if so, what will it mean for health care? Let's dive into the company's four big health care bets to learn more.

1. Microsoft wants to dethrone Amazon as health care's cloud provider of choice
As health systems face a need for increased data capacity and enhanced cybersecurity, they're increasingly moving their data storage off their premises and into the cloud. Health care companies are projected to spend $11.4 billion on cloud computing in 2019, making it an extremely lucrative market for Big Tech to enter.

Amazon has dominated the cloud market since it first began offering services in 2006, and its platform, AWS, still owns a 48% share of the $32.4 billion market worldwide. With a 15.5% share, Microsoft's Azure cloud comes in a distant second, but it's been gaining ground, with more than 60% growth from 2017 through 2018. While Amazon still offers the greatest depth of cloud products, Microsoft has been rushing to add similarly cutting-edge offerings, including machine learning, Internet of Things (IoT) features, and serverless computing, to its arsenal.

And health care-specific services have been a major component of Microsoft's strategy. Through its new Microsoft Healthcare team, the company is following a "blueprint" aimed at securely bringing more health data to the cloud and creating services to process that information in new ways, according to Peter Lee, head of Microsoft Healthcare....  " 

Understanding Impact of AI on Labor

Thoughtful piece with some supporting visuals.

From The Proceedings of the National Academy of Sciences (PNAS).   https://pnas.org

Toward Understanding the Impact of Artificial Intelligence on Labor

Morgan R. Frank, David Autor, James E. Bessen, Erik Brynjolfsson, Manuel Cebrian, David J. Deming, Maryann Feldman, Matthew Groh, José Lobo, Esteban Moro, Dashun Wang, Hyejin Youn, and Iyad Rahwan

PNAS April 2, 2019 116 (14) 6531-6539; first published March 25, 2019 https://doi.org/10.1073/pnas.1900949116
Edited by Jose A. Scheinkman, Columbia University, New York, NY, and approved February 28, 2019 (received for review January 18, 2019)

Abstract
Rapid advances in artificial intelligence (AI) and automation technologies have the potential to significantly disrupt labor markets. While AI and automation can augment the productivity of some workers, they can replace the work done by others and will likely transform almost all occupations at least to some degree. Rising automation is happening in a period of growing economic inequality, raising fears of mass technological unemployment and a renewed call for policy efforts to address the consequences of technological change. 

In this paper we discuss the barriers that inhibit scientists from measuring the effects of AI and automation on the future of work. These barriers include the lack of high-quality data about the nature of work (e.g., the dynamic requirements of occupations), lack of empirically informed models of key microlevel processes (e.g., skill substitution and human–machine complementarity), and insufficient understanding of how cognitive technologies interact with broader economic dynamics and institutional mechanisms (e.g., urban migration and international trade policy).

Overcoming these barriers requires improvements in the longitudinal and spatial resolution of data, as well as refinements to data on workplace skills. These improvements will enable multidisciplinary research to quantitatively monitor and predict the complex evolution of work in tandem with technological progress. Finally, given the fundamental uncertainty in predicting technological change, we recommend developing a decision framework that focuses on resilience to unexpected scenarios in addition to general equilibrium behavior. .... "

Full PDF, 9 Pages

See also:  Brookings: Putting Workers in the Future of Work.