
Wednesday, June 03, 2020

Amazon Streamlines Building Smart Home Alexa Skills

Nice idea, have often found myself groping for some skill name.  Should have been done sooner.

Amazon Streamlines Building Smart Home Alexa Skills
 by Eric Hal Schwartz in Voicebot.ai

Alexa developers can now combine their apps with smart home devices. Amazon had been piloting the Multi-Capability Skills feature for some time, but the option is now generally available to developers.

Until now, an Alexa developer would need one app to handle smart home capabilities of a device, and another with a different name for features that Alexa’s smart home API didn’t support. With Multi-Capability Skills, both sides are combined into a single voice app, handling both custom skills and the smart home skills built into Alexa. It basically makes an Alexa skill flexible enough to handle custom commands within the existing framework of the voice app. Most smart home devices have an on and off switch, for instance, but a command to change lighting colors is only useful for the relevant devices. With the new feature, both aspects can be included under one Alexa skill.

“With MCS, customers no longer need to search for or enable multiple skills to access all the features of their Alexa-connected device,” Amazon explained in its announcement. “MCS removes the friction of customers needing to remember different skill names, allowing customers to access all the expanded smart home features with a single invocation name. For example, by building a multi-capability skill, Dyson enabled its customers to interact more naturally with their Alexa-connected devices. Customers can control their Dyson fans with commands like “Alexa, set the fan speed to 5,” or “Alexa, set Oscillation to wide,” and set night modes and quiet modes in their daily routines, all features previously not available in a single skill experience.” .... '
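A multi-capability skill essentially means one backend answering both kinds of request under one invocation name. Below is a minimal sketch of that dispatch, assuming a single handler function; the payload shapes are simplified and the intent name (`SetFanSpeedIntent`) is hypothetical, not from Amazon's actual SDK.

```python
# Hypothetical sketch of a single Alexa skill endpoint that serves both the
# Smart Home API (directives) and a custom interaction model (intents), the
# combination Multi-Capability Skills enables. Names are illustrative.

def handle_request(event):
    """Route an incoming Alexa request to smart home or custom logic."""
    if "directive" in event:                       # Smart Home API payload
        name = event["directive"]["header"]["name"]
        if name == "TurnOn":
            return {"state": "ON"}
        if name == "TurnOff":
            return {"state": "OFF"}
        return {"error": "unsupported directive"}
    # Otherwise assume a custom-skill payload carrying an intent
    intent = event["request"]["intent"]["name"]
    if intent == "SetFanSpeedIntent":              # e.g. "set the fan speed to 5"
        speed = event["request"]["intent"]["slots"]["speed"]["value"]
        return {"speech": f"Setting fan speed to {speed}"}
    return {"speech": "Sorry, I can't do that yet."}
```

In practice the Alexa Skills Kit SDK does this routing for you; the point is only that one skill, and one invocation name, now fronts both request types.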

Lowe's Goes Virtual for Pro Home Improvement

Most intrigued about how the knowledge is being stored, delivered, and utilized. There are different levels of expertise embedded in 'Pro', so will this context be included? Ultimately it is essential.

Lowe’s ‘virtually’ goes on the job for home improvement pros   by George Anderson in Retailwire

Lowe’s is introducing a new tool that will enable carpenters, electricians, plumbers and other construction professionals to meet with customers to discuss projects without having to go to their homes.

Lowe’s for Pros JobSIGHT makes use of video, computer vision and augmented reality tech to help pros evaluate projects so they can provide quotes to consumers on a wide variety of repair and home improvement projects. Pros using the tool chat directly with homeowners and are able to conduct tasks, such as determining product serial numbers and product details. They can use an on-screen laser pointer and augmented reality quick-draw tools to work through the consultation with homeowners. When the virtual meeting is complete, pros receive a one-page summary including video and audio, hi-res photos and notes for follow-up.

Lowe’s is making Pros JobSIGHT free to trade professionals through Oct. 31. Those who sign up for the program also save five percent on the chain’s everyday prices and are eligible for zero-interest purchases using their business accounts with Lowe’s. Extended payment terms are also available.  ... '

AR and Improved Online Shopping

We spent some time examining this proposition, but did not find that AR provided significant results in engagement and sales, except in very narrow domains. Here, a new study of interest with new tech.

AR Can Improve Online Shopping, Study Finds
Cornell Chronicle
E.C. Barrett

Researchers at Cornell University, Iowa State University, and Virginia Polytechnic Institute found that online shopping could be enhanced by allowing consumers to try on garments virtually via Augmented Reality (AR). The goal is to reduce the expense and carbon footprint of bracket shopping, in which shoppers order an item in multiple sizes and colors, and send back those they find unsuitable. The AR system requires a computer, telephone, or tablet screen reflecting the shopper and their physical backgrounds, with selected garments overlaid; study participants assessed the AR garments for size, fit, and performance, followed by physical try-ons. Evaluating the fit in AR was problematic, but shoppers' responses to the AR and actual garments were positive. Cornell's Fatma Baytar said, "We can expect that as these technologies evolve, people will trust online shopping more." ... '

CVS Health Testing Nuro Delivery

This particular Nuro delivery system is being tested by a number of companies.  See a number of images at the tag.

CVS Health Tests Self-Driving Vehicle Prescription Delivery
Associated Press
Tom Murphy
May 28, 2020

CVS Health will test prescription delivery via self-driving vehicles to customers in Houston beginning this month, in partnership with the Nuro robotics company. A spokesperson for the drugstore chain said prescriptions will be delivered within an hour of ordering from a Houston-area store; customers will have to confirm their identity in order to unlock the vehicle to obtain their delivery. Customers can select the Nuro delivery option when they fill their prescriptions online, and track the vehicle's progress through a Nuro portal. Federal regulators earlier this year granted Nuro temporary approval to operate autonomous delivery vehicles on public roads for the first time without human occupants. .... "

Systems of Insight, Analytics in Context

Precisely what I have been suggesting for some time. The results of analytics need to be plugged into business need. As the Computerworld article states it: " .... Businesses want to use data to understand customers, but they can’t do that without harnessing insights and consistently turning data into effective action ... ". In order to do this you need to know where the insight plugs in, which means it helps to know how your business operates to begin with, to understand its effect. Not always as simple as it may seem. One approach is to understand your business with a process model. That is rare in business today, and often rejected as requiring too much effort. The insight should be understood in process context. Taking this further, the logic in the process model can also be modeled, leading to a cognitive model.

Tuesday, June 02, 2020

Simulating the Market

Quite a claim, often mentioned in early AI 'tests'. Can it work? Is simulation a sufficiently complex proxy for the market?

AI stock trading experiment beats market in simulation  by Chinese Association of Automation in TechExplore

Researchers in Italy have melded the emerging science of convolutional neural networks (CNNs) with deep learning—a discipline within artificial intelligence—to achieve a system of market forecasting with the potential for greater gains and fewer losses than previous attempts to use AI methods to manage stock portfolios. The team, led by Prof. Silvio Barra at the University of Cagliari, published their findings in the IEEE/CAA Journal of Automatica Sinica.

The University of Cagliari-based team set out to create an AI-managed "buy and hold" (B&H) strategy—a system for deciding which of three possible actions to take: a long action (buying a stock and selling it before the market closes), a short action (selling a stock, then buying it back before the market closes), or a hold (deciding not to invest in a stock that day). At the heart of their proposed system is an automated cycle of analyzing layered images generated from current and past market data. Older B&H systems based their decisions on machine learning, a discipline that leans heavily on predictions based on past performance. ... "

More information: Silvio Barra, Salvatore Mario Carta, Andrea Corriga, Alessandro Sebastian Podda and Diego Reforgiato Recupero, "Deep Learning and Time Series-to-Image Encoding for Financial Forecasting," IEEE/CAA J. Autom. Sinica, vol. 7, no. 3, pp. 683-692, May 2020. www.ieee-jas.org/en/article/do … 109/JAS.2020.1003132
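The paper's title points to a time series-to-image encoding feeding the CNN. One common such encoding (not necessarily the exact variant the authors use) is the Gramian Angular Field, which turns pairwise temporal relations of a normalized price series into pixels; a minimal sketch:

```python
import math

def gramian_angular_field(series):
    """Encode a 1-D price series as a 2-D image (Gramian Angular Summation
    Field), one common time-series-to-image encoding for CNN input."""
    lo, hi = min(series), max(series)
    # Rescale to [-1, 1] so arccos is defined
    scaled = [2 * (x - lo) / (hi - lo) - 1 for x in series]
    phi = [math.acos(max(-1.0, min(1.0, x))) for x in scaled]
    # G[i][j] = cos(phi_i + phi_j): pairwise temporal relations as pixels
    return [[math.cos(a + b) for b in phi] for a in phi]

# Toy five-day price window (values illustrative)
img = gramian_angular_field([101.2, 102.5, 101.8, 103.0, 104.1])
```

Stacking such images over successive windows gives the CNN ordinary image input, so standard vision architectures apply unchanged.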

Alexa now has an Everywhere Intercom

Been a long-time user of Alexa as an intercom, but you had to select a destination.   Now this new capability is nice, especially useful if you have a large multi-floor house or connected outside space.  Previously you could broadcast, but only one-way.   I could also see this as a means of asking the opinion of people in the house, as is shown in the example, and recording them in a list.  Maybe a business use?   The latter is not done automatically now. 

Now all your home’s Alexa devices work like an intercom  in Engadget

Amazon's 'Drop In' feature now works across the entire house.
Amazon Alexa users can now use the “Drop In” feature to talk with all of their Echo devices at once, Amazon announced on its blog. Previously, Drop In messages could only be sent to one other Alexa-enabled device at a time -- a user with an Alexa device in the bedroom could “drop in” on a device in the kitchen and have a two-way conversation.

Now, you can use a device to send a message to all Echo devices in the house at once. This could be helpful with asking group questions like, “Does anyone want anything from the grocery store?” according to the Amazon blog. To start a group Drop In conversation, you can ask Alexa to “Drop In everywhere.”  ... " 

Update: Later I read that they have also added an 'Everywhere' option to the reminder feature: 'Set a reminder at 5 PM to dress for dinner ... Everywhere.'

Leveraging Unlabeled Data

The first step in using data is to make sure we know what the data is.   Surprisingly this can often be an issue.   Seen it a number of times in the real world.    How was the data gathered, protected, updated, maintained,  shared, preprocessed .... ?    If we don't know how it was precisely identified, we don't know what it is.   Taking it further, what data do we need to make this data useful?   What is the metadata, and how has it been found?   Has it been usefully labeled?

This article takes it farther yet. Efforts are underway to construct synthetic data for further and future use. Examples of robot control, speech recognition and analysis, healthcare learning, and explainability and causality data are brought up. Which made me think: all those efforts need to be carefully labeled too, to make use feasible.

Leveraging Unlabeled Data   By Chris Edwards
Communications of the ACM, June 2020, Vol. 63 No. 6, Pages 13-14

Despite the rapid advances it has made over the past decade, deep learning presents many industrial users with problems when they try to implement the technology, issues that the Internet giants have worked around through brute force.

"The challenge that today's systems face is the amount of data they need for training," says Tim Ensor, head of artificial intelligence (AI) at U.K.-based technology company Cambridge Consultants. "On top of that, it needs to be structured data."

Most of the commercial applications and algorithm benchmarks used to test deep neural networks (DNNs) consume copious quantities of labeled data; for example, images or pieces of text that have already been tagged in some way by a human to indicate what the sample represents.

The Internet giants, who have collected the most data for use in training deep learning systems, have often resorted to crowdsourcing measures such as asking people to prove they are human during logins by identifying objects in a collection of images, or simply buying manual labor through services such as Amazon's Mechanical Turk. However, this is not an approach that works outside a few select domains, such as image recognition.

Holger Hoos, professor of machine learning at Leiden University in the Netherlands, says, "Often we don't know what the data is about. We have a lot of data that isn't labeled, and it can be very expensive to label. There is a long way to go before we can make good use of a lot of the data that we have."

To attack a wider range of applications beyond image classification and speech recognition and push deep learning into medicine, industrial control, and sensor analysis, users want to be able to use what Facebook's chief AI scientist Yann LeCun has tagged the "dark matter of AI": unlabeled data.

"The problem I see now is that supervising with high-level concepts like 'door' or 'airplane' before the computer even knows what an object is simply invites disaster."

In parallel with those working in academia, technology companies such as Cambridge Consultants have investigated a number of approaches to the problem. Ensor sees the use of synthetic data as fruitful, using as one example a system built by his company to design bridges and control robot arms that is trained using simulations of the real world, based on calculations made by the modeling software to identify strong and weak structures as the DNN makes design choices. .... "
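One common route to that "dark matter" is self-training: let a model trained on the small labeled set pseudo-label the unlabeled points it is most confident about, then retrain on them. A toy sketch with a nearest-centroid classifier on 1-D data (all data, names, and thresholds illustrative):

```python
# Minimal self-training (pseudo-labeling) sketch: a toy nearest-centroid
# classifier labels the unlabeled points it is most confident about, then
# retrains on the enlarged labeled set.

def centroid(points):
    return sum(points) / len(points)

def self_train(labeled, unlabeled, threshold=2.0, rounds=3):
    """labeled: dict label -> list of 1-D points; unlabeled: list of points.
    A point is pseudo-labeled only when it is clearly closer to one centroid."""
    labeled = {k: list(v) for k, v in labeled.items()}
    pool = list(unlabeled)
    for _ in range(rounds):
        cents = {k: centroid(v) for k, v in labeled.items()}
        confident = []
        for x in pool:
            dists = sorted((abs(x - c), k) for k, c in cents.items())
            # Confident only if the nearest centroid is `threshold` times closer
            if dists[0][0] * threshold < dists[1][0]:
                confident.append((x, dists[0][1]))
        if not confident:
            break                      # nothing left we trust; stop early
        for x, k in confident:
            labeled[k].append(x)
            pool.remove(x)
    return labeled, pool

labeled, leftover = self_train({"low": [1.0, 2.0], "high": [9.0, 10.0]},
                               [1.5, 9.5, 5.5])
```

The ambiguous point (5.5) is deliberately left unlabeled, which is the whole trade-off: pseudo-labels are cheap, but a bad confidence threshold quietly pollutes the training set.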

Research on Human - AI Teaming

Good piece on the topic.    Not too different from teaming with other methods, like analytics.   But here there may be higher expectations and hints of expected 'autonomy'.   I would suggest and add that there should be more embedded risk analysis considered, mostly due to sometimes overblown expectations of such methods.  Humans will necessarily always be in the loop.

Teaming Up with Artificial Intelligence   By Bennie Mols in CACM

Daniel S. Weld of the University of Washington in Seattle was part of a team that analyzed 20 years of research on the interactions between people and artificial intelligences.

Creating a good artificial intelligence (AI) user experience is not easy. Everyone who uses autocorrect while writing knows that while the system usually does a pretty good job of catching and correcting errors, it sometimes makes bizarre mistakes. The same is true for the autopilot in a Tesla, but unfortunately the stakes are much higher on the road than when sitting behind a computer.

Daniel S. Weld of the University of Washington in Seattle has done a lot of research on human-AI teams. Last year, he and a group of colleagues from Microsoft proposed 18 generally applicable design guidelines for human-AI interaction, which were validated through multiple rounds of evaluation.

Bennie Mols interviewed Weld about the challenges of building a human-AI dream team:

What makes a human-AI team different from a team of a human and a digital system without AI?

First of all, AI systems are probabilistic: sometimes they get it right, but sometimes they err. Unfortunately, their mistakes are often unpredictable. In contrast, classical computer programs, like spreadsheets, work in a much more predictable way.

Second, AI can behave differently in subtly different contexts. Sometimes the change in context isn't even clear to the human. Google Search might give different auto-suggest results to different people, based on their previous behavior, which was different.

The third important difference is that AI systems can change over time, for example through learning.

How did your research team arrive at the guidelines for human-AI-interaction?

We started by analyzing 20 years of research on human-AI interaction. We did a user evaluation with 20 AI products and 50 practitioners, and we also did expert reviews. This led to 18 guidelines divided over four phases of the human-AI interaction process: the initial phase, this is before the interaction has started; the phase during interaction; the phase after the interaction, in case the AI system made a mistake; and finally, over time. During the last phase, the system might get updates, while humans might evolve their interaction with the system. .... " 

Astronomy Methods for Business Analytics?

Brought to my attention by some of my astro colleagues; the effort is considerable.  Could businesses also construct such a 'survey' of how they operate?   Which could lead to a determination of where data might be used or needed?

The Vera C. Rubin Observatory, currently under construction in Chile, will conduct a vast astronomical survey of our dynamic Universe starting in 2022. They plan to collect 500 petabytes of image data by observing the skies continuously for 10 years and produce nearly instant alerts for objects that change in position or brightness every night. In addition to astronomical data, their dataset will include DevOps, IoT, and real-time monitoring data.

In this latest Data Science Central webinar, Dr. Angelo Fausti will demonstrate:

●     How a time-series database has the versatility to address their needs
●     How they created a solution to enhance visibility across their organization and improve actionable insights
●     How they pull software development and sensor data from their telescope, camera and observatory IoT devices

Dr. Angelo Fausti, Software Engineer - Vera C. Rubin Observatory

Hosted by:
Sean Welch, Host and Producer - Data Science Central

--- --------------------------------------------------------------------------

LSST Project Mission Statement
LSST’s mission is to build a well-understood system that provides a vast astronomical dataset for unprecedented discovery of the deep and dynamic universe. .... 
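The nightly alerting described above is, at its core, change detection over a time series. A toy sketch, assuming a simple deviation-from-history rule (the observatory's actual pipeline is far more elaborate; data and threshold are illustrative):

```python
import statistics

# Flag an object when its latest brightness deviates from its recent
# history by more than `k` standard deviations.

def alert(history, latest, k=3.0):
    """history: past brightness measurements; latest: tonight's value."""
    mean = statistics.mean(history)
    sd = statistics.pstdev(history) or 1e-9   # guard against zero spread
    return abs(latest - mean) > k * sd

quiet = alert([12.0, 12.1, 11.9, 12.0], 12.05)   # steady star: no alert
flare = alert([12.0, 12.1, 11.9, 12.0], 14.0)    # sudden brightening: alert
```

The same pattern transfers directly to the DevOps and IoT monitoring data the webinar mentions, which is one reason a single time-series store can serve both.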

Monday, June 01, 2020

State of China AI

The Art of AI
By Project Syndicate
Kai-Fu Lee

Sinovation CEO and chairman Kai-Fu Lee says the best role for artificial intelligence in the future is to free humans and resources for well-paying jobs in care-giving and creative fields.

As the world enters a new decade, research and development into artificial intelligence and its many applications are barreling forward, and nowhere more so than in China. Although popular narratives tend to focus on the threats posed by AI, the truth is that many of the technology's dangers have been overhyped, and its promises neglected.

A leading figure in the Chinese tech scene and in artificial-intelligence development globally, Kai-Fu Lee earned a Ph.D. in computer science from Carnegie Mellon University in 1988 before serving in executive roles at Apple, SGI, Microsoft, and Google, where he was president of Google China. Now the chairman and CEO of Sinovation Ventures in Beijing, he is the author of AI Superpowers: China, Silicon Valley, and the New World Order. Here, he discusses the global AI race, the current state of the field, and what may – and should – come next.

Project Syndicate: As someone who long worked for U.S. companies and now oversees a tech venture capital firm, you're deeply familiar with the world's two main settings for AI development and research. What are the trade-offs of each R&D environment? What advantages does China offer over the U.S., and what must policymakers change or improve to achieve China's goal of catching up to and surpassing the U.S.?

Kai-Fu Lee: There is now a clear U.S.-China AI duopoly. AI in China is rising rapidly, boosted by several structural advantages: huge data sets, a young army of technical talent, aggressive entrepreneurs, and strong and pragmatic government policy. The attitude in China can be summarized as pro-tech, pro-experimentation, and pro-speed, all of which puts the country on track to becoming a major AI power.

From Project Syndicate
View Full Article

On Technology Adoption

On tech adoption, its measurement, and useful stage models for predicting diffusion.

Technology Adoption
By Peter J. Denning, Ted G. Lewis
Communications of the ACM, June 2020, Vol. 63 No. 6, Pages 27-29

Technology adoption is accelerating. The telegraph was first adopted by the Great Western Railway for signaling between London Paddington station and West Drayton in July 1839, but took nearly 80 years to peak (1920). The landline telephone took 60 years to reach 80% adoption, electric power 33 years, color television 15 years, and social media 12 years. The time to adoption is rapidly decreasing with advances in technology.

When we develop new technology, we would dearly like to predict its future adoption. For most technologies, total adoptions follow an S curve that features exponential growth in number of adopters to an inflection point, and then exponential flattening to market saturation. Is there any way to predict the S curve, given initial data on sales?

Technology adoption means that people in a community commit to a new technology in their everyday practices. A companion term diffusion means that ideas and information about a new technology spread through a community, giving everyone the opportunity to adopt. Adoption and diffusion are not the same. Here we are interested in adoption as it is manifest in sales of technology. Adoption models attempt to estimate two quantities that affect business decisions whether to produce technology. One is the total addressable market N, the number of people who will ultimately adopt. The other is t*, the time of the inflection point of the S curve.
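For the common logistic form of the S curve, N and t* appear directly as parameters; a minimal sketch (one simple model among the several process models the column goes on to discuss):

```python
import math

def adoption(t, N, t_star, k):
    """Logistic S curve: cumulative adopters at time t, with total
    addressable market N, inflection point t_star, and growth rate k."""
    return N / (1 + math.exp(-k * (t - t_star)))

# At the inflection point exactly half the market has adopted,
# and growth is fastest there.
half = adoption(10, N=1000, t_star=10, k=0.8)   # -> 500.0
late = adoption(30, N=1000, t_star=10, k=0.8)   # near saturation
```

Fitting N, t_star, and k to early sales data is exactly the prediction problem the column poses; with only pre-inflection data the fit is notoriously unstable, which is why the answer is hard.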

It would seem that to develop a model of the S curve we would need a model of the underlying process by which technology is produced and sold. Three process models are common:

Pipeline: an idea flows through the stages of invention, prototyping, development, marketing, and sales, finally being incorporated into the market-place as a product people buy.
Funnel: similar to pipeline but the pipeline begins with multiple ideas and each stage winnows the number passed to the next stage until finally one product emerges into the marketplace. This model aims to compensate for the high failure rate of ideas. If failure rate is 96% (a common estimate), the funnel-pipeline must be seeded with 25 ideas so that there will be one survivor to the final stage.
Diffusion-Adoption: ideas are treated as innovation proposals that spread through a social community, giving each person the opportunity to adopt it or not.

Unfortunately, there are important innovations that are not explained by some or all of these models. For example, spontaneous innovations do not follow the pipeline or funnel models, and many diffusions do not result in adoption. Moreover, the models are unreliable when used as ways to organize projects—they explain what happened in the past but offer little guidance on what to do in the immediate future. Many organizations manage their internal processes according to one of these models. People in these organizations frequently experience a "Fog of Uncertainty" when something unanticipated comes up in one of the stages and it is not obvious what to do.  .... " 

Healthcare Medical Virtual Assistant

An example of assistants for healthcare.

Healthcare AI Startup Phelix.ai Raises $1M for Medical Virtual Assistant
By Eric Hal Schwartz in Voicebot.ai

Healthcare AI developer Phelix.ai has closed a $1 million seed round of funding led by Well Health Technologies. Phelix’s technology is used to run virtual assistants that can handle making appointments, filling out paperwork, and other administrative tasks on behalf of healthcare providers.

Toronto-based Phelix.ai essentially provides a virtual assistant for clinics and other healthcare providers. The AI can run a virtual call center, engaging with patients through online chatbots, phone calls, text messages, and even faxes to answer questions, arrange appointments, and bill patients as needed. According to Phelix, up to three-quarters of the time taken up by paperwork and administrative tasks can be reclaimed by doctors using its technology. Well Health accounts for a quarter of the $1 million investment, although the startup has not revealed who else contributed. As part of the deal, Well Health can now also use and sublicense Phelix.ai’s technology in other areas of its business.

“We’re thrilled to receive an investment from WELL and are thoroughly impressed with WELL’s vision and commitment to digital health in Canada,” Phelix.ai CEO Hassaan Ahmed said in a statement. “We look forward to being a part of WELL’s expanding portfolio of OSCAR compatible apps that are designed to better connect and make doctors’ practices more efficient.” ... 

AI and Machine Learning DevOps

Aiming at a higher level of automation for quicker and more reliable development and delivery. Really not all that different from the use of emergent analytical technology.

How AI and Machine Learning are Evolving DevOps
Artificial intelligence and ML can help us take DevOps to the next level through identifying problems more quickly and further automating our processes.

The automation wave has overtaken IT departments everywhere making DevOps a critical piece of infrastructure technology. DevOps breeds efficiency through automating software delivery and allowing companies to push software to market faster while releasing a more reliable product. What is next for DevOps? We need to look no further than artificial intelligence and machine learning.

Most organizations quickly realize the promise of AI and machine learning, but often fail to understand how they can properly harness them to improve their systems. That isn’t the case with DevOps. DevOps has some natural deficiencies that are difficult to solve without the computing power of machine learning and artificial intelligence. They are key to advancing your digital transformation. Here are three areas where AI and machine learning are advancing DevOps. ... "
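One concrete example of "identifying problems more quickly": reducing raw logs to templates so a spike in one error pattern stands out, a common first step in ML-assisted log analysis. A minimal sketch (log lines and masking patterns illustrative):

```python
import re
from collections import Counter

# Mask volatile fields (numbers, hex ids) so similar log lines collapse
# into one template; counting templates then surfaces the dominant issue.

def template(line):
    line = re.sub(r"0x[0-9a-f]+", "<HEX>", line)
    return re.sub(r"\d+", "<NUM>", line)

def top_patterns(lines, n=2):
    return Counter(template(l) for l in lines).most_common(n)

logs = [
    "timeout on request 4312 after 30s",
    "timeout on request 9981 after 31s",
    "disk 2 at 91% capacity",
]
patterns = top_patterns(logs)
```

Production tools learn the templates rather than hand-coding regexes, but the pipeline shape — normalize, cluster, count, alert — is the same.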

IOTA Smart Contracts

Brought to my attention: the use of smart contracts as foreseen in the IOTA system we have been examining for use.  Most of the detail is at the link.

Introduction to IOTA Smart Contracts
I want to thank my colleagues in the IOTA Foundation who provided input and feedback for this article. In particular, Eric Hop, who headed the Qubic project and now joins IOTA Smart Contracts, and Jake Cahill, who is responsible for most of the wording.

IOTA Smart Contracts is an ongoing effort by the IOTA Foundation. The goal of this article is to inform the community about what we are doing and where we are heading with IOTA Smart Contracts. It also presents an opportunity for the community to begin contributing to the project with questions and feedback.

Recently, Eric Hop presented the Qubic project in his article The State of Qubic, and explained our decision to focus exclusively on smart contracts for the time being. Naturally, these developments raised many questions from the community about how IOTA Smart Contracts relate to Qubic’s vision.

This article will answer those questions, providing some context and a technical introduction to IOTA Smart Contracts. Although many aspects of IOTA Smart Contracts were derived from the Qubic project, in many ways it is a standalone project in its own right. We believe the direction we are taking has lots of potential and we are now taking the steps necessary to prove this potential in practice.

What is a Contract?
Before we define a smart contract, it is important to first understand what a legal contract is.

Legal contracts are non-deterministic agreements that are subject to complex legal systems. The laws surrounding contracts vary depending on a number of factors, such as the country in which all parties entered into the contract. The most important word here is “non-deterministic”. This means that contracts are often ambiguous, and their subjective interpretation can lead to disputes. .... " 
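By contrast, a smart contract is deterministic: the same sequence of inputs always produces the same state. A toy escrow state machine illustrates the property (a generic sketch, not IOTA's actual contract API):

```python
# Toy escrow contract as a deterministic state machine: replaying the same
# inputs always yields the same state, the property the article contrasts
# with "non-deterministic" legal contracts.

class Escrow:
    def __init__(self, amount):
        self.amount = amount
        self.state = "AWAITING_PAYMENT"

    def deposit(self, paid):
        if self.state == "AWAITING_PAYMENT" and paid >= self.amount:
            self.state = "AWAITING_DELIVERY"
        return self.state

    def confirm_delivery(self):
        if self.state == "AWAITING_DELIVERY":
            self.state = "COMPLETE"
        return self.state

e = Escrow(100)
e.deposit(100)                  # -> "AWAITING_DELIVERY"
final = e.confirm_delivery()    # -> "COMPLETE"
```

Determinism is what lets every node in a distributed ledger validate the contract independently and still agree on its state.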

Gap Adding More Robots

Warehouse Robotics expands.

Gap Rushes in More Robots to Warehouses to Solve Virus Disruption   By Reuters

Gap Inc. is deploying warehouse robots more quickly amid the coronavirus pandemic, which has resulted in more online orders and fewer staff to fulfill them due to social distancing rules.

The U.S. apparel chain had reached a deal to more than triple its number of warehouse robots to 106 by the fall, but it called on Kindred AI to deliver the robots earlier.

Kindred has deployed 10 of the eight-foot-tall robotic stations — each of which can handle the work of four people — to Gap’s warehouse near Nashville, TN and another 20 near Columbus, OH.  Kindred will deliver the final robots to four of Gap's five U.S. facilities by July.

Gap and Kindred said the robots are meant to complement, not replace, human workers.

From Reuters

Sunday, May 31, 2020

Searching Websites the Way You Want

Interesting for non-programmers, to use APIs.  At the link, some technical examples.

Searching Websites the Way You Want
Adam Conner-Simons
May 18, 2020

Researchers at the Massachusetts Institute of Technology's Computer Science & Artificial Intelligence Laboratory (CSAIL) have developed ScrAPIr, a tool that enables non-programmers to access, query, save, and share Web data application programming interfaces (APIs). Traditionally, APIs could only be accessed by users with strong coding skills, leaving non-coders to laboriously copy-paste data or use Web scrapers that download a site's webpages and search the content for desired data. To integrate a new API into ScrAPIr, a non-programmer only needs to fill out a form telling the tool about certain aspects of the API. Said MIT’s David Karger, “APIs deliver more information than website designers choose to expose, but this imposes limits on users who may want to look at the data in a different way. With this tool, all of the data exposed by the API is available for viewing, filtering, and sorting.” ... ' 
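For comparison, the programmer's route that ScrAPIr abstracts away usually starts with hand-assembling a query URL. A minimal sketch with a hypothetical endpoint and parameter names:

```python
from urllib.parse import urlencode

# Hand-building an API query: the endpoint and parameters here are
# hypothetical, not from ScrAPIr itself.

def build_query(base, **params):
    """Assemble a GET query URL with filtering/sorting parameters."""
    return base + "?" + urlencode(sorted(params.items()))

url = build_query("https://api.example.com/v1/search",
                  q="smart speakers", sort="price", limit=20)
```

Pagination, authentication, and rate limits pile on top of this; wrapping all of it behind a form is where a tool like ScrAPIr earns its keep for non-coders.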

Wal-Mart Anti-Shoplifting AI by Everseen

Does it work?  Am a frequent user of self checkout systems and am always thinking of the implications vs other methods.  Here a piece from Wired.

Walmart Employees Are Out to Show Its Anti-Theft AI Doesn't Work in Wired

The retailer denies there is any widespread issue with the software, but a group expressed frustration—and public health concerns.

In January, my coworker received a peculiar email. The message, which she forwarded to me, was from a handful of corporate Walmart employees calling themselves the “Concerned Home Office Associates.” (Walmart’s headquarters in Bentonville, Arkansas, is often referred to as the Home Office.) While it’s not unusual for journalists to receive anonymous tips, they don’t usually come with their own slickly produced videos.

The employees said they were “past their breaking point” with Everseen, a small artificial intelligence firm based in Cork, Ireland, whose technology Walmart began using in 2017. Walmart uses Everseen in thousands of stores to prevent shoplifting at registers and self-checkout kiosks. But the workers claimed it misidentified innocuous behavior as theft, and often failed to stop actual instances of stealing.

They told WIRED they were dismayed that their employer—one of the largest retailers in the world—was relying on AI they believed was flawed. One worker said that the technology was sometimes even referred to internally as “NeverSeen” because of its frequent mistakes. WIRED granted the employees anonymity because they are not authorized to speak to the press.

The workers said they had been upset about Walmart’s use of Everseen for years, and claimed colleagues had raised concerns about the technology to managers, but were rebuked. They decided to speak to the press, they said, after a June 2019 Business Insider article reported Walmart’s partnership with Everseen publicly for the first time. The story described how Everseen uses AI to analyze footage from surveillance cameras installed in the ceiling, and can detect issues in real time, such as when a customer places an item in their bag without scanning it. When the system spots something, it automatically alerts store associates.

“Everseen overcomes human limitations. By using state-of-the-art artificial intelligence, computer vision systems, and big data we can detect abnormal activity and other threats,” a promotional video referenced in the story explains. “Our digital eye has perfect vision and it never needs a day off.”

In an effort to refute the claims made in the Business Insider piece, the Concerned Home Office Associates created a video, which purports to show Everseen’s technology failing to flag items not being scanned in three different Walmart stores. Set to cheery elevator music, it begins with a person using self-checkout to buy two jumbo packages of Reese’s White Peanut Butter Cups. Because they’re stacked on top of each other, only one is scanned, but both are successfully placed in the bagging area without issue. ... "  (More detail behind paywall) 
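The core check described in the article amounts to reconciling two event streams: what the cameras see entering the bagging area versus what the register scanned. A toy sketch of that reconciliation, with all names and event shapes assumed (this is not Everseen's API):

```python
# Hypothetical sketch: compare items a vision system reports entering the
# bagging area against items the register has scanned, and collect the
# mismatches that would trigger an associate alert.

def find_unscanned(detected_items, scanned_items):
    """Return items seen entering the bag that were never scanned."""
    remaining = list(scanned_items)
    unscanned = []
    for item in detected_items:
        if item in remaining:
            remaining.remove(item)  # match one scan per detected item
        else:
            unscanned.append(item)  # bagged but not scanned -> alert
    return unscanned

# Two stacked packages detected, but only one scan registered:
alerts = find_unscanned(
    detected_items=["candy", "candy", "soda"],
    scanned_items=["candy", "soda"],
)
# 'alerts' holds the second candy package
```

The hard part in practice is of course the vision side (producing `detected_items` reliably), which is exactly where the workers claim the system falls down.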

Why Consumers Are Willing to Share Personal Information on Smartphones

Intriguing difference between phones and laptops. 

Why Consumers Are Willing to Share Personal Information on Smartphones

Wharton’s Shiri Melumad speaks with Wharton Business Daily on Sirius XM about why consumers share personal information on smartphones.

Nearly everyone has experienced some version of phubbing, a term to describe being snubbed by someone who is more engrossed in their smartphone screen than the conversation or activity taking place in front of them. These powerful little devices have changed virtually everything about human communication, including the way we interact with each other. New research from Wharton marketing professors Shiri Melumad and Robert Meyer finds that people are more willing to share deeper and more personal information when communicating on a smartphone compared with a personal computer. In their paper, “Full Disclosure: How Smartphones Enhance Consumer Self-Disclosure,” the professors explain that it’s the device that makes all the difference. Smartphones are always at hand, and their tiny screens and keypads require laser-focused attention, which means the user is more likely to block out other concerns.

The findings are important for marketers looking to make the most out of user-generated content, especially the kind that can be shared with other potential customers. “The more personal and intimate nature of smartphone-generated reviews results in content that is more persuasive to outside readers, in turn heightening purchase intentions,” the professors write in their paper. Melumad recently joined the Wharton Business Daily radio show on Sirius XM to discuss the research. (Listen to the podcast at the top of this page.)

An edited transcript of the conversation follows. ..."

Microsoft Builds Supercomputer for OpenAI

Some useful hints here about what is being contemplated.  And, of course, big companies like MS, Google, Amazon, Apple, and IBM have access to huge amounts of data to work with, and exposure to rich problem types too, so expect new things from them.  Note the statement that we are nowhere near 'AGI' (Artificial General Intelligence) yet; I have been asked about that several times lately.

Microsoft Just Built a World-Class Supercomputer Exclusively for OpenAI  By Jason Dorrier in SingularityHub

Last year, Microsoft announced a billion-dollar investment in OpenAI, an organization whose mission is to create artificial general intelligence and make it safe for humanity. No Terminator-like dystopias here. No deranged machines making humans into paperclips. Just computers with general intelligence helping us solve our biggest problems.

A year on, we have the first results of that partnership. At this year’s Microsoft Build 2020, a developer conference showcasing Microsoft’s latest and greatest, the company said they’d completed a supercomputer exclusively for OpenAI’s machine learning research. But this is no run-of-the-mill supercomputer. It’s a beast of a machine. The company said it has 285,000 CPU cores, 10,000 GPUs, and 400 gigabits per second of network connectivity for each GPU server.

Stacked against the fastest supercomputers on the planet, Microsoft says it’d rank fifth.

The company didn’t release performance data, and the computer hasn’t been publicly benchmarked and included on the widely followed Top500 list of supercomputers. But even absent official rankings, it’s likely safe to say it’s a world-class machine.

“As we’ve learned more and more about what we need and the different limits of all the components that make up a supercomputer, we were really able to say, ‘If we could design our dream system, what would it look like?’” said OpenAI CEO Sam Altman. “And then Microsoft was able to build it.”

What will OpenAI do with this dream-machine? The company is building ever bigger narrow AI algorithms—we’re nowhere near AGI yet—and they need a lot of computing power to do it.  ... "

Saturday, May 30, 2020

Indoor Positioning for Retail Innovation

We worked on this problem for our own innovation center tests.

A Smart Indoor Positioning System for Retail Automation

For the last few years we’ve been hearing about the retail apocalypse, though we would characterize it more as a retail extinction event. The difference being that until a couple of months ago, the demise of many brick-and-mortar businesses was a long, drawn out affair. No more. COVID-19 has caused a true retail apocalypse – goodbye, JCPenney, Pier One, J Crew, and company – by hastening the death of these ailing giants. There’s obviously a knock-on effect to tech companies shopping retail automation solutions. However, for one small Silicon Valley startup spun out of MIT, the novel coronavirus brought an unexpected opportunity for its thermal-based indoor positioning system.

You’re Being Followed
We’ve been covering the automation of retail for quite some time, from cashierless stores to robotic fulfillment centers in retailers like Walmart. The real money, of course, is in marketing. But it’s no longer necessary to blast messages and advertisements across a black hole and hope a few escape the gravitational pull of customer indifference. Today’s retail tech promises marketing precision by attempting to track and predict customer behavior in real-time both online and in the physical world. 

A company like Zenreach, for example, makes a pretty good case of ROI for its WiFi hotspot marketing software that tracks how well the store’s message is doing based on how many people walk through the door. Audio beacons are another way that stores can directly track customers by using sound to locate people through their smartphones. These technologies do start to leak into the creepy zone when you realize that the devices are communicating about you and your shopping behavior at a frequency you can’t detect. They also think you’ve put on a bit too much weight with all of the COVID-19 stress eating.

In fact, there are quite a few indoor positioning systems that scientists and startups have developed to track motion and trajectory. Not all of it is directed at marketing. WiFi motion sensors, for example, use WiFi signals for various smart home applications like security and even in-home elderly care, with algorithms trained to detect falls and too many trips to the cookie jar.

Briefly About Butlr  .... "
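Many of the indoor positioning systems mentioned above estimate location from radio signal strength. A minimal sketch of the classic approach (a log-distance path-loss model plus trilateration from three beacons); all constants are illustrative assumptions, not any vendor's method:

```python
import math

def rssi_to_distance(rssi, tx_power=-59.0, path_loss_exp=2.0):
    """Estimate distance in meters from RSSI (dBm), log-distance model."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exp))

def trilaterate(beacons, distances):
    """Solve for (x, y) from three beacon positions and range estimates."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = distances
    # Subtracting the circle equations pairwise yields a linear 2x2 system.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [math.hypot(4 - bx, 3 - by) for bx, by in beacons]  # true point (4, 3)
print(trilaterate(beacons, ranges))  # ~ (4.0, 3.0)
```

Real deployments must contend with multipath and noisy RSSI, which is why thermal and WiFi-motion approaches like Butlr's are interesting alternatives.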

Algorithm Selection, Design, as a Learning Problem

Good way to look at it.   Many good points, ultimately technical.

Technical Perspective: Algorithm Selection as a Learning Problem
By Avrim Blum
Communications of the ACM, June 2020, Vol. 63 No. 6, Page 86

The following paper by Gupta and Roughgarden—"Data-Driven Algorithm Design"—addresses the issue that the best algorithm to use for many problems depends on what the input "looks like." Certain algorithms work better for certain types of inputs, whereas other algorithms work better for others. This is especially the case for NP-hard problems, where we do not expect to ever have algorithms that work well on all inputs: instead, we often have various heuristics that each work better in different settings. Moreover, heuristic strategies often have parameters or hyperparameters that must be set in some way.  ... " 

To view the accompanying paper, visit doi.acm.org/10.1145/3394625

Data-Driven Algorithm Design
By Rishi Gupta, Tim Roughgarden
Communications of the ACM, June 2020, Vol. 63 No. 6, Pages 87-94

The best algorithm for a computational problem generally depends on the "relevant inputs," a concept that depends on the application domain and often defies formal articulation. Although there is a large literature on empirical approaches to selecting the best algorithm for a given application domain, there has been surprisingly little theoretical analysis of the problem.

We model the problem of identifying a good algorithm from data as a statistical learning problem. Our framework captures several state-of-the-art empirical and theoretical approaches to the problem, and our results identify conditions under which these approaches are guaranteed to perform well. We interpret our results in the contexts of learning greedy heuristics, instance feature-based algorithm selection, and parameter tuning in machine learning.

Back to Top

1. Introduction
Rigorously comparing algorithms is hard. Two different algorithms for a computational problem generally have incomparable performance: one algorithm is better on some inputs but worse on the others. How can a theory advocate one of the algorithms over the other? The simplest and most common solution in the theoretical analysis of algorithms is to summarize the performance of an algorithm using a single number, such as its worst-case performance or its average-case performance with respect to an input distribution. This approach effectively advocates using the algorithm with the best summarizing value (e.g., the smallest worst-case running time).

Solving a problem "in practice" generally means identifying an algorithm that works well for most or all instances of interest. When the "instances of interest" are easy to specify formally in advance—say, planar graphs—the traditional analysis approaches often give accurate performance predictions and identify useful algorithms. However, the instances of interest commonly possess domain-specific features that defy formal articulation. Solving a problem in practice can require designing an algorithm that is optimized for the specific application domain, even though the special structure of its instances is not well understood. Although there is a large literature, spanning numerous communities, on empirical approaches to data-driven algorithm design (e.g., Fink [11], Horvitz et al. [14], Huang et al. [15], Hutter et al. [16], Kotthoff et al. [18], Leyton-Brown et al. [20]), there has been surprisingly little theoretical analysis of the problem. One possible explanation is that worst-case analysis, which is the dominant algorithm analysis paradigm in theoretical computer science, is intentionally application agnostic.   ....  "
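The paper's framing can be made concrete with a toy example: treat a family of greedy knapsack heuristics, parameterized by an exponent rho that scores items by value / weight**rho, and pick the rho that performs best on sampled instances from the application domain. This is an illustration of the empirical-selection setup the paper analyzes, not the paper's own code:

```python
import random

def greedy_knapsack(values, weights, capacity, rho):
    """Greedy heuristic: take items in decreasing value / weight**rho order."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / (weights[i] ** rho),
                   reverse=True)
    total = used = 0
    for i in order:
        if used + weights[i] <= capacity:
            used += weights[i]
            total += values[i]
    return total

def best_rho(instances, candidates):
    """Empirical selection: the parameter with the best average performance."""
    def avg_value(rho):
        return sum(greedy_knapsack(v, w, c, rho)
                   for v, w, c in instances) / len(instances)
    return max(candidates, key=avg_value)

# Sample instances standing in for "inputs from the application domain."
rng = random.Random(0)
instances = []
for _ in range(30):
    weights = [rng.randint(1, 10) for _ in range(20)]
    values = [rng.randint(1, 20) for _ in range(20)]
    instances.append((values, weights, 25))

rho = best_rho(instances, candidates=[0.0, 0.5, 1.0, 2.0])
print("selected rho:", rho)
```

The theoretical question the paper addresses is when a parameter selected this way, from finitely many samples, is guaranteed to perform well on future instances from the same distribution.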

Alexa Monologues

Alexa is expanding the possibilities.  Need more of this with clear understanding and response.

Amazon Upgrades Alexa’s Long-Form Speech and Vastly Expands Custom Voice Languages and Styles
Eric Hal Schwartz in Voicebot.ai
Developers get more than a dozen new voices and styles from which to choose, and can adjust the voice assistant to sound more natural when speaking for more than a few sentences.

A lot of interaction with Alexa involves short responses or rote lines. That starts to sound strange when the voice assistant speaks for more than a few seconds. Alexa’s new long-form speaking style is designed to address that disconnect and make using Alexa feel as comfortable as talking to another human. Since people don’t speak the same way when uttering a sentence as they do when expounding for multiple paragraphs, the addition is likely to be popular with voice apps that read magazines, books, or transcribed conversations from a podcast out loud. For now, this style is only an option for Alexa in the United States.

“For example, you can use this speaking style for customers who want to have the content on a web page read to them or listen to a storytelling section in a game,” Alexa developer Catherine Gao explained in Amazon’s blog post about the new feature. “Powered by a deep-learning text-to-speech model, the long-form speaking style enables Alexa to speak with more natural pauses while going from one paragraph to the next or even from one dialog to another between different characters.” .... '
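As the quote notes, the style is applied through SSML. A sketch of a skill response using the long-form domain tag; the tag name and response shape follow Amazon's documented conventions as I understand them, so treat the specifics as assumptions to verify against the Alexa developer docs:

```python
# Hypothetical helper that wraps text in the long-form SSML domain and
# builds a minimal Alexa skill response around it.

def long_form_response(text):
    ssml = ('<speak><amazon:domain name="long-form">'
            f'{text}</amazon:domain></speak>')
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml},
            "shouldEndSession": True,
        },
    }

resp = long_form_response("Chapter one. It was a bright cold day in April...")
print(resp["response"]["outputSpeech"]["ssml"])
```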

Why is AI so Confused by Language?

From the Elemental Blog, well worth reading through:

Why is AI so confused by language? It’s all about mental models.  By David Ferrucci

In my last post, I shared some telling examples where computers failed to understand what they read. The errors they made were bizarre and fundamental. But why? Computers are clearly missing something, but can we more clearly pin down what?

Let’s examine one specific error that sheds some light on the situation. My team ran an experiment where we took the same first-grade story I discussed last time, but truncated the final sentence:

Fernando and Zoey go to a plant sale. They buy mint plants. They like the minty smell of the leaves.

Fernando puts his plant near a sunny window. Zoey puts her plant in her bedroom. Fernando’s plant looks green and healthy after a few days. But Zoey’s plant has some brown leaves.

“Your plant needs more light,” Fernando says.

Zoey moves her plant to a sunny window. Soon, ___________.

[adapted from ReadWorks.org]

Then we asked workers on Amazon Mechanical Turk to fill in the blank. Here’s what the workers suggested:

Friday, May 29, 2020

Steve Gibson on Why Contact Tracing Won't Work

I have inserted links below to this analysis of Apple/Google attempts at generalized software-based tracking.

See also related Bruce Schneier article:   https://www.schneier.com/blog/archives/2020/05/me_on_covad-19_.html   With considerable and often thoughtful discussion.

Steve Gibson  in Podcast:
Contact Tracing Apps R.I.P.    https://twit.tv/shows/security-now/episodes/768


Software-based Contact Tracing is Doomed

Amazon Echo Look Experiment Ends

A look at the history of, and now the end of, fashion advice from Echo Look. Some people looked at it, but it drew only rare interest and the experiment ended quickly.  Passing this along, for closure, to people who had an interest.

Amazon Echo Look No More – Another Alexa Device Discontinued  By Brett Kinsella in Voicebot.ai

Amazon quietly introduced the Echo Look Alexa-enabled smart speaker for fashion advice in April 2017. Yesterday, the company quietly informed its few thousand users that Echo Look would be discontinued. In providing background on the latest shakeup of Amazon’s Alexa portfolio, an Amazon spokesperson shared the text of an email sent to Echo Look users yesterday saying:

“When we introduced Echo Look three years ago, our goal was to train Alexa to become a style assistant as a novel way to apply AI and machine learning to fashion. With the help of our customers we evolved the service, enabling Alexa to give outfit advice and offer style recommendations. We’ve since moved Style by Alexa features into the Amazon Shopping app and to Alexa-enabled devices making them even more convenient and available to more Amazon customers. For that reason, we have decided it’s time to wind down Echo Look. Beginning July 24, 2020, both Echo Look and its app will no longer function. Customers will still be able to enjoy style advice from Alexa through the Amazon Shopping app and other Alexa-enabled devices. We look forward to continuing to support our customers and their style needs with Alexa.” .... 

Amazon Echo Look always seemed like an experiment. It was launched in a closed, invite-only beta three years ago. A year later it was made available to the public for purchase though there was never much attention paid to the device in Amazon’s product launch events.

Echo Look was the first Alexa-enabled product that included a camera. However, it notably had no screen. The first smart display, Echo Show, would launch two months later and get a product refresh within 15 months of launch. Echo Look never received a formal product update and wasn’t even billed as a smart speaker. The company made clear that Echo Look could not do many of the things that were popular features on Echo smart speakers.   .... "

Street Lamps as a Platform for the Urban Smart City

A considerable piece on using street lamps as a platform for the urban smart city.  I have seen this posed a number of times; it's a great place to start, but how often has it been done successfully?

Street Lamps as a Platform
By Max Mühlhäuser, Christian Meurisch, Michael Stein, Jörg Daubert, Julius Von Willich, Jan Riemann, Lin Wang
Communications of the ACM, June 2020, Vol. 63 No. 6, Pages 75-83

Street lamps constitute the densest electrically operated public infrastructure in urban areas. Their changeover to energy-friendly LED light quickly amortizes and is increasingly leveraged for smart city projects, where LED street lamps double, for example, as wireless networking or sensor infrastructure. We make the case for a new paradigm called SLaaP—street lamps as a platform. SLaaP denotes a considerably more dramatic changeover, turning urban light poles into a versatile computational infrastructure. SLaaP is proposed as an open, enabling platform, fostering innovative citywide services for the full range of stakeholders and end users—seamlessly extending from everyday use to emergency response. In this article, we first describe the role and potential of street lamps and introduce one novel base service as a running example. We then discuss citywide infrastructure design and operation, followed by addressing the major layers of a SLaaP infrastructure: hardware, distributed software platform, base services, value-added services and applications for users and 'things.' Finally, we discuss the crucial roles and participation of major stakeholders: citizens, city, government, and economy.

Recent years have seen the emergence of smart street lamps, with very different meanings of 'smart'—sometimes related to the original purpose as with usage-dependent lighting, but mostly as add-on capabilities like urban sensing, monitoring, digital signage, WiFi access, or e-vehicle charging. Research about their use in settings for edge computing [14] or car-to-infrastructure communication (for example, traffic control, hazard warnings, or autonomous driving) [6] hints at their great potential as computing resources. The future holds even more use cases: for example, after a first wave of 5G mobile network rollouts from 2020 onward, a second wave shall apply mm-wave frequencies for which densely deployed light poles can be appropriate 'cell towers.'

Street lamps: A (potential) true infrastructure. Given the huge potential of street lamps evident already today and given the broad spectrum of use cases, a city's street lamps may obviously constitute a veritable infrastructure. However, cities today do not consider street lamps—beyond the lighting function—as an infrastructure in the strict sense. Like road, water, energy, or telecommunication, infrastructures constitute a sovereign duty: provision and appropriate public access must be regulated, design and operation must balance stakeholder interests, careful planning has to take into account present and future use cases and demands, maintenance, threat protection, and more. Well-considered outsourcing or privatization may be aligned with these public interests.

The LED dividend: A unique opportunity. The widespread lack of such considerations in cities is even more dramatic since a once-in-history opportunity opens up with the changeover to energy efficient LED lighting, expected to save large cities millions in terms of energy cost, as we will discuss, called 'LED dividend' in the following. Given their notoriously tight budgets, cities urgently need to dedicate these savings if they want to 'own' and control an infrastructure, which, once built, can foster innovation and assure royalties and new business as sources of city and citizen prosperity ... "
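One way to picture the SLaaP idea of lamps as a computing infrastructure: lamp posts register the base services they offer, and clients are routed to the nearest lamp providing the service they need. A purely illustrative sketch, far simpler than the article's actual platform layers:

```python
import math

class LampRegistry:
    """Toy registry of lamp-post edge nodes and their base services."""

    def __init__(self):
        self._lamps = []  # (x, y, set_of_services)

    def register(self, x, y, services):
        self._lamps.append((x, y, set(services)))

    def nearest(self, x, y, service):
        """Closest lamp that offers the requested base service, or None."""
        candidates = [(math.hypot(lx - x, ly - y), lx, ly)
                      for lx, ly, svcs in self._lamps if service in svcs]
        return min(candidates)[1:] if candidates else None

city = LampRegistry()
city.register(0, 0, {"wifi", "sensing"})
city.register(5, 5, {"wifi", "ev-charging"})
city.register(9, 1, {"edge-compute"})
print(city.nearest(8, 2, "edge-compute"))  # -> (9, 1)
```

The density of street lamps is what makes even this naive nearest-node routing plausible: in an urban area the closest lamp is rarely more than a few tens of meters away.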

Microsoft Buying its Way into RPA

Microsoft buys its way into RPA capabilities, announced last week.  It seems late to the space, but there is now a flurry of activity, with claims of AI capabilities.

Microsoft + Softomotive: What Does This Deal Mean For RPA (Robotic Process Automation)?
Tom Taulli in Forbes

At Microsoft’s Build conference this week, CEO Satya Nadella announced the acquisition of Softomotive, which is a top RPA (Robotic Process Automation) vendor. This technology allows for the automation of repetitive and tedious processes, such as with legacy IT systems.

Keep in mind that Microsoft recently launched a new version of its own platform, called Power Automate, and also led a venture round in an AI-based RPA company, FortressIQ (here’s a Forbes.com post I wrote about it).

So let’s get a quick background on Softomotive: Founded in 2005, the company was one of the pioneers of the RPA industry. It initially focused on developing a visual scripting system, using VBScript, for desktop automation. Softomotive would then go on to evolve the platform, such as by creating ProcessRobot for larger enterprises.

As of now, there are over 9,000 customers across the globe. Interestingly enough, Microsoft will make Softomotive’s WinAutomation application free for those who have an attended license for Power Automate.    ... "

Simulating Loaded Dice

Intriguing.  We spent lots of time updating how we generated random numbers, sometimes laboriously checking internal random number generators.  I can think of ways this could be used to generate numbers more clearly understandable to decision makers, since many real systems use numbers that are 'loaded' by context.

Algorithm quickly simulates a roll of loaded dice    by Steve Nadis, Massachusetts Institute of Technology, in TechXplore

A new algorithm, called the Fast Loaded Dice Roller (FLDR), simulates the roll of dice to produce random integers. The dice, in this case, could have any number of sides, and they are “loaded,” or weighted, to make some sides more likely to come up than others. Credit: Jose-Luis Olivares, MIT

The fast and efficient generation of random numbers has long been an important challenge. For centuries, games of chance have relied on the roll of a die, the flip of a coin, or the shuffling of cards to bring some randomness into the proceedings. In the second half of the 20th century, computers started taking over that role, for applications in cryptography, statistics, and artificial intelligence, as well as for various simulations—climatic, epidemiological, financial, and so forth.

MIT researchers have now developed a computer algorithm that might, at least for some tasks, churn out random numbers with the best combination of speed, accuracy, and low memory requirements available today. The algorithm, called the Fast Loaded Dice Roller (FLDR), was created by MIT graduate student Feras Saad, Research Scientist Cameron Freer, Professor Martin Rinard, and Principal Research Scientist Vikash Mansinghka, and it will be presented next week at the 23rd International Conference on Artificial Intelligence and Statistics.

Simply put, FLDR is a computer program that simulates the roll of dice to produce random integers. The dice can have any number of sides, and they are "loaded," or weighted, to make some sides more likely to come up than others. A loaded die can still yield random numbers—as one cannot predict in advance which side will turn up—but the randomness is constrained to meet a preset probability distribution. One might, for instance, use loaded dice to simulate the outcome of a baseball game; while the superior team is more likely to win, on a given day either team could end up on top.... "
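FLDR's own construction is in the paper; what it improves on can be shown with the textbook baseline for rolling a loaded die, inverse-CDF sampling over the weights (FLDR achieves the same distribution with near-optimal use of random bits and memory). Weights here are illustrative:

```python
import random
from bisect import bisect

def make_loaded_die(weights, rng=None):
    """Return a roll() function sampling side i with probability
    weights[i] / sum(weights), via inverse-CDF lookup."""
    rng = rng or random.Random()
    total = sum(weights)
    cdf, acc = [], 0
    for w in weights:
        acc += w
        cdf.append(acc)
    def roll():
        return bisect(cdf, rng.random() * total)  # index of the chosen side
    return roll

# A six-sided die loaded so side 5 carries half the total weight.
roll = make_loaded_die([1, 1, 1, 1, 1, 5], rng=random.Random(42))
counts = [0] * 6
for _ in range(10_000):
    counts[roll()] += 1
print(counts)  # side 5 should come up roughly half the time
```

This baseline uses floating-point uniforms; part of FLDR's contribution is doing exact sampling from integer weights using only fair coin flips, with provably small entropy overhead.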

Thursday, May 28, 2020

AI Making Personality Distinctions via Images

Intriguing paper, but I have my doubts that you can determine useful personality distinctions this way.

Artificial intelligence can make personality judgments based on photographs

National Research University Higher School of Economics
Russian researchers from HSE University and Open University for the Humanities and Economics have demonstrated that artificial intelligence is able to infer people's personality from 'selfie' photographs better than human raters do. Conscientiousness emerged to be more easily recognizable than the other four traits. Personality predictions based on female faces appeared to be more reliable than those for male faces. The technology can be used to find the 'best matches' in customer service, dating or online tutoring.

The article, "Assessing the Big Five personality traits using real-life static facial images," will be published on May 22 in Scientific Reports.

Physiognomists from Ancient Greece to Cesare Lombroso have tried to link facial appearance to personality, but the majority of their ideas failed to withstand the scrutiny of modern science. The few established associations of specific facial features with personality traits, such as facial width-to-height ratio, are quite weak. Studies asking human raters to make personality judgments based on photographs have produced inconsistent results, suggesting that our judgments are too unreliable to be of any practical importance. ... 

Kachur, A., Osin, E., Davydov, D., Shutilov, K., & Novokshonov, A. (2020). Assessing the Big Five personality traits using real-life static facial images. Scientific Reports. https://www.nature.com/articles/s41598-020-65358-6

Wearable Vitamin Sensor

Another wearable sensor example. 

Wearable Sensor Tracks Vitamin C Levels in Sweat
UC San Diego News Center
By Alison Caldwell

A team of University of California, San Diego (UCSD) researchers developed a new wearable sensor that monitors vitamin C levels in perspiration, which could offer a highly personalized option for users to track daily nutritional consumption and dietary compliance. The wearable is an adhesive patch that includes a system to stimulate sweating, and an electrode sensor to rapidly detect vitamin C concentrations. The flexible electrodes contain the enzyme ascorbate oxidase, which converts vitamin C to dehydroascorbic acid; the resulting rise of oxygen triggers a current that the device measures. UCSD's Juliane Sempionatto said, "Ultimately, this sort of device would be valuable for supporting behavioral changes around diet and nutrition."  ... '

Samsung Health on Newer TVs

Preloading TVs with software that addresses current conditions, like stay-at-home workplaces.
A good way to drive TV sales.  No real mention of 'assistant' functions, and no mention of Samsung's Bixby.  Note the ability to have cross-device functions enabled in the system, a possible assistant application.

Samsung Health Now Available as a Comprehensive In-Home Fitness and Wellness Platform on 2020 Samsung Smart TVs

With free access to Samsung Health, 5,000 hours of content on the TV and over 250 instructive videos from barre3, Calm, Fitplan, Jillian Michaels Fitness, obé fitness, and Echelon available
Samsung Electronics announced today that its Samsung Health platform is now available on 2020 Samsung Smart TV models. Designed to revolutionize the concept of at-home workouts, Samsung Health is a user-centric wellness platform that goes beyond fitness. It is a companion that syncs across various digital devices – smartphones, wearables and now Samsung Smart TVs. With Samsung Health, users will be able to enjoy free premium classes, start new wellness routines and even get the whole household moving with family challenges and more – all from the comfort of home.

“The whole intention of Samsung Health is to motivate our consumers to live healthier lives by meeting them wherever they are, across Samsung platforms,” said Won-Jin Lee, Executive Vice President of Service Business at Samsung Electronics. “We knew that to do this, we needed to develop a user-centric and immersive platform that offered a variety of in-home fitness and wellness options. Given the current climate, we hope that the launch of Samsung Health makes it easier for our consumers to prioritize their physical and mental wellbeing on a daily basis.”   ... '

Beware of Assumptions

Bob Herbold tells a good story about the danger of  assumptions for leaders.

There is a powerful lesson for leaders here:

Beware of Assumptions – Regularly isolate key assumptions that are being used, constantly probe the basis for those assumptions, and experiment appropriately.

Also, thank goodness the America’s Cup team had an outspoken skeptic in the crew.  We all need those kinds of people on our team! .... "

Predictive Maintenance Driving 3D Printing

A long-term interest and application area.  You could do a good job of prediction, but had to have a complex array of replacement parts in inventory.  Here, for the right kind of application, the inventory could be minimized.  Could the parts even be produced to address certain kinds of predicted degradation by altering the manufacturing design?

Army 3D Printing Study Shows Promise for Predictive Maintenance
U.S. Army Research Laboratory
May 19, 2020

A study by researchers at the U.S. Army's Combat Capabilities Development Command (CCDC) Army Research Laboratory (ARL), the National Institute of Standards and Technology, CCDC Aviation and Missile Center, and Johns Hopkins University detailed a method for monitoring the performance of three-dimensionally (3D)-printed parts. The technique uses sensors to detect and track the wear and tear of 3D-printed maraging steel (known to possess superior strength and toughness without losing ductility), to help forecast degradation or malfunctions that warrant replacement. ARL's Todd C. Henry said the study was as much about understanding the specific performance of a 3D-printed material as it was about understanding the ability to monitor and detect the performance and degradation of 3D-printed materials.
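The study's specific sensing method is its own; as a generic illustration of forecast-based replacement, one common baseline is to fit a linear trend to periodic wear measurements and extrapolate when the part crosses its failure threshold. All numbers below are made up:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def hours_until_threshold(hours, wear, threshold):
    """Extrapolate the fitted wear trend to the replacement threshold."""
    a, b = fit_line(hours, wear)
    if b <= 0:
        return None  # no degradation trend detected
    return (threshold - a) / b

# Hypothetical wear readings (mm) at inspection hours; replace at 1.0 mm.
hours = [0, 100, 200, 300, 400]
wear = [0.02, 0.11, 0.19, 0.32, 0.41]
print(hours_until_threshold(hours, wear, threshold=1.0))  # extrapolates to roughly hour 1000
```

Real predictive-maintenance models are usually nonlinear and probabilistic, but the economic logic is the same: schedule the replacement (or the 3D print job) just ahead of the forecast crossing.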

Wednesday, May 27, 2020

Bot Activity During Coronavirus

Don't know what to fully make of this.  How accurate is the machine learning model used to identify bots?  Looking for the Carnegie piece supporting this to get an idea.  Here is one CMU article which covers the research.

Researchers: Nearly Half Of Accounts Tweeting About Coronavirus Are Likely Bots   By Bobby Allyn

Computer scientists at Carnegie Mellon University have determined that nearly half of all Twitter accounts spreading messages about the COVID-19 pandemic are likely bots. The team analyzed more than 200 million tweets discussing the virus since January, and found about 45% were sent by accounts that behave more like computerized bots than humans. In addition, the researchers identified more than 100 false narratives about the novel coronavirus that bot-controlled accounts are spreading on the platform. The researchers used a bot-hunter tool to flag accounts that post messages more often than is humanly possible, or which claim to be in multiple countries within a period of a few hours. Said Carnegie Mellon researcher Kathleen Carley, "We're seeing up to two times as much bot activity as we'd predicted based on previous natural disasters, crises, and elections."   ... '
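The two heuristics the article describes, posting faster than is humanly possible and claiming to be in multiple countries within a few hours, can be sketched directly. The thresholds here are my assumptions, not CMU's bot-hunter tool:

```python
from datetime import datetime, timedelta

def looks_like_bot(posts, max_per_minute=10, country_window_hours=3):
    """posts: list of (timestamp, country) tuples, oldest first."""
    times = [t for t, _ in posts]
    # Heuristic 1: an implausibly fast burst of posts.
    for i in range(len(times)):
        burst = sum(1 for t in times[i:] if t - times[i] <= timedelta(minutes=1))
        if burst > max_per_minute:
            return True
    # Heuristic 2: "present" in two countries within a short window.
    window = timedelta(hours=country_window_hours)
    for i, (t1, c1) in enumerate(posts):
        for t2, c2 in posts[i + 1:]:
            if t2 - t1 <= window and c2 != c1:
                return True
    return False

t0 = datetime(2020, 5, 1, 12, 0)
burst = [(t0 + timedelta(seconds=3 * i), "US") for i in range(20)]
print(looks_like_bot(burst))   # True: 20 posts in under a minute
human = [(t0 + timedelta(hours=i), "US") for i in range(5)]
print(looks_like_bot(human))   # False
```

Such rules are cheap but coarse, which is one reason estimates of bot prevalence (like the 45% figure here) deserve scrutiny of the underlying classifier.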

Kubernetes Workflows

This presentation was just brought to my attention. Especially useful when maintenance will be required, and it should always be built in for serious operations.

The Evolution of Distributed Systems on Kubernetes
Bilgin Ibryam takes us on a journey exploring Kubernetes primitives, design patterns and new workload types.

Bilgin Ibryam is a product manager and a former architect at Red Hat. In his day-to-day job, he works with customers from all sizes and locations, helping them to be successful with adoption of emerging technologies through proven and repeatable patterns and practises. His current interests include enterprise blockchains, cloud-native data and serverless.

About the conference
Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in the  ... "

Talk: The ACM Code of Ethics vs Snake Oil and Dodgy Development

And directly related to the last post, note the excellent archive of past talks linked to below.

Register Now: "Leveraging the ACM Code of Ethics Against Ethical Snake Oil and Dodgy Development"

Register now for the upcoming ACM TechTalk "Leveraging the ACM Code of Ethics Against Ethical Snake Oil and Dodgy Development,"   presented on Monday, June 8 at 12:00 PM ET/9:00 AM PT by Don Gotterbarn, Professor Emeritus at East Tennessee State University and Co-Chair, ACM Committee on Professional Ethics (COPE); and Marty Wolf, Professor at Bemidji State University; Co-Chair, ACM Committee on Professional Ethics (COPE). Keith Miller, Professor at the University of Missouri – Saint Louis, will moderate the questions and answers session following the talk. Continue the discussion on ACM's Discourse Page.   You can view our entire archive of past ACM TechTalks on demand at https://learning.acm.org/techtalks-archive.

The Ethics of Dark Patterns

A look at 'dark patterns', a term I had not heard before. Linking to an ACM look at the ethics by people who build such interfaces.

Dark Patterns: Past, Present, and Future in ACMQueue
The evolution of tricky user interfaces
Arvind Narayanan, Arunesh Mathur, Marshini Chetty, and Mihir Kshirsagar

Dark patterns are user interfaces that benefit an online service by leading users into making decisions they might not otherwise make. Some dark patterns deceive users while others covertly manipulate or coerce them into choices that are not in their best interests. A few egregious examples have led to public backlash recently: TurboTax hid its U.S. government-mandated free tax-file program for low-income users on its website to get them to use its paid program; Facebook asked users to enter phone numbers for two-factor authentication but then used those numbers to serve targeted ads; Match.com knowingly let scammers generate fake messages of interest in its online dating app to get users to sign up for its paid service. Many dark patterns have been adopted on a large scale across the web. Figure 1 shows a deceptive countdown timer dark pattern on JustFab. The advertised offer remains valid even after the timer expires. This pattern is a common tactic—a recent study found such deceptive countdown timers on 140 shopping websites. ... "
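The deceptive countdown timer the excerpt describes has a simple behavioral signature: the "expiring" offer is still there after the timer hits zero. A study auditing shopping sites could check for it roughly as follows. This is a hypothetical sketch, not the cited study's methodology; `fetch_offer` and `fetch_timer_seconds` are stand-ins for real page scraping.

```python
import time

def is_deceptive_timer(fetch_offer, fetch_timer_seconds, wait=time.sleep):
    """Flag a countdown timer as deceptive if the offer survives its expiry.

    fetch_offer: callable returning the currently advertised offer (e.g. a price).
    fetch_timer_seconds: callable returning seconds left on the countdown.
    """
    offer_before = fetch_offer()
    remaining = fetch_timer_seconds()
    wait(remaining + 1)            # let the countdown run out
    offer_after = fetch_offer()
    # Deceptive: the supposedly expiring offer is unchanged after expiry.
    return offer_after == offer_before
```

In a real crawl, the two fetches would hit the live page and the wait could run for minutes; injecting `wait` also makes the check easy to test without sleeping.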

Touch Sensors

Touch sensors are advancing. We sought to understand and adjust how products felt via a sensor.

OmniTact: A Multi-Directional High-Resolution Touch Sensor
Akhil Padmanabha and Frederik Ebert    May 14, 2020

Touch has been shown to be important for dexterous manipulation in robotics. Recently, the GelSight sensor has caught significant interest for learning-based robotics due to its low cost and rich signal. For example, GelSight sensors have been used for learning inserting USB cables (Li et al, 2014), rolling a die (Tian et al. 2019) or grasping objects (Calandra et al. 2017).

The reason why learning-based methods work well with GelSight sensors is that they output high-resolution tactile images from which a variety of features such as object geometry, surface texture, normal and shear forces can be estimated that often prove critical to robotic control. The tactile images can be fed into standard CNN-based computer vision pipelines allowing the use of a variety of different learning-based techniques: In Calandra et al. 2017 a grasp-success classifier is trained on GelSight data collected in self-supervised manner, in Tian et al. 2019 Visual Foresight, a video-prediction-based control algorithm is used to make a robot roll a die purely based on tactile images, and in Lambeta et al. 2020 a model-based RL algorithm is applied to in-hand manipulation using GelSight images.

Unfortunately applying GelSight sensors in practical real-world scenarios is still challenging due to its large size and the fact that it is only sensitive on one side. Here we introduce a new, more compact tactile sensor design based on GelSight that allows for omnidirectional sensing, i.e. making the sensor sensitive on all sides like a human finger, and show how this opens up new possibilities for sensorimotor learning. We demonstrate this by teaching a robot to pick up electrical plugs and insert them purely based on tactile feedback.  ... "
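The excerpt notes that tactile images feed into standard CNN pipelines because features like contact edges and surface geometry show up in the image. A toy illustration of that first processing step, written here from scratch in numpy rather than taken from any of the cited papers: a synthetic "tactile image" with a raised ridge, convolved with a Sobel kernel of the kind a CNN's first layer tends to learn.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a 2D array with a small kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic tactile reading: flat background with a raised ridge (a contact edge).
tactile = np.zeros((8, 8))
tactile[:, 4:] = 1.0

# Horizontal-gradient (Sobel) kernel: responds strongly at vertical edges.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = conv2d(tactile, sobel_x)
print(edges.max())   # strongest response sits at the ridge boundary
```

A real grasp-success classifier stacks many such learned filters with nonlinearities and pooling, but the input-to-feature step is the same idea.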

Vizient and IBM Watson Health

Been following for some time how Watson is being embedded in real-world business processes and problem solving. Here is another example in healthcare. Note the replacement of some of IBM's performance improvement offerings.

Vizient Inc. Partners with IBM Watson Health to Help Support Healthcare Providers’ Performance Improvement Needs

IRVING, Texas (BUSINESS WIRE), May 26, 2020 - Vizient Inc., the largest member-driven healthcare performance improvement company in the United States, today announces it has entered into a partnership with IBM Watson Health. The partnership will allow each company to help better serve its customers in areas including performance benchmarking, strategic planning, and clinical and operational performance.

Through the partnership agreement, IBM Watson Health is withdrawing the following offerings from its portfolio and offering a transition path for its clients to Vizient’s analytics portfolio:

IBM ActionOI®: Vizient will support the ActionOI functionality through Vizient Operational Data Base (ODB) which provides operational benchmarking plus services for peer networking.
IBM CareDiscovery®: Vizient will offer Vizient Clinical Data Base (CDB) which provides clinical performance improvement plus OPPE reporting and new peer networking opportunities focused on operational and clinical performance improvement.
IBM Market Expert®: Vizient will offer Vizient Sg2 Market Edge, which integrates strategic growth intelligence with advanced analytics to enable full continuum strategic planning. ... " 

Tuesday, May 26, 2020

Refraction AI Contactless Food Delivery

Here is an excerpt from IEEE Spectrum on robotic solutions, featuring Refraction AI, a contactless food delivery system. It also includes a number of videos of related solutions; go to the link below to see them.

Video Friday: Robot Startup Refraction AI Testing Contactless Food Delivery   By Evan Ackerman, Erico Guizzo and Fan Shi

Refraction AI, founded by University of Michigan researchers, has developed a three-wheeled autonomous vehicle called REV-1 that is providing delivery services from local restaurants in Ann Arbor.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRA 2020 – June 01, 2020 – [Virtual Conference]
RSS 2020 – July 12-16, 2020 – [Virtual Conference]
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
Let us know if you have suggestions for next week, and enjoy today’s videos.

Refraction AI, a University of Michigan startup that began delivering food in late 2019, says its pilot deployment of five “Rev-1” robots is doing four times as many runs since the COVID-19 crisis began. The small fleet of delivery robots helps keep employees and patrons safer by limiting human to human contact while also helping restaurants save money on delivery services due to the lower cost of Refraction AI’s service.  .... "

Ideo Designer Designing Smarts to Compete with Apple

We worked with Ideo in some of our shopping labs and problem solving spaces. So the emergence of a new smart speaker from an Ideo designer was of interest. I look forward to seeing what this looks like. Audio only, or an aim at the assistant-enabled markets from Amazon and Google, and Samsung and Baidu too?

Ex-Apple designer to launch a product that competes with Apple itself

Christopher Stringer worked on many of Apple’s biggest hits. Now, he’s getting ready to release a new series of smart speakers that could compete with Apple HomePod and Sonos.
By Mark Wilson in Fastcompany

Christopher Stringer was a designer at Ideo, who helped create Dell’s hit ’90s design language, before he got the call from Jony Ive in 1995 to join Apple. Stringer went on to become a key figure in one of the most influential industrial design teams in history, launching dozens of products from PowerBooks to the iPhone.

In 2017, Stringer left Apple to build something new. According to a new report in the Financial Times, Stringer is now raising money to launch a product that will compete with Apple itself.

Stringer’s startup is called Syng. He cofounded it with Damon Way, who launched the tech protector brand Incase, and Afrooz Family, a master coder and former sound engineer at Apple who worked on the HomePod. Set up in Venice Beach, Los Angeles, Syng describes itself as “a future of sound company,” and it’s working on a new series of smart speakers, dubbed “Cell,” to rival the HomePod and Sonos, with the first of the line coming out in the fourth quarter of 2020. The Cell will undoubtedly be an impressive piece of machinery: Syng has lured designers from Apple, Nest, and Nike.  ....  " 

Basic, Non-technical look at Quantum Computing

Looks to be a good, basic, non-technical introduction, and it also links to further free courses.

Quantum Computing for the Newb
Introductory Concepts
By Amelie Schreiber

In this article, we will give a basic introduction to our free course material for quantum computing. This is an introductory course for the absolute beginner. If you are not an expert in computer programming or mathematics, this course is for you! It introduces all of the basic math needed for quantum computing and the basic programming skills you need will also be introduced. Best of all, everything is in interactive, online, Jupyter notebooks. So there is no need to download anything or learn terminal. If you already feel comfortable with Python, then much of this will be a breeze! You can focus more on the concepts from quantum physics such as state vectors, quantum gates represented as matrices, and quantum circuit diagrams. To get to the material, follow this Github pages link. ... " 
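The core concepts the course names (state vectors, quantum gates as matrices) fit in a few lines of plain numpy. This sketch is mine, not from the course material: it applies a Hadamard gate to the |0⟩ state and reads off measurement probabilities via the Born rule.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)               # the |0> state vector

# Hadamard gate as a 2x2 matrix: puts a basis state into equal superposition.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0                                     # gate application = matrix multiply
probs = np.abs(state) ** 2                           # Born rule: |amplitude|^2

print(probs)   # ~[0.5 0.5]: equal chance of measuring 0 or 1
```

Quantum circuit diagrams are just compositions of such matrix multiplications, which is why the course can teach them through Jupyter notebooks and basic linear algebra.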

Wal-Mart Grounding Jet.com

No new news, but some of the motivations provided are of interest. Would they have thought differently if they had known the virus was coming? Will it fundamentally change how people buy?

Walmart to ground Jet.com
By Dan Berthiaume - 05/19/2020  in CSA

Walmart is winding down a digitally-native retailer it bought for $3 billion in 2016.

The discount giant will cease operating its Jet.com e-commerce subsidiary at an unspecified date. Walmart announced its intention to shutter Jet.com in a two-sentence statement in its first quarter earnings report: “Due to continued strength of the Walmart.com brand, the company will discontinue Jet.com,” Walmart said. “The acquisition of Jet.com nearly four years ago was critical to accelerating our omni strategy.”

On the company's quarterly earnings call with analysts, Walmart CEO Doug McMillon credited the acquisition of Jet.com as  “jump-starting the progress we have made the last few years.”  He cited the growth of Walmart’s curbside pickup, delivery to the home and expansion of categories beyond groceries, including apparel and home decor.  McMillon also said "we're seeing the Walmart brand resonate regardless of income, geography or age."  .... "

Google Letting You Buy with Voice

This has been possible directly with Amazon Alexa for a long time, and I use it fairly often, but the claim here is that Google Assistant is much more secure, using a 'voiceprint'. Security like that might also enable secure business and healthcare transactions as well. Google Assistant already detects the language you are using and adapts the response.

Google is working on voice confirmation for purchases with Assistant
Buy stuff with your voice.

Rachel England, @rachel_england  in Engadget

Plasma Jets may Propel Aircraft

Some of the interesting details in the link below.

Plasma Jets May One Day Propel Aircraft
Plasma thrusters could help jet planes fly without fossil fuels
By Charles Q. Choi in Spectrum IEEE Energy

Jet planes may one day fly without fossil fuels by using plasma jets, new research from scientists in China suggests.

A variety of spacecraft, such as NASA’s Dawn space probe, generate plasma from gases such as xenon for propulsion. However, such thrusters exert only tiny propulsive forces, and so can find use only in outer space, in the absence of air friction.

Now researchers have created a prototype thruster capable of generating plasma jets with propulsive forces comparable to those from conventional jet engines, using only air and electricity.

An air compressor forces high-pressure air at a rate of 30 liters per minute into an ionization chamber in the device, which uses microwaves to convert this air stream into a plasma jet blasted out of a quartz tube. Plasma temperatures could exceed 1,000 °C.  ... " 
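The one hard number in the excerpt, the 30 L/min air feed, lets us do a back-of-envelope check on the thrust side via the momentum flux relation F = mdot * v_e. The exhaust velocities below are illustrative assumptions of mine, not figures from the paper, as is the room-condition air density.

```python
# Assumed: air density at roughly room conditions (the paper's intake side).
AIR_DENSITY = 1.2                       # kg/m^3

flow_m3_per_s = 30 / 1000 / 60          # stated 30 L/min -> m^3/s
mdot = AIR_DENSITY * flow_m3_per_s      # mass flow through the thruster, kg/s

# Thrust = mass flow * exhaust velocity, for a range of assumed velocities.
for v_exhaust in (100.0, 500.0, 1000.0):    # m/s, illustrative values only
    thrust = mdot * v_exhaust
    print(f"v_e = {v_exhaust:6.0f} m/s -> thrust ~ {thrust:.3f} N")
```

The tiny mass flow shows why the claimed forces depend on heating the plasma to very high temperatures: a hotter, faster jet is the only way to get meaningful thrust out of so little air.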

Facebook's Blenderbot

A good indication of what Facebook has been doing in this space. From O'Reilly:

Facebook open-sources BlenderBot
Facebook AI has open-sourced BlenderBot, an open domain chatbot. It’s said to “feel more human,” because it blends a diverse set of conversational skills—including empathy, knowledge, and personality—together in one system. The model has 9.4 billion parameters, which is 3.6 times more than the largest existing system. Here’s the complete model, code, and evaluation setup.  

Monday, May 25, 2020

Embedding Machine Learning into RPA Process

Something we did, but with BPM models and process flow. It makes sense because you can better understand the context involved. Process models, even simple visualizations, can help sell the model, get useful data, and promote the contextual design and value. Rules are understandable to decision makers, but algorithms usually are not.

Small ML is the next big leap in RPA
Instead of doing big ML projects, embed ML into your day-to-day RPA work and be amazed.   By Eljas Linna in TowardsDataScience

The boom in robotic process automation (RPA) over the past few years has made it pretty clear that business processes in nearly every industry have an endless amount of bottlenecks to be resolved and efficiency improvements to be gained. Years before the full surge of RPA, McKinsey already estimated the annual impact of knowledge work automation to be around $6 trillion in 2025.

Having followed the evolution of RPA from python scripts towards generalized platforms, I’ve witnessed quite a transformation. The tools and libraries available in RPA have improved over time, each iteration widening the variety of processes that can be automated and improving the overall automation rates further. I believe the addition of machine learning (ML) in the everyday toolbox of RPA developers is the next huge leap in the scope and effectiveness of process automation. And I am not alone. But there’s a catch. It will look very different from what all the hype would lead you to believe.

Why even care about machine learning?
Imagine RPA without if-else logic or variables. You could only automate simple and completely static click-through processes. As we gradually add in some variables and logic, we can start automating more complex and impactful processes. The more complex the process you want to automate, the more logic rules you need to add, and the more edge cases you need to consider. The burden on the RPA developer’s rule system grows exponentially. See where I’m going with this?  ... " 

Could You Run your Business as a Sim Game?

The overall idea here was pitched to us, especially the supply chain aspects of such a sim. Since we were already doing related analytics, you could see how such a system could mimic, diagnose, and propose solutions for the real world, in particular for unusual conditions. We thought there was a place to include gamification of the system's reaction to that. But it required considerable funding to make it work, and we were not ready for that, so we said no. Apparently, according to the article, it never got a real client. Note the health systems example; could that be useful today? Worth a thought.

Go read this incredible history of the SimCity studio’s forgotten business games division  Maxis Business Simulations was strange, ambitious, and doomed  .... 

By Adi Robertson  @thedextriarchy  ...  in TheVerge. ... 

Evolution of Distributed Systems on Kubernetes

Ultimately, in delivery, workflow design is key; here is a presentation on the topic.

Kubernetes  is an open-source container-orchestration system for automating application deployment, scaling, and management ... 

The Evolution of Distributed Systems on Kubernetes

Bilgin Ibryam takes us on a journey exploring Kubernetes primitives, design patterns and new workload types.


Useful Plans vs the Activity of Planning

Below is the intro to former IBMer Irving Wladawsky-Berger's article on planning vs. plans; much more at the link. I especially like the differences when you meet different kinds of events and seek cures for them. Resilience has to address all of them meaningfully; risk management is one key tool.

Even When Plans Are Useless, Planning Is Indispensable

“As scientists race to develop a cure for the coronavirus, businesses are trying to assess the impact of the outbreak on their own enterprises,” wrote MIT professor Yossi Sheffi in a February 18 article in the Wall Street Journal.  “Just as scientists are confronting an unknown enemy, corporate executives are largely working blind because the coronavirus could cause supply-chain disruptions that are unlike anything we have seen in the past 70 years.”

Sheffi is Director of the MIT Center for Transportation and Logistics.  He’s written extensively on the critical need for resilience in global enterprises and their supply chains, including The Power of Resilience and The Resilient Enterprise, so they can better react to major unexpected events.  Covid-19 is the kind of massively disruptive event he had in mind when he wrote those books.

While learning from historical precedents is always a good idea, recent supply chain disruptions, such as the 2003 outbreak of SARS in Asia, the 2011 Fukushima nuclear disaster, and the 2011 Thailand floods, were very different from our current pandemic.  Those events were much more localized, lasted a relatively short time, and they mostly impacted supply, not demand.  The impact of Covid-19 is much bigger, affecting consumer demand as well as supply chains all over the world, and likely to last quite a bit longer.  “Today’s supply chains are global and more complex than they were in 2003,” with factories all over the world affected by lockdowns and quarantines.  Apple, for example, works with suppliers in 43 countries.   ... "    (much more below in the article) 

AI is Watching you Work

Expect much more of this in the future, starting with aggregated group data but soon moving to individuals. It can then be readily matched to goal achievement and predicted over time.

AI Is Watching You Work, With Mixed Results

As AI monitors more workers, in call centers and on the manufacturing floor, the technology is challenged to deliver empathy for humans.
By AI Trends Staff 

Advances in AI and sensors are providing new ways to digitize manual labor, giving managers new insights and potentially new leverage on employees 

Many jobs in manufacturing require a dexterity and creativity that robots and software are unlikely to match any time soon. However, manufacturing jobs are likely to change based on how it seems best to work with the AI going forward. 

An experiment has been going on at an auto parts manufacturing plant in Battle Creek, Mich. since 2017 to capture worker movements all day long in the hopes of identifying bottlenecks in production. The plant of Denso, the global auto parts manufacturer, has been piping video into machine learning software from startup Drishti, according to a recent account in Wired. 

“In the past, we would take a line that was struggling and bring a bunch of people down with stopwatches to try to make it better,” stated Tony Huffman, a production supervisor at the plant. The Drishti system logs the “cycle time” for every worker all day, for every shift. Plant managers analyze the data to look for sometimes subtle bottlenecks. “Everything flows better and is smoother,” Huffman stated.   ... "
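The bottleneck analysis described here reduces to a simple idea: given per-station cycle times logged over a shift (as a Drishti-style system would produce), the station with the highest average cycle time paces the whole line. A minimal sketch with invented station names and times, not Drishti's actual output format:

```python
from statistics import mean

# Seconds per cycle, one entry per observed cycle over a shift (invented data).
cycle_times = {
    "station_1": [31, 30, 32, 31],
    "station_2": [45, 52, 47, 49],   # the subtle bottleneck
    "station_3": [33, 34, 32, 33],
}

# The slowest average station sets the line's overall throughput.
averages = {station: mean(times) for station, times in cycle_times.items()}
bottleneck = max(averages, key=averages.get)

print(bottleneck)   # -> station_2
```

The stopwatch crews did exactly this by hand; the continuous video log simply makes the averages available for every worker, every shift, without a special study.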