
Tuesday, June 30, 2020

On Contact Tracing Apps

Intriguing piece I have only scanned so far.  What got me most interested is that the tracing aspect included a considerable human element, at least if you were not willing to share lots of location information that could well be seen as very privacy invasive.  Here is a considerable, non-technical look at the challenge.

Everything You Have Read About Contact Tracing Apps Is Wrong  in Knowledge@Wharton

Technology sounds like an attractive solution to contact tracing, but apps are at best a minor supplement to a large effort. In this opinion piece, Lyle Ungar writes that “we should be taking best practices from call centers, where human callers are supported by chatbots and information systems, supplemented with privacy-respecting apps on people’s phones that allow them to share information more easily and accurately. In the end, contact tracing is not an app, but a combined effort between technology, human tracers, and the general population.” Ungar is a machine learning researcher and professor of computer and information science at the University of Pennsylvania.

Contact tracing is key to reopening society. Best estimates put widespread vaccination in the U.S. more than two years in the future, and we can’t safely resume public life until we can identify who has been exposed to COVID-19, test them for the disease, and isolate them if they are sick. The U.S. has far too few human contact tracers, with states planning to hire only a tiny fraction of the estimated 180,000 contact tracers needed. Contact tracing apps have been proposed as one way to mitigate this problem. People are worried about their privacy; they should be even more worried about whether the apps will help. Even expert articles in the Journal of the American Medical Association (JAMA) underestimate the challenges.

Most of the discussion of contact tracing focuses on exposure notification apps (i.e., TraceTogether, PwC), which use Bluetooth signals to identify individuals who may have been exposed to the novel coronavirus. The most widely supported protocols (by Google and Apple) respect privacy; they broadcast and receive random numbers from your phone, but don’t reveal your name or phone numbers. Google and Apple do not allow the apps to share geolocation or other private information. Such apps are only effective in relatively tight communities, such as universities, where high adoption rates can be achieved. In a general community, where adoption is voluntary, adoption rates are vastly lower (the highest adoption rates are 32% in Australia and 38% in Iceland) and so the apps are virtually useless.  ... "
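
For readers curious how the Bluetooth approach works, here is a minimal Python sketch of the rotating-random-identifier idea.  It is a toy of my own: the identifier size, the missing rotation timing, and the matching step are all simplifications, not the Google/Apple specification.

  import os

  def new_rolling_id():
      # An opaque random identifier to broadcast over Bluetooth;
      # it carries no name, phone number, or location.
      return os.urandom(16)

  class Phone:
      def __init__(self):
          self.my_ids = []        # identifiers this phone has broadcast
          self.heard_ids = set()  # identifiers received from nearby phones

      def broadcast(self):
          rid = new_rolling_id()
          self.my_ids.append(rid)
          return rid

      def receive(self, rid):
          self.heard_ids.add(rid)

  def exposed(phone, published_ids_from_positive_cases):
      # When a positive case publishes its past identifiers, each phone
      # checks locally whether it ever heard one of them.
      return any(rid in phone.heard_ids for rid in published_ids_from_positive_cases)

  # Usage: two phones near each other, one later tests positive.
  a, b = Phone(), Phone()
  b.receive(a.broadcast())
  print(exposed(b, a.my_ids))   # True

The point Ungar makes still stands: the hard part is everything around this exchange, from adoption rates to the human follow-up.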

Seeing Around the Corner

The general idea has been brought up a number of times.  Bouncing radar off surfaces and cooperating systems can gather more data, and if we interpret it to detect dangers we can react, even to things around the corner.  Back to better sensors and smarter reactions to get better results.  As suggested, there is much work to do to get all the data and reactions in place.

Radar Allows Cars to Spot Hazards Around Corners
Princeton University
John Sullivan
June 25, 2020

Researchers at Princeton University, Germany's Ulm University and University of Kassel, Mercedes-Benz, and Canadian software firm Algolux have developed an automated radar system that will enable cars to look around corners and spot oncoming hazards. The system employs Doppler radar to bounce radio waves off surfaces at an angle to detect occluded objects and determine if they are moving or stationary. Radar's spatial resolution is relatively low, but the researchers think they could design algorithms to read the data to allow the sensors to operate. The team used artificial intelligence techniques to refine processing and interpret the images, focusing on background noise rather than usable information to distinguish objects. Princeton’s Felix Heide said, “In terms of integration and bringing it to market, it requires a lot of engineering. But the technology is there, so there is the potential for seeing this very soon in vehicles.”  .. " 
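
The article is light on method, but the underlying physics is the standard Doppler relation between frequency shift and radial velocity.  A toy calculation, with made-up numbers, of how a radar return might be labeled moving or stationary:

  C = 3.0e8  # speed of light, m/s

  def radial_velocity(f_transmit_hz, doppler_shift_hz):
      # Radar Doppler relation: v = shift * c / (2 * f_transmit)
      return doppler_shift_hz * C / (2.0 * f_transmit_hz)

  def classify(f_transmit_hz, doppler_shift_hz, threshold_mps=0.5):
      v = radial_velocity(f_transmit_hz, doppler_shift_hz)
      return "moving" if abs(v) > threshold_mps else "stationary"

  # A 77 GHz automotive radar seeing a 600 Hz shift: roughly 1.2 m/s.
  print(classify(77e9, 600.0))   # moving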

Banning Facial Recognition

What I have been thinking: hold specific uses accountable rather than banning the technology generally.  Even if the major players stop doing this, others quickly will.

Bans on Facial Recognition Are Naïve. Hold Law Enforcement Accountable for Its Abuse
by Osonde A. Osoba and Douglas Yeung

June 17, 2020

The use of facial recognition technology has become a new target in the fight against racism and brutality in law enforcement. Last week, IBM announced it was leaving the facial recognition business altogether, and its CEO questioned whether it was an appropriate tool for law enforcement. Within days, Microsoft and Amazon each announced that they would—at least temporarily—prohibit law enforcement agencies from using their facial recognition software. The Justice in Policing Act, also introduced in Congress last week, would specifically ban facial recognition analysis of officers' body camera footage.

Facial recognition technologies—with the assumptions of their developers embedded in their code—often perform poorly at recognizing women, older people, and those with darker skin. There's little question that these flaws exist. But banning facial recognition isn't necessarily the best response. ...

Smart Columbus Connects Everyday Vehicles

Yet another example of using more sensors and connections to build intelligent management of vehicles within large systems.   Just underway.

Pilot Test Begins for Tech to Connect Everyday Vehicles
IEEE Spectrum
Sandy Ong
June 19, 2020

Columbus, OH, will launch a connected-vehicle pilot program in July, with up to 1,800 public and private vehicles fitted with special onboard units and dashboard-mounted head-up displays. These vehicles will be able to receive messages from traffic lights at 113 intersections. The goal of the project is to study the impacts of connectivity on safety and traffic flow. The pilot is part of the Smart Columbus initiative, which was rolled out after the city was awarded $40 million through the U.S. Department of Transportation's 2015 Smart City Challenge. The University of Michigan's Transportation Research Institute's Debra Bezzina said connected-vehicle technology could save billions a year by preventing as much as 80% of unimpaired car crashes. It also could result in more efficient traffic management and greener commuting.  ... "

Quantum Computers, Quantum Sensors

Sensors gather measurement data, which is crucial to testing hypotheses about complex systems.  Here, an introduction to the concept of a quantum sensor.

Quantum sensor technology
“From quantum sensors to quantum computers”
Fraunhofer Research News / June 02, 2020

Prof. Oliver Ambacher, head of the Fraunhofer Institute for Applied Solid State Physics IAF in Freiburg, on the potential applications of quantum sensor technology and the role it plays in quantum computing.

Components based on diamond plates don't rely on extensive cooling: they will enable future quantum computers to be used at room temperature.

Mr. Ambacher, your institute is conducting extensive research in the field of quantum sensor technology. What’s so special about this technology?

Ambacher: Quantum sensors are very small and sensitive, so they can measure the tiniest of signals with extreme spatial resolution. This is important, for example, for testing nanoelectronic circuits in the semiconductor industry. Currently, it can be very difficult to troubleshoot circuits that don’t work. We are developing diamond-based quantum sensors, which additionally have the advantage that they function at room temperature. This is an important prerequisite for market acceptance of quantum sensors.

In what other areas might quantum sensors also be of interest?
Quantum sensor technology can also be used in medicine. Currently, brain waves are measured with the aid of superconductors, which require complex cooling with liquid helium and very large machines. With quantum sensors, it could one day be possible to perform these measurements using only a thin sensor film on the head instead of a bulky cap. This would not only put less strain on patients, but the procedure would also be more cost-effective for hospitals.

When might we see large-scale use of quantum sensors?
Superconducting sensors are already being used in medicine to detect brain injuries or brain tumors, or to measure brain activity following a stroke. Quantum sensor technology has already caught on in nanoelectronics, too, for performing fault analysis on modern electronic circuits. We are actually already in the second generation of quantum technology, and now we are asking ourselves things like: Where is there still room for additional improvement? What does industry need? What other applications can we develop? ..."

Monday, June 29, 2020

IoT Devices in the Hotel Room

Intriguing aspects of IoT in hospitality.   There has been less use of assistants here than I expected.

Shedding Light (and Sound) on Hidden IoT Devices in Your Next Hotel Room

Carnegie Mellon University Human-Computer Interaction Institute  By Daniel Tkacik

A study by researchers at Carnegie Mellon University (CMU)'s Human-Computer Interaction Institute and CyLab Security and Privacy Institute, working with colleagues from China’s Xi'an Jiaotong University, explored methods for detecting hidden Internet of Things (IoT) devices, using light and sound. The authors considered three locator/detector designs—placing a light-emitting diode (LED) on a device; placing an LED and beeping mechanism on a device; and a contextualized picture that showed the device in position, taken by the hospitality host. Participants pinpointed devices much faster with locator designs than without them, and about two-thirds of participants preferred the LED-plus-beeper design. CMU's Jason Hong said, "Our hope is that the findings in this paper can help industry and policymakers in adopting the idea of locators for IoT devices ... " 

A Consortium for Confidential Computing

Intriguing new effort:

The Confidential Computing Consortium (CCC) brings together hardware vendors, cloud providers, and software developers to accelerate the adoption of Trusted Execution Environment (TEE) technologies and standards.

Join the Consortium as a Member

A common, cross-industry way of describing the security benefits, risks, and features of confidential computing will help users make better choices for how to protect their workloads in the cloud. Of the three data states, “in use” has been less addressed because it is arguably the most complicated and difficult. This is a major change to how computation is done at the hardware level and how we structure programs, operating systems and virtual machines. This cross-industry collaboration will accelerate this transformation in security in the enterprise. To get involved, please email us info@confidentialcomputing.io. ..." 

SiliconAngle gives some perspective:

Accenture, AMD, Facebook and Nvidia sign up to advance ‘Confidential Computing’  By Mike Wheatley in SiliconAngle

An industry group called the Confidential Computing Consortium, which aims to standardize the way that data is encrypted while in use, said today it’s adding 10 more members to its community.  ...
The consortium is working to standardize on a method for ensuring that data can be encrypted while it’s being processed in-memory, without exposing it to other parts of a computer system. Although methods for encrypting data at rest and data in transit have already been widely adopted in the technology industry, there’s still no reliable way to secure information as it’s actually being used.

That may soon change, though, as the consortium is advancing the development of an open-source framework called the Open Enclave Software Development Kit. It’s used to build Trusted Execution Environments for applications that can run on multiple kinds of computing architectures.   .... "
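
To make the "three data states" concrete, here is a toy Python illustration (using the third-party cryptography package) of why "in use" is the hard one: encryption covers data at rest and in transit, but to compute on it you must decrypt it, and a TEE is the hardware answer to where that plaintext is allowed to live.  This is conceptual only, not how the Open Enclave SDK actually works.

  from cryptography.fernet import Fernet

  key = Fernet.generate_key()
  f = Fernet(key)

  # At rest and in transit the data stays ciphertext.
  ciphertext = f.encrypt(b"patient_id=123, glucose=142")

  def process_inside_enclave(token):
      # In use: computing requires plaintext in memory. A TEE confines this
      # step to hardware-isolated memory that the host OS, hypervisor, and
      # other tenants cannot read.
      plaintext = f.decrypt(token)
      return b"flagged" if b"glucose=142" in plaintext else b"ok"

  print(process_inside_enclave(ciphertext))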

Building Brain-Like Synapses

And related to the last post, some work is progressing on doing functional mimicry:

Engineers design a device that operates like a brain synapse
Ion-based technology may enable energy-efficient simulations of the brain’s learning process, for neural network AI systems.   By David L. Chandler | MIT News Office

Teams around the world are building ever more sophisticated artificial intelligence systems of a type called neural networks, designed in some ways to mimic the wiring of the brain, for carrying out tasks such as computer vision and natural language processing. ...."

Artificial Neural Nets More Similar to Brains Than We Thought

In our earliest uses of neural methods we actually worked with some brain scientists.  They were quick to point out that neural methods, though inspired by the brain, were nothing like it, so our thoughts of biomimicry were dangerous.  It seems this idea has changed some, but on a glance at this non-technical piece, there is still much we do not know.  Is the brain a reasonable place to start for general AI?

Artificial neural networks are more similar to the brain than they get credit for  By Ben Dickson in BDTechtalks

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

Consider the animal in the following image. If you recognize it, a quick series of neuron activations in your brain will link its image to its name and other information you know about it (habitat, size, diet, lifespan, etc…). But if like me, you’ve never seen this animal before, your mind is now racing through your repertoire of animal species, comparing tails, ears, paws, noses, snouts, and everything else to determine which bucket this odd creature belongs to. Your biological neural network is reprocessing your past experience to deal with a novel situation.

Our brains, honed through millions of years of evolution, are very efficient processing machines, sorting out the ton of information we receive through our sensory inputs, associating known items with their respective categories. ... " 

Amazon Honeycode: Nocode App Builder

Expect to see more NoCode and visual app building to drive more applications.

AWS Launches a No-Code Mobile and Web App Builder in Beta: Amazon Honeycode   by Steef-Jan Wiggers in Infoq

Recently, AWS announced the beta release of Amazon Honeycode, a fully managed service allowing customers to build mobile and web applications without writing any code quickly. With Amazon Honeycode, customers can build applications that are tailored specifically for tasks that involve tracking data over time and notifying users of changes, routing approvals, and facilitating interactive business processes.

For over a year now, the tech company has been working on the no-code service Honeycode. Earlier, the service had the project name 'AWS for Everyone' intending to allow anyone with little to no software development experience in an organization build simple business applications without the dependency of the IT department. Now the service is available in AWS as a managed service with the name Honeycode, where users can leverage a visual application builder to create interactive web and mobile applications backed by a powerful AWS-built database. Moreover, the applications users build can range in complexity from a task-tracking application for a small team to a project management system that manages a sophisticated workflow for multiple teams or departments.

Within the Honeycode service there is a visual builder accessible once a user logs in. Inside the visual builder, the user can choose to import a CSV file (data) into a blank workbook or use one of the pre-defined templates or start from scratch. When selecting a pre-defined template in the visual builder, the data model, business logic, and applications are pre-defined and ready-to-use out of the box.    ..."

Blockchains for Clinical Data

Example of blockchain use in healthcare data applications.

South Korean Government Turns to Blockchain Tech to More Reliably Store Clinical Diabetes Data
Jun 29, 2020 at 08:46 UTC in Coindesk

The South Korean government wants to develop a blockchain registry to help analyze, anonymize and store clinical data for diabetes.

Blockchain startup Sendsquare has been selected by the government to develop a proof-of-concept project for the nation, which has around 3.6 million people with diabetes, the company announced on Friday.

The startup will team with clinical experts and practitioners from Seoul’s KyungHee University Medical Center to begin analyzing nine years-worth of diabetes clinical data previously collected by the center.

“Storing and collaborating work across a large volume of data using centralized services has proven unwieldy and subject to issues of data loss, duplication and manipulation,” according to KyungHee Medical Center’s Professor Suk Chon.  ... "
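
The tamper-resistance claim is the easiest part to picture.  A minimal sketch of an append-only, hash-chained registry; the field names are invented, and a real blockchain adds consensus and replication on top of this.

  import hashlib, json

  def add_record(chain, record):
      # Append a record whose hash commits to the previous entry.
      prev_hash = chain[-1]["hash"] if chain else "0" * 64
      body = json.dumps(record, sort_keys=True)
      entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
      chain.append({"record": record, "prev": prev_hash, "hash": entry_hash})

  def verify(chain):
      # Any edit, deletion, or reordering of past entries breaks the chain.
      prev = "0" * 64
      for e in chain:
          body = json.dumps(e["record"], sort_keys=True)
          expected = hashlib.sha256((prev + body).encode()).hexdigest()
          if e["prev"] != prev or e["hash"] != expected:
              return False
          prev = e["hash"]
      return True

  chain = []
  add_record(chain, {"patient": "anon-001", "hba1c": 7.2, "year": 2015})
  add_record(chain, {"patient": "anon-001", "hba1c": 6.8, "year": 2016})
  print(verify(chain))                  # True
  chain[0]["record"]["hba1c"] = 5.0     # tamper with history
  print(verify(chain))                  # False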

Robot Disinfecting a Food Bank


Would seem to be a very efficient use of robotics.  Considerable detail and video about the process at the link.

CSAIL robot disinfects Greater Boston Food Bank
Using UV-C light, the system can disinfect a warehouse floor in half an hour — and could one day be employed in grocery stores, schools, and other spaces.

By Rachel Gordon | MIT CSAIL

Sunday, June 28, 2020

Amazon Wants a Big Piece of Self-Driving/Hailing

Been watching the extension of the smart home to the car by Amazon; interesting, but still weakly done.   Now, are they really seeing 'self driving' as a means of selectively delivering the customer to the product, as opposed to the other way around?  A kind of ultimate 'smart market'.  Will follow.

Amazon Shakes Up the Race for Self-Driving—and Ride-Hailing in Wired
The ecommerce giant is buying Zoox, which is designing autonomous robotaxis. Beware Uber and Lyft. 

Uber CEO Dara Khosrowshahi says his company wants to be the “Amazon for transportation.” Friday, Amazon made clear that it intends to be the Amazon for transportation.

The ecommerce giant said it had agreed to acquire Bay Area–based autonomous vehicle company Zoox, a deal reportedly worth more than $1 billion. (Amazon did not respond to WIRED's queries.) Since its founding in 2014, Zoox has been known for its technical chops, its secretiveness, and its sky-high ambition. While Alphabet's Waymo is focusing on self-driving tech and leaving the car building to places like Detroit, Zoox has stuck to its plan to design a robotaxi from the ground up—and operate a ride-hail service. In 2018, it showed off its first prototype vehicles, which look like sensor-laden golf carts on steroids. The company has also been testing its software on more conventional-looking Toyota Highlanders in San Francisco, where it is learning to handle chaotic city streets. ... " 

Knowledge Management

Important thoughts, though I think it is still rarely done very well.

Lauren Trees identifies trends in enterprise knowledge sharing and collaboration, researches cutting-edge ways to improve knowledge flow, and shares the findings with APQC’s members and the knowledge management community at large.

A knowledge management strategy is useful because it gives your KM effort a concrete purpose and target to work toward. Many organizations set well-meaning but elusive goals for KM such as “to break down siloes” or “to build a more collaborative culture.” These are good intentions, but they’re too vague to craft a meaningful initiative around—after all, what does it really mean to have a collaborative culture, and how do you know when you have one? A KM strategy makes you document the step-by-step actions that will help the organization achieve its expansive KM vision, along with the inputs required and the measures that will indicate success.

Given the volatility we’re currently experiencing, you may be tempted to throw up your hands in despair when it comes to plans and targets. What’s the point, when everything will just change again anyway? But ironically, a well-defined KM strategy is more important than ever.  ... " 

If you articulate what you want to achieve with KM, you’re less likely to get blown off course by every storm. You simply adjust your tools and tactics in line with current reality while chipping away at your established goals. For example, if your communities of practice must transition to all-virtual operations, their overarching mission hasn’t changed—they’re just using different means to get to the same end. And even if communities need to assume new duties, having the strategy laid out makes it easier to shift gears without losing focus.    ... "

Synthetic Data to Support Research

In a much less complex sense, we used a method, connected to real sensors on systems, to construct sample datasets.  The resulting ML models could then be used as a testbed for proposed solutions.   We were not worried in this case about the security of the data, but I can see how synthetic data could be effectively used for that.

Producing Data Synthetically to Help Treat Diseases Like Covid-19
Aalto University
June 25, 2020

Finnish Center for Artificial Intelligence (FCAI) researchers have developed a machine learning-based method that can synthetically generate research data and help in designing better treatments for diseases like Covid-19. The team based a newly released application on the technique, allowing academics and companies to exchange data while maintaining the privacy of individuals involved in the research. Researchers enter the original dataset into the app, which produces a synthetic dataset that can be shared securely with other scientists and companies. FCAI investigators are currently using synthetic data to construct a model that anticipates whether a subject's coronavirus test is positive or negative, based on certain biomarkers.  ... " 
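
A drastically simplified version of the idea: fit a generative model to the original data and release samples from the fit rather than the real rows.  The FCAI method is far more sophisticated about preserving the privacy of individuals, so treat this as a sketch of the principle only.

  import numpy as np

  rng = np.random.default_rng(0)

  # Stand-in for a sensitive dataset: rows are patients, columns are biomarkers.
  original = rng.normal(loc=[5.5, 120.0], scale=[0.8, 15.0], size=(500, 2))

  # Fit a simple generative model (multivariate Gaussian) to the originals.
  mean = original.mean(axis=0)
  cov = np.cov(original, rowvar=False)

  # Share synthetic rows drawn from the fitted model, not the real rows.
  synthetic = rng.multivariate_normal(mean, cov, size=500)

  print(np.round(mean, 2), np.round(synthetic.mean(axis=0), 2))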

Saturday, June 27, 2020

Misconceptions about the Use of RPA

Nicely put, nontechnical piece.  In particular, I liked the comments about the differences between RPA and BPM modeling.   All users considering RPA should follow up on these modeling ideas:

Robotic Process Automation: 6 common misconceptions
By Daniel Schmidt of Kofax

What false expectations are raised using RPA in companies?

The advantage of Robotic Process Automation (RPA) is that it automates repetitive, remedial tasks and frees employees to work on higher value tasks. But many companies believe RPA will enable them to automate even the most complex Business Process Management (BPM) activities, although there are much more suitable solutions available. The following overview shows which other misconceptions companies frequently use to counter RPA solutions.  .. " 

Verified, Intentified Phone Calling is Advancing

Will we soon get the inferred reason for a phone call?  It's getting a start already: in iOS I already get 'likely spam' and who a call is likely to be from, but why not infer a reasonable intent?

It is really a kind of machine learning: from calls, texts, contact info, and call message text, the data is there and growing.

Verified Calls for the Google phone app will let you know why a business is calling

Another addition to make calls more simple and seamless  By Caleb Potts in Android Police

Even though we don't generally use our phones for calling as much these days, the actual phone part of your phone is still important. Google has long worked to make phone calls less annoying with features like automatic call screening and spam detection. Now, it looks like a new Verified Calls feature is rolling out to help consumers know why a business is calling them before they pick up.

According to Google's support page, Verified Calls is a feature that helps users learn more about incoming calls before answering. Unlike call screening, which can be initiated by the user on any incoming call, Verified Calls only come from businesses that have gone through Google's approval process. When a call that meets the criteria is placed from an approved business, the user will see the business name and logo, as well as the reason for the call.  ... "

A Way to Consider and Onboard AI

Thoughtful results from a survey from multiple countries.  A look at AI in the form of Assistant, Monitor, Coach and Teammate.  Reasonable ways to think of their use and implications.

A Better Way to Onboard AI   by Boris Babic , Daniel L. Chen , Theodoros Evgeniou and Anne-Laure Fayard in the HBR

In a 2018 Workforce Institute survey of 3,000 managers across eight industrialized nations, the majority of respondents described artificial intelligence as a valuable productivity tool.
It’s easy to see why: AI brings tangible benefits in processing speed, accuracy, and consistency (machines don’t make mistakes because they’re tired), which is why many professionals now rely on it. Some medical specialists, for example, use AI tools to help make diagnoses and decisions about treatment.

But respondents to that survey also expressed fears that AI would take their jobs. They are not alone. The Guardian recently reported that more than 6 million workers in the UK fear being replaced by machines. These fears are echoed by academics and executives we meet at conferences and seminars. AI’s advantages can be cast in a much darker light: Why would humans be needed when machines can do a better job?  ... "

Platt Retail Institute on Robot Adoption in Retail

Some new thoughts about robotics being used in a changed context.

Via Platt Retail Institute by Northwestern 

There has been growing interest in robotic automation in retail and supply chain for some time. COVID-19 will accelerate adoption. The Retail Analytics Council recently hosted a roundtable discussion regarding the state of robot adoption trends in this new retail world.   Eight panelists were asked to respond to a series of questions. Generally, the questions probed the participants’ thoughts on the opportunities and challenges of robot adoption.

Read the roundtable transcript: Robot Adoption Trends in a New Retail World.  

Panelists include:

Christopher Blum, Mechanical Engineer, Kroger Technology
Chris Daniels, Technology Engineer, Kroger Technology
Dr. Don High, Former Chief Scientist, Walmart
Gerry Hough, Senior Expert, Store Innovation, McKinsey & Company
Dr. Todd Murphey, Professor of Mechanical Engineering, McCormick School of Engineering and of Physical Therapy and Human Movement Sciences, Feinberg School of Medicine, Northwestern University
Wesley Rhodes, VP, R&D and Technology Transformation, Kroger
Dr. Adam Rigby, Senior R&D Scientist, Kroger Technology
Dan Whitacre, Senior Director, R & D and Technology Transformation, Kroger
Steven Keith Platt, Research Director, RAC and Adjunct Professor, Northwestern University and moderator of this roundtable discussion ... '

Alphabet Buying North Glasses

Somewhat unexpected perhaps, but some indication of seriousness about the ultimate wearable.  I had heard from others that North's glasses were good.

Report: Alphabet looking to buy smart glasses startup North for $180M   By  Maria Deutscher

Google LLC parent Alphabet Inc. is reportedly in advanced talks to acquire North Inc., a Canadian smart glasses startup, for about $180 million.

The discussions were first reported by Canada’s The Globe and Mail on Thursday evening. The publication’s sources indicated that Ontario-based North is in the “final stages” of hammering out an agreement with Alphabet.

Founded in 2012, North is backed by about $160 million from investors that include Intel Corp.’s venture capital arm and Salesforce.com Inc. founder Marc Benioff. The startup offers a smart glasses product called Focals that’s touted as less bulky than many other products in the category, closely resembling a regular pair of glasses. Users can view mobile notifications without looking at their phone and perform certain other tasks such as ordering an Uber ride.  ..."

Friday, June 26, 2020

Code as Evidence in Contracts, Disputes

A UK piece on the topic  of computing and dispute resolutions, code as evidence, which came up with respect to smart contracts.  Click through for useful and detailed links. 

The role of usability, power dynamics, and incentives in dispute resolutions around computer evidence  in Bentham’s Gaze by Alexander Hicks  

As evidence produced by a computer is often used in court cases, there are necessarily presumptions about the correct operation of the computer that produces it. At present, based on a 1997 paper by the Law Commission, it is assumed that a computer operated correctly unless there is explicit evidence to the contrary.

The recent Post Office trial (previously mentioned on Bentham’s Gaze) has made clear, if previous cases had not, that this assumption is flawed. After all, computers and the software they run are never perfect.

This blog post discusses a recent invited paper published in the Digital Evidence and Electronic Signature Law Review titled The Law Commission presumption concerning the dependability of computer evidence. The authors of the paper, collectively referred to as LLTT, are Peter Bernard Ladkin, Bev Littlewood, Harold Thimbleby and Martyn Thomas.

LLTT examine the basis for the presumption that a computer operated correctly unless there is explicit evidence to the contrary. They explain why the Law Commission’s belief in Colin Tapper’s statement in 1991 that “most computer error is either immediately detectable or results from error in the data entered into the machine” is flawed. Not only can computers be assumed to have bugs (including undiscovered bugs) but the occurrence of a bug may not be noticeable.  ... "

Talk: Robots as a Service in a Post Pandemic World

I had mentioned this talk, here is the recording:

Correspondent Jim Spohrer talks about robot tech in our pandemic futures.    ...

ISSIP Speaker Series: COVID-19 & Future of Work and Learning
Speaker: Jim Spohrer, Director, Cognitive Open Tech, IBM

Title: How will COVID-19 affect the need for and use of robots in a service world with less physical contact?

As AI and robotics come to the service world, including retail, hospitality, education, healthcare, and government, some jobs will go away, some new jobs will be created, and the income required for a family to thrive might be lessened.   In this creative session participants will be asked to engage in discussing three scenarios below – and the wicked problem of the bespoke impact on livelihood and jobs, which is creating uncertainty and concerns.   The groups will then report back on which scenarios they find more desirable.  .... 

Talk/slides here:  - https://youtu.be/RchxIKum_tI

-----------------
Jim Spohrer, PhD
Director, Cognitive Opentech Group (COG)
IBM Research - Almaden, 650 Harry Road San Jose, CA 95120
Innovation Champion: http://service-science.info/archives/2233

Why Your AI Project May Fail

This is true: if you don't have ready understanding of, and access into, the local architecture, it is much harder to get the data to train models in context, and certainly also very hard to implement them in any sort of deployed model.  This is true whenever you hope to get anything used by a client, not necessarily just an AI project.   To be clear, though, you usually understand this fairly early on, and people who are already there will usually tell you.

The Dumb Reason Your AI Project Will Fail
by Terence Tse , Mark Esposito , Takaaki Mizuno and Danny Goh  in the HBR

Here is a common story of how companies trying to adopt AI fail. They work closely with a promising technology vendor. They invest the time, money, and effort necessary to achieve resounding success with their proof of concept and demonstrate how the use of artificial intelligence will improve their business. Then everything comes to a screeching halt — the company finds themselves stuck, at a dead end, with their outstanding proof of concept mothballed and their teams frustrated.

What explains the disappointing end? Well, it’s hard — in fact, very hard — to integrate AI models into a company’s overall technology architecture. Doing so requires properly embedding the new technology into the larger IT systems and infrastructure — a top-notch AI won’t do you any good if you can’t connect it to your existing systems. But while companies pour time and resources into thinking about the AI models themselves, they often do so while failing to consider how to make it actually work with the systems they have.

The missing component here is AI Operations — or “AIOps” for short. It is a practice involving building, integrating, testing, releasing, deploying, and managing the system to turn the results from AI models into desired insights of the end-users. At its most basic, AIOps boils down to having not just the right hardware and software but also the right team: developers and engineers with the skills and knowledge to integrate AI into existing company processes and systems. Evolved from a software engineering and practice that aims to integrate software development and software operations, it is the key to converting the work of AI engines into real business offerings and achieving AI at a large, reliable scale.  ... "
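
As a trivial illustration of that "last mile": a model only becomes part of the business when it is wrapped in something the rest of the IT estate can call, monitor, and roll back.  A minimal serving sketch using Flask (any framework would do); the model here is just a placeholder.

  from flask import Flask, jsonify, request

  app = Flask(__name__)

  def predict(features):
      # Placeholder for a trained model; in practice this would be loaded
      # from a registry and versioned alongside the service.
      return {"score": sum(features) / max(len(features), 1)}

  @app.route("/predict", methods=["POST"])
  def serve():
      payload = request.get_json(force=True)
      return jsonify(predict(payload.get("features", [])))

  if __name__ == "__main__":
      # Deployment, monitoring, retraining triggers, and rollback around
      # this endpoint are the AIOps work the authors say gets underestimated.
      app.run(port=8080)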

Dash Shopping Wand to be Bricked

As predicted, the Dash Wand has gone away.    I don't think I ever bought anything with it, just put items on a list to look at or buy later.    As noted below, they will be bricked, and I won't be able to do anything with them at all.   Why not keep them working as a means for getting a UPC code?    I assume they already do that all the time anyway.   A simple service for good customers?  No.  They will still act like a nice strong refrigerator magnet, and they can't brick that away.

Amazon Dash Wand No More – Alexa Shopping Device Discontinued  By Eric Hal Schwartz in Voicebot

Amazon is shutting down the Amazon Dash Wand on July 21, three years after the product scanner and Alexa voice assistant-powered device first debuted. The company isn’t just ending support for the product, it is remotely bricking them so they can’t be used at all and asking owners to send their Wands to Amazon’s recycling program, according to an email sent to Wand owners and shared with Voicebot. The decision continues Amazon’s consolidation around the Echo brand for Alexa with more versatile smart speakers and smart displays instead of the more narrowly-focused Dash Wand ... .'

Thursday, June 25, 2020

Virtualitics June Newsletter

I see that Virtualitics has a June newsletter out; we helped test early beta versions.  I see you can now schedule a 30-day free immersive trial.  See my past experiences and comments on it under the tag 'virtualitics' below.

JUNE NEWSLETTER  @virtualitics.com

Software updates, performance improvements, and a number of new additions to our team.

SOFTWARE UPDATE
VIRTUALITICS IMMERSIVE PLATFORM VERSION 2020.3

→ New Features:

Improved plot customizations and enhanced visualizations include a refined tick mark system, plot titles, and management of trailing zeros

A new avatar system as well as optimizations in virtual reality increase speed and improve overall functionality across the environment

User interface improvements for increased software stability and an improved collaborative experience

If you have any questions or need assistance upgrading please contact us at support@virtualitics.com
.... "

Amazon and Counterfeiting Crimes

An area we also worked in, detecting counterfeits.  But then isn't this another cooperation with law enforcement?

Amazon forms Counterfeit Crimes Unit to tackle its fake goods problem
The unit will work with brands and law enforcement around the world.
By Mariella Moon, @mariella_moon in Engadget

Amazon has been grappling with a counterfeit problem for years to the point that it reportedly decided to be more cooperative with law enforcement a few months ago. Now, the e-commerce giant has formed a new division called the Counterfeit Crimes Unit that’s dedicated to taking down fraudsters selling fakes on its website. The unit is composed of former federal prosecutors, experienced investigators and data analysts, working together to find offenders and hold them accountable wherever they are in the world..... " 

Pollinating by Drone and Bubbles

Fascinating detail.   As a long-time, part-time botanist, of interest.  But how effectively can it be done?  Will the use of drones interfere with natural methods?  That depends on the plants involved and their current pollination mechanisms.  A form of biomimicry.   Also a likely path for developing smaller drones.

Drone With Bubble Machine Can Pollinate Flowers Like a Bee
Pollen-carrying soap bubbles could provide a simple and effective method of artificial pollination  By Evan Ackerman in IEEE Spectrum

Researchers in Japan developed a drone equipped with a bubble maker for autonomous pollination.

The tiny biological machines that farms rely on to pollinate the flowers of fruiting plants have been having a tough time of it lately. While folks around the world are working on different artificial pollination systems, there’s really no replacing the productivity, efficiency, and genius of bees, and protecting them is incredibly important. That said, there’s no reason not to also work on alternate methods of pollination, and researchers at the Japan Advanced Institute of Science and Technology (JAIST) have come up with something brilliant: pollen-infused soap bubbles blown out of a bubble maker mounted to a drone. And it apparently works really well.  ... " 

New Ways to Build Assistant Voice Apps

Look forward to the details on this, any way you can create better intelligence is good.   Note changes in training methods.

Google Assistant Upgrades Action Developer Tools to Streamline Building and Running Voice Apps
By Eric Hal Schwartz in Voicebot.ai

Google announced upgrades to the Google Assistant runtime engine on Wednesday designed to improve the speed and performance of voice apps. The tech giant revealed the improvements along with a handful of new and updated tools aimed at simplifying the process of building Google Assistant Actions.

The new Actions Builder feature sets up a central hub in the Actions console for developing and testing a new Google Assistant Action, showing visually how the AI responds to different conversational prompts and making it easier to train and debug the app. The idea is that the developer won’t have to keep going back from the console to the Dialogflow natural language understanding platform. All of the tools are in the console, making the whole process more efficient.

Google also updated the Actions SDK to assist with boosting that efficiency. The SDK puts every element of the voice app into files that a developer can export wherever they wish. That means the developer could build the voice app without needing to use the cloud, while still enabling them to move training data around. Using the files with Google’s improved command-line interface (CLI) also allows the developer to skip using any interface at all and just write and edit the app with code.
... '

Getting Pay for Data

Another project aimed at paying users for their data.  Links to our long-term view of data as an asset.

Andrew Yang Is Pushing Big Tech to Pay Users for Data
By The Verge
June 22, 2020

Andrew Yang wants people to get paid for the data they create on big tech platforms like Facebook and Google, and with a new project launching on Monday, he believes he can make it happen. ...

Yang's Data Dividend Project is a new program tasked with establishing data-as-property rights under privacy laws like the California Consumer Privacy Act (CCPA) all across the country. The program hopes to mobilize over 1 million people by the end of the year, focusing primarily on Californians, and "pave the way for a future in which all Americans can claim their data as a property right and receive payment" if they choose to share their data with platforms.

At the beginning of the year, the CCPA went into effect, granting consumers new control over their data online like the right to delete and opt out of the sale of their personal information. There's nothing in the law about tech companies paying for data (or, more specifically, paying them not to opt out), but Yang's new project is looking to show that the idea is popular with voters. The Data Dividend Project is betting on collective action as a means of changing the law and extending data property rights to users across the country. If this idea becomes law, Yang's team says it will work on behalf of users to help them get paid.

"We are completely outgunned by tech companies," Yang told The Verge. "We're just presented with these terms and conditions. No one ever reads them. You just click on them and hope for the best. And unfortunately, the best has not happened."  ... ' 


Wednesday, June 24, 2020

SAP and IBM Announce New Intelligence Offerings

Next steps in the SAP and IBM partnership:  digital transformation toward the intelligent enterprise.

In Cision: PRNewswire  https://www.prnewswire.com/

IBM and SAP Announce New Offerings to Help Companies' Journey to the Intelligent Enterprise
ARMONK, N.Y. and WALLDORF, Germany, June 23, 2020 /PRNewswire/ -- IBM (NYSE: IBM) and SAP SE (NYSE: SAP) today announced their partnership's next evolution, with plans to develop several new offerings designed to create a more predictable journey for businesses to become data-driven intelligent enterprises. Over 400 businesses have modernized their enterprise systems and business processes through IBM and SAP's digital transformation partnership. 

Hitchhiking Drones

I had mentioned this novel idea as well in a recent post.  Also covered in considerable detail in IEEE Spectrum.  As noted, it will require some considerable design changes for public transportation.

Delivery Drones Could Hitchhike on Public Transit to Massively Expand Their Range.  Riding on the top of public buses could make drone delivery much more efficient .... "

By Evan Ackerman in IEEE Spectrum.  ... 

Spot Robotic Dog Now Available

It's here; a good piece on the rollout in IEEE Spectrum by Evan Ackerman.     I have been noting some application plans here over several years.    Expensive, but if it effectively replaces a person or more, not really.   What then is our expectation of privacy from a camera-wielding robot that could be quite intimidating?   And can you take the publicity of replacing a person or more?

Boston Dynamics' Spot Robot Dog Now Available for $74,500
For the price of a luxury car, you can now get a very smart, very capable, very yellow robotic dog
By Evan Ackerman

Boston Dynamics has been fielding questions about when its robots are going to go on sale and how much they’ll cost for at least a dozen years now. I can say this with confidence, because that’s how long I’ve been a robotics journalist, and I’ve been pestering them about it the entire time. But it’s only relatively recently that the company started to make a concerted push away from developing robots exclusively for the likes of DARPA into platforms with more commercial potential, starting with a compact legged robot called Spot, first introduced in 2016.  ... " 

A Domain-Specific Supercomputer for Training Deep Neural Networks

Good explanation of the phases of using computing power for these kinds of problems.

A Domain-Specific Supercomputer for Training Deep Neural Networks
By Norman P. Jouppi, Doe Hyun Yoon, George Kurian, Sheng Li, Nishant Patil, James Laudon, Cliff Young, David Patterson
Communications of the ACM, July 2020, Vol. 63 No. 7, Pages 67-78
10.1145/3360307

The recent success of deep neural networks (DNNs) has inspired a resurgence in domain-specific architectures (DSAs) to run them, partially as a result of the deceleration of microprocessor performance improvement due to the slowing of Moore's Law.17 DNNs have two phases: training, which constructs accurate models, and inference, which serves those models. Google's Tensor Processing Unit (TPU) offered 50x improvement in performance per watt over conventional architectures for inference.19,20 We naturally asked whether a successor could do the same for training. This article explores how Google built the first production DSA for the much harder training problem, first deployed in 2017.  ... " 
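
The training/inference split the article builds on is easy to show in a few lines of NumPy: training loops over data and updates weights, which is the computationally heavy phase these TPUs target, while inference is a single forward pass with the weights frozen.

  import numpy as np

  rng = np.random.default_rng(1)
  X = rng.normal(size=(256, 4))
  y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * rng.normal(size=256)

  # Training phase: repeated forward passes, gradients, and weight updates.
  w = np.zeros(4)
  for _ in range(500):
      grad = 2.0 / len(X) * X.T @ (X @ w - y)
      w -= 0.1 * grad

  # Inference phase: one forward pass with the learned weights.
  x_new = np.array([0.2, -1.0, 0.3, 0.7])
  print(x_new @ w)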

Price of Personal Data

Looking for the full report mentioned here; will post when I get a reference.  Back to our long-examined question of what the price of private data should be, and how people should be made to understand the implications.

Brits will sell their personal data for pennies  

Surprising findings from an Okta report on digital identity suggest Brits would be willing to part with valuable personal data for a surprisingly low amount  .... 
By  Alex Scroxton, Security Editor  in ComputerWeekly  ... 

Is Slow Neuroinformatics Private?

I have worked with companies that used machine learning image analysis to determine, for example, whether a person was 'of age' in various locations, to match local regulatory provisions.    It could be done quite accurately, about 95%, but not perfectly.  Can that be done privately?

The Benefits of Slowness
Ruhr-Universitat Bochum
Meike Drießen
June 15, 2020

Neuroinformatics engineers at Ruhr-Universitat Bochum's Institute for Neural Computation in Germany have developed an algorithm that estimates an individual’s age and ethnic origin with greater than human-level accuracy. The team fed the algorithm several thousand photos of faces of different ages, sorted by age. The system disregarded features that varied between images, and only considered features that slowly changed over time. In calculating the age of the people in the photos, the algorithm outperformed even human assessment. The algorithm also estimated the correct ethnic origin of the subjects in the photos with greater than 99% probability, even though the images' average brightness was standardized, making skin color an insignificant marker for recognition. ... " 
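
The description of keeping only "features that slowly changed over time" reads a lot like slow feature analysis.  A bare-bones NumPy sketch of that objective, as an illustration of the principle rather than of the Bochum system:

  import numpy as np

  def slow_features(X, n_components=1):
      # X holds time-ordered samples, shape (T, d). Return the projections
      # whose values change most slowly over time.
      Xc = X - X.mean(axis=0)
      # Whiten so every direction has unit variance.
      evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
      Z = Xc @ (evecs / np.sqrt(evals))
      # Among whitened directions, keep those with the smallest
      # temporal-derivative variance (eigh sorts ascending: slowest first).
      dZ = np.diff(Z, axis=0)
      _, devecs = np.linalg.eigh(np.cov(dZ, rowvar=False))
      return Z @ devecs[:, :n_components]

  # Toy demo: a slow trend hidden inside two fast, oscillating channels.
  t = np.linspace(0, 10, 2000)
  slow = np.sin(0.3 * t)
  X = np.column_stack([slow + 0.2 * np.sin(40 * t),
                       slow - 0.2 * np.cos(37 * t)])
  recovered = slow_features(X)[:, 0]
  print(np.abs(np.corrcoef(recovered, slow)[0, 1]))   # close to 1 (up to sign)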

Tuesday, June 23, 2020

IBM Research Director on Science and HPCC

Continued work with high performance computing by IBM.

IBM's research director on how tech can push science beyond the pandemic
Dario Gil, who's been nominated to the National Science Board, wants to create a "science readiness reserve" to use tech's power to solve future crises.
By Mike Murphy May 22, 2020

Update (June 23): Dario Gil has now been officially appointed to the National Science Board.

The coronavirus pandemic has ushered in new alliances between the tech industry's biggest players and government agencies as the world races to limit the spread of COVID-19 and find a cure. Dario Gil, IBM's research director, has been in the thick of everything.

Gil has been serving on the President's Council of Advisors on Science and Technology since the group was revived in 2019, and he helped launch the High Performance Computing Consortium. The group brought together supercomputing resources from some of the most powerful machines in the world to tackle 51 projects — and counting — aimed at modeling the virus and potential drugs. The experience has led Gil to ponder the broader question of how tech can unite in quieter times, helping the world to prepare for the next disaster more rigorously. "It was wonderful that we could create the [HPCC], but we had to sort of invent it on the fly," Gil said. "Why couldn't we think ahead?" .... "

Drones Changing Shape

An example of how defensive activities of Drones are also at play.

Research Leads to Army Drones Changing Shape Mid-Flight
U.S. Army
June 16, 2020

Researchers at the U.S. Army's Combat Capabilities Development Command's Army Research Laboratory and Texas A&M University helped create a tool that will enable autonomous aerial drones to change shape during flight. The tool can optimize the structural configuration of Future Vertical Lift vehicles while accounting for wing deformation due to fluid-structure interaction. Fluid-structure interaction analyses generally have high computational costs because they typically require coupling between a fluid and a structural solver. The researchers were able to reduce the computational cost for a single run by as much as 80% by developing a process that decouples the fluid and structural solvers, which offers further computational cost savings by allowing for additional structural configurations to be performed without reanalyzing the fluid.  ... "
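
The savings described come from not re-running the expensive fluid solve for every structural variant.  A purely illustrative Python sketch of that decoupling; the real solvers are CFD and finite-element codes, and the stand-ins below are toys.

  import time

  def fluid_solve(flight_condition):
      # Stand-in for the costly CFD step: returns an aerodynamic load field.
      time.sleep(0.1)   # pretend this is the expensive part
      return [flight_condition * x for x in (1.0, 0.8, 0.5)]

  def structural_solve(config, loads):
      # Stand-in for the structural step: deformation for one wing design.
      return sum(l * s for l, s in zip(loads, config["stiffness"]))

  configs = [{"name": f"wing-{i}", "stiffness": (0.1 * i, 0.2, 0.3)}
             for i in range(1, 6)]

  # Decoupled approach, as described: solve the fluid once, then reuse the
  # loads to evaluate many structural configurations without reanalyzing
  # the flow.
  loads = fluid_solve(flight_condition=2.0)
  for cfg in configs:
      print(cfg["name"], round(structural_solve(cfg, loads), 3))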

Segway's Done

We had a Segway very early in our future shopping store.  It turned lots of heads in those days.  We taught incoming visitors to drive them.   Were they just hype?

In FastCompany:  By Mark Wilson

Exclusive: Segway, the most hyped invention since the Macintosh, ends production  The Segway brand will no longer make its two-wheeled, self-balancing namesake. ...

Now, less than 20 years after the first Segway’s release, Fast Company has learned that the Segway brand will retire the last Segway as we know it, the Segway PT. Manufacturing at the Bedford, New Hampshire, plant will stop July 15. A total of 21 employees will be laid off as a result, while 12 will stay on temporarily to handle various matters, including warranties and repairs on the Segways that have already been sold. Five employees working on Segway Discovery scooters will remain.  ... " 

" Steve Jobs said it would be bigger than the PC  ... "  And you know he can't be wrong.

Uber Crosses the Road

A good example of the complexity of data mining / machine learning data.  Via O'Reilly.

Inside Uber ATG’s Data Mining Operation: Identifying Real Road Scenarios at Scale for Machine Learning    By Steffon Davis, Shouheng Yi, Andy Li, and Mallika Chawda

How did the pedestrian cross the road?

Contrary to popular belief, sometimes the answer isn’t as simple as “to get to the other side.” To bring safe, reliable self-driving vehicles (SDVs) to the streets at Uber Advanced Technologies Group (ATG), our machine learning teams must fully master this scenario by predicting a number of possible real world outcomes related to a pedestrian’s decision to cross the road. To understand how this scenario might play out, we need to measure a multitude of possible scenario variations from real pedestrian behavior. These measurements power performance improvement flywheels for:

Perception and Prediction: machine-learned models with comprehensive, diverse, and continuously curated training examples (improved precision/recall, decreased training time, decreased compute).
Motion Planning: capability development with scenario-based requirements (higher test pass-rate, lower intervention rate).

Labeling: targeted labeling jobs with comprehensive, diverse, and continually updated scenarios (improved label quality, accelerated label production speed, lowered production cost).
Virtual Simulation: tests aligned with real-world scenarios (higher test quality, more efficient test runs, lowered compute cost).

Safety and Systems Engineering: statistically significant specifications and capability requirements aligned with the real-world (improved development quality, accelerated development speed, lowered development cost).

With the goal of measuring a scenario in the real world, let’s head to the streets to study how pedestrians cross them. ... '
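
In spirit, the mining operation is a very large filter over logged trajectories.  A toy version of the "did the pedestrian cross the road" check, assuming we already have pedestrian tracks and treating the road as a simple band between two coordinates (real lane geometry is far richer):

  def crosses_road(track, road_y_min, road_y_max):
      # A "crossing" here means the track enters and exits a road band
      # defined by two y-coordinates.
      inside = [road_y_min <= y <= road_y_max for _, y in track]
      return any(inside) and not inside[0] and not inside[-1]

  logs = {
      "ped_1": [(0, -3), (0, -1), (0, 1), (0, 3)],   # walks across the road
      "ped_2": [(5, -4), (6, -4), (7, -4)],          # stays on the sidewalk
  }

  crossing_scenarios = [pid for pid, track in logs.items()
                        if crosses_road(track, road_y_min=-2, road_y_max=2)]
  print(crossing_scenarios)   # ['ped_1']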

Sketch to Realistic Image

I have seen several attempts at this, but none that impressive. We used one to quickly map out possible process decisions.   Implications of 'fake' are always there, but if the aim is to construct and refine a sketch for clear illustration, it does not have to be that.

Chinese Researchers Unveil AI That Can Turn Simple Sketches Into Fake Photorealistic Pictures
Daily Mail (U.K.)
James Pero

Researchers at the Chinese Academy of Sciences have created an artificial intelligence (AI) that can convert simple sketches of a face into photorealistic images, extrapolating from rough and even incomplete sketches. The DeepFaceDrawing AI analyzes a drawing's details, then checks each individual feature separately against a database of facial features to construct its own image. Said the researchers, "Our key idea is to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch. Our method essentially uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches." The researchers said the technology aims to help users with little drawing skill produce high-quality images. ... " 
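
The key trick described, checking each facial feature separately against a database of real features, is essentially per-component nearest-neighbor retrieval in a learned feature space.  A toy NumPy sketch in which random vectors stand in for the learned embeddings:

  import numpy as np

  rng = np.random.default_rng(0)
  components = ["left_eye", "right_eye", "nose", "mouth", "face_outline"]

  # Pretend database: an embedding of each component from many real photos.
  database = {c: rng.normal(size=(1000, 64)) for c in components}

  def nearest_real_feature(component, sketch_embedding):
      # Snap a rough sketch component onto the closest real example.
      d = np.linalg.norm(database[component] - sketch_embedding, axis=1)
      return database[component][np.argmin(d)]

  # A rough sketch becomes one embedding per component; each is softly
  # replaced by a plausible real feature before a decoder renders the image.
  sketch = {c: rng.normal(size=64) for c in components}
  refined = {c: nearest_real_feature(c, e) for c, e in sketch.items()}
  print(refined["nose"][:3].round(3))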

Monday, June 22, 2020

Baidu Doing Drone Forestry Inspections with AI

Recall that I worked with forestry management applications, so the need here rings true.  Note that the system mentioned, PaddlePaddle, is open source.

Baidu’s deep-learning platform fuels the rise of industrial AI

PaddlePaddle lets developers build applications that can help solve problems in a wide range of industries, from waste management to health care.

by Baidu  ( In TechnologyReview.  This content was produced by Baidu.
It was not written by MIT Technology Review's editorial staff. ) 

AI is driving industrial transformation across a variety of sectors, and we’re just beginning to scratch the surface of AI capabilities. Some industrial innovations are barely noticed, such as forest inspection for fire hazards and prevention, but the benefits of AI when coupled with deep learning have a wide-ranging impact. In Southeast Asia, AI-powered forest drones have helped 155 forestry bureaus expand the range of forest inspections from 40% to 100% and perform up to 200% more efficiently than manual inspections. 

Behind these smart drones are well-trained deep-learning models based on Baidu’s PaddlePaddle, the first open-source deep-learning platform in China. Like mainstream AI frameworks such as Google’s TensorFlow and Facebook’s PyTorch, PaddlePaddle, which was open sourced in 2016, provides software developers of all skill levels with the tools, services, and resources they need to rapidly adopt and implement deep learning at scale. ... " 

Simple Economics of the Blockchain

A considerable and interesting piece, well worth reading if you are considering blockchain use, with a proposition of the blockchain's value.    Not very technical.

Some Simple Economics of the Blockchain
By Christian Catalini, Joshua S. Gans
Communications of the ACM, July 2020, Vol. 63 No. 7, Pages 80-90  10.1145/3359552

In October 2008, a few weeks after the Emergency Economic Stabilization Act rescued the U.S. financial system from collapse, Satoshi Nakamoto34 introduced a cryptography mailing list to Bitcoin, a peer-to-peer electronic cash system "based on crypto graphic proof instead of trust, allowing any two willing parties to transact directly with each other without the need for a trusted third party." With Bitcoin, for the first time, value could be reliably transferred between two distant, untrusting parties without the need of an intermediary. Through a clever combination of cryptography and game theory, the Bitcoin 'blockchain'—a distributed, public transaction ledger—could be used by any participant in the network to cheaply verify and settle transactions in the cryptocurrency. Thanks to rules designed to incentivize the propagation of new legitimate transactions, to reconcile conflicting information, and to ultimately agree at regular intervals about the true state of a shared ledger (a blockchain)a in an environment where not all participating agents can be trusted, Bitcoin was also the first platform, at scale, to rely on decentralized, Internet-level 'consensus' for its operations. Without involving a central clearinghouse or market maker, the platform was able to settle the transfer of property rights in the underlying digital token (bitcoin) by simply combining a shared ledger with an incentive system designed to securely maintain it.

From an economics perspective, this new market design solution provides some of the advantages of a centralized digital platform (for example, the ability of participants to rely on a shared network and benefit from network effects) without some of the consequences the presence of an intermediary may introduce such as increased market power, ability to renege on commitments to ecosystem participants, control over participants' data, and presence of a single point of failure. As a result, relative to existing financial networks, a cryptocurrency such as Bitcoin may be able to offer lower barriers to entry for new service providers and application developers, and an alternative monetary policy for individuals that do not live in countries with trustworthy institutions. Key commitments encoded in the Bitcoin protocol are its fixed supply, predetermined release schedule, and the fact that rules can only be changed with support from a majority of participants. While the resulting ecosystem may not offer an improvement for individuals living in countries with reliable and independent central banks, it may represent an option in countries that are unable to maintain their monetary policy commitments. Of course, the open and "permissionless" nature of the Bitcoin network, and the inability to adjust its supply also introduce new challenges, as the network can be used for illegal activity, and the value of the cryptocurrency can fluctuate wildly with changes in expectations about its future success, limiting its use as an effective medium of exchange.

In the article, we rely on economic theory to explain how two key costs affected by blockchain technology—the cost of verification of state, and the cost of networking—change the types of transactions that can be supported in the economy. These costs have implications for the design and efficiency of digital platforms, and open opportunities for new approaches to data ownership, privacy, and licensing; monetization of digital content; auctions and reputation systems.   ..." 
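
To make the 'cheap verification' point concrete, here is a minimal sketch in Python of a hash-linked ledger. This is not the Bitcoin protocol, just an illustration under my own simplifying assumptions (all function and field names are hypothetical): each block commits to the hash of the previous block, so any participant can check the whole history in one cheap pass, without trusting an intermediary.

import hashlib
import json

def block_hash(block):
    # Hash the block's contents in a canonical (sorted-key) JSON form.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    # Each new block commits to the hash of the previous block.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

def verify_chain(chain):
    # Cheap verification: one linear pass recomputing the hash links.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
append_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify_chain(ledger))                    # True
ledger[0]["transactions"][0]["amount"] = 500   # tamper with history
print(verify_chain(ledger))                    # False

The real network layers proof-of-work and consensus rules on top of this structure; the sketch only shows why checking the ledger's history is cheap for any participant.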

Microsoft Relaunches Cortana without Alexa

I was initially impressed by the cooperation between Microsoft and Amazon, but admit I saw little come of it.  Why not have a joint effort to provide assistant intelligence?   All I saw was that you could ask Cortana things like "Ask Alexa ... ", but with little regard for the context of the question.  So you could ask 'universal' things like the weather, Wikipedia entries, or the state of connected systems, but nothing that showed intelligence beyond the context of the device involved. Not very impressive beyond the language processing.   And now Microsoft has disconnected entirely, while promising continued collaboration.  The piece below is a bit dated, but offers some insight into the state of assistants.

Microsoft launches Cortana app for Windows 10 without Amazon’s Alexa
By Khari Johnson in Venturebeat

Microsoft AI assistant Cortana is getting a dedicated app today for Windows 10 PCs. Unlike Cortana in the Start menu or pinned to the taskbar, the AI assistant can now function in a dedicated space users can resize, move, and interact with like any other PC program. Cortana responding to text commands in a dedicated app can be used to do things like start meetings, create reminders, ask for info from some native Microsoft apps, automatically suggest responses, and respond to questions like “Do I have an email from my boss?”

At launch, Cortana lead Andrew Shuman told VentureBeat the dedicated Cortana app will not respond to Alexa queries. In what may be the largest such partnership in the spirit of a multi-assistant world, Amazon and Microsoft partnered up in August 2018 to make Cortana available via Amazon Echo speakers and Alexa available through Windows 10. Few public steps have taken place since then to advance or deepen the partnership.  .... '


The Nature of Visual Illusions

I have always been interested in visual illusions, and here is a study of probably the most famous one.  Illusions like this give us insight into how our brains and visual apparatus work together in practice.  It also shows that direct biomimicry may not always be best.    Image examples at the link.

Study sheds light on a classic visual illusion
Neuroscientists delve into how background brightness influences our perception of an object.

By Anne Trafton | MIT News Office

It’s a classic visual illusion: Two gray dots appear on a background that consists of a gradient from light gray to black. Although the two dots are identical, they appear very different based on where they are placed against the background.

Scientists who study the brain have been trying to figure out the mechanism behind this illusion, known as simultaneous brightness contrast, for more than 100 years. An MIT-led study now suggests that this phenomenon relies on brightness estimation that takes place before visual information reaches the brain’s visual cortex, possibly within the retina.

“All of our experiments point to the conclusion that this is a low-level phenomenon,” says Pawan Sinha, a professor of vision and computational neuroscience in MIT’s Department of Brain and Cognitive Sciences. “The results help answer the question of what is the mechanism that underlies this very fundamental process of brightness estimation, which is a building block of many other kinds of visual analyses.”  ... ' 
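
As a rough, purely illustrative sketch of that 'low-level' idea (my own simplification in Python, not the MIT model), you can mimic simultaneous brightness contrast by comparing each pixel to its local surround, the kind of computation a retina-like stage could perform before anything reaches the cortex. The window size and image values below are arbitrary assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

# Synthetic scene: a horizontal gradient background with two identical gray dots.
img = np.tile(np.linspace(0.1, 0.9, 200), (100, 1))
img[45:55, 40:50] = 0.5     # left dot, sitting on the dark side
img[45:55, 150:160] = 0.5   # right dot, sitting on the light side

# Crude "retinal" stage: brightness relative to the local surround.
surround = uniform_filter(img, size=31, mode="nearest")
perceived = img - surround

# Identical pixel values, but opposite signs once the surround is subtracted,
# echoing why the dot on the dark side looks brighter than the one on the light side.
print(img[50, 45], img[50, 155])              # 0.5 0.5
print(perceived[50, 45], perceived[50, 155])  # positive vs. negative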

Sunday, June 21, 2020

IBM Introduces Watson Works with AI

Newly introduced:

Watson Works
Work safe, work smart, and ensure the health and productivity of your people in a changing workplace  ... 

Watson Works is a curated set of products that embeds Watson AI models and applications to help you:

Decide when employees can return to the workplace 
Organize and manage facilities and adhere to new protocols
Answer customer and employee questions on COVID-19
Maximize the effectiveness of contact tracing
Secure and protect your employees and organization

Schedule a consultation  ... 

Marty the Robot at Stop&Shop

A good view of what they are doing with robots at Stop & Shop. Mostly mapping the store and feeding back video images and alerts about spills.  It will not talk to the shopper.  An impressive rollout, and one of the first examples of a robot moving through the store with shoppers.  I have never seen it live, but will make a note to.



Marty the Robot Rolls out AI in the Supermarket  in AI Trends
Marty the supermarket robot is among the first to travel with customers in the store, looking to avoid collisions and find spills.    By John P. Desmond, AI Trends Editor 

When six-foot-four-inch Marty first rolled into Stop & Shop, the robot walked into history. Social robot experts say it is among the first instances of a robot deployed in a customer environment, namely supermarkets in the Northeast. 

Marty rolls around the store looking for spills with its three cameras. It does take the place of the human worker, called an associate, that did the same thing, but it means the associate can do something else. Doing the walk-around of the store is seen as a mundane task. 

Marty does not talk or tell jokes. Unlike Alexa, who many children in the store undoubtedly interact with at home, Marty will not respond. The robot does notify associates when it sees with its computer vision that something on the floor needs to be cleaned up, through the public address system. An associate comes over to clean it up, and presses a button on Marty that it’s done. Marty takes a picture of the cleaned-up aisle.    ..." 

Badger Technologies Rolled Out 500 Martys in 2019 

The AI in Marty is concentrated on the machine vision and the collision-avoidance navigation features, according to Tim Rowland, CEO of Badger Technologies, makers of Marty. After trials, Badger rolled out 500 multi-purpose robots into Stop & Shop and Giant/Martin’s grocery stores on the East Coast over the course of 2019. Each Marty is equipped with navigation systems, high-resolution cameras, many sensors and its software systems.   ... "

At the link, much more detail and images.
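
Badger has not published its detection pipeline, so purely as a toy illustration of the general task, here is a hypothetical Python sketch that compares what the cameras see against a reference view of a clean floor and flags large deviations for an associate to check. All names and thresholds are my own assumptions.

import numpy as np

def spill_candidates(reference, current, diff_threshold=0.15, min_pixels=200):
    # reference, current: grayscale floor images as 2-D float arrays in [0, 1],
    # captured from the same (localized) robot pose.
    diff = np.abs(current.astype(float) - reference.astype(float))
    mask = diff > diff_threshold           # pixels that changed a lot
    if mask.sum() < min_pixels:            # ignore sensor noise and tiny changes
        return None
    ys, xs = np.nonzero(mask)
    # Return a bounding box an associate (or a later classifier) could inspect.
    return {"rows": (int(ys.min()), int(ys.max())),
            "cols": (int(xs.min()), int(xs.max())),
            "changed_pixels": int(mask.sum())}

# Hypothetical usage: a clean-floor reference and a frame with a dark spill patch.
ref = np.full((240, 320), 0.8)
frame = ref.copy()
frame[100:140, 150:200] = 0.3              # simulated spill
print(spill_candidates(ref, frame))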

Blogger Being Updated

I have been alerted that the underlying Blogging capability here will be updated by the end of June.  A quick test seems to show that no problems should occur, but it's possible attached resources will change.   Could cause display changes.   If you see anything amiss, inform me via the email on this page.   The plan is to move ahead seamlessly.

To Date:  20K Posts,  2,492K Reads

Oil and Gas use of AI Technology

I worked with this industry for a while. Here is a good non-technical overview:

Oil & Gas Industry Transforming Itself with the Help of AI   By AI Trends Staff

The oil and gas industry is turning to AI to help cut operating costs, predict equipment failure, and increase oil and gas output.

A faulty well pump at an unmanned platform in the North Sea disrupted production in early 2019 for Aker BP, a Norwegian oil company, according to an account in the Wall Street Journal. The company installed an AI program that monitors data from sensors on the pump, flagging glitches before they can cause a shutdown, stated Lars Atle Andersen, VP of operations for the firm. Now he flies in engineers to fix such problems ahead of time and prevent a shutdown, he stated.

Aker BP employed a solution from SparkCognition of Austin, Texas.

Partnerships are forming throughout the industry. Exxon Mobil last year started a partnership with Microsoft to deploy AI programs to optimize operations in the West Texas Basin. The AI is needed to interpret data coming from millions of sensors that monitor Exxon refineries all over the globe. Total S.A., the French multinational oil and gas company, is partnering with Google to better interpret seismic data with the goal of better exploiting existing assets.

Advances in machine learning and the falling cost of data storage are factors in the move to AI in big oil. “When you mention data at this scale to data scientists, you can see them start salivating,” stated Sarah Karthigan, data science manager at ExxonMobil. The company has a database consisting of about five trillion data points. “The intent here is that we can run our plants more efficiently, more safely and potentially with fewer emissions.”

Sarah Karthigan, Data Science Manager, Exxon Mobil
With the price of oil low, oil and gas companies are looking for efficiencies. Deployment of AI in upstream operations could yield savings in capital and operating expenses of $100 billion to $1 trillion by 2025, according to a 2018 report by PwC.  .... "
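
Neither Aker BP nor SparkCognition describes its models in the piece, but the basic pattern, watching a sensor stream and flagging readings that drift outside the recent norm before they become a failure, can be sketched with a simple rolling statistic. This is purely illustrative; the window size, threshold, and data are made up.

from collections import deque
from statistics import mean, pstdev

def flag_anomalies(readings, window=50, z_threshold=3.0):
    # Yield (index, value) for readings far outside the recent rolling baseline.
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            mu, sigma = mean(recent), pstdev(recent)
            if sigma > 0 and abs(value - mu) > z_threshold * sigma:
                yield i, value   # a candidate glitch worth an engineer's attention
        recent.append(value)

# Hypothetical pump vibration data: stable readings, then a drift worth catching early.
stream = [1.0 + 0.01 * (i % 5) for i in range(200)] + [1.6, 1.7, 1.8]
for idx, val in flag_anomalies(stream):
    print(f"possible glitch at sample {idx}: {val}")

Production systems use far richer models over many sensors, but the principle of flagging deviations early enough to fly in engineers before a shutdown is the same.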

Not Self-Driving Cars, But Robots That Could Drive Cars?

An interesting and bold challenge: build a robot that would autonomously drive cars, with vision, decision making, and interaction with car systems, adapting to various kinds of cars.   Not necessarily an android that looks like a person, with arms, legs, a head, and eyes, but the functional equivalent. 

It's taking too long to get car-based autonomy, so will this be quicker, cheaper, and more adaptable?  But having a robot navigate complex spaces, like the home, is also hard.     Lance Eliot discusses the idea and poses an instructive list of positives and negatives.   See also his Forbes column at https://forbes.com/sites/lanceeliot/ :

What If We Made A Robot That Could Drive Autonomously?  By Lance Eliot, the AI Trends Insider

There must be a better way, some lament.

It is taking too long, some say, and we need to try a different alternative.

What are those comments referring to?

They are referring to the efforts underway for the development of AI-based self-driving driverless autonomous cars.

There are currently billions upon billions of dollars being expended towards trying to design, develop, build, and field a true self-driving car.

For true self-driving cars, the AI drives the car entirely on its own without any human assistance during the driving task. These driverless cars are considered a Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 and Level 3.

There is not as yet a true self-driving car at Level 5, which we don’t yet even know if this will be possible to achieve, and nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

So far, thousands of automotive engineers and AI developers have been toiling away at trying to invent a true self-driving car.

Earlier claims that progress would be fast and sweet have been shown to be over-hyped and unattainable.

If you consider this to be a vexing problem, and if you have a smarmy person that you know, they might ponder the matter and offer a seemingly out-of-the-box proposition.

Here’s the bold idea: Rather than trying to build a self-driving car, why not instead just make a robot that can drive?   ... " 

Algorithmic Design for Building

Algorithms are both generating and using data for the design and construction of buildings, which is much like managing pertinent metadata.

Algorithms are designing better buildings

Silvio Carta in The Conversation
Head of Art and Design, University of Hertfordshire

When giant blobs began appearing on city skylines around the world in the late 1980s and 1990s, it marked not an alien invasion but the impact of computers on the practice of building design.

Thanks to computer-aided design (CAD), architects were able to experiment with new organic forms, free from the restraints of slide rules and protractors. The result was famous curvy buildings such as Frank Gehry’s Guggenheim Museum in Bilbao and Future Systems’ Selfridges Department Store in Birmingham.

Today, computers are poised to change buildings once again, this time with algorithms that can inform, refine and even create new designs. Even weirder shapes are just the start: algorithms can now work out the best ways to lay out rooms, construct the buildings and even change them over time to meet users’ needs. In this way, algorithms are giving architects a whole new toolbox with which to realise and improve their ideas.

At a basic level, algorithms can be a powerful tool for providing exhaustive information for the design, construction and use of a building. Building information modelling uses comprehensive software to standardise and share data from across architecture, engineering and construction that used to be held separately. This means everyone involved in a building’s genesis, from clients to contractors, can work together on the same 3D model seamlessly.

More recently, new tools have begun to combine this kind of information with algorithms to automate and optimise aspects of the building process. This ranges from interpreting regulations and providing calculations for structural evaluations to making procurement more precise. .... "
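
The article stays at a high level, but a toy version of 'working out the best way to lay out rooms' is just a search over candidate arrangements scored against stated preferences. The sketch below, with entirely hypothetical rooms and weights, brute-forces room orderings along a corridor so that rooms that should be adjacent end up next to each other.

from itertools import permutations

rooms = ["kitchen", "dining", "living", "office", "bedroom"]

# Desired adjacencies and their importance (hypothetical weights).
preferences = {("kitchen", "dining"): 3,
               ("dining", "living"): 2,
               ("bedroom", "office"): 1}

def layout_score(order):
    # Reward preferred pairs that end up next to each other along the corridor.
    score = 0
    for (a, b), weight in preferences.items():
        if abs(order.index(a) - order.index(b)) == 1:
            score += weight
    return score

best = max(permutations(rooms), key=layout_score)
print(best, layout_score(best))

Real generative-design tools layer in daylight, circulation, structure, and cost constraints and use much smarter search, but the pattern of scoring and comparing candidate designs is the same.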

Value Creation

Good piece.   Take care to carefully measure the value.

The Value of Value creation
By Marc Goedhart and Tim Koller, McKinsey Quarterly (link to PDF)

Challenges such as globalization, climate change, income inequality, and the growing power of technology titans have shaken public confidence in large corporations. In an annual Gallup poll, more than one in three of those surveyed express little or no confidence in big business—seven percentage points worse than two decades ago. Politicians and commentators push for more regulation and fundamental changes in corporate governance. Some have gone so far as to argue that “capitalism is destroying the earth.”

This is hardly the first time that the system in which value creation takes place has come under fire. At the turn of the 20th century in the United States, fears about the growing power of business combinations raised questions that led to more rigorous enforcement of antitrust laws. The Great Depression of the 1930s was another such moment, when prolonged unemployment undermined confidence in the ability of the capitalist system to mobilize resources, leading to a range of new policies in democracies around the world.

Today’s critique includes a call on companies to include a broader set of stakeholders in their decision making, beyond just their shareholders. It’s a view that has long been influential in continental Europe, where it is frequently embedded in corporate-governance structures. The approach is gaining traction in the United States, as well, with the emergence of public-benefit corporations, which explicitly empower directors to take into account the interests of constituencies other than shareholders. .... ' 

Saturday, June 20, 2020

Sony Aibo Updated, will Greet You at the Door

Aibo seems to be the only assistant system left that claims to be mobile and family friendly: home oriented, with elements of a household pet emphasized.   We visited their lab early on, and that was one of the claims, but it never seems to have fully gotten there.  Notably similar in aims to the Kuri, now defunct.



Sony's Aibo robot will now greet you at the front door.  So, so adorable.
Nick Summers, @nisummers in Engadget

Sony’s robotic Aibo pup continues to learn new tricks. Thanks to a new software update, the android companion will now predict when you come home and sit patiently at the front door. Or that’s the idea, anyway. According to Sony’s website, you’ll first need to assign a meeting place — the entrance to your home — by saying a phrase like “this is where you should go.” Aibo should then lower its head and ‘sniff’ the ground to indicate that it’s storing the location. If the process is successful, a door icon should appear on the map located inside the companion app.  ... "

There are claims of over 100K sold in various forms, costing as much as $2,900 each. There had been some indications the line might be abandoned, but this recent software update says otherwise.   I am glad the general idea continues; it has a place. 

Some detail in the Wikipedia:  https://en.wikipedia.org/wiki/AIBO

Patient Survey for Telemedicine

Useful to get some real experience from this event:

Doctor.com's Patient Survey Reveals Surprising Trends About Telemedicine Adoption Amid Reopenings
PR Newswire
June 16, 2020

A nationwide survey of 1,800 patients by healthcare marketing automation company Doctor.com found evidence of massive telemedicine adoption during the current pandemic, as well as growing demand for telemedicine services in the coming years. Most (83%) of the surveyed patients expect to use telemedicine after the pandemic, 55% are willing to use telemedicine to see new doctors, and 69% said "easy-to-use technology" would help them decide to make a telemedicine appointment. Moreover, 71% would consider using telemedicine services now, while 83% are likely to use such services after the pandemic. Doctor.com CEO Andrei Zimiles said, "As telemedicine becomes part of a 'new normal,' it is critical that providers begin shifting their long-term care strategies to incorporate virtual care and meet patients' rapidly evolving expectations for this channel."

High Quality Images of Moving Objects

I recall having to solve this problem when diagnosing manufacturing machine parts from images.

Capturing Moving Subjects in Still-Life Quality
EPFL News (Switzerland)
June 18, 2020

Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) Advanced Quantum Architecture Laboratory and the University of Wisconsin-Madison (UW-Madison) Wision Laboratory have developed a method for capturing extremely clear images of moving subjects. UW-Madison's Mohit Gupta borrowed EPFL's SwissSPAD camera, which generates two-dimensional binary images at a resolution of 512 x 512 pixels. EPFL's Edoardo Charbon said SwissSPAD captures 100,000 binary images per second, as an algorithm corrects for variations; the researchers built a high-definition image of a moving subject by combining these photos. The team aims to repeat the experiment with the MegaX camera, which Charbon said "is similar to SwissSPAD in many ways; it's also a depth-sensing camera, thus it can generate [three-dimensional] images."   ... " 
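
The published method does motion-compensated reconstruction from single-photon data, which is the hard part; as a much cruder illustration of why stacking many one-bit frames recovers a grayscale image at all, here is a Python sketch that simulates binary (photon or no photon) exposures of a static scene and averages them. The scene and frame count are arbitrary assumptions, and no motion correction is attempted.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth scene: per-pixel photon detection probabilities.
scene = np.clip(np.linspace(0.05, 0.6, 64)[None, :] * np.ones((64, 64)), 0, 1)

def binary_frame(probabilities):
    # Each pixel either fires (1) or not (0) on a given exposure: a 1-bit image.
    return (rng.random(probabilities.shape) < probabilities).astype(np.uint8)

# Averaging many 1-bit frames converges to the underlying brightness.
n_frames = 10_000
accum = np.zeros_like(scene)
for _ in range(n_frames):
    accum += binary_frame(scene)
estimate = accum / n_frames

print(float(np.abs(estimate - scene).mean()))   # small reconstruction error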

Upcoming ISSIP/CSG Talk: Robots in the Pandemic

Correspondent Jim Spohrer talks about robot tech in our pandemic futures.    I will post about this and point to the transcript.

ISSIP Speaker Series: COVID-19 & Future of Work and Learning

Speaker: Jim Spohrer, Director, Cognitive Open Tech, IBM
Title: How will COVID-19 affect the need for and use of robots in a service world with less physical contact?
Date & Time: June 24, 2020, 12:30-1:00 PM US Pacific Time, (on Zoom, info below)

Abstract: As AI and robotics come to the service world, including retail, hospitality, education, healthcare, and government, some jobs will go away, some new jobs will be created, and the income required for a family to thrive might be lessened.   In this creative session participants will be asked to engage in discussing three scenarios below – and the wicked problem of the bespoke impact on livelihood and jobs, which is creating uncertainty and concerns.   The groups will then report back on which scenarios they find more desirable. Click here for more details about this session.

More on this talk and background:  http://www.issip.org/about-issip/community/covid-19-working-group/

Recorded talk, slides:   https://youtu.be/RchxIKum_tI 

See also:  http://www.issip.org/   The International Society of Service Innovation Professionals, ISSIP 

ISSIP Newsletter: https://mailchi.mp/2f401b893caa/issip-june-2020-newsletter?e=78b83a31fb

Google Assistant and Duplex can Listen Better

This made me think for a while about the implications.  It does not necessarily imply a loss of privacy; these are the actions (skills) that we accept as extensions to an assistant, in effect commands.   It would seem very useful for office voice actions.   So I could say 'Copy this', and what follows would be recorded, or 'Translate this', and what follows would be translated.     We know that Google can do that; it just needs better handling of context, as long as we can control the results.  Lately I have also been impressed by how the assistant handles my mis-stated commands.  But should that make us fear for privacy?

Google Assistant actions can now continuously listen for specific words
Kyle Wiggers@KYLE_L_WIGGERS in Venturebeat

Google today detailed new tools for partners developing on Google Assistant, its voice platform used by over 500 million people monthly in 30 languages across 90 countries. Actions Builder, a web-based integrated development environment (IDE), provides a graphical interface to show conversation flows and support debugging and training data orchestration. Continuous Match Mode allows Google Assistant to respond immediately to a user’s speech by recognizing specified words and phrases. And AMP-compliant content on smart displays like Nest Hub Max speeds up browsing via the web.

Google also revealed that Duplex, its AI chat agent that can arrange appointments over the phone, has been used to update over half a million business listings in Google Search and Google Maps to date. Back in March, CEO Sundar Pichai said Google would use Duplex “where possible” to contact restaurants and businesses so it can accurately reflect hours, pick-up, and delivery information during the pandemic. The company subsequently expanded Duplex in a limited capacity to the U.K., Australia, Canada, and Spain, adding support for the Spanish language in the last instance.  ... " 
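
Google has not published the internals of Continuous Match Mode, but my 'Copy this' / 'Translate this' idea above reduces to scanning a recognized-text stream for trigger phrases and routing whatever follows to a handler. A hedged, purely illustrative Python sketch, with hypothetical handlers and no real Assistant APIs:

# Illustrative only: route recognized text to an action when a trigger phrase is heard.
TRIGGERS = {
    "copy this": lambda text: print("[recorded]", text),
    "translate this": lambda text: print("[to translate]", text),
}

def route(transcript_words):
    # Scan a stream of recognized words for a trigger phrase; hand the rest to its handler.
    words = [w.lower() for w in transcript_words]
    for phrase, handler in TRIGGERS.items():
        p = phrase.split()
        for i in range(len(words) - len(p) + 1):
            if words[i:i + len(p)] == p:
                handler(" ".join(transcript_words[i + len(p):]))
                return
    print("[no trigger matched]")

route("Copy this meeting starts at noon on Friday".split())
route("Translate this bonjour tout le monde".split())

The real feature adds continuous on-device listening, grammars, and consent controls, which is exactly where the privacy question above comes in.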

Friday, June 19, 2020

Concept Mapping-based Platform for Learning and Knowledge Assessment.

Reconnecting with Sero!, here is a repeat of a previous post:

Brought to my attention by Brian Moon, a knowledge design practitioner we have worked with.  I am a longtime proponent of concept mapping; we developed thousands of maps.  Can this finally be a way to efficiently use such mapping to learn and analyze?  Can it convert concept maps into knowledge maps, or at least their skeletal infrastructure?   Linking to process modeling?  I will be taking a deeper look:



Introducing Sero!, a concept mapping-based platform for learning and knowledge assessment.

We are harnessing the power of concept mapping to move well beyond outmoded assessments, exercising holistic and higher order thinking skills for formative and summative assessments. See what we're up to at https://www.serolearn.com. And take a sneak peek under the hood on our Assessor and Taker instructional pages.  .... 
 .. "

New Physics: All the Way Down?

Descriptive.  But will it show us something really predictively new?

THE THIRD CULTURE
Computation All the Way Down, in the Edge
A Conversation with Stephen Wolfram

We're now in this situation where people just assume that science can compute everything, that if we have all the right input data and we have the right models, science will figure it out. If we learn that our universe is fundamentally computational, that throws us right into the idea that computation is a paradigm you have to care about. The big transition was from using equations to describe how everything works to using programs and computation to describe how things work. And that's a transition that has happened after 300 years of equations. The transition time to using programs has been remarkably quick, a decade or two. One area that was a holdout, despite the transition of many fields of science into the computational models direction, was fundamental physics.

If we can firmly establish this fundamental theory of physics, we know it's computation all the way down. Once we know it's computation all the way down, we're forced to think about it computationally. One of the consequences of thinking about things computationally is this phenomenon of computational irreducibility. You can't get around it. That means we have always had the point of view that science will eventually figure out everything, but computational irreducibility says that can't work. It says that even if we know the rules for the system, it may be the case that we can't work out what that system will do any more efficiently than basically just running the system and seeing what happens, just doing the experiment so to speak. We can't have a predictive theoretical science of what's going to happen.

STEPHEN WOLFRAM is a scientist, inventor, and the founder and CEO of Wolfram Research. He is the creator of the symbolic computation program Mathematica and its programming language, Wolfram Language, as well as the knowledge engine Wolfram|Alpha. His most recent endeavor is The Wolfram Physics Project. He is also the author, most recently, of A Project to Find the Fundamental Theory of Physics. ...."  More.
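
Wolfram's favorite demonstration of computational irreducibility is the elementary cellular automaton Rule 30: the rule is trivial to state, yet as far as anyone knows there is no shortcut to its detailed behavior other than running it step by step. A few lines of Python reproduce the point (width and step count are arbitrary):

# Rule 30 elementary cellular automaton, printed as rows of '#' and spaces.
WIDTH, STEPS, RULE = 79, 30, 30
row = [0] * WIDTH
row[WIDTH // 2] = 1   # start from a single black cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else " " for c in row))
    # New cell value is the RULE bit indexed by the 3-cell neighborhood (left, center, right).
    row = [
        (RULE >> (4 * row[(i - 1) % WIDTH] + 2 * row[i] + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]

Each row is computed from the one before it; to know row 1,000 in detail, you run 1,000 steps. That is the irreducibility claim in miniature.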

Money Reimagined?

Are we there yet?   I still think there are major regulatory issues ahead.  A good piece:

Money Reimagined: Ethereum’s Renaissance Creates an Opportunity – And a Major Test   Michael J. Casey in Coindesk

If, in the compressed time of blockchain existence, the “crypto winter” of 2018-2019 was Ethereum’s Dark Ages, then we’re now in its Renaissance. 

But it’s an open question whether the blockchain platform and its enthusiastic community can take the wider world into the next era: the decentralized equivalent of the industrial revolution. 

As Ethereum prepares to celebrate the five-year anniversary of its mainnet launch on July 31, billions of dollars in value rest on that question. Specifically, on whether the all-important Ethereum 2.0 scaling project can be successfully launched and integrated into its existing architecture.  

By most measures, the Ethereum ecosystem is undergoing an impressive growth spurt. Record-breaking “gas” usage for smart contract and payment executions has now put Ethereum’s daily transaction fees totals above those of bitcoin. A strong rally in the price of ether (ETH) means Ethereum’s native token is among just a few leading cryptocurrencies, including bitcoin, Cardano’s ADA and Stellar’s XLM, to have more or less shaken off the sharp crypto selloff seen in March. And the amount of second-tier value locked into Ethereum smart contracts is ballooning, with total daily value transfers on Ethereum reaching that of Bitcoin in April. ... "