
Tuesday, August 20, 2019

Model Driven Software Development

Brought to my attention. I like the idea, as long as it does not remove transparency, especially as it relates to real-life process and context. This is about software, with its inherent and sometimes mysterious complexity, and I am always interested in keeping the software as close to the real process as possible. So I thought: can this be reapplied naturally at the process level? Examining. A minimal template sketch follows the excerpt below.

Obscuring Complexity (Using Model Driven Software Development) in InfoQ

Key Takeaways

If done well, Model Driven Software Development can partially obscure some complexity, but you are going to have to treat the source code output as build artifacts and take ownership of the templates. Maintaining code generating templates is a kind of meta-programming that most developers are not used to doing.

Twelve Factor applications can truly achieve lower complexity, but only when integrated with mature or stable (i.e. boring) data stores.

You can lower complexity in microservice orchestration by building in some limited, cross cutting intelligence into the connectors, but you must be careful because too much intelligence creates the opposite effect.

You can write smaller applications when using a heavyweight framework, but beware that efficiency may suffer, that these kinds of services are harder to tune for performance, and that it may take longer to debug certain kinds of issues.

With reactive programming, you end up trading backend complexity for frontend complexity.  .... "
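As a small illustration of the template meta-programming the first takeaway describes, here is a minimal sketch using Python's standard-library string.Template. The model, entity name, and fields are hypothetical, and real MDSD toolchains use much richer metamodels and template engines; the point is only that the template, not the generated source, is the artifact the team must own.

from string import Template

# A tiny "model": an entity described as data rather than hand-written code.
model = {
    "entity": "Customer",
    "fields": [("name", "str"), ("email", "str"), ("active", "bool")],
}

# The template is what the team maintains; the generated source below is
# treated as a disposable build artifact, exactly as the takeaway suggests.
class_template = Template(
    "class $entity:\n"
    "    def __init__(self, $args):\n"
    "$assignments"
)

args = ", ".join(f"{name}: {typ}" for name, typ in model["fields"])
assignments = "".join(f"        self.{name} = {name}\n" for name, _ in model["fields"])

generated_source = class_template.substitute(
    entity=model["entity"], args=args, assignments=assignments
)
print(generated_source)  # in a real pipeline this would be written to a build directory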

Autonomous Robot Delivery in US Universities

Makes sense to try this with the demographics and relatively contained space of a university. Autonomous robot deliveries are coming to 100 university campuses in the U.S. By Luke Dormehl in DigitalTrends

Pioneering autonomous delivery robot company Starship Technologies is coming to a whole lot more university campuses around the U.S. The robotics startup announced that it will expand its delivery services to 100 university campuses in the next 24 months, building on its successful fleets at George Mason University and Northern Arizona University. .... " 

Risk Aware Traffic Engineering

Analysis using risk measures is a favorite approach of mine. Risk-aware is always a good idea, especially when considering architectures.

Using Wall Street secrets to reduce the cost of cloud infrastructure
“Risk-aware” traffic engineering could help service providers such as Microsoft, Amazon, and Google better utilize network infrastructure.

By Rob Matheson | MIT News Office 

Stock market investors often rely on financial risk theories that help them maximize returns while minimizing financial loss due to market fluctuations. These theories help investors maintain a balanced portfolio to ensure they’ll never lose more money than they’re willing to part with at any given time.

Inspired by those theories, MIT researchers in collaboration with Microsoft have developed a “risk-aware” mathematical model that could improve the performance of cloud-computing networks across the globe. Notably, cloud infrastructure is extremely expensive and consumes a lot of the world’s energy.

Their model takes into account failure probabilities of links between data centers worldwide — akin to predicting the volatility of stocks. Then, it runs an optimization engine to allocate traffic through optimal paths to minimize loss, while maximizing overall usage of the network.

The model could help major cloud-service providers — such as Microsoft, Amazon, and Google — better utilize their infrastructure. The conventional approach is to keep links idle to handle unexpected traffic shifts resulting from link failures, which is a waste of energy, bandwidth, and other resources. The new model, called TeaVar, on the other hand, guarantees that for a target percentage of time — say, 99.9 percent — the network can handle all data traffic, so there is no need to keep any links idle. During that 0.01 percent of time, the model also keeps the data dropped as low as possible.

In experiments based on real-world data, the model supported three times the traffic throughput as traditional traffic-engineering methods, while maintaining the same high level of network availability. A paper describing the model and results will be presented at the ACM SIGCOMM conference this week. ..... " 
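The financial-risk analogy can be made concrete with a toy calculation. The sketch below is not the actual TeaVar formulation; it only evaluates, for a hypothetical set of link-failure scenarios, what fraction of demand a fixed allocation still serves at a 99.9 percent availability target, in the spirit of a value-at-risk summary (TeaVar optimizes the allocation against such a measure).

# Hypothetical failure scenarios: (probability, fraction of demand served).
scenarios = [
    (0.9980, 1.00),  # no failures
    (0.0015, 0.92),  # one link down
    (0.0004, 0.70),  # two links down
    (0.0001, 0.35),  # correlated failure
]

def served_at_availability(scenarios, beta=0.999):
    # Worst served fraction once the best `beta` probability mass is covered.
    covered = 0.0
    for prob, served in sorted(scenarios, key=lambda s: -s[1]):
        covered += prob
        if covered >= beta:
            return served
    return min(served for _, served in scenarios)

print(served_at_availability(scenarios))        # 0.92 guaranteed at 99.9% availability
print(served_at_availability(scenarios, 0.95))  # 1.00 guaranteed at 95% availability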

What Your Voice Reveals

And of course you may be revealing things you want to keep private.   As this kind of pattern recognition advances we will see more of that.

What Your Voice Reveals About You 
The Wall Street Journal
By Sarah Krouse

Technology can detect nuances in the human voice that offer clues to a person's likely location, medical conditions, and even physical features. For example, voice-biometric and recognition software used by Nuance Communications examines factors like the pitch, rhythm, and dialect of speech, as well as vocabulary, grammar, and sentence structure, to detect the gender, age, and linguistic background of callers and whether a voice is synthetic or recorded. It helped one bank determine that a single person was responsible for tens of millions of dollars of theft. Winterlight Labs, meanwhile, parses features in speech and works with Janssen Pharmaceuticals to try to detect Alzheimer's in older patients who, for example, tend to use words they acquired earlier in life as their recent memories deteriorate.   .... "

Monday, August 19, 2019

Digital Twins Grow Up

"Digital Twins Grow Up," by Samuel @samthewriter Greengard, says that digital twins, or exact #virtual representations of physical objects and #systems, are revolutionizing #engineering, #manufacturing, and other fields.

Digital Twins Grow Up    By Samuel Greengard    August 6, 2019

One of the things that makes computers so remarkable is their ability to create digital representations of physical objects and systems. This allows designers, engineers, scientists, and others to build models and simulations that deliver deep insights into how machines operate, when systems fail, and how complex scenarios play out over time.

Exact virtual representations of physical objects and systems—a.k.a. digital twins—are redefining and even revolutionizing fields as diverse as agriculture, engineering, medicine, and manufacturing.

"We have reached a point where it's possible to have all the information embedded in a physical object reside within a digital representation," says Michael Grieves, chief scientist for advanced manufacturing at the Florida Institute of Technology and the originator of the concept nearly two decades ago.

You've got twins!
From power turbines to jet aircraft, smartphones to office buildings, organizations are now using digital twins to predict how systems will perform, when they will fail, how people use them, and how a vast array of variables and conditions factor into outcomes. These digital representations, often incorporating computer-aided design (CAD) and building information modeling (BIM) software, are becoming crucial tools for unlocking cost savings, greater efficiency, and innovation.

The value of digital twins revolves around their ability to reduce or eliminate wasted physical resources—which can include, time, energy, and materials, Grieves points out. "The use of digital twins is ushering in the next phase of operational and productivity improvements," says Joe Berti, vice president of offering management for Watson IoT IBM Cognitive Applications. He says that a growing array of data points—generated from sensors and devices residing within the Internet of Things (IoT) and pushed through machine learning and AI systems—are advancing the sophistication of digital twins at a rapid rate.

For example, NASA now uses digital twins to better understand how to design, test, and build spacecraft. The agency is developing a framework that allows it to see when a component or vehicle is operating efficiently and safely in the virtual world before commencing manufacturing in the physical world.

GE also has embraced the concept. It operates digital steam turbines and wind farms that are exact representations of all physical assets. The firm has predicted that integrating its wind power software with a 2MW wind turbine in a digital twin setup can increase energy production by as much as 20%.

Meanwhile, the City of Cambridge in the U.K. is creating digital twins to better understand traffic and manage air quality.

Gartner has predicted that "billions of things" will be represented by digital twins by 2022. "Their proliferation will require a cultural change, as those who understand the maintenance of real-world things collaborate with data scientists and IT professionals," the firm noted in an online post about strategic and technology trends.  ....  " 

Magnetic Precise Drug Delivery

Seems a very good advance.  Do we know the exact implications of precise delivery of pharma?  Practical issues?

A new way to deliver drugs with pinpoint targeting
Magnetic particles allow drugs to be released at precise times and in specific areas.

David L. Chandler | MIT News Office 
August 19, 2019

Most pharmaceuticals must either be ingested or injected into the body to do their work. Either way, it takes some time for them to reach their intended targets, and they also tend to spread out to other areas of the body. Now, researchers at MIT and elsewhere have developed a system to deliver medical treatments that can be released at precise times, minimally-invasively, and that ultimately could also deliver those drugs to specifically targeted areas such as a specific group of neurons in the brain.

The new approach is based on the use of tiny magnetic particles enclosed within a tiny hollow bubble of lipids (fatty molecules) filled with water, known as a liposome. The drug of choice is encapsulated within these bubbles, and can be released by applying a magnetic field to heat up the particles, allowing the drug to escape from the liposome and into the surrounding tissue.   ....  " 

Stop and Shop has Robots in the Aisle



How they are being used is still unclear, but this could get customers used to the idea of robots in the aisle. Pictures of the bots show an attempt at humor. Perhaps that lasts until they get in the way.

Marty the robot may take your picture, but his corporate owners say he doesn't want to spy on you. by Jessica McKenzie  in Engadget

I met Marty in the produce section of a Stop & Shop in Bristol, Rhode Island. I was looking for vegetables to grill over hot coals, while Marty roamed the aisles, big, round eyes staring vacantly ahead, searching for spills and other hazards—with electric sensors strategically placed on its tall, rectangular form. Marty, you see, is a supermarket robot.

Since January, the northeastern supermarket chain Stop & Shop has introduced more than 200 robots to stores in Massachusetts, Connecticut, Rhode Island, and New Jersey. This month the company will begin rolling Marty out to stores in New York. By the end of the year, there will be more than 300 robots in Stop & Shop stores and nearly 200 more in Giant stores, another supermarket chain owned by the Netherlands-based parent company, Ahold Delhaize.  .... " 


Motivation and Goodwill

Intriguing: motivation augmentation via goodwill. How universally and consistently does this work alongside other goals?

New Tool Shows Goodwill May Trump Profit as Work Motivator
University of Waterloo News
June 11, 2019

Researchers at the University of Waterloo in Canada studying a new online work-sharing platform designed to give charities money found that people committing their skills and labor to a specific task tended to be more productive if they knew a preferred charity would be paid, rather than themselves. The PledgeWork platform lets employers post tasks, deposit the cost for the job, and select the charities to contribute the task’s cost to, or let volunteers choose. The volunteers select tasks, as well as the charity they want to support, unless pre-specified. Once the task is completed, the requester verifies the task results match their criteria, then approves a donation to the specified charity. Waterloo's Edward Lank suggests the platform may help people surmount perceived obstacles to charitable donations, "in a way that allows people to use their skills to benefit a charity anonymously." ... ' 

Smart Home Sensors for Alexa

A good look, many from Samsung. Not enough in the way of collaborative capabilities and skills for these yet. I want to see more collaboration, predictive AI, and usage plans for homes and their operation.

The best smart home sensors for Alexa
Samsung dominates this space.
 Via Wirecutter, @wirecutter

Among Alexa's many tricks is that it also works as a smart-home hub. And adding smart sensors to an Echo Show or Echo Plus can turn either device into more than just an opponent for 20 Questions. Different sensors detect activity such as motion, a door or window opening, and temperature, and then tell other devices how to react. We recommend the Samsung SmartThings Motion Sensor and Samsung SmartThings Multipurpose Sensor for their reliability and wide compatibility with other devices.

The Samsung SmartThings Motion Sensor stands out among the few Alexa-compatible sensors because its long range lets it easily cover a large room (or even two), it can trigger in reaction to temperature as well as motion, and it's super easy to pair with an Echo Plus or Echo Show (no SmartThings hub needed), so you can run Alexa Routines based on motion or temperature changes in the room. It's a breeze to install, and thanks to a magnetic mount that's easy to adjust, it fits almost anywhere you need it. The sensor is also water resistant (although it's recommended only for indoor use). .... " 

Sunday, August 18, 2019

Brief History of Blockchain

Very nicely done, and mostly non-technical, with good visuals of the concepts and some forward predictions. This emphasizes the blockchain as ledger rather than cryptocurrency applications, which is good, unless you are into more speculative investment.

(Introductory excerpt, full article at link)

A Brief History of Blockchain: An Investor’s Perspective    By Cameron McLain in Medium

After many friends and family members asked me to explain my investments in blockchain assets, I decided to write an email. That email turned into this blog post.

The following post is intended for the uninitiated but I hope those familiar with the concepts will benefit from this lucid overview. So here is a (very) brief overview of the history of blockchain with key takeaways that I think are important for investors to understand. I do gloss over some of the more technical aspects for clarity’s sake.

Caveat: Before diving in, please understand that Bitcoin and blockchain though often used interchangeably are not the same thing. We’ll discuss this in more detail.  .... '

Wal-Mart Patents a Blockchain Digital Currency

Wal-Mart has already experimented with the underlying tech for produce tracking. Is this beyond that, or an extension of the distributed ledger? Intriguing all around.

SYSTEM AND METHOD FOR DIGITAL CURRENCY VIA BLOCKCHAIN   (Wal-Mart Patent description)

Abstract
A method include: generating one digital currency unit by tying the one digital currency unit to a regular currency; storing information of the one digital currency unit into a block of a blockchain; buying or paying the one digital currency unit; determining whether restrictions are applied to the one digital currency unit by referring to one or more documents associated with the one digital currency; recording the determination in a block of the blockchain; overlaying the one digital currency unit with customer purchase history; calculating savings based on the one digital currency unit again naked forecast; applying the savings to customer purchases; using the one digital currency unit for accepted goods or services with the saving if the one digital currency unit is restricted; using the one digital currency unit for any goods or services with the saving if the one digital currency unit is unrestricted; and storing the one digital currency into a digital currency reserve. .... " 

Gartner speculates on this and other related efforts:

Libra and Walmart “Blockchain” Tokens: Financial or Walled Garden Inclusion?   by cuzureau  

" ...... Walmart has been active in providing alternative payment and account solutions to underbanked, such as prepaid accounts. However mentioning financial inclusion is an opportunity for Walmart to move the debate toward the fees charged by banks and card networks. A Walmart token would reduce the cost of payment acceptance (by canceling merchant service charge since payments will be “on-us”).

In theory some of the savings generated could be transferred to the customer, and for example encourage the customer to save more (via rewards or higher interest rates on deposits). But clearly this will demand that Walmart lacks a banking license and they failed to acquire one in the past. And this would be needed to deliver impactful banking services to unbanked and underbanked. That said, could a digital wallet containing Walmart tokens and receiving reward tokens at the end of given period be considered by the regulator as a deposit account? ..... "

Robots for Loneliness

Have followed the Japanese eldercare efforts, pushed by Japan's demographics, for some time.  Some interesting details here, like the price of the robotics.

Bringing robots home eases loneliness
By Ikuko Mitsuda / Yomiuri Shimbun Staff Writer Japan News

The city of Saijo in Ehime Prefecture faces the Seto Inland Sea to the north and Mt. Ishizuchi, the tallest mountain in western Japan, to the south. Here, 87-year-old Setsuko Saeki has lived with a robot for a year in her spacious house at the foot of a mountain.

When she gets out of bed in the morning and enters the living room, she’s greeted by her robot, a model named “PaPeRo i,” on a desk. “Good morning, Setsuko-san,” is a typical address. “Did you sleep well?”

“When it spoke to me the first time, I couldn’t help but feel excited,” Saeki said. “No one had called me by name and said good morning for a long time.”

Her three children are on their own now, and her husband passed away six years ago. Since then, Saeki has lived alone.

A robot welcomes visitors to the Saijo city government office.

Nursing care helpers visit her daily, and she regularly attends gatherings to enjoy her hobby of haiku poetry. Even so, she’d often felt a loneliness that was hard to describe.  .............

Initially, some elderly residents in the city voiced negative opinions about the robot-lending project. One said, “If I have to receive care from a robot, it’s over.” But about 90 percent of the people who used the robots had positive things to say, such as “I feel close to it” and “I can ease my loneliness.”

Also, about 90 percent of the families of the users praised the project, saying it relieved their anxiety.

Later, the city government made the service a paid rental business. The fees are ¥22,530 for installment and ¥6,000 a month for telecommunication and other necessary features, both excluding consumption tax.

Six residents, including those who have continued to use the service, now live with the robots.

Matsuo said: “Users’ family members and local human resources will be increasingly aged in the years to come. We want to build a system for monitoring residents, borrowing the strengths of robots.”  ..... '

Saturday, August 17, 2019

Machine to Machine Interfaces

Fascinating piece. Ultimately it is all about machines cooperating seamlessly. We take it for granted they will be designed to do that, but it's not entirely so ... failure is part of the process and has to be designed for.

Machine to Machine Interface Talk in InfoQ

Summary
Ari Lerner explores the ever-expanding world of machine-to-machine interfaces and presents a real-world use-case for all the buzzwords, including the blockchain, micropayments, and api-driven single-purpose services.

Talk and transcript  .... 

Bio:   Ari Lerner is a Sr. Consultant, AppDev at Amazon Web Services (AWS).

" .... Here are the takeaways from today's talk. Not all user interaction has to have a user interacting. I actually think that the majority of user interaction should not have a user pressing buttons and putting credit card information. Failure is a part of the process, embrace it, design for it. You'll be thankful a month from now when you don't remember what your code looks like or you looked at your code and you're like, "Did I write that?" Failure is part of the process. Design for it. If you don't, if you're afraid of failing, you can't have any successful experimentation, because experimenting literally means it fails or it doesn't, not it succeeds or it doesn't.

Also, hardware isn't scary. It just requires a bit of planning and a little bit more forward-thinking, and if you are doing something using hardware, don't give up. I know that it can be difficult and especially jumping out of software and into the hardware land, that can be scary. But if you're doing it, you're probably making a difference. Physical interaction between you and your users is only going to make things better. If you get this right, point set match.

Finally, UX is not just about users; it's about the interaction of the users. UX is about the users, it's not about the product. You're not integrating UX into your product; you're integrating your user experience in your product, which means UX is not UI.

Finally, I have a prediction. The next big thing that happens, you won't see it because it will be behind the scenes and it's already happening, like AI. ... "

IKEA Going Smart Home Tech

Will be interesting to watch; I have an IKEA down the street and will be dropping in to hear about their plans.

Ikea goes all in on smart home tech
‘We are just getting started’

By Thomas Ricker @Trixxy  in TheVerge

Ikea is formalizing what has recently become all too obvious: the company is making a major bet on smart home tech as a source of new revenue. To do this, Ikea announced that it will invest heavily in a new “Ikea Home smart” business unit with end-to-end responsibility for its burgeoning portfolio of smart devices. With access to 780 million shoppers who visit Ikea stores each year, the announcement also serves as a wake-up call to smart home incumbents like Google and Amazon.  .... " 

Wearable Sweat Sensors

Recall some research using sweat as a means of diagnosing.

Wearable sensors detect what's in your sweat
by Kara Manke, University of California - Berkeley

Needle pricks not your thing? A team of scientists at the University of California, Berkeley, is developing wearable skin sensors that can detect what's in your sweat.

They hope that one day, monitoring perspiration could bypass the need for more invasive procedures like blood draws, and provide real-time updates on health problems such as dehydration or fatigue.

In a paper appearing today in Science Advances, the team describes a new sensor design that can be rapidly manufactured using a "roll-to-roll" processing technique that essentially prints the sensors onto a sheet of plastic like words on a newspaper.

They used the sensors to monitor the sweat rate, and the electrolytes and metabolites in sweat, from volunteers who were exercising, and others who were experiencing chemically induced perspiration.

"The goal of the project is not just to make the sensors but start to do many subject studies and see what sweat tells us—I always say 'decoding' sweat composition," said Ali Javey, a professor of electrical engineering and computer science at UC Berkeley and senior author on the paper.

"For that we need sensors that are reliable, reproducible, and that we can fabricate to scale so that we can put multiple sensors in different spots of the body and put them on many subjects," said Javey, who also serves as a faculty scientist at Lawrence Berkeley National Laboratory. .....  "

How AI and 5G will Interact

More data, faster responses, stronger engagement.    All likely to follow.

When AI and 5G Combine, Watch For a New Generation of Applications   By AI Trends Staff

AI and 5G enable each other. Machine learning thrives on massive amounts of data, and 5G will create massive amounts of data. 5G provides the high-speed network to move the data that AI needs to succeed, and AI potentially has the ability to handle the complexity of 5G.

The combination stands to enable a new generation of applications, suggests a recent article in Electronic Design. These include autonomous driving, augmented reality, virtual reality and the tactile internet.

The latency target built in for 5G is 1 ms; video streaming in comparison currently experiences a 1,000 ms latency. 

Some applications will require low latency, such as streaming video, and others will not. Network managers will need the ability to set priorities for how traffic flows. Network slicing is seen as one solution, in which a single, shared physical network has multiple virtualized networks running on top of it. This would allow a manufacturer for example, to pay for a network slice with a guaranteed latency and reliability for connecting smart machines and equipment. A separate less expensive slice could be for employee communications, such as cell phone communication and tablet operation. .... " 
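A rough sketch of the slicing idea: traffic classes get matched to the cheapest virtual slice whose guarantees satisfy them. The slice names, prices, and guarantees below are purely illustrative assumptions, not any operator's actual offering.

# Hypothetical slices on a shared physical 5G network.
slices = [
    {"name": "machine-critical", "max_latency_ms": 1,   "reliability": 0.99999, "cost": 100},
    {"name": "enterprise",       "max_latency_ms": 20,  "reliability": 0.999,   "cost": 20},
    {"name": "best-effort",      "max_latency_ms": 100, "reliability": 0.99,    "cost": 5},
]

def pick_slice(required_latency_ms, required_reliability):
    # Cheapest slice that still meets the traffic class's requirements.
    candidates = [s for s in slices
                  if s["max_latency_ms"] <= required_latency_ms
                  and s["reliability"] >= required_reliability]
    return min(candidates, key=lambda s: s["cost"]) if candidates else None

print(pick_slice(1, 0.99999)["name"])   # machine-critical
print(pick_slice(50, 0.999)["name"])    # enterprise
print(pick_slice(200, 0.9)["name"])     # best-effort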

UPS Invests in Self-Driving Trucks

Could create a considerable change in supply chain operations and improvement.

UPS invests in autonomous truck startup in Bloomberg

UPS has invested an undisclosed sum of money to take a minority stake in the autonomous truck startup TuSimple. UPS' freight business is providing cargo for haul between Phoenix and Tucson on trucks guided by TuSimple's autonomous system.  ... 

Cashierless Stores Advance

Reducing friction in retail to give an edge and decrease costs.

Cashierless Stores Make Inroads in U.S. 
The Wall Street Journal
By John Murawski

U.S. retailers are moving forward with artificial intelligence (AI) systems to track what products shoppers pick up and then automatically bill their accounts when they leave the store. A recent International Data Corp. survey of about 400 retailers from around the world found that 28% are testing or piloting cashierless AI systems. For example, Sam's Club plans to offer AI-powered cashierless shopping later this month at a 32,000-square-foot store in Dallas, Texas. After the AI system is in place, customers will use their smartphone cameras to scan a product; the cloud-based system will use computer vision and machine learning to recognize the products by matching them to a database of stored images. .... "
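The recognition step described above amounts to nearest-neighbor matching of an image embedding against a catalog of stored embeddings. A toy sketch, with made-up three-dimensional vectors standing in for the output of a real vision model:

import numpy as np

# Hypothetical catalog: product name -> embedding produced by a vision model.
catalog = {
    "cereal_box": np.array([0.9, 0.1, 0.3]),
    "milk_carton": np.array([0.2, 0.8, 0.4]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def recognize(query_embedding, catalog):
    # Return the catalog item whose embedding is most similar (cosine).
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(catalog, key=lambda name: cosine(query_embedding, catalog[name]))

scanned = np.array([0.85, 0.15, 0.35])  # what the camera pipeline might emit
print(recognize(scanned, catalog))      # cereal_box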

Friday, August 16, 2019

Echo Auto now in Operation

Just started using Amazon Echo Auto, after waiting nearly a year for the beta. Works well. Previous third-party solutions allowed only limited access to your Alexa infrastructure; the newest version is much more complete, from music manipulation, to calling, to linking to the smart home and navigation. Even buying, if you like. Very cheap too. It is still available by invitation only, not sure why there is this restriction, but you can readily request it. Exploring Skills that deal with in-car retail-related decisions. See more about the capabilities.

Assistant Translation Services by Baidu

One business use of assistants is language translation. While translation today is very good, it's still not contextually perfect. But then neither are people. So what's the risk of misunderstandings? Baidu just announced such a service. Can an element of risk transparency be provided?

Baidu announces simultaneous translation service for business users   By Mike Wheatley in Siliconangle.

Baidu Inc. today announced the availability in beta test of a new “speech-to-speech simultaneous translation service” designed to facilitate communication at business events.

The Chinese technology giant said the service can provide highly accurate, real-time translation of speaker presentations at events in both audio and text.

The way it works is interesting, too, as the translations are delivered directly to people’s personal devices. Users simply download the Baidu translation app, scan a QR code for the specific conference they’re attending, and the translations will be streamed directly to their device.  .... " 

Fujitsu Models and Predicts with Learning

Have met with Fujitsu and was impressed with their analytics work, which also has direct connections to very high-powered computing. At first this looked more like a classical simulation-analytics approach, but I can see how some learning techniques can be integrated into the problem to focus prediction. In fact, any classical analytical technique would benefit from a means that learns over time to adapt to varying goals and contexts, so that everything can then be tagged as learning 'AI'. A good example of that kind of problem? Worth a further look.

AI disaster mitigation technology to predict river flooding with limited data   by Fujitsu in Techxplore. 

Fujitsu today announced the development of a technology that draws on mathematical models built with limited data on rainfall and water levels to create flooding predictions for rivers. The solution leverages Fujitsu Human Centric AI Zinrai, a comprehensive portfolio that encompasses Fujitsu's wide range of AI technologies and techniques, and utilizes a model that incorporates insights from hydrology to produce an AI that achieves predictions with greater precision.

The new technology proves effective even for smaller rivers with limited measurement data or for areas where water level sensors have been newly installed and have yet to accumulate sufficient data. These predictions will offer authorities a vital tool for delivering faster response times and mitigating flood damage in the event of a natural disaster, including in dispatching personnel to affected areas and supporting appropriate decision-making in issuing evacuation advisories.

Fujitsu and Fujitsu Laboratories will continue to work to further perfect this technology through a field trial with local governments across Japan with the aim of delivering a commercial solution during fiscal 2019. With the development of this solution, Fujitsu demonstrates how this and other AI technologies will play an important role in bolstering its efforts to combat the effects of climate change, and contribute to the achievement of greater sustainability in society. ....  "

George Dyson Talks AI in the Wild

A favorite writer has some provocative ideas in The Edge.

AI That Evolves in the Wild
A Talk By George Dyson [8.14.19]

I’m interested not in domesticated AI—the stuff that people are trying to sell. I'm interested in wild AI—AI that evolves in the wild. I’m a naturalist, so that’s the interesting thing to me. Thirty-four years ago there was a meeting just like this in which Stanislaw Ulam said to everybody in the room—they’re all mathematicians—"What makes you so sure that mathematical logic corresponds to the way we think?" It’s a higher-level symptom. It’s not how the brain works. All those guys knew fully well that the brain was not fundamentally logical.

We’re in a transition similar to the first Macy Conferences. The Teleological Society, which became the Cybernetics Group, started in 1943 at a time of transition, when the world was full of analog electronics at the end of World War II. We had built all these vacuum tubes and suddenly there was free time to do something with them, so we decided to make digital computers. And we had the digital revolution. We’re now at exactly the same tipping point in history where we have all this digital equipment, all these machines. Most of the time they’re doing nothing except waiting for the next single instruction. The funny thing is, now it’s happening without people intentionally. There we had a very deliberate group of people who said, "Let’s build digital machines." Now, I believe we are building analog computers in a very big way, but nobody’s organizing it; it’s just happening.  .... '

GEORGE DYSON is a historian of science and technology and author of Darwin Among the Machines and Turing’s Cathedral. George Dyson's Edge Bio Page
[ED. NOTE:] As a follow-up to the completion of the book Possible Minds: 25 Ways of Looking at AI, we are continuing the conversation as the “Possible Minds Project.” The first meeting was at Winvian Farm in Morris, CT. Over the next few months we are rolling out the fifteen talks—videos, EdgeCasts, transcripts.

Assistants at Work

Good though skeptical view of voice assistants at work. I was involved with designing and testing such an effort. Agree that there appears to be relatively little work underway; opportunities exist. In its current state I can see it being used now for relatively narrow purposes, closely integrated with other online employee support capabilities.

Voice Assistants Bring AI to the Workplace in InformationWeek
More enterprise organizations are experimenting with AI-based voice assistants to boost internal efficiencies, but it will be a while before they realize the ROI they seek.

Businesses are using voice assistants to answer rote, repetitive questions faster and cheaper so HR, IT and other departments can focus on higher value tasks. In some cases, businesses are just replacing website and internal portal Frequently Asked Questions (FAQs) pages with simple, deterministically programmed chatbots. In other cases, they're replacing or supplementing what employees have traditionally done with digital counterparts that use machine learning.

"Before one of our clients decided to implement a recruitment voice assistant, HR department employees had to personally process all phone calls from job seekers. They also had to create a new profile for every new candidate and fill in the information manually, which took a lot of time," said Julia Ryzh, chief marketing officer at voice experience platform provider Just AI. "It was [also] hard to tell whether the candidate had already applied for the position in question, since all the necessary pieces of information were stored in different places." .... '

Robo-Ants from Switzerland

Yet more small-scale robotics, with claims of some ability to work together. Note in particular the integration of sensing and autonomous capabilities.

Robot-Ants Can Jump, Communicate, and Work Together
Ecole Polytechnique Fédérale de Lausanne
By Laure-Anne Pessina

Researchers at Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have developed robots inspired by ants that can communicate with each other, assign roles among themselves, and complete complex tasks together. The robots can jump and crawl to explore uneven surfaces, and can quickly detect and overcome obstacles much larger and heavier than themselves. The Tribots are completely autonomous and untethered, and are equipped with infrared and proximity sensors for detection and communications purposes. Said EPFL researcher Jamie Paik, "With their unique collective intelligence, our tiny robots can demonstrate better adaptability to unknown environments; therefore, for certain missions, they would outperform larger, more powerful robots." ... ' 

Alexa Enabled Gadgets for Teaching and Fun

Ultimately assistants are about engagement. Sometimes to get work done, sometimes for fun, sometimes to engage with others; some narrow goals, some very broad. Right now mine organizes and delivers my music, controls aspects of my smart home, and manages a conversation to deal with commands and responses, delivering, or not, if it has the intelligence. Teaching sounds like a good general goal, and being driven by fun, or by learning progress with game-like accomplishment, provides strong motivation. Can it be done? Look forward to seeing the attempt.

Here is Amazon's announcement of this effort for developers:
Introducing Custom Interfaces, Enabling Developers to Build Dynamic Gadgets, Games, and Smart Toys with Alexa
August 15, 2019
By Karen Yue ....

Amazon is trying to make Alexa-enabled gadgets more fun
The new API could let Alexa teach you to play the piano.
 By Igor Bonifacic, @igorbonifacic in Engadget

A variety of new Alexa-enabled gadgets, games and smart toys could soon flood the market thanks to Amazon's latest developer tool. On Thursday, the company detailed "Custom Interfaces," a new API within its Alexa Gadgets Toolkit SDK that allows developers to create interactions between Alexa and their own internet-connected products. Amazon is positioning Custom Interfaces primarily as a way for third-party manufacturers to leverage Alexa to create fun and quirky experiences.

For example, Amazon envisions the manufacturer of a WiFi-enabled indoor basketball hoop using Custom Interfaces to create an experience in which Alexa chimes in anytime you score a basket. In another potential scenario, Amazon suggests Alexa could work in tandem with a smart mini keyboard to teach you how to play the piano. Of course, as with any API, its potential is only as good as the developers working with it. It will be up to them to create interesting experiences with Amazon's new tool, something that's not a given. ... " 

Thursday, August 15, 2019

Bain Surveys Digital Transformation Leaders

Been on my mind lately ... in the midst of reading Tony Saldanha's recent book on the topic and noting how it applies to these survey findings.  Instructive.

Learning from the Digital Leaders
Our annual executive survey throws a spotlight on digital leaders and the behavior that sets them apart.    By Nate Anderson, Dunigan O'Keeffe and Ouriel Lancry   in Bain

Executing a digital transformation is a top priority for many companies, yet success remains elusive for most. Only 12% of corporate transformations achieve their targets, and digital transformations are even more challenging.

We recently surveyed more than 1,200 global executives to identify the few key factors that differentiate digital leaders from digital laggards.

Digital leaders are fast, particularly at decision making and execution, enabling them to experiment and pivot when necessary. They identify and organize around the technology trends that matter most. 
And they’re adept at orchestration, moving effectively from experimentation to full-scale transformation.    ... "

(Complete survey and analysis) 

Virtualitics for Data Science with VR

Have often mentioned Virtualitics here.

NVIDIA: the AI podcast with Aakash Indurkhya

Listen to Virtualitics' very own Aakash Indurkhya (Head of Machine Learning Projects) featured on "The AI Podcast" brought to you by NVIDIA. Tune in to hear how Virtualitics utilizes AI to "Demystify Data Science with VR".   Podcast.  .... "

and also

AI+VR  Through the Eyes of a Marketer July 17, 2019 By Amy Gunzenhauser  .... 

Free eBook on TensorFlow in the Enterprise

Should you use TensorFlow in your enterprise? via O'Reilly
Find out with this free ebook

TensorFlow World is where you stay ahead on the latest in TensorFlow & machine learning. Join us October 28-31 in Santa Clara.

The question is no longer whether your enterprise will use deep learning (you will), but how involved your company will be with the technology.

If your company is adopting deep learning, this short ebook, Considering TensorFlow for the Enterprise, will help you navigate the initial decisions you must make—from choosing a deep learning framework to integrating deep learning with the other data analysis systems already in place—to ensure you're building a system capable of handling your specific business needs.

And it’s yours, free. ...

No Train Safety Yet on Navigation Apps

Useful for integration with in-vehicle systems.

Navigation apps still lack railroad safety info the NTSB requested
Apple, Google and Microsoft haven't complied with a 2016 safety recommendation.

By Amrita Khalid, @askhalid in Engadget

Your phone's GPS app can alert you when you approach a speed trap or accident -- but will remain silent if you come upon a dangerous railroad crossing. Politico reported that Google, Apple and Microsoft have yet to add information on US railroad crossings to their navigation apps, almost three years after a request from The National Transportation Safety Board (NTSB). The agency asked several tech companies to update their map apps after a 2015 incident in which a truck driver following Google Maps turned onto the railroad tracks and caused a fatal collision. So far, only Garmin and TomTom -- which both make GPS devices -- have complied with the NTSB's demands.... "

UK NHS Sets up National AI Lab

More AI aimed at Healthcare.

NHS to Set Up National AI Lab

BBC News
James Gallagher
August 8, 2019

The U.K. National Health Services (NHS) is launching a national artificial intelligence (AI) laboratory to enhance the care of patients and facilitate research. In addition, the British government will spend £250 million on boosting the role of AI within the health industry. AI has the power to improve care, save lives, and ensure doctors have more time to spend with their patients. For example, clinical trials have shown that AI is as good as leading doctors at identifying lung cancer, skin cancer, and more than 50 eye conditions from scans.  ... "

New ACM IOT Publication

ACM Transactions on Internet of Things (TIOT)

 ACM Transactions on Internet of Things (TIOT) is a new ACM journal that will publish novel research contributions and experience reports in several research domains whose synergy and interrelations enable the IoT vision. TIOT focuses on system designs, end-to-end architectures, and enabling technologies, and on publishing results and insights corroborated by a strong experimental component.  The submission site is now open and the first issue is expected for publication in the second half of 2019.

Topics relevant to the journal are:

Real-world applications, application designs, industrial case studies and user experiences of IoT technologies, including standardization and social acceptance

Communication networks, protocols, and interoperability for IoT

IoT data analytics, machine learning, and associated Web technologies

Wearable and personal devices, including sensor technologies

Human-machine and machine-machine interactions

Edge, fog, and cloud computing architectures

Novel IoT software architectures, services, middleware as well as future Internet designs

Fusion of social and physical signals in IoT services

Non-functional properties of IoT systems, e.g., dependability, timeliness, security and privacy, robustness

Testbeds for IoT

All submissions are expected to provide experimental evidence of their effectiveness in realistic scenarios (e.g., based on field deployments or user studies), and the related datasets. The submission of purely theoretical or speculative papers is discouraged, and so is the use of simulation as the sole form of experimental validation.

Experience reports about the use or adaptation of known systems and techniques in real-world applications are equally welcome, as these studies elicit precious insights for researchers and practitioners alike. For this type of submissions, the depth, rigor, and realism of the experimental component are key, along with the analysis and expected impact of the lessons learned.   .... "

Wednesday, August 14, 2019

Google Assignments and More

Here is something I was not expecting. Google has set up a system called 'Assignments' which works in combination with Learning Management Systems (LMS). Was unaware they were doing this. It's workflow management for teachers. I can no longer use this directly, but there were times when it would have been very useful. It can do the regular grading work ... and even some originality checking. Templates for feedback. Might it even be used for business information crowdsourcing or crowd checking? Thinking that.

Google Assignments, your new grading companion
Instructors lose valuable time doing cumbersome tasks: writing the same comment on multiple essays, returning piles of paper assignments, and battling copy machine jams. These frustrations are most often felt by instructors with the highest teaching workloads and the least time. For the last five years, we’ve been building tools—like Classroom and Quizzes in Google Forms—to address these challenges. Now you can take advantage of these tools if you use a traditional Learning Management System (LMS). 

Assignments brings together the capabilities of Google Docs, Drive and Search into a new tool for collecting and grading student work. It helps you save time with streamlined assignment workflows, ensure student work is authentic with originality reports, and give constructive feedback with comment banks. You can use Assignments as a standalone tool and a companion to your LMS (no setup required!) or your school admin can integrate it with your LMS. Sign up today to try Assignments.

If you're one of the 40 million people using Classroom: you've got the best of Assignments already baked in, including our new originality reports. For everyone else, Assignments gives you access to these features as a complement to your school’s LMS.   ... "

Governance in the Age of AI

Have been asked to look at governance issues (see the governance piece below from yesterday): the data being used, the algorithms, and the resulting decisions and their results as measured against goals. Here is an HBR podcast of interest:

Governance in the Age of AI

Artificial intelligence is a powerful technology with capabilities that are open to use by state and non-state actors. In this conversation Azeem Azhar, De Kai, and Joanna Bryson discuss how governance should adapt as our institutions are challenged by unintended consequences of the technology and its creators.

Joanna, De Kai, and Azeem also discuss:

- Why rule-based systems fall short of protecting us against the unintended consequences of technology.
- The value of cross-cultural dialogue in establishing common values to guide the governance of AI globally.
- The role of the leading technology companies in regulating the industry.  .... "

Smart Glasses of the Future

Considerably less intrusive smart glasses by Snap.     Will this be the future?   I think enough interesting social functionality will draw people to smart glasses.  Replacing bulky phones will also help.

This is the computer you’ll wear on your face in 10 years? in Fastcompany.

Snap’s new Spectacles 3 camera glasses are an important step toward the augmented reality glasses that could one day replace the smartphone as our go-to computing device.  ... "

Observability

A new term to me, but apparently in use for understanding how a system operates in order to work with it or debug it. The description makes logical sense; the details are more complex.

Software Engineering: Observability

Observability — A 3-Year Retrospective    by Charity Majors in Thenewstack

Observability, the development approach or moreover the “movement’” is about three years old and Charity Majors, one of the early pioneers in the field, has decided to take a step back, pan out and “observe” how far it has come. In this article, she takes a closer look at why it formed and why other approaches and methods fall short, citing important contributors along the way. She explains the criticality of adopting observability for any engineering team building and maintaining complex, distributed systems. .....

Like so many other terms in software engineering, “observability” is a term borrowed from an older physical discipline: in this case, control systems engineering. Observability is the mathematical dual of controllability.

“Less formally, this means that one can determine the behavior of the entire system from the system’s outputs. If a system is not observable, this means that the current values of some of its state variables cannot be determined through output sensors.”  .... "  
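For reference, the standard control-theory statement behind that quote, for a discrete-time linear system, is:

x_{k+1} = A x_k, \qquad y_k = C x_k, \qquad x_k \in \mathbb{R}^n

\mathcal{O} = \begin{bmatrix} C \\ CA \\ CA^{2} \\ \vdots \\ CA^{n-1} \end{bmatrix}, \qquad \text{observable} \iff \operatorname{rank}(\mathcal{O}) = n

That is, the system is observable exactly when the initial state (and hence the whole state trajectory) can be recovered from the output sequence alone; software observability borrows the same idea of inferring internal state from external outputs.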

Whitepaper: Data Governance, Knowledge Graphs

Continue to follow the problem of how the enterprise can most effectively support AI. Just downloaded the following free white paper from TopQuadrant that addresses this:

https://www.topquadrant.com/knowledge-assets/whitepapers/

Are You Considering How AI May Help Your Information Management?

Then you will want to learn about the synergistic relationship between Data Governance, Knowledge Graphs, and enterprise applications of AI and Machine Learning (ML)  .... "

Language of Mind

More about AI ... an interview. Toward artificial general intelligence (AGI) and what it may need, and mean.

The Language of Mind, In Edge
A Talk By David Chalmers [8.8.19]

Will every possible intelligent system somehow experience itself or model itself as having a mind? Is the language of mind going to be inevitable in an AI system that has some kind of model of itself? If you’ve just got an AI system that's modeling the world and not bringing itself into the equation, then it may need the language of mind to talk about other people if it wants to model them and model itself from the third-person perspective. If we’re working towards artificial general intelligence, it's natural to have AIs with models of themselves, particularly with introspective self-models, where they can know what’s going on in some sense from the first-person perspective.  ...  '

Geopolitics of AI

Continuing to see more AI development from China. This Scientific American piece provides a non-technical overview.

The Geopolitics of Artificial Intelligence
As the U.S. and China vie for global influence, AI will be central to the balance of power
By Abishur Prakash on July 11, 2019

Something stood out of the ordinary during a speech by China’s president, Xi Jinping, in January 2018. Behind Xi, on a bookshelf, were two books on artificial intelligence (AI). Why were those books there? Similar to 2015, when Russia “accidentally” aired designs for a new weapon, the placement of the books may not have been an accident. Was China sending a message?

If it was, perhaps, it was this: For decades, China has been operating in an Americanized-world. To escape, China is turning to AI.

 Implications of China's AI

By 2030, China wants to be the world’s leading AI power, with an AI industry valued at $150 billion. How does China plan to achieve this?

Take health care. Ping An, a large Chinese conglomerate, has unveiled AI doctors. It has launched clinics known as “One-Minute Clinic,” where AI doctors diagnose symptoms and propose medications. Within three years, Ping An plans to build hundreds of thousands of these clinics across China.

Could China export 10,000 AI doctors to Russia? Such a move would transform geopolitics.

The biggest impact is that it would shift the China-Russia relationship, from energy and currency, areas that the U.S. can influence, to Chinese AI, over which the U.S. has no control. The AI doctors may make Russian society more China-centric, and future generations in Russia may be more familiar with Ping An than with IBM or Intel.

There are other geopolitical implications too.  .... " 

Tuesday, August 13, 2019

Baidu Open Sources its Language Understanding

In O'Reilly:

Baidu has open-sourced its natural language understanding framework, ERNIE 2.0 (enhanced representation through knowledge integration) and the ERNIE 2.0 model, a pretrained language-understanding model that achieved state-of-the-art results and outperformed BERT and the recent XLNet in 16 NLP tasks in both Chinese and English.  ... "

Nike Links RFID to Predictive Analytics for Inventory

Good example of sensors and analytics and transparency in the supply chain.

Nike to marry predictive analytics and RFID to optimize inventory performance
by Tom Ryan in Retailwire plus expert commentary.

Nike Inc. has acquired Celect, a predictive analytics firm founded by MIT professors, to accelerate its ability to match inventories to consumer needs.

Celect’s cloud-based analytics platform allows retailers to optimize inventory across an omnichannel environment through hyper-local demand predictions. Celect’s team will be integrated into Nike’s operations. Its co-founders will continue as tenured professors at MIT, consulting Nike on an ongoing basis.

“As demand for our product grows, we must be insight-driven, data optimized and hyper-focused on consumer behavior,” said Eric Sprunk, Nike’s COO, in a statement. “This is how we serve consumers more personally at scale.”

In a column for Retail Touchpoints from July, Andrea Morgan-Vandome, Celect’s chief marketing officer, wrote that advancements in artificial intelligence and machine learning now provide retailers with a more accurate view of demand across channels to choose the best fulfillment strategy based on product availability, likely demand, capacity constraints, shipping costs, delivery timing and other factors.

At the store level, such insights would reveal that a location seeing high inventory turnover wouldn’t be able to cover walk-in demand if it was also fulfilling online orders. Vice versa, a store seeing slower turnover risks becoming overstocked if it didn’t support online orders.

She wrote, “For each fulfillment decision that needs to be made, advanced optimization can account for the overall margin profitability and customer satisfaction by identifying the immediate payoff versus the long-term opportunity cost — instantly.”   .... ' 
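Her point about weighing the immediate payoff against the long-term opportunity cost can be sketched with a toy scoring rule. The option names and numbers below are hypothetical and are not Celect's actual model; the sketch only shows the shape of the trade-off.

# Toy fulfillment decision for one online order: score each source by its
# immediate margin minus shipping minus the expected opportunity cost of
# pulling the unit from a location likely to sell it at full price anyway.
options = [
    # (source, margin_now, prob_walk_in_sale, walk_in_margin, ship_cost)
    ("warehouse",  18.0, 0.00, 22.0, 6.0),
    ("store_busy", 18.0, 0.85, 22.0, 3.0),   # high-turnover store
    ("store_slow", 18.0, 0.15, 22.0, 3.5),   # overstock risk if it holds the unit
]

def score(option):
    source, margin_now, p_walk_in, walk_in_margin, ship_cost = option
    return margin_now - ship_cost - p_walk_in * walk_in_margin

for opt in options:
    print(opt[0], round(score(opt), 2))
print("fulfill from:", max(options, key=score)[0])  # warehouse in this example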

More Aims at Explainable AI

More interesting moves in providing explanation for AI, here from Georgia Tech. As in the previous recent post, this asks the question: what does explainable mean? And to whom, in what context? Simplicity is good, if everyone agrees on its truth and value.

A Breakthrough in Explainable AI   By Joe Dysart in CACM
A new artificial intelligence (AI) agent offers easy-to-understand English explanations of the AI's analysis and decisions.

Such tools of explanation are considered critical by those working in AI who fear users may be reluctant to embrace AI applications that make recommendations whose rationales are shrouded in mystery.

"As AI pervades all aspects of our lives, there is a distinct need for human-centered AI design that makes black-boxed AI systems explainable to everyday users," says Upol Ehsan, a doctoral student in the School of Interactive Computing at the Georgia Institute of Technology (Georgia Tech) and lead author of the study.  "Our work takes a formative step toward understanding the role of language-based explanations and how humans perceive them."

Adds Devi Parikh, an assistant professor in Georgia Tech's School of Interactive Computing, "Just like human teams are more effective when team members are transparent and can understand the rationale behind each others decisions or opinions, human-AI teams will be more effective if AI systems explain themselves and are somewhat interpretable to a human."

Eshan and his research team set out to solve the explainable AI problem by developing an AI agent that could offer easy-to-understand explanations to humans in certain settings.

For their research, the team—including researchers from Georgia Tech, Cornell University, and the University of Kentucky—decided to create an AI agent that would analyze and explain moves in the video game "Frogger." The game is an ideal choice for developing an AI agent, given its simplicity: the entire goal of the game is to move an animated frog across a screen, enabling it to dodge oncoming vehicles and other animated hazards.

The researchers trained their AI agent by first asking gamers to play Frogger as they explained the rationale behind each action they took, move by move."  ... ' 

NVidia Releases Advanced Chatbot Software

We built chatbots in the '80s that worked very well, though never well enough for general acceptance, use, and updating. Now Nvidia is releasing new capabilities which are apparently impressive. Continuing to follow.

Nvidia just made it easier to build smarter chatbots and slicker fake news

Chipmaker Nvidia is betting that AI’s language skills will advance rapidly—it’s releasing a powerful tool for putting together chatty programs.    by Will Knight in TechnologyReview

Artificial intelligence has made impressive strides in the last decade but machines are still lousy at comprehending language. Just try engaging Alexa in a bit of witty banter. 

Nvidia, the company that makes the computer chips that power many AI algorithms, thinks this is about to change and is looking to capitalize on what it sees as a field that's about to explode. 

The chipmaker is releasing software today that makes it easier to build AI programs capable of using language more gracefully on its hardware. The new code could accelerate the development of new language algorithms, and make chatbots and voice assistants snappier and smarter.  ... " 


See much more from NVIDIA in the considerable article below, with both descriptive and technical details and links to development infrastructure.


NVIDIA Clocks World’s Fastest BERT Training Time and Largest Transformer Based Model, Paving Path For Advanced Conversational AI

By Shar Narasimhan | August 13, 2019  Tags: DGX SuperPOD, Machine Learning and AI, Natural Language Processing, Natural Language Understanding, Speech, speech recognition, TensorRT
NVIDIA DGX SuperPOD trains BERT-Large in just 53 minutes, and trains GPT-2 8B, the largest Transformer Network Ever with 8.3Bn parameters   .... " 
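
NVIDIA's results above use its own DGX training stack. Purely as an illustration of what a BERT-style encoder produces, here is a minimal sketch using the open-source Hugging Face transformers package (assuming a recent version and the standard bert-base-uncased checkpoint), not NVIDIA's tooling.

# Minimal sketch using the open-source Hugging Face 'transformers' package (not
# NVIDIA's DGX training stack): encode a sentence with a pretrained BERT model.

from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Conversational AI is advancing quickly.", return_tensors="pt")
outputs = model(**inputs)

# One contextual vector per input token; the [CLS] token (index 0) is often used
# as a sentence-level representation for downstream classification.
print(outputs.last_hidden_state.shape)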


Smartphone Biological Virus Scanner

Not computer viruses, but biological viruses. I like the idea of physical sensors being attached to the now very powerful devices in the form of smartphones, allowing the capability to be readily put in many hands.

Smartphone virus scanner is not what you think

A new portable device lets smartphones count real biological viruses
The current leading method to assess the presence of viruses and other biological markers of disease is effective but large and expensive. It is prohibitively difficult for use in many situations, especially due to certain economic and geographic factors. So researchers created and tested an alternative miniaturized system that makes use of low-cost components and a smartphone. Researchers hope the system could aid those who tackle the spread of diseases.

A virus scanner for a smartphone might not sound too exciting at first, but this virus scanner doesn’t search for the latest malware; it scans biological samples for real viruses. It is a portable, low-cost, battery-powered device and is the brainchild of researcher Yoshihiro Minagawa from the University of Tokyo. It was tested with viruses but could also detect other biological markers.  ... " 

Microsoft: How HoloLens is used by Airbus

Good example of the use of mixed/augmented reality approaches in heavy industry. Expect to see more of these emerge. 

https://www.youtube.com/watch?v=lxjC4Z05qh8  Video overview of use

Airbus drives innovation and accelerates production with Azure mixed reality and HoloLens 2


Airbus—a pioneer in aerospace technology and leading designer and manufacturer of commercial and military aircraft, satellites, and launch vehicles—has made a commitment to transform traditional industrial processes through use of mixed reality. The company has partnered with Microsoft to use Azure mixed reality and HoloLens 2 as a way to accelerate the design and manufacture of aircraft, while increasing safety and functionality and ensuring professional development opportunities for employees. Intelligent services like Azure Remote Rendering are changing the way complex ideas are communicated, and the ability to build multi-user, spatially aware applications with Azure Spatial Anchors promotes greater collaboration while increasing quality, safety, and security. Airbus is using Azure mixed reality to unlock the full potential of HoloLens 2 and, as a result, expects to reduce design validation time by 80 percent and accelerate complex tasks during assembly by 30 percent.  

Learn more: 
Azure Mixed Reality:    https://azure.microsoft.com/en-us/topic/mixed-reality/
HoloLens 2: https://www.microsoft.com/en-us/hololens 
Azure Cognitive Services:  https://azure.microsoft.com/en-us/services/cognitive-services/

Daily Wireless Guide

Brought to my attention:

2019 Internet Statistics, Trends & Data
Looking for the most up-to-date internet usage statistics for 2019?

You’ve come to the right place.  Our team has spent hundreds of hours putting together this massive guide just for you.

Let’s dive right in!    By Luke Pensworth 

Monday, August 12, 2019

IBM, Tata Join Hedera Hashgraph Council

More indication that IBM is serious about blockchain. Also a link with Tata.

From Reuters.

IBM, India's Tata join U.S. tech platform's governing council.   International Business Machines Corp and Tata Communications have joined the governing council of Hedera Hashgraph, a distributed public ledger platform that aims to be faster and runs at a larger scale than current blockchain technologies, a top official at Hedera said .... ' 

And from Coindesk: 

" ... Hedera claims its flavor of distributed ledger technology (DLT), which works differently than blockchains, can facilitate micropayments and distributed file storage, support smart contracts and will eventually allow private networks to plug into the public one to take advantage of its transaction ordering mechanism. .... "

See also https://www.hedera.com/   The proof-of-stake architecture makes considerable claims of speed, scale, security, low cost, and quick confirmations, using a method of virtual voting over directed acyclic graphs. See also https://en.wikipedia.org/wiki/Hashgraph and a YouTube visual explanation. Much more at their site.
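
Purely illustrative, not Hedera's implementation: the core data structure behind a hashgraph is a directed acyclic graph of gossip events, each of which references two parent events by hash, as in this small Python sketch.

# Illustrative only, not Hedera's code: in a hashgraph-style DAG, each gossip event
# carries the hashes of two parents (the creator's previous event and the event
# received from another node), so the full history forms a directed acyclic graph.

import hashlib
import json

def event(creator, payload, self_parent=None, other_parent=None):
    body = {"creator": creator, "payload": payload,
            "self_parent": self_parent, "other_parent": other_parent}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return digest, body

h1, e1 = event("alice", "tx1")
h2, e2 = event("bob", "tx2")
h3, e3 = event("alice", "tx3", self_parent=h1, other_parent=h2)  # Alice gossips with Bob

print(h3[:8], "parents:", e3["self_parent"][:8], e3["other_parent"][:8])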

FDA Seeks Virtual Heart

Much akin to things like virtual twins and high-quality simulation systems. I remember work at UCSD that aimed at constructing whole-body systems containing models of key components; we provided some support. Might be a good place to start based on learnings from then.

FDA Seeks Virtual Heart to Test Medical Devices 
NextGov.com
By Brandi Vincent
August 5, 2019

The U.S. Food and Drug Administration (FDA) is working with French software company Dassault Systemes to develop a virtual model of the human heart for evaluating new medical devices and treatments. The project brief said, "The FDA intends to develop a generic medical device that will be virtually implanted in the whole human heart computational model." Researchers will design, build, and physically and virtually test this device on virtual populations and devise new techniques for integrating digital evidence from simulations with physical evidence from actual patients. The FDA's ultimate goal is to apply digital evidence and computer models to accelerate regulatory evaluation and approval, ahead of the market rollout of new medical solutions.  ... "

 ... See related work at UCSD and Harvard (Wyss Institute for Bio Inspired Engineering) in a 2012 post.  These used chip level simulation.   ....

Visa Tests AI to Detect Fraud

An area I always thought would be ideal for this. Considering the idea now in an emerging architecture that would provide both the data for such an analysis and the structure of existing interactions. The card companies have the data.

Visa to Test Advanced AI to Prevent Fraud 
The Wall Street Journal
By Sara Castellanos

Visa is launching a platform to help its engineers quickly test artificial intelligence (AI) algorithms designed to detect and prevent credit card fraud. The platform is an example of the broader financial services industry trend toward using AI to detect patterns in transactions that could be a sign of criminal behavior. The new platform, which is cloud-based, will test algorithms that use deep learning to sift through data to find anomalies in an effort to prevent fraudulent transactions that involve billions of dollars every year. Platform users will be able to access a secure dataset made up of Visa's real-time card transactions in a way that allows them to test algorithms on a subset of the data before deploying it widely.  ... "
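
Visa's platform internals aren't described here, but as a generic sketch of the underlying idea, anomaly detection over transaction features can be illustrated with scikit-learn's IsolationForest on synthetic data.

# Not Visa's platform -- a generic sketch of transaction anomaly detection with
# scikit-learn's IsolationForest, flagging outliers in simple numeric features.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: amount in dollars, hour of day, distance from home in km (all synthetic).
normal = np.column_stack([rng.gamma(2.0, 30.0, 1000),
                          rng.integers(8, 22, 1000),
                          rng.gamma(1.5, 5.0, 1000)])
suspicious = np.array([[4200.0, 3, 8500.0]])   # large amount, 3 a.m., far from home

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))   # -1 means the transaction is flagged as anomalous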

More on Throttle vs Touchscreen Control

This came to my attention recently: what's the best kind of basic mobility control?

The US Navy says no to touchscreens—maybe automakers should too
Overly complex touchscreen systems were blamed in recent NTSB report.

By Jonathan M. Gitlin in Ars Technica .... ' 

IBM Explainable AI Toolkit

ZDNet poses a good question: What's a good explanation? It's a matter of context. Algorithms are often brought up because they seem simple: just a short statement of 'truth' that can be inserted anywhere. But they are usually based on considerable context, data, and metadata, and can be much less than explainable. In practice, the shorter they are, the less clear they usually are to real decision makers.

IBM offers explainable AI toolkit, but it’s open to interpretation in ZDNet
IBM’s latest foray into making A.I. more amenable to the world is a toolkit of algorithms that can be used to explain the decisions of machine learning programs. It raises a deep question: Just what is an explanation, and how can we find ones that we will accept?... 

By Tiernan Ray  .... '
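
IBM's toolkit (AI Explainability 360) bundles many algorithms; rather than assume its exact API, here is a minimal sketch of one common style of post-hoc explanation, global feature importance via permutation, using scikit-learn rather than the IBM toolkit itself.

# Not the IBM toolkit's API -- a minimal sketch of one common explanation style:
# permutation feature importance, which reports how much shuffling each feature
# degrades a trained model's held-out accuracy.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=0)

# Print the five features whose shuffling hurts accuracy the most.
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")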

Voice Gaming

Will voice control of gaming take us to better voice-oriented business?

" ... Alexa is powering new games where you control the action with your voice
Games played with voice commands are catching on, and now Amazon is betting on the nascent industry.   by Tanya Basu

@PretzelVoice (to @tanyabasu @techreview): Had a pleasure discussing my views on voice games with the thoughtful Tanya Basu of MIT Technology Review.

Games you can play and control with your voice? That's the future of gaming—and @amazon thinks it could be big. .... 

Reporting from @VoiceSummitAI:
 https://www.technologyreview.com/s/614006/alexa-tell-me-about-the-rise-of-voice-powered-gaming/

AI and Deep Learning Podcasts

Just listened to and subscribed to these podcasts: accessible, big-picture conversations on state-of-the-art topics and practical applications of AI. By Lex Fridman. Worth following.

https://lexfridman.com/ai/

MIT Artificial Intelligence Podcast
The Artificial Intelligence (AI) podcast is a series of accessible, big-picture conversations at MIT and beyond about science, technology, and the nature of intelligence.  ... "

Sunday, August 11, 2019

Physical Throttles vs Touchscreen

Always thought there was something unnatural about touchscreen controls where there had been a natural connection to the physical actions being controlled. This seems to confirm that. 

Navy Reverting DDGs Back to Physical Throttles, After Fleet Rejects Touchscreen Controls
By: Megan Eckstein

SAN DIEGO – The Navy will begin reverting destroyers back to a physical throttle and traditional helm control system in the next 18 to 24 months, after the fleet overwhelmingly said they prefer mechanical controls to touchscreen systems in the aftermath of the fatal USS John S. McCain (DDG-56) collision.

The investigation into the collision showed that a touchscreen system that was complex and that sailors had been poorly trained to use contributed to a loss of control of the ship just before it crossed paths with a merchant ship in the Singapore Strait. After the Navy released a Comprehensive Review related to the McCain and the USS Fitzgerald (DDG-62) collisions, Naval Sea Systems Command conducted fleet surveys regarding some of the engineering recommendations, Program Executive Officer for Ships Rear Adm. Bill Galinis said.

“When we started getting the feedback from the fleet from the Comprehensive Review effort – it was SEA 21 (NAVSEA’s surface ship lifecycle management organization) that kind of took the lead on doing some fleet surveys and whatnot – it was really eye-opening. And it goes into the, in my mind, ‘just because you can doesn’t mean you should’ category. We really made the helm control system, specifically on the [DDG] 51 class, just overly complex, with the touch screens under glass and all this kind of stuff,” Galinis said during a keynote speech at the American Society of Naval Engineers’ annual Fleet Maintenance and Modernization Symposium.
“So as part of that, we actually stood up an organization within Team Ships to get after bridge commonality.” .... '

Leading Large Scale Change

Not about digital, which I see is not even mentioned.  Why not?  But I like some of the points made:

A Better Way to Lead Large Scale Change in McKinsey

In Beyond Performance 2.0 (John Wiley & Sons, 2019), McKinsey senior partners Scott Keller and Bill Schaninger draw on their 40-plus years of combined experience, and on the most comprehensive research effort of its kind, to provide a practical and proven “how to” guide for leading successful large-scale change. This article, drawn from the book’s opening chapter, provides an overview of this approach and explains why it works. Future articles will deal with specific topics such as uncovering and shifting limiting mind-sets during change efforts, as well as how to create the ownership and energy needed to succeed.

Neville Isdell took the helm as CEO of Coca-Cola during troubled times. In his words, “These were dark days. Coke was losing market share. Nothing, it seemed—even thousands of layoffs—had been enough to get the company back on track.” Its total shareholder returns stood at minus 26 percent, while its great rival, PepsiCo, delivered a handsome 46 percent. Isdell was clear-eyed about the challenge ahead; as he put it, “There were so many problems at Coke, a turnaround was risky at best.”

Isdell had a clear sense of what the company needed: to capture the full potential of the trademark Coca-Cola brand, develop other core brands in noncarbonated soft drinks, build wellness platforms, and create adjacent businesses. These weren’t new ideas, and Isdell’s predecessors had failed to make change happen at scale. No matter which direction he set, the company couldn’t make progress until it improved its declining morale, deficient capabilities, strained partnerships with bottlers, divisive politics, and flagging performance culture.  ... " 

Consider Process too for Digital Transformation

A view from the APQC domain. As an answer to this, see a recent book on the subject I also mentioned here: Why Digital Transformations Fail by Tony Saldanha, with lots of real, recent examples from the big enterprise. Some good points made below.

Why You Need Process For Digital Transformation
By Holly Lyke-Ho-Gland  

I recently spoke with Kevin De Pree, vice-president of Rand Group, to discuss why organizations are diving into digital, the common challenges related to digital transformations, and ways to help overcome institutional barriers that stymie so many digital efforts.

What are the catalysts that start organizations on their digital transformation journey?

There are several reasons organizations start their digital transformations—ranging from creating sustainable growth by better allocating the time of the existing staff to finding new product or market opportunities. Whatever the explicit reason though, it typically boils down to one or more of these three reasons:  increase revenue,   decrease costs, and reduce risk.

For example, one oil & gas client conducted a digital undertaking to create a digital “field ticket”, their invoice. Previously, approval for invoice payments required a physical signature on paper from the customer. There were several benefits to digitizing the field tickets. Digitized information couldn’t get lost, it created more control over the timing of payments, dramatically reducing Days Sales Outstanding (DSO), and it reduced job closeout cycle time. While the initial reason for the project was to reduce paper and digitize information, the drivers behind it were reducing risk (both in losing the paperwork and in managing the payout schedule) and reducing costs by improving cycle time.
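
Since the example leans on Days Sales Outstanding, here is a quick worked version of the metric (numbers are illustrative only).

# Days Sales Outstanding (DSO): how long, on average, receivables take to convert
# to cash. The figures below are illustrative only.

def dso(accounts_receivable, total_credit_sales, days_in_period=365):
    return accounts_receivable / total_credit_sales * days_in_period

print(round(dso(1_200_000, 9_000_000)))   # ~49 days; digitized field tickets aim to shrink this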

In addition to these three drivers, organizations are looking for ways to better leverage their existing staff through operational efficiencies. This driver has been the major one we’ve seen over the last two to three years and has been a result of a labor shortage of experienced resources.

You mentioned that digital efforts are difficult to achieve. Where do organizations typically go wrong with their digital transformation efforts?

Most organizations are not designed for digitalization. They suffer from organizational silos, complex processes, and fragmented systems. This complexity and fragmentation mean they are typically only looking at one piece of an end-to-end process when they conduct digital projects, which ultimately makes it difficult to make improvements with the customer—either external or internal—experience in mind. Also, because they are looking at their pre-existing processes their automation efforts digitize the process as is, including unnecessary complexities or inefficiencies. ... " 

Should Amazon Upskill its Employees?

Retention, improvement, and modeling of the HR asset.

Will Amazon’s Plan to ‘Upskill’ Its Employees Pay Off?

Wharton’s Matthew Bidwell and NYU’s Ari Ginsberg discuss Amazon’s $700 million plan to retrain its workforce.

Prime Day was a big success for Amazon this year: The two-day online shopping event held in mid-July and featuring special discounts netted higher sales than last year’s Black Friday and Cyber Monday combined, the company said on July 17.  But that wasn’t the only positive publicity Amazon received this month: On July 11, the company announced it is launching an ambitious $700 million retraining program to create “pathways to careers” for its employees in areas including health care, machine learning, manufacturing, robotics, computer science and cloud computing.

The six-year, $700 million effort covers about 100,000 employees, or about a third of Amazon’s U.S. workforce of nearly 300,000, and works out to about $1,200 annually for each employee. (That contrasts with a $500 spend on each employee for training by large employers with 10,000 workers or more that were surveyed by the Association for Talent Development, The Wall Street Journal reported.) The mostly free program does not require employees to stay on at Amazon; some programs pay 95% of the costs for tuition and textbooks, capped at $12,000 per employee over four years.
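
A quick check of the per-employee arithmetic quoted above; the $1,200 figure is a rounded approximation.

# Quick check of the per-employee arithmetic quoted above.
total_spend = 700_000_000        # dollars over the whole program
years = 6
employees = 100_000

per_employee_total = total_spend / employees          # 7,000 dollars over six years
per_employee_per_year = per_employee_total / years    # about 1,167 dollars per year
print(per_employee_total, round(per_employee_per_year))   # roughly the $1,200 cited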

The move will make it easier for Amazon to hire and retain employees, gain a competitive edge over rivals, and it could help to improve its image, said experts at Wharton and New York University. ...

Saturday, August 10, 2019

AI and Taxes

Reading the rest to determine how this is done: via algorithms determined by machine learning. Note also the insertion of 'jurisdictional rules'.

Deloitte uses AI to transform indirect tax recovery

Tax technology spotlight: CognitiveTax Insight™ solution uses machine learning and analytics to transform the tax office, saving time and reducing overpayments

NEW YORK, Aug. 8, 2019 /PRNewswire/ -- For decades, tax professionals have managed the analysis of complex data for indirect tax recovery through time-consuming statistical analysis and sample sets. Due to the sheer amount of data and complex jurisdictional rules, it was nearly impossible for many companies to analyze all indirect tax transactions. As a result, significant indirect tax refunds may not have been claimed. However, technological breakthroughs of the last few years are making this a challenge of the past. Deloitte has deployed a technology-enabled solution — CognitiveTax Insight™ (CogTax) — that provides an increased, improved, and more efficient analysis of clients' indirect tax data set. CogTax has the ability to analyze the full population, if desired, of a clients' accounts payable transactions compared to a traditional, sampled approach.

The solution not only allows tax professionals to save time and resources but can help companies to proactively avoid overpaying their indirect tax liabilities. CogTax leverages optical character recognition (OCR), along with advanced machine learning algorithms and analytics, to effectively analyze a full population of data and documents to assist clients with indirect tax overpayment recovery and reduce the potential for future over or underpayments.  ... " 
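
Deloitte has not published CogTax internals, but a toy sketch of the basic idea, checking every accounts-payable line against an expected jurisdiction rate rather than sampling, might look like this (the rates and invoices are made up).

# Not CogTax itself -- a generic sketch: compare tax actually charged on each
# accounts-payable line to the expected rate for its jurisdiction and flag likely
# overpayments across the full population rather than a sample.

import pandas as pd

expected_rate = {"NY": 0.08875, "TX": 0.0825, "OR": 0.0}   # illustrative rates only

invoices = pd.DataFrame({
    "invoice_id":  ["A1", "A2", "A3"],
    "state":       ["NY", "TX", "OR"],
    "net_amount":  [1000.0, 500.0, 800.0],
    "tax_charged": [88.75, 55.00, 64.00],
})

invoices["expected_tax"] = invoices["net_amount"] * invoices["state"].map(expected_rate)
invoices["overpaid"] = (invoices["tax_charged"] - invoices["expected_tax"]).round(2)
print(invoices[invoices["overpaid"] > 0][["invoice_id", "overpaid"]])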

Data as Valued Corporate Asset

A topic I have now followed for some time; we even implemented aspects of valuation in the enterprise by starting with semantic structures. Good thoughts at the link. Value is metadata of the most important kind, but also slippery in its use and maintenance. Can we express it in a kind of value chain? Linked to a market? Just now talking about the topic again seriously.

What is Data Value and Should it be Viewed as a Corporate Asset?
By Asha Saxena  in Dataversity

A lot has changed since the 1950s, when the average age of a publicly traded company listed on the S&P 500 has gone down from 60 years to now less than 20 years, according to research from Credit Suisse investor analysts. Companies that are embracing new technologies, automation, Big Data, Machine Learning, and innovation are gaining market share on the list, and are disrupting older legacy businesses that have been slower to adapt and transform to digital changes. In fact, 5 of the 6 biggest companies (Apple, Amazon, Alphabet, Microsoft, Facebook) by market cap valuation are data technology businesses. Companies that understand the true value of their data and leverage it with advanced analytics technologies are seeing continued growth. Forbes contributor Howard Baldwin makes a comparison to prove the point of data’s value:

“Why is Facebook currently valued at $415 billion while United Airlines, a company that actually owns things like airplanes and has licenses to lucrative things like airport facilities and transoceanic routes between the U.S. and Asia, among other places, is only worth $24 billion?”

These disruptive innovators use Big Data solutions as competitive advantages to reduce operational costs, increase revenue, predict behavior, and improve cash flows. They weave data into every function of the organization. Data is not only being used to record what has transpired, but it is also being used to predict and drive transformative, disruptive changes at alarming speeds. And yet how many companies are listing their data as a tangible corporate asset on their balance sheet?  ... " 

Amazon and Voice Science

Got to this somewhat late, but some useful directions about the nature and value of conversational assistance. Still don't see the richness and context dependence that is required for intelligent interaction.

Expansion of Voice Science

Amazon sends Alexa developers on quest for ‘holy grail of voice science’
By Khari Johnson in VentureBeat

Amazon VP of devices David Limp onstage at the re:Mars conference held at the Aria Resort and Casino in Las Vegas on June 4, 2019

At Amazon’s re:Mars conference last week, the company rolled out Alexa Conversations in preview. Conversations is a module within the Alexa Skills Kit that stitches together Alexa voice apps into experiences that help you accomplish complex tasks.

Alexa Conversations may be Amazon’s most intriguing and substantial pitch to voice developers in years. Conversations will make creating skills possible with fewer lines of code. It will also do away with the need to understand the many different ways a person can ask to complete an action, as a recurrent neural network will automatically generate dialogue flow.

For users, Alexa Conversations will make it easier to complete tasks that require the incorporation of multiple skills and will cut down on the number of interactions needed to do things like reserve a movie ticket or order food.

Amazon VP David Limp refers to Conversations as a great next step forward. “It has been sort of the holy grail of voice science, which is how can you make a conversation string together when you didn’t actually programmatically think about it end-to-end. […] I think a year or two ago I would have said we didn’t see a way out of that tunnel, but now I think the science is showing us that [although] it will take us years to get more and more conversational, […] this breakthrough is very big for us, tip of the iceberg,” Limp said.

It begins with a night out and casual conversation

The Alexa Conversations journey is first emerging with a night-out scenario. In an onstage demo last week at re:Mars, a woman buys a movie ticket, makes dinner reservations, and hails a ride in about one minute. (Atom tickets, Uber, and OpenTable are early Alexa Conversations partners.)  .... " 
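
The Alexa Conversations API itself isn't shown in the article; as a toy sketch of the "night out" pattern it demonstrates, three hypothetical skill calls can share a single context object so the user never has to repeat the details.

# Toy sketch only, not the Alexa Conversations API: three hypothetical "skills"
# share one context dictionary so details carry across the night-out tasks.

def book_movie(context):
    context.update({"movie_time": "19:30", "theater": "Downtown 12"})
    return "Tickets booked for the 7:30 pm showing."

def book_dinner(context):
    # Pick a reservation slot that finishes before the movie starts.
    context.update({"dinner_time": "18:00", "restaurant": "Bistro Nine"})
    return f"Table reserved at 6:00 pm at {context['restaurant']}."

def book_ride(context):
    return f"Ride scheduled to {context['restaurant']} for {context['dinner_time']}."

context = {}
for step in (book_movie, book_dinner, book_ride):
    print(step(context))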
"   

Skeptical About Data

Or more precisely, about common errors in data. Good overview of what data errors can look like. The problem is we rarely look at this closely enough. And data is also rarely looked at over time after the model is built to see creeping error. You should be skeptical about data. But without the data, where would you be?

I’m a data scientist who is skeptical about data
By Andrea Jones-Rooy, July 24, 2019, in Quartz
Professor of data science, NYU

After millennia of relying on anecdotes, instincts, and old wives’ tales as evidence of our opinions, most of us today demand that people use data to support their arguments and ideas. Whether it’s curing cancer, solving workplace inequality, or winning elections, data is now perceived as being the Rosetta stone for cracking the code of pretty much all of human existence.

But in the frenzy, we’ve conflated data with truth. And this has dangerous implications for our ability to understand, explain, and improve the things we care about.   .... " 

Friday, August 09, 2019

On Japanese Eldercare Robotics

Artificial intelligence for eldercare and the disabled. Not sure I agree with the cursory 'suck' tag below. I have seen some very useful examples; much work yet to do, but much work being done.

Toyota is giving robot helpers more brains, but they’ll still suck for a while in Technology Review
A new Japanese research effort aims to use cutting-edge AI to deliver robots capable of assisting the elderly and people with disabilities.  .... " 

Will Mycroft Compete with the Best AI Assistants?

Had seen Mycroft some time ago, in 2015, then also touting its open source approach. Still has a long way to go to compete. Will try to get a demo of it in context. I think it's unlikely the privacy issues will make a difference as they become patched.

Can an open-source AI take on Amazon and Google?
Mycroft certainly hopes so.

By Nicole Lee, @nicole in Engadget

It's only been a few years since Amazon unveiled the Alexa-powered Echo, but since then, smart speakers have become a major consumer-electronics category. Key to its success is the notion of the always-on virtual assistant, which other companies like Apple and Google have adopted as well. In fact, not only has Google made Assistant the driving force behind its Android smartphones, it has launched its own line of Echo rivals.

But underneath all of this technology is the potential risk to your privacy. In just the past few months, news reports have uncovered a series of alarming revelations that companies like Amazon, Google and even Apple have been listening in on conversations without permission. The data that they collect are also often stored indefinitely unless you explicitly delete it or turn off the recording ability. The companies have since responded that the listening of information only occurs to a small percentage of its customers and that the data are anonymized. While that may be true, it's disconcerting that none of this is transparent, and that the customer is rarely in complete control of their data.

An easy way to avoid this, of course, is to not partake in this technology at all. But a company based in Lawrence, Kansas, is working on an alternative solution: a virtual assistant aimed at preserving privacy and that's also open source. It's called Mycroft, and though you may not have heard of it, the company's been around since 2015.

"At the time, the only real voice technology that was in broad production was Siri," said Mycroft CEO and founder Joshua Montgomery to Engadget. Amazon had announced Alexa in November of 2014, but it was still in private beta when the idea for Mycroft came about. "We thought, you know, hey, this is the type of technology that could be really groundbreaking in the future. And scarcely a year later [in 2016], Amazon launched their Super Bowl ad, and then Google got in on the game, and suddenly it's the fastest-growing segment of the technology market."  .... "