
Monday, December 31, 2018

Copyrights Expire

How important is this?   95 years ago.   What is the most iconically valuable thing from 1923 that could be repurposed, for free? Fascinating writeup.  Note the important distinction between copyright and trademark enforcement.   From Motherboard:

A Massive Amount of Iconic Works Will Enter the Public Domain on New Year’s Eve
Why the copyright terms on a goldmine of works from 1923 are about to expire.

When the clock strikes midnight on New Year’s Eve, movies, songs, and books created in the United States in 1923—even beloved cartoons such as Felix the Cat—will be eligible for anyone to adapt, repurpose, or distribute as they please.

A 20-year freeze on copyright expirations has prevented a cache of 1923 works from entering the public domain, including Paramount Pictures’ The Ten Commandments, Charlie Chaplin’s The Pilgrim, and novels by Aldous Huxley.

Such a massive release of iconic works is unprecedented, experts say—especially in the digital age, as the last big dump predated Google.  ... "

Large Scale Tech Transformations

Analysis of a survey of CEOs about transformations.

The cornerstones of large-scale technology transformation

By Michael Bender, Nicolaus Henke, and Eric Lamarre in McKinsey

A clear playbook is emerging for how to integrate and capitalize on advanced technologies—across an entire company, and in any industry.

How does your company use advanced technologies to create value? This has become the defining business challenge of our time. If you ignore it or get it wrong, then anything from your job to your entire organization could become vulnerable to rivals who get it right. The new technologies come with many labels—digital, analytics, automation, the Internet of Things, industrial internet, Industry 4.0, machine learning, artificial intelligence (AI), and so on. For incumbent companies, they support the creation of all-new, digitally enabled business models, while holding out the vital promise of improving customer experiences and boosting the productivity of legacy operations. Advanced technologies are essential to modern enterprises, and it’s fair to say that every large company is working with them to some extent.

The cornerstones of large-scale technology transformation

In private discussions over the past year, we’ve asked more than 500 CEOs whether they think technology can improve business growth and productivity sufficiently to lift profits and shareholder value by 30 to 50 percent; a great many have said yes. So far, though, that prize has remained elusive for a lot of companies. Consider, for example, McKinsey research highlighting the large number of digital laggards, and the wide gap between them and leaders: digitally reinvented incumbents—those using digital to compete in new ways, and those making digital moves into new industries—are twice as likely as their traditional peers to experience exceptional financial growth.  ... "

Internet Pioneer Larry Roberts Obit

We utilized some of his earliest work in the Arpanet.   Was impressed then by its basic, simple architecture, and noted even then how quickly people repurposed it for other needs, like email.

Net's founding father Dr Larry Roberts dies aged 81
American scientist Larry Roberts who helped design and build the forerunner of the internet has died aged 81.

In the late 1960s, he ran the part of the US Advanced Research Projects Agency (Arpa) given the job of creating a computer network called Arpanet.

He also recruited engineers to build and test the hardware and software required to get the system running.

Arpanet pioneered technologies underpinning the internet that are still used today.  ... "

Building Trust for an AI Project

Sounds interesting, about to check this out.  But as often comes to mind, what is the risk involved?  How is it measured and addressed?

The Right Amount of Trust for AI      Podcast, Slides:   Summary  In InfoQ
Chris Butler discusses the building blocks of AI from a product/design perspective, what trust is, how trust is gained and lost, and techniques one can use to build trusted AI products.

Bio: 
Chris Butler is Philosophie's Director of AI. He leads the firm in human-centered AI engagements design, research, and prototyping. Chris has over 18 years of product and business development experience at companies like Microsoft, KAYAK, and Waze. He has created techniques like Empathy Mapping for the Machine and Confusion Mapping to create cross-team alignment while building AI products.  ... "

Sunday, December 30, 2018

Verge's Report Card for AI in 2018

Nicely put piece in the Verge gives it a B grade.   Refers to some of its own writing.  Not enough about how internal company work is improving efficiency and thus resource use.  And also not much about the resulting change in labor needs for specific industries. Or about how the very definition of AI influences how it is used with other 'Automation'.    And yes:   It's still not magic.   Or even very creative yet.  Still a very good read:

The Verge 2018 tech report card: AI   By James Vincent   @jjvincent

As for much of the tech industry, 2018 has been a year of reckoning for artificial intelligence. As AI systems have been integrated into more products and services, the technology’s shortcomings have become clearer. Researchers, companies, and the general public have all begun to grapple more with the limitations of AI and its adverse effects, asking important questions like: how is this technology being used, and for whose benefit? ... 

Looking over the year as a whole one lesson stands out: AI is not magic. It is not a two-letter incantation that can be used to summon venture capital and institutional confidence at a whim; nor is it fairy dust that can be sprinkled over products and institutions for instant improvements. Artificial intelligence is a process: something to be examined, deliberated, and — if all goes well — understood. In other words, long may the reckoning continue.

Autonomous Freight Rail in Australia is Operational

Interesting application for 'largest robot', it seems to decrease use of human engineers for some networks.

Mining company says first autonomous freight train network is fully operational
The system reduces the number of times a train has to stop for engineer shift changes.
By Megan Geuss in Ars Technica

On Friday, major mining corporation Rio Tinto reported that its AutoHaul autonomous train system in Western Australia had logged more than 1 million km (620,000 mi) since July 2018, S&P Global Platts reported. Rio Tinto calls its now-fully-operational autonomous train system the biggest robot in the world.

The train system serves 14 mines that deliver to four port terminals. Two mines that are closest to a port terminal will retain human engineers because they are very short lines, according to Perth Now. ... " 

Artificial Bee Colonies

Examples of how we can build agents and simulate them.  Use them to understand better architectures and models for enhancing and supporting human work?

Searching an Artificial Bee Colony for Real-World Results
Kanazawa University

Researchers from Kanazawa University and the University of Toyama in Japan have proposed a scale-free mechanism to guide an artificial bee colony (ABC) algorithm's exploration process. In the algorithm, employed bees search for food sources and share the information with onlooker bees, who then select a food source to leverage; scout bees randomly search for new food sources, whose positions represent possible solutions to an optimization problem. To overcome the need for many iterations to reach a solution, the researchers designed the scale-free mechanism and analyzed how scale-free network properties—specifically the power law distribution and low degree-degree correlation coefficient—shape the optimization process. The mechanism allows each employed bee to learn more effective information from its neighbors. This improves the algorithm's exploitation ability by preventing the information of high-quality employed bees from rapidly overtaking the entire population.    ...  " 
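The employed/onlooker/scout cycle described in the excerpt can be sketched in Python. Note this is the canonical ABC loop (Karaboga-style), not the scale-free neighbor-topology variant the researchers propose; all function and parameter names here are my own.

```python
import random

def abc_minimize(f, dim, bounds, n_sources=20, limit=30, iters=200, seed=0):
    """Minimize f over a box [lo, hi]^dim with a basic artificial bee colony."""
    rng = random.Random(seed)
    lo, hi = bounds
    new_source = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    xs = [new_source() for _ in range(n_sources)]   # food-source positions
    vals = [f(x) for x in xs]
    trials = [0] * n_sources                        # stagnation counters

    def try_neighbor(i):
        # Perturb one dimension of source i relative to a random partner k,
        # keeping the candidate only if it improves (greedy selection).
        k = rng.choice([j for j in range(n_sources) if j != i])
        d = rng.randrange(dim)
        cand = xs[i][:]
        cand[d] += rng.uniform(-1, 1) * (xs[i][d] - xs[k][d])
        cand[d] = min(hi, max(lo, cand[d]))
        cv = f(cand)
        if cv < vals[i]:
            xs[i], vals[i], trials[i] = cand, cv, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):                  # employed bee phase
            try_neighbor(i)
        fits = [1.0 / (1.0 + v) for v in vals]      # fitness (assumes f >= 0)
        total = sum(fits)
        for _ in range(n_sources):                  # onlooker phase:
            r, acc, pick = rng.uniform(0, total), 0.0, 0
            for pick, ft in enumerate(fits):        # fitness-proportional pick
                acc += ft
                if acc >= r:
                    break
            try_neighbor(pick)
        for i in range(n_sources):                  # scout phase: abandon
            if trials[i] > limit:                   # stale sources entirely
                xs[i] = new_source()
                vals[i] = f(xs[i])
                trials[i] = 0

    best = min(range(n_sources), key=vals.__getitem__)
    return xs[best], vals[best]
```

The scale-free mechanism in the paper would replace the uniform partner choice in `try_neighbor` with neighbors drawn from a scale-free network, so a few well-connected bees spread information without immediately dominating the population.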

Collision of Demographics, Automation Inequality

I previously referenced this Bain report when I spoke about automation and AI, thinking that might be the biggest influence.  Worth reposting it here.

Labor 2030: The Collision of Demographics, Automation and Inequality
The business environment of the 2020s will be more volatile and economic swings more extreme.

By Karen Harris, Austin Kimson and Andrew Schwedel

Executive summary
Demographics, automation and inequality have the potential to dramatically reshape our world in the 2020s and beyond. Our analysis shows that the collision of these forces could trigger economic disruption far greater than we have experienced over the past 60 years (see Figure 1). The aim of this report by Bain's Macro Trends Group is to detail how the impact of aging populations, the adoption of new automation technologies and rising inequality will likely combine to give rise to new business risks and opportunities. These gathering forces already pose challenges for businesses and investors. In the next decade, they will combine to create an economic climate of increasing extremes but may also trigger a decade-plus investment boom. ....  "

Saturday, December 29, 2018

Bixby Assistant Speaker Imminent

Notable because Samsung makes many home appliances, connecting them naturally to the Smart Home.  I noticed last year that Samsung was mentioning their assistant Bixby in most promotions about their appliances, though usually it was not available in English for their products.   This seems to be changing now, and could bring them into competition with Amazon and Google.

Samsung is reportedly making a budget Bixby-powered smart speaker
A second Galaxy Home, this time entry-level

By Shannon Liao@Shannon_Liao 

Samsung promised a Bixby-powered Galaxy Home smart speaker back in August, a premium device that could potentially compete against Apple’s HomePod and the Amazon Echo Plus. While that speaker still isn’t available and doesn’t have a set release date, the company is reportedly also planning a second Bixby speaker that comes in black and according to SamMobile, citing an anonymous source, may be a more affordable option that can compete with the likes of cheaper smart speakers. ... " 

Illinois Biometric Privacy Act and Google

Notable use of legal regulation against new uses of AI.  In The Verge.    More details at the link; see also the link to the Illinois Privacy Act, which was at issue.

Google just got an important lawsuit over facial recognition dismissed. As first reported by Bloomberg, the lawsuit has been dismissed by a state judge who found that the plaintiffs didn’t suffer “concrete injuries.” The Google lawsuit is one of three cases aimed at prominent tech companies that have allegedly violated the United States’ toughest biometric privacy law and it’s the first one to get dismissed.

The Illinois Biometric Information Privacy Act has long been a huge obstacle for tech companies working on facial recognition initiatives. The law requires companies to obtain people’s explicit permission before they can make biometric scans of their bodies. ...  "

a16z Podcast on Talent Tech and Culture

Broad look at tech and more:

a16z Podcast: Talent, Tech Trends, and Culture

with Marc Andreessen, Ben Horowitz, and Tyler Cowen
This episode of the a16z Podcast features the rare combination of a16z co-founders Marc Andreessen and Ben Horowitz in conversation, together, with economist Tyler Cowen (chair of economics at George Mason University and chairman and general director of the Mercatus Center there, and host of his own podcast.) The conversation originally took place at our most recent annual innovation Summit — which features a16z speakers and invited experts from various organizations discussing innovation at companies large and small, as well as tech trends spanning bio, consumer, crypto, fintech, and more.

This discussion covers Ben and Marc’s marriage, er, partnership; the evolution of VC and “talent as a network”; and where are we right now on industries being affected by tech (such as retail) and tech trends (such as VR/AR and wearables) — and where are we going next? Finally, is software eating culture… or is it the other way around?  .... 

Role of Context in Human Machine Interaction

It's always about context.  Every interaction has context; it can be deep or very simple.  Context also contains metadata, that is, data that is required by the context, whether it be delivering an analytic business solution, an assistant understanding and answering a question, or a complex deep learning answer to a classification request.   Context also has a memory: in some cases the long-term memory of a database, or just the short-term memory of the last statement or question posed in an interaction.

In the Alexa developer blog:

The Role of Context in Redefining Human-Computer Interaction  By Ruhi Sarikaya



In the past few years, advances in artificial intelligence have captured our imaginations and led to the widespread use of voice services on our phones and in our homes. This shift in human-computer interaction represents a significant departure from the on-screen way we’ve interacted with our computing devices since the beginning of the modern computing era.


 Substantial advances in machine learning technologies have enabled this, allowing systems like Alexa to act on customer requests by translating speech to text, and then translating that text into actions. In an invited talk at the second NeurIPS workshop on Conversational AI later this morning, I’ll focus on the role of context in redefining human-computer interaction through natural language, and discuss how we use context of various kinds to improve the accuracy of Alexa’s deep-learning systems to reduce friction and provide customers with the most relevant responses. I’ll also provide an update on how we’ve expanded the geographic reach of several interconnected capabilities (some new) that use context to improve customer experiences.

There has been remarkable progress in conversational AI systems this decade, thanks in large part to the power of cloud computing, the abundance of the data required to train AI systems, and improvements in foundational AI algorithms. Increasingly, though, as customers expand their conversational-AI horizons, they expect Alexa to interpret their requests contextually; provide more personal, contextually relevant responses; expand her knowledge and reasoning capabilities; and learn from her mistakes.


As conversational AI systems expand to more use cases within and outside the home, to the car, the workplace and beyond, the challenges posed by ambiguous expressions are magnified. Understanding the user’s context is key to interpreting a customer’s utterance and providing the most relevant response. Alexa is using an expanding number of contextual signals to resolve ambiguity, from personal customer context (historical activity, preferences, memory, etc.), skill context (skill ratings, categories, usage), and existing session context, to physical context (is the device in a home, car, hotel, office?) and device context (does the device have a screen? what other devices does it control, and what is their operational state?).  ....  '
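The contextual signals listed above — personal, skill, session, physical, and device context — feed into ranking a user's ambiguous request. A toy sketch of that idea (purely illustrative; these names and weights are mine, not Amazon's system):

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """A toy bundle of two of the signal types the excerpt mentions."""
    recent_skills: list = field(default_factory=list)     # session context
    preferred_skills: list = field(default_factory=list)  # personal context

def rank_interpretations(candidates, base_scores, ctx):
    """Order candidate interpretations by base NLU score plus context bonuses."""
    def score(skill):
        s = base_scores[skill]
        if skill in ctx.recent_skills:
            s += 0.2   # session continuity: favor what was just used
        if skill in ctx.preferred_skills:
            s += 0.1   # personal preference from historical activity
        return s
    return sorted(candidates, key=score, reverse=True)
```

For example, an ambiguous "play frozen" might score a music skill and a video skill nearly equally on the text alone, with session context breaking the tie.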


Conversation on Explainable AI

Ajit Jaokar makes some good points...  in DSC.  Yes, explainability is often useful, but depending on context it is not always a requirement.  One way it's useful: it helps you build yet further intelligence.

Why I agree with Geoff Hinton: I believe that Explainable AI is over-hyped by media  Posted by ajit jaokar

Geoffrey Hinton dismissed the need for explainable AI. A range of experts have explained why he is wrong.

I actually tend to agree with Geoff.

Explainable AI is overrated and hyped by the media.
And I am glad someone of his stature is calling it out

To clarify, I am not saying that interpretability, transparency, and explainability are not important (and nor is Geoff Hinton for that matter)  .... " 

Friday, December 28, 2018

Toyota Wants to put an Assistant Friend in Every Home

And more.   Must it be a robot, an android, a face, a smile, or just a voice?  Healthcare is also mentioned, and more.  Human Support Robot (HSR) is a new term to me.  Is support enough?

Toyota Wants to Put a Robot Friend in Every Home 
By Kevin Buckland in Bloomberg

Toyota envisions robots becoming commonplace in homes as companions to senior citizens, as part of its new artificial intelligence (AI) research center headed by renowned inventor Gill Pratt. Toyota believes the pressing need for elder care will make household robots more attractive, and its Human Support Robot (HSR) could be one of the first products to gain mainstream acceptance. The HSR is essentially a retractable arm on wheels, with a video screen on top and camera eyes to give it the semblance of a face. Among its demonstrated capabilities is learning where books and other items should go on a shelf, and cleaning untidy rooms using sensor-eyes and a pincer. Toyota's Masanori Sugiyama said the HSR could be ready for deployment in hospitals and rest homes to perform simple tasks within three years.  ... " 

Assistants as Homecare Help

Our devices are already something we rely on much.    So why not?   What does it mean to be a companion, though?  Conversations, probably, that include strong context.   But one could argue that there has to be an element of humanity as well.   Talk to the Japanese, who have been working on eldercare for years; we did.  Strong social element, but beyond the post-and-forget-it world of Facebook.  Healthcare advice is mentioned too.   Will take a look. 

Alexa app for elderly aid bridges digital divide, acts as companion   By  R. Danes

Isn’t it great when mind-bending technology and product development come down the chute to solve a real human problem? Diverse industries are applying the latest advancements in artificial intelligence to everyday consumer issues.

For example,  the AI technology in Amazon Alexa’s virtual assistant could prove a handy in-home healthcare assistant, according to Dr. Justin Marley (pictured, right), consultant psychiatrist at Essex Partnership University NHS Foundation Trust (EPUT). Together with Accenture LLP consultants, Marley developed an Alexa skill and web portal to aid the elderly.

“Our mission is to help people feel socially connected in this … always changing digital world and stay independent in their own home for a bit longer,” Marley said.

The aid works just like a smartphone application. It applies sophisticated AI and voice-recognition technology to a number of everyday tasks, processes, etc. “It’s constantly learning the behaviors that they’re doing on a daily basis,” said Gayle Sirard (pictured, left), applied intelligence lead, North America, at Accenture LLP.

It can monitor users’ mental health, remind them to take medication, and encourage them to participate in local activities, to name a few features. .... "

Robotic Process Automation

Based on CIO Magazine on RPA - Robotic Process Automation ..

Astute colleague Walter Riker writes:    ... Robotic Process Automation – not exciting but causes one to think about processes and how fast they can be automated. It appears to be on the move. ... 

"I was born not knowing and have had only a little time to change that here and there." —Richard Feynman

I respond:  Absolutely, we will see a lot more of it. Like you said, it gets at least some of the process involved in the analytics and AI.    Also lets you carve things up into the most accessible pieces.   ... "

Consider its use in legal process. ... 

Harvard Caselaw Access Project

Taking  a deeper dive into the details of this, and notably who is looking at related predictive analytics that might be applied.

Project: Caselaw Access Project

Where can I find Caselaw Access Project?

What does Caselaw Access Project do?

The Caselaw Access Project is making all U.S. case law freely accessible online.

Why does Caselaw Access Project exist?
Our common law - the written decisions issued by our state and federal courts - is not freely accessible online. This lack of access harms justice and equality and stifles innovation in legal services.

The Harvard Law School Library has one of the world's largest, most comprehensive collections of court decisions in print form. Our collection totals over 42,000 volumes and roughly 40 million pages. Caselaw Access Project aims to transform the official print versions of these court decisions into digital files made freely accessible online. ...

They are working with Ravel:    https://www.ravellaw.com/  Note examples of the use of legal analytics. 

Detailed Description of work:  http://etseq.law.harvard.edu/2015/10/free-the-law-overview/

Changing CVV Code for Credit Cards

The chip cards have been less successful than had been expected; changing CVV codes may make it harder to fraudulently buy online.  Good discussion of the further direction of card security.

Pilot project demos credit cards with shifting CVV codes to stop fraud
Trial will last 90 days to test effectiveness and timing of CVV change.
By Megan Geuss in Ars Technica  ... "
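The shifting-CVV idea is closely related to time-based one-time codes: derive a short code from a per-card secret and the current time window. A minimal sketch of that general construction, borrowing the dynamic truncation step from RFC 4226 — the actual pilot's scheme is issuer-specific and not described here, so treat every detail below as an assumption:

```python
import hmac
import hashlib
import struct

def dynamic_cvv(card_secret: bytes, unix_time: int, step: int = 3600) -> str:
    """Derive a 3-digit code from a per-card secret and a time window.

    Illustrative only: real dynamic-CVV cards use issuer-specified EMV-based
    schemes, not this exact HMAC construction.
    """
    counter = struct.pack(">Q", unix_time // step)       # time-window index
    digest = hmac.new(card_secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1000:03d}"
```

A stolen code is then only useful within its window, which is why the article frames the 90-day trial partly as testing the timing of the CVV change.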

Productivity and Self

How might this kind of process flow be applied to assistants?    At very least a means for tracking progress.

Are You Productive Enough?  in the HBR  by Elizabeth Grace Saunders

Productive: “Achieving or producing a significant amount of result.”
Enough: “As much or as many as required.”

As a time management coach, I’m keenly aware that you could answer the question “Am I productive enough?” using a variety of methods. I’m also familiar with the fact that individuals fall on a productivity spectrum. One person’s maximum productivity for a certain role in a particular environment could look vastly different from another person’s. These variations result from a combination of intrinsic ability, experience level, overall capacity, and desire.

For the purposes of this discussion, I’m narrowing the definition of “productive enough” to whether you are meeting the requirements of your job when operating at your personal peak performance. This reasoning process is outlined in the flowchart below, and we’ll walk through it step-by-step by answering a series of questions. At the end of this you should have a clearer sense of whether you can wrap up for the day knowing you were productive enough or whether you have room for improvement. ... "  

Examples of Virtual and Augmented World Training

Training still a good application

Virtual Training for Aircraft Carrier Flight Deck Crews
Office of Naval Research
Bobby Cummings

A collaborative effort between the U.S. Office of Naval Research Global (ONR Global) TechSolutions program and the Naval Air Warfare Center Training System Division (NAWCTSD) has produced Flight Deck Crew Refresher Training Expansion Packs (TEPs), an expandable framework of game-based immersive three-dimensional technologies allowing for individual, team, or multi-team training exercises. The first three TEPs will help an aircraft carrier's Primary Flight Control team. The training solutions can simulate normal operations and emergency conditions, exposing deck crews to a wide range of real-world scenarios. Said Mehdi Akacem of the aircraft carrier USS Gerald R. Ford, "This is really the first example I've seen of extending the value of a simulation environment to such an essential, tangible thing as a carrier flight deck." ... '

Thursday, December 27, 2018

Apple Still Innovative

Clear they still have a lot behind the scenes.    They need to work on some new things outside what they are known for delivering now.

Is Apple Losing Its Innovation Edge?
Consumers have greeted Apple’s latest slate of iPhones, computers and the newest connected Watch with solid interest, but not the high enthusiasm from years past. Since the iPhone’s debut in 2007, public excitement soared in the early years but then has steadily dipped as the novelty wore off. According to Statista, global search interest for iPhones has been falling since 2012, after peaking with the iPhone 5. Citi analysts also noticed the same shift over time: “We suspect this is because of a slowdown in innovation and the saturation of iPhone in the addressable market.”

Is Apple falling behind on innovation compared to other tech giants such as Amazon, Google, Tesla and a revived Microsoft? At an event in Brooklyn this week, the company unveiled revamped versions of the iPad Pro, Mac Mini and MacBook Air, including what it’s calling the biggest refresh of its signature tablet since it was introduced eight years ago. But people point to the merely incremental changes made on iPhones and the late introduction of a connected watch — after Google and Samsung already launched theirs — as signs that Apple is no longer the innovation leader. And yet as skepticism lingered, Wall Street went the opposite direction by making Apple the first company in the world to be worth $1 trillion. How can a company supposedly losing its innovation edge get such a high valuation?

The answer is that Apple remains innovative today, according to Wharton experts, and any doubts about it stem from a limited understanding of what innovation entails. Wharton management professor Paul Nary said it is important to distinguish between invention and innovation. Invention is the creation of a new idea or opportunity while innovation “generally means a successful commercialization of a new technology or a new business model, usually by recombining it with other pre-existing elements, and making the possibility, stemming from the invention, into a market reality.” From this viewpoint, he said, Apple is innovative. .... "

Questions are the Answer

Have always believed in the premise of this book.    Leaders .... students .... citizens .... everyone should ask more questions.

Q&A: Why business leaders should — yes — ask questions
MIT Sloan’s Hal Gregersen talks about his new book, “Questions Are the Answer.”

By Peter Dizikes | MIT News Office 
December 18, 2018

Should business leaders spend more time asking questions? Hal Gregersen has a firm answer to that: Yes. Gregersen, the executive director of the MIT Leadership Center and a senior lecturer on leadership and innovation at the MIT Sloan School of Management, has been studying executives for decades. Time and again, he has noticed, the most successful managers are among the most inquisitive people in business. Now Gregersen has synthesized his observations on the subject in a new book, “Questions Are the Answer: A Breakthrough Approach to Your Most Vexing Problems at Work and in Life,” published by HarperCollins. MIT News sat down with Gregersen to, well, ask him about the new book.  .... " 

Blockchain for Consumer Goods

Seems a weak connection, but the case is well made: it's all about having records with trust and transparency:

5 Reasons Why Blockchain Will Be a Boon for Consumer Goods    By Vaijayanth M.K., of Accenture 

Distributed-ledger technologies are set to transform the consumer goods sector by improving trust and transparency

From data analytics to omnichannel shopping, emerging technologies are revolutionizing the consumer goods industry — and blockchain could mark the next step change. Why? Because many of its myriad applications will enable the industry to increase trust and transparency — both across the supply chain and, crucially, with consumers.

Blockchain represents a new way to store information. It is a digital record of all activity related to a product or service that is decentralized — “distributed” — rather than held in a single location. No single party can tamper with it, and every party can see everything, thereby creating a secure, transparent, “single version of the truth.”   

Blockchain is in the early stages of adoption in the consumer goods sector, but it represents a significant opportunity. Here are five ways for businesses to tap into its value.  ... " 
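The "digital record of all activity" the excerpt describes is, at its core, a hash-chained ledger: each entry commits to the one before it, so tampering anywhere breaks verification. A minimal sketch (no consensus or distribution, just the tamper-evident chaining; all names are mine):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first block

def block_hash(block):
    """Hash a block's contents, excluding its own stored hash."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    """Append an entry that commits to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    block = {"data": data, "prev": prev}
    block["hash"] = block_hash(block)
    chain.append(block)
    return chain

def verify(chain):
    """Check every block's hash and its link to the previous block."""
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b):
            return False
        if b["prev"] != (chain[i - 1]["hash"] if i else GENESIS):
            return False
    return True
```

For a supply-chain record, each block might log a custody event ("harvested", "shipped", "received"); any later edit to an earlier event is detectable by every party holding a copy.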

On the Awkwardness of Voice

I have now been testing voice in the home, in various forms, from three providers, since its common inception, for over three years. Most do not consider it creepy.  Despite the advances, it can still be awkward.   But then so can typed consumer interaction.  The biggest problem is still that we do not support 'conversational dialog' with operational contextual memory.  Errors exist both in the transcription of what is said and in how it assembles itself into meaningful, ongoing interaction.  Humans do that, but also make errors.   Yet sales of voice devices still increase, and there is still increasing use and value.

Your Voice Assistant May Be Getting Smarter, But It's Still Awkward   By Lauren Goode in Wired

In September of this year, Amazon hosted a press event in the steamy Spheres at its Seattle headquarters, announcing a dizzying array of new hardware products designed to work with the voice assistant Alexa. But at the event, Amazon also debuted some new capabilities for Alexa that showcased the ways in which the company has been trying to give its voice assistant what is essentially a better memory. At one point during the presentation, Amazon executive Dave Limp whispered a command to Alexa to play a lullaby. Alexa whispered back. Creepiness achieved.

Voice-controlled virtual assistants like Alexa and the speakers they live inside are no longer a novelty; an estimated 100 million smart speakers were installed in homes around the world in 2018. But this year, the companies making voice-controlled products tried to turn them into sentient gadgets. Alexa can have the computer version of a "hunch" and predict human behavior; Google Assistant can carry on a conversation without requiring you to repeatedly say the wake word. If ambient computing—the notion that computers are all around us and can sense and respond to our needs—is the vision technologists have for the future, then 2018 might just be the year that vision came into sharper focus. Not with a bang, but a whisper.   .... "

3D Holography for Video Projection

On to videos.  The future of ads: videos floating in our spaces?  Saw demos of this at conferences, but none were practical in general.  Augmented reality (AR) addresses a similar idea.

A Big Step Toward the Practical Application of 3D Holography With High-Performance Computers 
R&D Magazine

Researchers at Chiba University in Japan have developed a computer that can project high-quality three-dimensional (3D) holography as a video. Chiba's Tomoyoshi Ito began working on specially designed computers for holography, called HORN (for HOlographic ReconstructioN), in 1992. The latest version, the HORN-8, which utilizes a calculation method called the "amplitude type" for adjusting the intensity of light, was recognized as the world's fastest computer for holography earlier this year. With the newly developed "phase type" HORN-8, the calculation method for adjusting the phase of light was implemented, allowing the researchers to successfully project holographic information as a 3D video with high-quality images. Said Ito, "We have been developing the high-speed computers for 3D holography by implementing the knowledge of information engineering and the technology of electrical and electronic engineering and by learning insights from computer science and optical methods."  ... '

Wednesday, December 26, 2018

Nike Sees Online Dominating

Retail continues to surge online ...

Nike sees online eclipsing offline sales   by Tom Ryan in Retailwire with further expert comment.

The standard defense to the “Retail Apocalypse” is research showing that online sales still make up less than 10 percent of all retail sales. But Nike just joined a growing crop of brands that now expects over half of its sales to eventually originate online.

On its second-quarter conference call last week, Andy Campion, Nike’s CFO, noted that at its Investor Day last October, Nike predicted that online revenue — both from its own and wholesale partner websites — would generate about 30 percent of its sales by 2023, up from 15 percent currently.   .... " 

Foresight Institute

Foresight Institute

An excellent research group we have connected with for over 30 years. We initially linked due to our interest in nanotechnology for manufacturing.   Follow them.

https://foresight.org/

Topics covered today, roughly, as I interpret their most recent communication:

Molecular Nanotechnology
Molecular Machines
Longevity
AI Safety
Education and Strategy
Existential Hope
Cryptocurrency  .... 

Voice Shopping Triples

It even contributed to some outages this weekend in Europe.    Will voice be more important than we think?  Will the integration of 'skills' curate selection in new ways? 

Amazon Says Alexa Voice Shopping Tripled During 2018 Holiday Season   via Fortune

Shoppers are increasingly using their voices to buy products online.

Amazon on Wednesday said that the number of voice-activated orders placed via its virtual personal assistant Alexa were three times greater during the 2018 holiday season than they were last year.

Meanwhile, Alexa was also called on “hundreds of thousands” of times to help folks find cocktail recipes: eggnog and Moscow Mules topped the list of most-requested drinks during the holidays.

The findings were part of Amazon’s (AMZN, +3.25%) announcement on Wednesday that its customers ordered more items worldwide this year than ever before. In the U.S. alone, Amazon shipped more than one billion products for free through its Amazon Prime subscription service. Amazon retail partners that sell products through its online marketplace accounted for more than half of the items sold through the holidays.  .... " 

Microsoft AI pushes the Gartner Report

Order it at the link.  I have.   I assume it's offered because it praises Microsoft's Azure AI, but it is likely quite interesting.    I like the intro description.   Comments: is this useful with clients who are just entering the space?

Drive strategic opportunity with intelligent apps and analytics

Intelligent apps have the potential to completely change the way developers build and deploy their projects. Many organizations already rely on developers to apply artificial intelligence (AI) and machine learning techniques that set them apart, and more will join them as AI becomes more widely understood and developing intelligent apps becomes even easier and more affordable.

In Top 10 Strategic Technology Trends for 2018: Intelligent Apps and Analytics, Gartner explores the current landscape of intelligent apps and analytics and offers actionable guidance on what’s to come in the next few years. Get up to speed on the evolution of AI-powered apps, and learn where to focus your efforts as you build your own.

This report includes:
Recommendations for building AI into your apps.
Examples of how organizations are using AI apps, broken down by industry.
What developers can expect to achieve with AI apps. .... "

Catching Up with AI Development

Interesting view.  Personally, I think the catch-up will eventually be provided by more automated AI systems.   I also like the mention of the 'knowledge systems' of times past; ultimately a complete AI system will need them to operate, and those systems will require maintenance and delivery details. - FAD

Why Companies That Wait to Adopt AI May Never Catch Up   by Vikram Mahidhar, Thomas H. Davenport in HBR

While some companies — most large banks, Ford and GM, Pfizer, and virtually all tech firms — are aggressively adopting artificial intelligence, many are not. Instead they are waiting for the technology to mature and for expertise in AI to become more widely available. They are planning to be “fast followers” — a strategy that has worked with most information technologies.

We think this is a bad idea. It’s true that some technologies need further development, but some (like traditional machine learning) are quite mature and have been available in some form for decades. Even more recent technologies like deep learning are based on research that took place in the 1980s. New research is being conducted all the time, but the mathematical and statistical foundations of current AI are well established.

System Development Time

Beyond the technical maturity issue, there are several other problems with the idea that companies will be able to adopt quickly once technologies are more capable. First, there is the time required to develop AI systems. Such systems will probably add little value to your business if they are completely generic, so time is required to tailor and configure them to your business and the specific knowledge domain within it. If the AI you are adopting employs machine learning, you will have to round up a substantial amount of training data. If it manipulates language — as in natural language processing applications — it can be even more difficult to get systems up and running. There is a lot of taxonomy and local knowledge that needs to be incorporated into the AI system —similar to the old “knowledge engineering” activity for expert systems. AI of this type is not just a software coding problem; it is a knowledge coding problem. It takes time to discover, disambiguate, and deploy knowledge.

Particularly if your knowledge domain has not already been modeled by your vendor or consultant, it will typically require many months to architect. This is particularly true for complex knowledge domains. For example, Memorial Sloan Kettering Cancer Center has been working with IBM to use Watson to treat certain forms of cancer for over six years, and the system still isn’t ready for broad use despite availability of high-quality talent in cancer care and AI. There are several domains and business problems for which the requisite knowledge engineering is available. However, it still needs to be manipulated to a company’s specific business context. .... "
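The "knowledge coding" point above can be made concrete with a toy sketch: before machine learning can add value in a domain, local taxonomy and disambiguation rules often have to be hand-encoded, much like the old expert-system knowledge engineering. A minimal Python illustration; the taxonomy, domains, and signal words below are all invented for illustration:

```python
# Toy illustration of "knowledge coding": a hand-built domain
# taxonomy plus a disambiguation rule, of the kind an NLP system
# needs before machine learning can add value.

# Hypothetical taxonomy: surface terms -> domain -> canonical concept.
TAXONOMY = {
    "mi": {"cardiology": "myocardial_infarction", "geography": "michigan"},
    "stent": {"cardiology": "coronary_stent"},
}

# Hypothetical context words that signal each domain.
DOMAIN_SIGNALS = {
    "cardiology": {"patient", "artery", "cardiac"},
    "geography": {"state", "lake", "detroit"},
}

def disambiguate(term, context_words):
    """Pick the concept whose domain is signaled by nearby words."""
    senses = TAXONOMY.get(term.lower(), {})
    for domain, concept in senses.items():
        if DOMAIN_SIGNALS.get(domain, set()) & set(context_words):
            return concept
    # Fall back to the first known sense, or the raw term.
    return next(iter(senses.values()), term)

print(disambiguate("MI", ["the", "patient", "had", "an", "artery", "blockage"]))
# -> myocardial_infarction
```

The point of the sketch is that none of this mapping comes from the learning algorithm itself; someone has to discover, disambiguate, and deploy the domain knowledge first.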

Monday, December 24, 2018

Marketing AI Example, and a Rush to Voice

How will AI influence how we market?   Useful examples:

Your key Marketing Strategy for 2019 had better include Voice   By Annie Pettit  in Customerthink

Take a minute to think about every car commercial you’ve ever seen.
Close your eyes. Let the imagery percolate.
Now watch this commercial.

Is that the PRECISE commercial you just imagined?

Probably.

That’s because Lexus used AI to create a commercial based on a training dataset of award winning car commercials. The commercial is effective for several reasons. First, the AI system that produced the script correctly identified the criteria that would win with its target audience. Second, a human director, Kevin Macdonald, applied emotional creativity to weave together the required components. And third, incorporating AI into the creative development process is the perfect way for Lexus to demonstrate how it uses cutting edge technology to build vehicles. This AI commercial is completely on brand.   ... "

Future of Voice with Human Mimicking

I was recently asked about voice-based systems that mimic humans, fooling people into thinking the speaker is human, notably for both assistant and possible marketing applications.  The 'Google Duplex' method demonstrated this year is a prime example.  This article in The Verge outlines possible issues and next steps:

Google is being vague with disclosure in early real-world Duplex calls    By Chris Welch@chriswelch  

Google has begun rolling out its futuristic Duplex feature, which can automatically make voice calls to restaurants and other businesses on a user’s behalf, to a small group of Pixel owners in “select” cities around the US. VentureBeat managed to test out Duplex in the real world and recorded what the experience is like when initiating a call through Google Assistant. That part seems fairly straightforward. But the exchange between Duplex and a restaurant on the other side of the call is raising some early concerns about transparency.   .... " 

New Means of Seeing Objects

Closer to biomimicry; the article gives a not deeply technical description:

New AI computer vision system mimics how humans visualize and identify objects
 UCLA Samueli School of Engineering

Summary:
Researchers have demonstrated a computer system that can discover and identify the real-world objects it 'sees' based on the same method of visual learning that humans use.
Researchers from UCLA Samueli School of Engineering and Stanford have demonstrated a computer system that can discover and identify the real-world objects it "sees" based on the same method of visual learning that humans use.

The system is an advance in a type of technology called "computer vision," which enables computers to read and identify visual images. It is an important step toward general artificial intelligence systems -- computers that learn on their own, are intuitive, make decisions based on reasoning and interact with humans in a more human-like way. Although current AI computer vision systems are increasingly powerful and capable, they are task-specific, meaning their ability to identify what they see is limited by how much they have been trained and programmed by humans.

Even today's best computer vision systems cannot create a full picture of an object after seeing only certain parts of it -- and the systems can be fooled by viewing the object in an unfamiliar setting. Engineers are aiming to make computer systems with those abilities -- just like humans can understand that they are looking at a dog, even if the animal is hiding behind a chair and only the paws and tail are visible. Humans, of course, can also easily intuit where the dog's head and the rest of its body are, but that ability still eludes most artificial intelligence systems. .... " 
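The dog-behind-a-chair example suggests a simple way to think about part-based recognition: score each candidate object by how well its stored part model explains the parts that are visible. A toy Python sketch of that idea (purely illustrative; this is not the UCLA/Stanford system, and the part lists are invented):

```python
# Toy sketch of part-based recognition: identify an object from only
# the parts that are visible, by scoring overlap with stored part models.

OBJECT_PARTS = {
    "dog": {"head", "body", "legs", "paws", "tail"},
    "chair": {"seat", "back", "legs"},
    "cat": {"head", "body", "legs", "paws", "tail", "whiskers"},
}

def identify(visible_parts):
    """Return the object whose part model best matches what is visible."""
    def score(obj):
        parts = OBJECT_PARTS[obj]
        # Jaccard overlap: visible evidence explained, penalizing mismatches.
        return len(parts & visible_parts) / len(parts | visible_parts)
    return max(OBJECT_PARTS, key=score)

# A dog hiding behind a chair: only paws and tail visible.
print(identify({"paws", "tail"}))  # -> dog
```

A real system would of course learn part models and their spatial relations from data rather than use hand-written sets, but the inference step, explaining partial evidence against whole-object models, is the same in spirit.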

Sunday, December 23, 2018

What Should we Expect from China?

McKinsey as usual prompts some interesting thoughts:

" ... The next stages of China’s transition away from economic equilibrium with the United States will likely create volatility in market growth and require conservatism in some areas and bold moves in others. ... "

Wired Guide to Virtual Reality

Nicely done, mostly up-to-date piece.    Non-technical, but good depth, with a mix of positive and negative.  Sure it's a fun way to experience data and reality, but will it be used in most of your interactions with digital in the near future?   Like the smartphone?  Still unclear.

The Wired Guide to Virtual Reality

ALL HAIL THE headset. Or, alternatively, all ignore the headset, because it’s gonna be a dismal failure anyway.

That’s pretty much the conversation around virtual reality, a technology by which computer-aided stimuli create the immersive illusion of being somewhere else—and a topic on which middle ground is about as scarce as affordable housing in Silicon Valley. VR is either going to upend our lives in a way nothing has since the smartphone, or it’s the technological equivalent of trying to make “fetch” happen. The poles of that debate were established in 2012, when VR first re-emerged from obscurity at a videogame trade show; they’ve persisted through Facebook’s $3 billion acquisition of headset maker Oculus in 2014, through years of refinement and improvement, and well into the first-and-a-half generation of consumer hardware.  .... "

Faked Faces

Been examining the use and misuse of facial recognition for a project.   You can tell that from recent posts.  The below shows how faking is being done, with examples of the faces at the click through.  Here the challenge is, how effectively can we detect if the faces are real?     And what is the nature  of the training set required?

These Incredibly Realistic Fake Faces Show How Algorithms Can Now Mess with Us  By Technology Review 

These faces don't seem particularly remarkable. They could easily be taken from, say, Facebook or LinkedIn. In reality, they were dreamed up by a new kind of AI algorithm. .... 

From Technology Review  

Curiosity Driven Data Science

Any time you examine data analytically you exercise your curiosity.    And that curiosity is naturally driven by your own or business goals.  So it makes sense to attach the goals more systematically to the analysis, whether it is as simple as a regression,  or complex as a trained neural net.  Just make sure you do exploratory examination of your results.

Curiosity-Driven Data Science      By Eric Colson in HBR

Data science can enable wholly new and innovative capabilities that can completely differentiate a company. But those innovative capabilities aren’t so much designed or envisioned as they are discovered and revealed through curiosity-driven tinkering by the data scientists. So, before you jump on the data science bandwagon, think less about how data science will support and execute your plans and think more about how to create an environment to empower your data scientists to come up with things you never dreamed of.

First, some context. I am the Chief Algorithms Officer at Stitch Fix, an online personalized styling service with 2.7 million clients in the U.S. and plans to enter the U.K. next year. The novelty of our service affords us exclusive and unprecedented data with nearly ideal conditions to learn from it. We have more than 100 data scientists that power algorithmic capabilities used throughout the company. We have algorithms for recommender systems, merchandise buying, inventory management, relationship management, logistics, operations — we even have algorithms for designing clothes! Each provides material and measurable returns, enabling us to better serve our clients, while providing a protective barrier against competition. Yet, virtually none of these capabilities were asked for by executives, product managers, or domain experts — and not even by a data science manager (and certainly not by me). Instead, they were born out of curiosity and extracurricular tinkering by data scientists.

Data scientists are a curious bunch, especially the good ones. They work towards clear goals, and they are focused on and accountable for achieving certain performance metrics. But they are also easily distracted, in a good way. In the course of doing their work they stumble on various patterns, phenomenon, and anomalies that are unearthed during their data sleuthing. This goads the data scientist’s curiosity: “Is there a better way that we can characterize a client’s style?” “If we modeled clothing fit as a distance measure could we improve client feedback?” “Can successful features from existing styles be re-combined to create better ones?” To answer these questions, the data scientist turns to the historical data and starts tinkering. They don’t ask permission. In some cases, explanations can be found quickly, in only a few hours or so. Other times, it takes longer because each answer evokes new questions and hypotheses, leading to more testing and learning. .... " 

Amazons Facial Recognition will Surveil

Overstated I think.   It's a natural and inevitable use of these technologies to make yourself and your family safer.  I have been using the Ring doorbell for several years now, lately in a networked system, but without facial recognition.    Why not link that to law enforcement?

Amazon's Creepy Facial Recognition Doorbell Will Surveil Entire Neighborhood From People's Front Doors     via Tyler Durden

At first glance of Amazon’s new patent application, one would be tempted to think it no more than a built-in “smart” security system.

But no, this facial recognition surveillance doorbell does a lot more than record would-be thieves.

According to a new report, the patent application, made available in late November, would pair facial surveillance such as Rekognition, the product that Amazon is aggressively marketing to law enforcement, with Ring – a doorbell camera company that Amazon acquired in 2018.

CNN writes, “Amazon’s application says the process leads to safer, more connected neighborhoods, as well as better informed homeowners and law enforcement.”   ... " 

Saturday, December 22, 2018

Millie the Avatar Tests Interaction

Testing the idea of what kind of 'human' engagement works in what context.

Meet 'Millie' the Avatar. She'd Like to Sell You a Pair of Sunglasses  By Bloomberg in ACM

Toronto, Canada-based startup Twenty Billion Neurons (TwentyBN) has created a life-size digital avatar to help retail brands looking for ways to boost falling in-store sales in the face of growing competition from e-commerce.

The Millie avatar appears on a slightly-larger-than-life screen, seemingly making eye contact with customers and tracking their movements with her gaze; Millie is also able to tell where in the store a customer is looking and respond accordingly.

The system is equipped with speech recognition and natural language processing software, which allow Millie to understand and answer simple questions or have a basic conversation with a shopper.

The avatar also uses facial-recognition software to learn to recognize people it sees often by name.

Natalie Berg of U.K. consultancy NBK Retail observed that there is a fine line between cool and creepy, adding that “While this kind of tech is still novel, it is a way to get people into the store, but it might not be for everyone.”    .... " 

From Bloomberg via CACM.

Who Owns Precise Data Descriptions of Things?

Fascinating look at the ownership of metadata: here, precise digital descriptions of things that have been created.  Such descriptions allow us to do analytics we could never have done before, even analyses we have not yet thought of.  Fascinating view of who is doing this, the approaches involved, and then the ownership problem.

Who Owns 3D Scans of Historic Sites?   By Esther Shein 
Communications of the ACM, January 2019, Vol. 62 No. 1, Pages 15-17
10.1145/3290410  ... "

Microrobots for Inspection

More examples of very small-scale solutions for manufacturing, maintenance, and inspection.

Harvard's sticky-footed inspection robot can climb through jet engines
By Michael Irving in NewAtlas

It's tricky to routinely inspect jet engines and other machines without taking them apart, which is a costly and time-consuming process. Now, a team at Harvard's Wyss Institute has developed small, insect-like robots that can climb inside and through machines to inspect them, saving the trouble of pulling them apart if there's nothing that needs fixing. .... " 

Further technical details.

Google Lens Re-Design: For Agriculture?

Yes, very much like what I see.    Gets back to the whole issue of visual similarity search.   About to try this against a long-ago project in the area of agriculture, horticulture, forestry, and related mobile data gathering.    Allow a farmer to quickly gather mobile information from a planting or field about the nature of its plants.   How many plants are in what stage?  What are the weeds?  What is the soil type?   Moisture level?  Estimated harvest dates and production?   Data capture for agricultural AI analysis?  All this has considerable value. I have long been doing some testing with plant-recognition apps.   How much further can this be taken with Google Lens?  Anyone working on this, or have pointers?

Google Lens celebrates its first anniversary with redesign, OCR update
By Georgina Torbet in DigitalTrends

Google Lens has been changing the way that smartphone users make use of the camera on their device for a year now. Using deep machine learning to analyze images collected through a device’s camera, the app can perform tasks like telling you about a book when you take a photo of the cover, identifying shops or locations by looking at a picture of them, or connecting to a wifi network when the camera is pointed at a label showing the login data. ... "

AWS Time-Series Database

This says it is an architecture better adapted for time series and related metadata.  Most of what we did in supply chain analysis used time series prediction.

AWS Launches Time-Series Database  by Alex Woodie in Datanami

And details in AWS.

AWS threw its hat into the nascent ring for time-series databases yesterday with the launch of AWS TimeStream, a managed time-series database that AWS says can handle trillions of events per day.

Time-series databases have emerged as a best-in-class approach for storing and analyzing huge amounts of data generated by users and IoT devices. While relational and NoSQL databases are sometimes used for time-stamped and time-series data – such as clickstream data from Web and mobile devices, log data from IT gear, and data generated by industrial machinery — today’s massive data volumes from the IoT have outstripped the capability of those databases to keep up.

As the high-end time-series use cases piled up, AWS decided it was time to take action and make its entry into the still-specialized field, much as it did with last year’s launch of Neptune, a graph database, which is another specialized database field that’s emerging.   ... "

AWS says its new Timestream database organizes data by time intervals, which reduces the amount of data that needs to be scanned to answer a query. It minimizes storage needs and costs by automatically applying rollups, retention, tiering, and compression of data. AWS is delivering the services (it’s still in a technical preview) as a serverless product, which means there’s no underlying server on AWS to manage.

Timestream features what AWS calls an “adaptive query processing engine,” which it says can adapt to different time scales, like milliseconds, microseconds, and nanoseconds. All told, AWS claims Timestream can deliver 1,000x faster query performance at one-tenth the cost of a relational database. ... "
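The rollup idea mentioned above is easy to illustrate: raw events are bucketed into fixed time intervals and only aggregates are kept, so queries scan far less data. A minimal pure-Python sketch of the concept (this is not the actual Timestream API; the event data is invented):

```python
from collections import defaultdict

# Sketch of a time-series "rollup": bucket raw (timestamp, value) events
# into fixed intervals and keep only the aggregate per bucket.

def rollup(events, interval_seconds):
    """Aggregate events into {bucket_start: (count, mean_value)} rows."""
    buckets = defaultdict(list)
    for ts, value in events:
        buckets[ts - ts % interval_seconds].append(value)
    return {start: (len(vals), sum(vals) / len(vals))
            for start, vals in sorted(buckets.items())}

# Sensor readings at 10-second spacing, rolled up to 30-second buckets.
events = [(0, 1.0), (10, 2.0), (20, 3.0), (30, 4.0), (40, 5.0)]
print(rollup(events, 30))
# -> {0: (3, 2.0), 30: (2, 4.5)}
```

A query over the rolled-up table touches two rows instead of five raw events; at IoT volumes that ratio is what makes interval-organized storage pay off.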

Does AI Make Tech Companies Stronger?

Very good thoughtful piece on the relationship of data, machine learning and its application.  Just the header below.  Read on at the link and subscribe to these useful pieces.  Concur, yes, you typically need lots of data, but it still depends very much on how you apply the results to business decisions.

By Benedict Evans of Andreessen Horowitz 

Does AI make strong tech companies stronger?

Machine learning is probably the most important fundamental trend in technology today. Since the foundation of machine learning is data - lots and lots of data - it’s quite common to hear the concern that companies that already have lots of data will get even stronger. There is some truth to this, but in fairly narrow ways, and meanwhile ML is also seeing much diffusion of capability - there may be as much decentralization as centralization.   ... " 

Friday, December 21, 2018

Toaster on Wheels from Kroger Delivers


A Toaster on Wheels to Deliver Groceries? Self-Driving Tech Tests Practical Uses 

The New York Times   by Cade Metz

Slow uptake of driverless passenger services is spurring the autonomous industry to experiment with offerings like food deliveries from small, self-driving vehicles. One example is an unmanned electric car from the Nuro startup that transports groceries from a local chain to customers in Scottsdale, AZ. Nuro's Dave Ferguson said, "If we can reduce the cost of these deliveries and get them to you faster than you could make the trip yourself, there would be no reason for you to get in the car." More recently, Postmates in San Francisco announced plans to dispatch robotic shopping carts with blinking digital eyes onto sidewalks for similar deliveries. Ferguson said Nuro can increase the margin of error on roads by making the delivery vehicle much smaller than a normal-sized car. ... (Full article requires registration) 

AI and Enterprise Business

But What Does It Mean!? How Enterprise AI is Going to (Further) Revolutionize Business  By Beth Partridge

We keep hearing that artificial intelligence is going to change the face of business forever. I happen to agree – it’s why I started milk+honey – but what’s missing from most of these declarations is a clear explanation of how AI is going to revolutionize business. What, exactly, is going to be different? This piece is the first in a series I’ve written to explain what AI actually looks like in a business context. My hope is that business leaders running companies outside of the data-native technology sector will start to see what’s possible with enterprise-based AI (EAI), and why its impact is indeed revolutionary. From there, future pieces I publish will make sense, and the focus will be on how to get things done.  ... " 

What Sounds Sit Babies Best?

Honda says it's engine noises, but there are other opinions.   Seems it's back to a matter of context, and the solution is still statistical.

Honda’s Sound Sitter lulls fussy children with engine noises

The company says the sounds are similar to what babies hear in the womb.
Mallory Locklear, @mallorylocklear in Engadget ... " 

IoT and Your Supply Chain

Useful background data and surveys; direct recommendations in the continuation at the link.

Is IoT Right for Your Supply Chain?  By Jon Slangerup, SCB Contributor
A technological transformation is taking place in supply chains across the globe today.

As businesses work to build more agile, digitally enabled supply chains, their biggest challenge is determining which tools will drive real business value.

Logistics technology investments are on the rise, with organizations projected to spend nearly $88bn on supply chain tech by 2022. Of those investments, $2.63bn will include so-called “disruptive” technologies, as businesses embrace the latest innovations in an effort to sharpen their competitive edge. Emerging technologies such as artificial intelligence, machine learning and blockchain promise to improve automation and visibility across the supply chain, leaving businesses with a dizzying array of options to consider as they build their supply-chain tech stacks.

The internet of things (IoT) is among the buzziest of these technologies, with billions of connected devices worldwide poised to deliver real-time, highly nuanced insights. But while IoT holds real potential for optimizing supply chain operations, businesses must balance this potential with the costs and risks of implementation. For many organizations, the question remains: Should IoT have a future in my supply chain, and if so, to what extent?

Understanding IoT

The number of devices connected to the internet has proliferated in recent years, with an estimated 8.4 billion in 2017, and growing to a staggering 20.4 billion by 2020, according to Gartner. These smart devices include everything from security cameras to electric meters, with industry-specific applications predicted to represent 3.2 billion devices by 2020.

For supply chain managers, sensor-based logistics offers clear advantages to improve visibility and control, including:

Real-time updates on every shipment. Location tracking is only the beginning, with IoT devices engineered to detect subtle changes in humidity, temperature and other factors. Armed with such information, organizations can carefully monitor perishable shipments or fragile goods, such as electronics, to minimize the chances of damage or spoilage.

More powerful predictive analysis. Connected devices produce a tremendous amount of information that, when harnessed correctly, can enhance business decision-making. By establishing a centralized technology platform that collects and analyzes all relevant supply chain data, businesses can gain insights about carrier performance, average lead times and other key business indicators.

The benefits of IoT could add up to big value for businesses, with Cisco predicting a $1.9tn boost to supply-chain and logistics operations by 2025 through increased revenues as well as reduced costs. Among businesses already implementing IoT in logistics, 74 percent report a related rise in revenue, according to Deloitte.

The truth, however, is that many businesses simply aren’t there yet. One 2018 survey found that 95 percent of business leaders aren’t fully capitalizing on digital technologies in their supply chains, with only 54 percent reporting plans to implement IoT in the future. And in a 2018 study, Gartner noted that the majority of IoT supply chain tech will be proof of concept well into 2021.   ..... " 
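The sensor-based monitoring described above can be sketched in a few lines: check each incoming reading against allowed ranges and raise alerts, as an IoT platform might for perishable or fragile shipments. A minimal Python sketch; the thresholds, field names, and readings are invented for illustration:

```python
# Sketch of sensor-based shipment monitoring: flag readings that leave
# an allowed range, e.g. cold-chain temperature and humidity limits.

# Hypothetical allowed ranges per metric (low, high).
LIMITS = {"temp_c": (2.0, 8.0), "humidity_pct": (30.0, 60.0)}

def check_reading(reading):
    """Return a list of (metric, value) pairs outside allowed limits."""
    alerts = []
    for metric, (low, high) in LIMITS.items():
        value = reading.get(metric)
        if value is not None and not (low <= value <= high):
            alerts.append((metric, value))
    return alerts

reading = {"shipment": "SH-123", "temp_c": 9.5, "humidity_pct": 45.0}
print(check_reading(reading))  # -> [('temp_c', 9.5)]
```

The real value comes when such checks run continuously across thousands of shipments and feed the centralized analytics platform the article describes.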

Case Study: Converting Handwritten Digits to Integers

Nice little but complete toy example of pattern learning, with test data and code.  Real-time solver.  With links to more examples:

Real time numbers recognition (MNIST) on an iPhone with CoreML from A to Z
By Thomas Ebermann

Learn how to build and train a deep learning network to recognize numbers (MNIST), how to convert it to the CoreML format, and then how to deploy it on your iPhone X and make it recognize numbers in real time!

This is the third part of our deep learning on mobile phones series. In part one I have shown you the two main tricks on how to use convolutions and pooling to train deep learning networks. In part two I have shown you how to train existing deep learning networks like resnet50 to detect new objects. In part three I will now show you how to train a deep learning network, how to convert it in the CoreML format and then deploy it on your mobile phone!

TLDR: I will show you how to create your own iPhone app from A-Z that recognizes handwritten numbers: .... "
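As a standard-library-only taste of the pattern-learning idea (the tutorial itself trains a deep network on MNIST and converts it with coremltools), here is a toy nearest-centroid classifier over tiny 3x3 binary "digit" images; the training patterns are invented and far simpler than real MNIST:

```python
# Toy version of the pattern-learning idea behind digit recognition:
# a nearest-centroid classifier over tiny 3x3 binary "digit" images,
# each flattened to a list of 9 pixels.

TRAINING = {
    1: [[0,1,0, 0,1,0, 0,1,0], [0,0,1, 0,0,1, 0,0,1]],
    0: [[1,1,1, 1,0,1, 1,1,1], [1,1,1, 1,0,1, 1,1,0]],
}

def centroids(training):
    """Mean image per class label."""
    return {label: [sum(px) / len(imgs) for px in zip(*imgs)]
            for label, imgs in training.items()}

def classify(image, cents):
    """Label whose centroid is nearest in squared pixel distance."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(image, c))
    return min(cents, key=lambda label: dist(cents[label]))

cents = centroids(TRAINING)
print(classify([0,1,0, 0,1,0, 0,1,1], cents))  # -> 1
```

A convolutional network replaces the hand-averaged centroids with learned feature hierarchies, but the end-to-end shape, train on labeled images, then score new ones against what was learned, is the same.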

Cognixion Enabling Assistant Support with AI

Long time correspondent Andreas Forsland provides information about Cognixion:

Cognixion™: Giving AI-Superpowers to Humans with Disabilities

Andreas Forsland, Founder & CEO  in CIO Review

Although smart and intellectual, world-renowned scientist, Stephen Hawking, would not have contributed to cosmology and theoretical physics without the necessary assistive technology support to communicate with the external world. Leveraging state-of-the-art, but expensive technology, Hawking could unsheathe new dimensions in the field of science just by blinking an eye and twitching a cheek. 

Now, what many don’t realize is that there are almost half a billion people worldwide with speech disabilities and over a billion with accessibility barriers if you include hearing and vision loss. There are likely hundreds or even thousands of people out there just like Stephen Hawking, awaiting the right technology at the right price to unlock their own creativity, curiosity, self-expression, and inclusion. 

Try imagining a world where every differently abled person could seamlessly communicate without being hampered by their disabilities. But, is it possible to contrive advanced technologies of this kind at scale? “The vivid applications of latest technologies such as artificial intelligence (AI), machine learning (ML), and augmented reality (AR) can make it possible,” answers Andreas Forsland, founder and CEO of Cognixion. With a desire to democratize communication, Forsland laid the foundation of Cognixion, an AI-based company. Cognixion brings together the power of AI, ML, and AR to devise affordable products that enrich human communication. “Through our inventive technology, we allow differentially abled people to use their brain waves to control objects around them in the real and digital world. It is like a virtual mouse reading brain signals and taking decisions accordingly.”  .... "

On Google Lens

Good piece on where it is and where it is going.    Impressive so far; it gives you some remarkable results, but not always exactly what you need.   The notion of precision search using images, say from a camera, is quite different from text search.

The era of the camera: Google Lens, one year in
By Aparna Chennapragada      VP, Google Lens and AR

There is, of course, the vacation beach pic, the kid’s winter recital, and the one--or ten--obligatory goofy selfie(s). But there’s also the book that caught my eye at a friend’s place, the screenshot of an insightful tweet and the tracking number on a package.

As our phones go everywhere with us, and storage becomes cheaper, we’re taking more photos of more types of things. We’re of course capturing sunsets and selfies, but people say 10 to 15 percent of the pictures being taken are of practical things like receipts and shopping lists.

To me, using our cameras to help us with our day-to-day activities makes sense at a fundamental human level. We are visual beings—by some estimates, 30 percent of the neurons in the cortex of our brain are for vision. Every waking moment, we rely on our vision to make sense of our surroundings, remember all sorts of information, and explore the world around us.  

The way we use our cameras is not the only thing that’s changing: the tech behind our  cameras is evolving too. As hardware, software, and AI continue to advance, I believe the camera will go well beyond taking photos—it will help you search what you see, browse the world around you, and get things done.

That’s why we started Google Lens last year as a first step in this journey. Last week, we launched a redesigned Lens experience across Android and iOS, and brought it to iOS users via the Google app.

I’ve spent the last decade leading teams that build products which use AI to help people in their daily lives, through Search, Assistant and now Google Lens. I see the camera opening up a whole new set of opportunities for information discovery and assistance. Here are just a few that we’re addressing with Lens: .... " 

Google Lens Can Detect a Billion Objects

I noted that the iOS version of Google Lens has been updated to a more stable condition. Impressive capabilities. I have been experimenting with it to construct reference images for machine learning experiments; images will be captured from sources like broadcasts and other advertising resources. 

The claim is that Google Lens 'can now detect over 1 billion objects'; quote and more from Techspot.

Thursday, December 20, 2018

Powerapps for Custom Business App Delivery

Brought to my attention again; note the connection mentioned to Teams, which I have been testing now for some time. I like the no-code, more direct connection-to-process idea. It also leads to much more that's closer to the edge.

Microsoft PowerApps: Build Custom Business Apps

Transform your business by creating custom business apps with Microsoft PowerApps. Connect data from the cloud and make your own app—no coding ...

One platform, unlimited opportunity

Make Office 365 and Dynamics 365 your own with powerful apps that span productivity and business data. Customize SharePoint Online, use PowerApps with Microsoft Teams, and build apps on Dynamics 365.

Innovate faster: 

Build apps fast with a point-and-click approach to app design. Choose from a large selection of templates or start from a blank canvas. Easily connect your app to data and use Excel-like expressions to easily add logic. Publish your app to the web, iOS, Android, and Windows 10. It’s that easy. ... "
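The "Excel-like expressions" idea is worth making concrete. The sketch below is not the actual PowerApps formula language; it is a hypothetical Python analogue showing how a maker-written formula like `If(Total > 100, "Priority", "Standard")` amounts to declarative logic applied over records, with no procedural code.

```python
def eval_if(condition, when_true, when_false):
    # Toy analogue of an Excel-style If() formula.
    return when_true if condition else when_false

def apply_formula(record):
    # What a maker might express as: If(Total > 100, "Priority", "Standard")
    return eval_if(record["Total"] > 100, "Priority", "Standard")

# Records as they might arrive from a connected data source.
orders = [{"Total": 250}, {"Total": 40}]
labels = [apply_formula(o) for o in orders]
print(labels)  # ['Priority', 'Standard']
```

The appeal of the no-code model is that the platform supplies everything around this formula (data connections, UI, publishing), leaving only the business rule to the author.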

AI Assistance and Small Business

Will assistants provide cheaper access to expertise to small businesses first, because they are a cheaper source of knowledge?

How AI Can Help Small Business Solve Big Problems

Podcast:

Intuit's Ashok Srivastava explains how AI-powered software can help individuals and small businesses.

Small business owners and self-employed individuals typically face financial and operational challenges. Artificial intelligence is giving them a leg up through applications such as smarter accounting software and fintech services like expanded access to capital. At the recent AI Frontiers conference in Silicon Valley, Ashok Srivastava, chief data officer at financial software firm Intuit, the creator of TurboTax, QuickBooks and Mint, spoke to Knowledge@Wharton about how his firm is using AI to “power prosperity for the current and future generations.”     

An edited transcript of the conversation follows.

Knowledge@Wharton: How did you get interested in AI and data sciences?

Ashok Srivastava: It’s an interesting story. In some ways you might say it was predestined. My father was a mathematician and a statistician who worked in many areas of information science, experimental design and so forth. When I was young, he bought me a book on artificial intelligence (AI) and told me that I had to read it during the summer. Being the good son, I took it and I read it in the university library. It made a tremendous impact on me. Ever since I was a child, I was interested in making things do things for themselves. That was just my way of thinking. I remember that I used to think like that even while playing with toys. AI seemed to be the way to do it.

Well, I ended up reading that book and thinking about it, but frankly, I then put it aside and went about my journey in electrical engineering. I got a Ph.D. in electrical engineering and I focused on signal processing and control theory and those types of fields. But towards the end of my Ph.D., I became interested in machine learning. That was the point where I started to work in machine learning and neural networks and bringing ideas from signal processing and time series into it. That got me into the field and I’ve been in it ever since. ... "

Autonomous Cars Reading Passenger Emotions

Driver emotions might be interesting, but what if there aren't any drivers? I have always been interested in how the behavior of passengers in self-driving cars, who will now include the former driver, will change. We followed the Affective Computing group at MIT for some time as members; worth a look.

Kia wants future autonomous cars to be able to read passengers’ emotions   By Stephen Edelstein in DigitalTrends

At CES 2019, Kia will look into the future, to a time when self-driving cars are the norm. When every person is a passenger, companies will need new ways to improve the experience, Kia reasons. The automaker believes artificial-intelligence technology that can recognize human emotions is the way to do that.

Kia is working with the Massachusetts Institute of Technology’s Media Lab’s Affective Computing group to develop a system called READ, short for Real-time Emotion Adaptive Driving. Kia calls the system a world first, claiming it can analyze a person’s emotional state through “bio-signal recognition technology” and artificial intelligence. The system can then alter certain aspects of the cabin to lift the occupant’s mood, according to Kia.  ... " 
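Kia has not published how READ's models work, but the control-loop idea (classify a coarse emotional state from bio-signals, then adjust the cabin) can be sketched with a toy rule-based stand-in. Every signal name, threshold, and action below is invented for illustration.

```python
def cabin_response(heart_rate_bpm, skin_conductance):
    # Toy stand-in for an emotion-adaptive cabin controller. A real system
    # would use trained models over many bio-signals; this shows only the
    # sense -> classify -> adapt loop.
    if heart_rate_bpm > 100 and skin_conductance > 0.7:
        state = "stressed"
        action = {"lighting": "dim blue", "music": "calm playlist"}
    elif heart_rate_bpm < 60:
        state = "drowsy"
        action = {"lighting": "bright", "music": "upbeat playlist"}
    else:
        state = "neutral"
        action = {"lighting": "unchanged", "music": "unchanged"}
    return state, action

print(cabin_response(110, 0.9))  # stressed passenger -> calming cabin
print(cabin_response(55, 0.2))   # drowsy passenger -> stimulating cabin
```

The interesting design question, which the article hints at, is what the feedback loop optimizes for once there is no driver whose alertness must be maintained.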

Humor That Works

Former Colleague Andrew Tarvin,  of Humor that Works,  is giving a webinar in January for the P&G Alumni network.    Thought I would give him a shout out .... check out his space.  We need more humor.

https://www.humorthatworks.com/

Andrew Tarvin is the world’s first Humor Engineer teaching people how to get better results while having more fun. He has worked with thousands of people at 250+ organizations, including P&G, GE, and Microsoft.

Today’s world is constantly driving towards greater efficiency. The only problem is that you can’t be efficient with humans because they have “emotions” and “feelings.” And those feelings get in the way of greater productivity. The key is to focus on effectiveness.

This webinar will teach you how to accomplish your goals and not just complete a task. Combining elements of leadership, emotional intelligence, and software engineering, attendees will walk away knowing how to work effectively with the most challenging resource there is: humans.

Presenter Bio:
Andrew Tarvin is the world’s first Humor Engineer teaching people how to get better results while having more fun. He has worked with thousands of people at 250+ organizations, including P&G, GE, and Microsoft.

Prior to starting his company, Humor That Works, Andrew was an IT project manager for Procter & Gamble and the self-proclaimed Corporate Humorist of the company. During his time at P&G, he worked for R&D and engineering in Cincinnati, for P&G Prestige in NYC, and wrote a terrible rap jingle for Pringles.

He is a best-selling author, has been featured in The Wall Street Journal, Forbes, and TEDx, and has delivered programs in 50 states, 20 countries, and 4 continents. He loves the color orange and is obsessed with chocolate. ..... 

Thanks,
P&G Alumni Network

Companion Robotics in Japan

A space we followed as it was integrated into the smart home.

Japan’s latest companion robot is the fuzzy, expressive Lovot
It can beg for attention and follow you around.

Mallory Locklear, @mallorylocklear  in Engadget... "

Technology for the Deaf

And how people integrate technology into accessibility.

Technology for the Deaf    By Keith Kirkpatrick

Communications of the ACM, December 2018, Vol. 61 No. 12, Pages 16-18
10.1145/3283224

A nurse asks a patient to describe her symptoms. A fast-food worker greets a customer and asks for his order. A tourist asks a police officer for directions to a local point of interest.

For those with all of their physical faculties intact, each of these scenarios can be viewed as a routine occurrence of everyday life, as they are able to easily and efficiently interact without any assistance. However, each of these interactions is significantly more difficult when a person is deaf and must rely on the use of sign language to communicate.

In a perfect world, a person who is well-versed in communicating via sign language would be available at all times and in all places to communicate with a deaf person, particularly in settings where there is a safety, convenience, or legal imperative to ensure real-time, accurate communication. However, it is exceptionally challenging, from both a logistical and a cost perspective, to have a signer available at all times and in all places.

That's why, in many cases, sign language interpreting services are provided by Video Remote Interpreting, which uses a live interpreter connected to the person needing sign language services via a videoconferencing link. Institutions such as hospitals, clinics, and courts often prefer to use these services, because they can save money (interpreters not only bill for the actual translation service, but for the time and expenses incurred traveling to and from a job).

However, video interpreters sometimes do not match the accuracy of live interpreters, says Howard Rosenblum, CEO of the National Association of the Deaf (NAD), the self-described "premier civil rights organization of, by, and for deaf and hard of hearing individuals in the United States of America."

"This technology has failed too often to provide effective communications, and the stakes are higher in hospital and court settings," Rosenblum says, noting that "for in-person communications, sometimes technology is more of an impediment than a solution." Indeed, technical issues such as slow or intermittent network bandwidth often make the interpreting experience choppy, resulting in confusion or misunderstanding between the interpreter and the deaf person.

That's why researchers have been seeking ways in which a more effective technological solution or tool might handle the conversion of sign language to speech, which would be useful for a deaf person to communicate with a person who does not understand sign language, either via an audio solution or a visual, text-based solution. Similarly, there is a desire to allow real-time, audio-based speech or text to be delivered to a person who is deaf, often through sign language, via a portable device that can be carried and used at any time. .... "

Wednesday, December 19, 2018

VW Uses D-Wave for Quantum Chemistry

VW Solves Quantum Chemistry Problems on a D-Wave Machine 

IEEE Spectrum   By Mark Anderson

Scientists say their work offers a proof of principle for using D-Wave’s quantum computers to tackle even tougher chemistry problems  .... 

 " ... D-Wave computers are known as “quantum annealers,” running complex circuits using the machine’s 128,000 superconducting Josephson junctions. The integrated superconducting circuit, cooled down to thousandths of a degree above absolute zero, contains 2,048 quantum bits (qubits) and 6,016 interconnections (a.k.a. couplers) between qubits. It is called an annealer because the circuit begins in one state and then slowly transitions through to its final state, with its individual qubits representing distillations of an answer. ... "

Researchers at Volkswagen in Germany and the U.S. have used a D-Wave 2000Q quantum computer to solve rudimentary quantum chemistry problems. The researchers ran D-Wave computations that identified the ground-state energies of molecular hydrogen and lithium hydride. Although both molecules are well known and well studied, the Volkswagen researchers established a new computational route to exploring chemistry in the quantum realm. The researchers also enumerated a list of quantum chemistry simulation goals that sufficiently robust quantum computation should address, such as designing next-generation batteries, optimizing solar cells via detailed study of photosynthesis, and faithfully simulating complex molecules without resorting to the approximations that conventional computers use to make such simulations tractable.  .... " 
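The annealing process described above (start the circuit in one state and slowly transition to a final low-energy state) has a classical software analogue: simulated annealing over a QUBO (quadratic unconstrained binary optimization) objective, the same problem form D-Wave machines accept. The sketch below is a toy, not VW's chemistry formulation; the tiny QUBO matrix and schedule are invented for illustration.

```python
import math
import random

def qubo_energy(x, Q):
    # Energy of binary vector x under QUBO matrix Q: x^T Q x.
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def anneal(Q, steps=5000, t_start=2.0, seed=0):
    # Classical simulated annealing: start hot, cool down, and let the
    # state settle into a low-energy configuration -- a software analogue
    # of the hardware annealing transition described above.
    rng = random.Random(seed)
    n = len(Q)
    x = [rng.randint(0, 1) for _ in range(n)]
    energy = qubo_energy(x, Q)
    for step in range(steps):
        t = t_start * (1 - step / steps) + 1e-3  # linear cooling schedule
        i = rng.randrange(n)
        x[i] ^= 1  # propose a single bit flip
        new_energy = qubo_energy(x, Q)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if new_energy <= energy or rng.random() < math.exp(-(new_energy - energy) / t):
            energy = new_energy
        else:
            x[i] ^= 1  # revert the flip
    return x, energy

# Tiny QUBO whose unique minimum is x = [1, 1, 0] with energy -4.
Q = [[-1, -1, 0],
     [-1, -1, 0],
     [0,   0, 2]]
solution, energy = anneal(Q)
print(solution, energy)
```

Mapping a molecule's electronic-structure problem onto such a binary objective is the hard part of the VW work; once expressed as a QUBO, the annealer (hardware or simulated) searches for the ground state.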

Risk of Facial Recognition

It's important to remember that the latest AI methods are not perfect; though much better than in the past, they can still produce erroneous results. This can be for reasons of context in training, use, or environment, so there is risk involved. I always recommend at least a cursory risk analysis, and much more when the application shows higher indications of wrong predictions, a sensitive domain, or a changing environment or regulation.

 Almost Everyone Involved in Facial Recognition Sees Problems   in Bloomberg    By Dina Bass

Facial recognition software is almost universally acknowledged by the scientific, technology, and legislative communities as flawed, with bias, mass surveillance, and other hazards making a strong case for regulation. In response, the Algorithmic Justice League and Georgetown University Law Center's Center on Privacy & Technology have introduced the Safe Face Pledge, urging companies not to provide facial artificial intelligence (AI) for autonomous weapons, and not to sell facial recognition systems to law enforcement agencies unless explicit laws regulating their use are considered and approved. Facial recognition for surveillance, policing, and immigration is under scrutiny because scientists have demonstrated that the technology lacks sufficient accuracy for critical decisions, and performs worse on darker-skinned people. The Safe Face Pledge asks companies to "show value for human life, dignity, and rights, address harmful bias, facilitate transparency," by incorporating such commitments into business practices. The University of Washington's Ryan Calo said broad regulation and government oversight could complement pressure from workers and customers for companies to practice ethical AI deployment.  ... " 
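The accuracy-disparity finding cited above is the kind of thing a basic bias audit surfaces: compute error rates separately per demographic group over a labeled evaluation set and compare. The sketch below uses invented toy records and hypothetical group labels, purely to show the measurement.

```python
from collections import defaultdict

def error_rates_by_group(results):
    # Per-group error rate from (group, predicted, actual) records.
    # A gap between groups is the signal that bias audits look for.
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in results:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy evaluation records: (group, predicted match, true match).
results = [
    ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, True), ("group_a", True, True),
    ("group_b", True, False), ("group_b", False, False),
    ("group_b", True, True), ("group_b", False, True),
]
rates = error_rates_by_group(results)
print(rates)  # {'group_a': 0.0, 'group_b': 0.5}
```

A real audit would also split errors into false matches and false non-matches, since the two carry very different harms in policing and surveillance contexts.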

NVIDIA Announces first Personal AI Supercomputer

More at the link with registration: 

This DGX Station technical white paper provides an overview of the system technologies, DGX software stack and Deep Learning frameworks.

Read this paper to understand how NVIDIA DGX Station will allow you to experiment at your desk and extend that same deep learning software across DGX systems and the cloud.  ... "

Kroger Goes Live with Self-Driving Delivery

Seeing these running live, and commonly, will be a view of a new world.

Kroger goes live with self-driving delivery vehicles
Supermarket giant sells You Technology digital coupon unit
By Russell Redman

In a pair of technology announcements today, The Kroger Co. said it has started using unmanned vehicles for online grocery deliveries and formed a new relationship for managing digital offers.

At a Fry’s Food Store in Scottsdale, Ariz., Kroger launched what it calls the first-ever autonomous vehicle delivery service available to the general public. The service follows a successful pilot with Nuro, a Mountain View, Calif.-based robotics and artificial intelligence specialist, that began in August.

Related: Kroger to pilot unmanned grocery delivery vehicles

The test with Nuro in Scottsdale, which made almost a thousand grocery deliveries, used a fleet of self-driving Prius cars accompanied by vehicle operators, Kroger said. With the official launch of the service, the delivery fleet is being expanded to include the Nuro R1 custom unmanned vehicle.

With no driver or passengers, the R1 travels on public roads and only transports goods. The vehicle has been in development since 2016.  ... "

The Goal of Automating AI

Good, lengthy, and somewhat technical piece from O'Reilly; worth reading. Quite a challenge out there, but this will happen, and relatively soon. It is also why I think teaching everyone to code will not be a useful thing to do: coding will be specialist work for a few, and much of it will be automated. The key will be to have people work with data sources and apply the results.

Deep Automation in Machine Learning
We need to do more than automate model building with autoML; we need to automate tasks at every stage of the data pipeline.     By Ben Lorica, Mike Loukides in O'Reilly

In a previous post, we talked about applications of machine learning (ML) to software development, which included a tour through sample tools in data science and for managing data infrastructure. Since that time, Andrej Karpathy has made some more predictions about the fate of software development: he envisions a Software 2.0, in which the nature of software development has fundamentally changed. Humans no longer implement code that solves business problems; instead, they define desired behaviors and train algorithms to solve their problems. As he writes, “a neural network is a better piece of code than anything you or I can come up with in a large fraction of valuable verticals.” We won’t be writing code to optimize scheduling in a manufacturing plant; we’ll be training ML algorithms to find optimum performance based on historical data.
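Karpathy's "define desired behaviors and train algorithms" idea can be shown in miniature. In the hypothetical sketch below, instead of hand-coding a cutoff value (Software 1.0), the decision boundary is recovered from labeled examples; the fitting rule (midpoint between the closest opposing examples) is a deliberately simple stand-in for real training.

```python
def fit_threshold(examples):
    # Learn a 1-D decision threshold from (value, label) pairs.
    # Assumes the data are separable: all positives above all negatives.
    positives = [v for v, label in examples if label]
    negatives = [v for v, label in examples if not label]
    # Place the boundary midway between the closest opposing examples.
    return (max(negatives) + min(positives)) / 2

def predict(threshold, value):
    return value >= threshold

# Desired behavior, specified as data rather than code.
training = [(2, False), (4, False), (7, True), (9, True)]
t = fit_threshold(training)
print(t)              # 5.5
print(predict(t, 8))  # True
print(predict(t, 3))  # False
```

The Software 2.0 tooling gap the article describes is everything around this step at scale: versioning the training data, testing the learned behavior, and tracing why the "program" (here, the number 5.5) came out the way it did.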

If humans are no longer needed to write enterprise applications, what do we do? Humans are still needed to write software, but that software is of a different type. Developers of Software 1.0 have a large body of tools to choose from: IDEs, CI/CD tools, automated testing tools, and so on. The tools for Software 2.0 are only starting to exist; one big task over the next two years is developing the IDEs for machine learning, plus other tools for data management, pipeline management, data cleaning, data provenance, and data lineage. ... "