
Thursday, January 31, 2019

Data Law Compliance

More global data regulation and related compliance.  An example of negative value in data as an asset.   In Business Insider:

Russia to check P&G's compliance with local data laws
 Reuters Jan. 28, 2019, 8:26 AM
  
MOSCOW (Reuters) - Russia's communications watchdog is carrying out checks to establish whether U.S. consumer goods group Procter & Gamble and Burger King are complying with local data laws, a spokesman for the watchdog said on Monday. ... " 

InfoQ interviews Google on Quantum Neural Nets

Further examination of this, based on two papers from Google Research, provides a less technical, more critical view.

Exploring Quantum Neural Nets   by  Sergio De Simone in InfoQ

An important area of research in quantum computing concerns the application of quantum computers to training of quantum neural networks. The Google AI Quantum team recently published two papers that contribute to the exploration of the relationship between quantum computers and machine learning. 

In the first of the two papers, "Classification with Quantum Neural Networks on Near Term Processors", Google researchers propose a model of neural networks that fits the limitation of current quantum processors, specifically the high levels of quantum noise and the key role of error correction.

The second paper, "Barren Plateaus in Quantum Neural Network Training Landscapes", explores some peculiarities of quantum geometry that seem to prevent a major issue with classical neural networks, known as the problem of vanishing or exploding gradients.
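The vanishing-gradient problem the second paper refers to can be illustrated classically in a few lines: in a deep chain of sigmoid units, the gradient with respect to the input is a product of per-layer derivatives, each at most 0.25, so it shrinks exponentially with depth. A minimal illustration (ordinary Python, nothing quantum):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def chain_gradient(depth, x=0.5, w=1.0):
    """Gradient of a depth-layer chain y = s(w*s(w*...s(w*x))) with respect
    to the input, accumulated via the chain rule."""
    grad = 1.0
    a = x
    for _ in range(depth):
        z = w * a
        a = sigmoid(z)
        grad *= w * a * (1.0 - a)  # d sigmoid(z)/dz = s(z)(1 - s(z)) <= 0.25
    return grad

for d in (1, 5, 20, 50):
    print(d, chain_gradient(d))  # gradient collapses as depth grows
```

The quantum "barren plateau" result is about a different geometry, but this is the classical problem it is being contrasted with.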

InfoQ took the chance to speak with Google senior research scientist Jarrod McClean to better understand the importance of these results and help frame them in a larger context.  ... "

Provability and Machine Learning

Interesting point, though the point being made is rarely a constraint in practical mathematics.   Just because something cannot be rigorously proved does not mean it is not practically useful.    Results used in AI methods are statistical, not exact logic.   Still, what is shown does claim to link the related limitations of logic and machine learning.  Now how does that limit ML in practice?

Unprovability comes to machine learning  in Nature

Scenarios have been discovered in which it is impossible to prove whether or not a machine-learning algorithm could solve a particular problem. This finding might have implications for both established and future learning algorithms.

During the twentieth century, discoveries in mathematical logic revolutionized our understanding of the very foundations of mathematics. In 1931, the logician Kurt Gödel showed that, in any system of axioms that is expressive enough to model arithmetic, some true statements will be unprovable 1. And in the following decades, it was demonstrated that the continuum hypothesis — which states that no set of distinct objects has a size larger than that of the integers but smaller than that of the real numbers — can be neither proved nor refuted using the standard axioms of mathematics2–4. Writing in Nature Machine Intelligence, Ben-David et al.5 show that the field of machine learning, although seemingly distant from mathematical logic, shares this limitation. They identify a machine-learning problem whose fate depends on the continuum hypothesis, leaving its resolution forever beyond reach. ..... " 

Paper: https://www.nature.com/articles/s42256-018-0002-3

Computational Thinking

Computational thinking is 'thinking like a computer scientist', that is, thinking logically and effectively, applied to important human goals.   I agree it's a noble definition.  It's one effective way you can prove you know something, and have a means to change your knowledge as its context changes.  But other future approaches, not best called 'computational thinking', may emerge.  A good read that makes you think:

Do We Really Need Computational Thinking?  By Enrico Nardelli 
Communications of the ACM, February 2019, Vol. 62 No. 2, Pages 32-35
10.1145/3231587

I confess upfront, the title of this Viewpoint is meant to attract readers' attention. As a computer scientist, I am convinced we need the concept of computational thinking, interpreted as "being able to think like a computer scientist and being able to apply this competence to every field of human endeavor."

The focus of this Viewpoint is to discuss to what extent we need the expression "computational thinking" (CT). The term was already known through the work of Seymour Papert,13 many computational scientists,5 and a recent paper15 clarifies both its historical development and intellectual roots. After the widely cited Communications Viewpoint by Jeannette Wing,19 and thanks to her role at NSF,6 an extensive discussion opened with hundreds of subsequent papers dissecting the expression. There is not yet a commonly agreed definition of CT—what I consider in this Viewpoint is whether we really need a definition and for which goal.

To anticipate the conclusion, we probably need the expression as an instrument, as a shorthand reference to a well-structured concept, but it might be dangerous to insist too much on it and to try to precisely characterize it. It should serve just as a brief explanation of why computer science (or informatics, or computing: I will use these terms interchangeably) is a novel and independent scientific subject and to argue for the need of teaching informatics in schools.

Wing discussed CT to argue it is important every student is taught "how a computer scientist thinks,"19 which I interpret to mean it is important to teach computer science to every student. From this perspective, what is important is stressing the educational value of informatics for all students—Wing was in line with what other well-known scientists had said earlier; I mention several here.

Donald Knuth, well known by mathematicians and computer scientists, in 1974 wrote: "Actually, a person does not really understand something until he can teach it to a computer."10 George Forsythe, a former ACM president and one of the founding fathers of computer science education in academia, in 1968 wrote: "The most valuable acquisition in a scientific or technical education are the general-purpose mental tools which remain serviceable for a lifetime. I rate natural language and mathematics as the most important of these tools, and computer science as a third."9 Even if both citations are not relative to a school education context, in my view they clearly support the importance of teaching computer science in schools to all students.   ... " 

Google Research in 2018

A look back at Google emergent-tech research in 2018.     Mostly nontechnical descriptions and discussion of implications, with links to more technical publications.

Looking Back at Google’s Research Efforts in 2018

Posted by Jeff Dean, Senior Fellow and Google AI Lead, on behalf of the entire Google Research Community

2018 was an exciting year for Google's research teams, with our work advancing technology in many ways, including fundamental computer science research results and publications, the application of our research to emerging areas new to Google (such as healthcare and robotics), open source software contributions and strong collaborations with Google product teams, all aimed at providing useful tools and services. Below, we highlight just some of our efforts from 2018, and we look forward to what will come in the new year. For a more comprehensive look, please see our publications in 2018. ... " 

Wharton Research on Wisdom in the Crowd

Wharton research indicates there are better ways to crowdsource ideas.   Would like to see more tests of this.

Is There Really Wisdom in the Crowd?  Podcast and transcript:

Wharton's John McCoy discusses his research on a better way to crowdsource ideas.
  
As predictive analytics become more sophisticated, companies are increasingly relying on aggregated data to help them with everything from marketing to new product lines. But how much should firms trust the wisdom of the crowd? In his latest research, Wharton marketing professor John McCoy proposes a new solution for crowdsourcing that can help create better, more accurate results: Instead of going with the most popular answer to a question, choose the answer that is “surprisingly popular.” His paper, which was jointly written with Drazen Prelec and H. Sebastian Seung, is titled, “A Solution to the Single-question Crowd Wisdom Problem” and was published in Nature. He spoke to Knowledge@Wharton about why there’s plenty of wisdom in the crowd for those willing to ask the right questions.

An edited transcript of the conversation follows:

Knowledge@Wharton: The power of the crowd to make predictions or recommendations has gained wide acknowledgement in the past couple of years. Can you talk about how this has developed, where it’s being used and what some of the limitations are?

John McCoy: Many companies are using internal prediction markets to try and get a handle on good ways of drawing on the wisdom of all their employees, for instance. Government agencies are using the crowd to make good economic or geopolitical forecasts. I’m not sure that there are limitations to using the wisdom of the crowd, per se. I think the limitations are in some of the current methods for extracting wisdom from the crowd. For instance, many of the current methods, before we did our work, assume that often the majority is correct or assume that it’s easy to tell almost immediately who in the crowd is an expert. In fact, that’s often not the case.  ..... " 
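The "surprisingly popular" rule from the McCoy, Prelec, and Seung paper is simple to state: ask each respondent both for their own answer and for a prediction of how popular each answer will be, then select the answer whose actual frequency most exceeds its predicted frequency. A sketch, with invented data for illustration:

```python
from collections import Counter

def surprisingly_popular(answers, predicted_shares):
    """answers: each respondent's own answer.
    predicted_shares: per respondent, a dict mapping each answer to the
    fraction of the crowd they predict will give it.
    Returns the answer whose actual share most exceeds its predicted share."""
    n = len(answers)
    actual = {a: c / n for a, c in Counter(answers).items()}
    # Average the predicted share of each answer across respondents.
    avg_pred = {}
    for pred in predicted_shares:
        for a, share in pred.items():
            avg_pred[a] = avg_pred.get(a, 0.0) + share / n
    return max(actual, key=lambda a: actual[a] - avg_pred.get(a, 0.0))

# Classic example: "Is Philadelphia the capital of Pennsylvania?" Most say
# yes, but even many yes-voters predict that yes will dominate, so the
# correct answer, no, is surprisingly popular.
answers = ["yes"] * 6 + ["no"] * 4
preds = [{"yes": 0.8, "no": 0.2}] * 10
print(surprisingly_popular(answers, preds))  # → no
```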

Wednesday, January 30, 2019

Experiential Retail

Been a student and customer of Jungle Jim's outside Cincinnati for a long time.   Semi-experiential it is.    And the idea can certainly be overdone.    Make sure the store is well stocked with products and relevant ideas.     Even an occasional surprise.

Is experiential retail overhyped and misunderstood?
by Doug Garnett  With expert discussion.

Through a special arrangement, presented here for discussion is a summary of a current article from the Doug Garnett’s blog.  ... 

In New York City, an “experiential” restaurant enables customers to catch trout or salmon in a countertop “river” then have it prepared for dinner. The food had better be superb to support the extraordinary prices needed to maintain catchable, edible live fish.

Still, the question of experience in the store is critical because a group of “experiencer” consultants recommend turning stores into bad amusement parks. ...  "

Leveraging AI and Machine Learning

Had not heard of this concept before: thinking in 'next practices'.   Useful examples.

Three 'Next Practices' That Leverage AI And Machine Learning
 Forbes Technology Council,   By Ashok Santhanam

 If you are a CIO, VP of IT operations or some other type of IT leader, you are under constant pressure to ensure that IT systems operate at maximum efficiency. Systems must meet increasing service-level expectations in terms of performance, availability and security. In fact, you're probably already anticipating that this challenge is only going to get bigger. After all, you must deal with skills shortages and are tasked with supporting a growing number of IT initiatives such as cloud migrations, digital transformation, M&A integrations and other strategic projects. To address these challenges, you need to think about leveraging "next practices," not best practices. Let me explain.

The pace of change in today’s increasingly digitalized business environment means that what has worked in the past (as codified by "best practices") increasingly will not work moving forward. This has given rise to the concept of next practices. Next practices do not focus on improving existing processes since existing processes are becoming increasingly obsolete due to transformative technologies. Instead, they deal with the best ways to rethink your processes for the future, leveraging transformative technologies like artificial intelligence (AI) and machine learning (ML) to make your processes smarter.

Let me give you three interconnected examples of how AI-powered next practices can be applied to the system incident detection and resolution process. If properly applied, they will address limitations in managing your current setup while transforming your process in a way that allows you to both meet service-level targets today and create scalability for tomorrow. ... " 

Data Monetization Strategy

Led me to look at this concept again; we spent much time on the problem.

Framing a winning data monetization strategy   By Sid Mohasseb in   kpmg.com

At many forward looking organizations across the globe, an immediate and evolving new front of competitiveness, data monetization, is being added to the C-Suite and Board agendas. Highly competitive and innovative companies constantly seek new competitive grounds and explore new paths to success. The recent advent of the technological advancements in Big Data and analytics has paved the road to a new era of competitiveness; an era where data is viewed strategically and as a living and evolving asset, an asset that can unleash new opportunities for monetization. The use of data and analytics to enhance decisions is not new or novel. The new frontier is exploiting and monetizing data in a coordinated and organization-wide manner to strategically advance our competitive advantage; a strategic goal that demands a strategic approach and framework. This document offers a dynamic framework for developing a winning data monetization strategy – a framework that considers the varying data sources, offers a process for recognition of value, presents applicable business model options, explores commercialization alternatives and points to the various challenges to be addressed. ... " 

Retraining to Recognize New Categories

Good to see how such technology is being integrated into assistant and smart home tech.   The challenge of extending classification is also expressed.

Updating Neural Networks to Recognize New Categories, with Minimal Retraining  By Alessandro Moschitti  Dev Blog,  Amazon

Many of today’s most popular AI systems are, at their core, classifiers. They classify inputs into different categories: this image is a picture of a dog, not a cat; this audio signal is an instance of the word “Boston”, not the word “Seattle”; this sentence is a request to play a video, not a song.

But what happens if you need to add a new class to your classifier — if, say, someone releases a new type of automated household appliance that your smart-home system needs to be able to control?

The traditional approach to updating a classifier is to acquire a lot of training data for the new class, add it to all the data used to train the classifier initially,  and train a new classifier on the combined data set. With today’s commercial AI systems, many of which were trained on millions of examples, this is a laborious process.

This week, at the 33rd conference of the Association for the Advancement of Artificial Intelligence (AAAI), my colleague Lingzhen Chen from the University of Trento and I are presenting a paper on techniques for updating a classifier using only training data for the new class. ... "

Technical paper:
https://s3.us-east-2.amazonaws.com/alexapapers/NER-adaptation-AAAI2019-Moschitti.pdf
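A minimal sketch of the general idea, though not the paper's specific method: keep the trained classifier's existing weights frozen, and fit only a new weight row for the added class using the new class's training data. All data and dimensions here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Pretend W holds trained weights for two existing classes over 4 features.
W = rng.normal(size=(2, 4))

# Training data for the new class only, plus a background sample standing in
# for the old classes (both invented for this sketch).
X_new = rng.normal(loc=2.0, size=(50, 4))
X_old = rng.normal(loc=-2.0, size=(50, 4))

# Fit just the new class's weight vector with logistic-regression steps,
# leaving W untouched.
w_new = np.zeros(4)
X = np.vstack([X_new, X_old])
y = np.concatenate([np.ones(50), np.zeros(50)])
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ w_new))
    w_new += 0.1 * X.T @ (y - p) / len(y)  # gradient ascent on log-likelihood

W_extended = np.vstack([W, w_new])  # classifier now has 3 output classes
probs = softmax(X_new @ W_extended.T)
print((probs.argmax(axis=1) == 2).mean())  # share of new-class examples won
```

Only the new row is learned, so the update cost scales with the new class's data, not the millions of original examples.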

Seeking Cashierless

Continues to charge ahead, and there are still some weaknesses in the tech, but it will soon be a consumer expectation.

Retailers seek partners to match Amazon Go tech

Tech companies are increasingly being asked by retailers to match the no-cashier systems of the Amazon Go store. One contender is the NanoStore, a 160-square-foot box that blends a convenience store with a vending machine.

Amazon Go, One Year Old, Has Attracted a Host of Cashierless Imitators
Startups and established giants alike are working to replicate elements of Go or come up with other ways to streamline the shopping experience.

By Matt Day and Spencer Soper in Bloomberg

Mighty AI spent much of its first five years building software that helps self-driving cars recognize real-world objects. The Seattle startup went so far as to open a Detroit office to cozy up to the auto industry.

Then last February, Mighty AI’s sales team received an unusual request: Instead of identifying pedestrians and cars, could they track items plucked from store shelves by shoppers? A few months later, Mighty AI signed a deal to do just that, joining the race to help brick-and-mortar retailers keep pace with Amazon.com Inc.  ... " 

Tuesday, January 29, 2019

Microsoft Teams vs Slack

Have been using Teams since its release, and also Slack for projects and consulting.  And have been using things like Google Drive and OneNote for similar purposes of communication, information sharing, and development.    The following article takes a comparative look:

Microsoft Teams: Its features, how it compares to Slack and other rivals

Microsoft's entry in the crowded group messaging market continues to evolve. Here's how Microsoft Teams fits into the booming collaboration software market and what you need to know to evaluate it against its rivals. .... 
           
By Matthew Finnegan in Computerworld ...

IBM Withdraws Standalone Assistant Solutions from Market

As a Dev tester of the system, received the below.  Implications?

Dear valued client,

We want to share with you some important upcoming changes to the future direction of our Watson Assistant Solutions offering.

On May 14, 2019 , IBM will withdraw Watson Assistant Solutions from the market. This includes Watson Assistant for Connected Vehicles, Watson Assistant for Connected Spaces, and Watson Assistant for Connected Platforms. IBM has already stopped accepting new orders for Watson Assistant Solutions, and we will not be adding any new features to the offerings.  Your service will not be interrupted, and you will continue to receive support in accordance with your service agreement.

We're taking the bold move to focus our future efforts to embed Watson Assistant into our strategic enterprise offerings - Maximo, TRIRIGA, engineering, and IoT Platform.  As a result, we will no longer be supporting a stand-alone platform for enterprise AI assistants. We believe this enables us to bring the greatest value to our industrial clients.  

We are committed to working with all our clients and partners who want to continue with a stand-alone conversational assistant product to determine the right solutions for their continuing needs.  Customers and business partners will be able to work with us to handle their subscriptions and contracts once the end of service announcement is published to provide a smooth transition.

Thank you for your interest in Watson Assistant Solutions. 

IBM Watson Assistant Solutions, Offering Management

Intelligent Quantum Machines?

We studied the perceptron neural model mentioned in this piece, an early look at how biological neurons might be modeled in computers.  Now an indication that quantum computers might be a means to build them efficiently.   The excerpt below from MIT's Technology Review makes the early case.  The article is somewhat technical, but readable, with a link to the technical paper.

Machine learning, meet quantum computing  in Technology Review

An Artificial Neuron Implemented on an Actual Quantum Processor?

Back in 1958, in the earliest days of the computing revolution, the US Office of Naval Research organized a press conference to unveil a device invented by a psychologist named Frank Rosenblatt at the Cornell Aeronautical Laboratory.  Rosenblatt called his device a perceptron, and the New York Times reported that it was “the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself, and be conscious of its existence.”

A perceptron is a single-layer neural network. The deep-learning networks that have generated so much interest in recent years are direct descendants. Although Rosenblatt’s device never achieved its overhyped potential, there is great hope that one of its descendants might.
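Rosenblatt's perceptron is easy to state in code: a weighted sum followed by a threshold, with weights nudged toward each misclassified example. A minimal classical version for comparison (the quantum version in the paper encodes the same input-weight inner product in qubit states):

```python
import numpy as np

def train_perceptron(X, y, epochs=20):
    """Classic perceptron rule: labels y in {-1, +1}, bias folded in as w[-1]."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a constant bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:  # misclassified (or on the boundary)
                w += yi * xi        # nudge weights toward the example
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

# Linearly separable toy data: the AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w = train_perceptron(X, y)
print(predict(w, X))  # matches y, since AND is linearly separable
```

For separable data the perceptron convergence theorem guarantees this loop terminates with a perfect separator; its famous limitation is that non-separable problems (like XOR) defeat a single layer.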

Today, there is another information processing revolution in its infancy: quantum computing. And that raises an interesting question: is it possible to implement a perceptron on a quantum computer, and if so, how powerful can it be?

Today we get an answer of sorts thanks to the work of Francesco Tacchino and colleagues at the University of Pavia in Italy. These guys have built the world’s first perceptron implemented on a quantum computer and then put it through its paces on some simple image processing tasks. ... " 

A quantum version of the building block behind neural networks could be exponentially more powerful.   by Emerging Technology from the arXiv  November 16, 2018  Ref: arxiv.org/abs/1811.02266 : 

Tari: Imagining the Future of Digital Assets

Been looking at the management of digital assets; the podcast below is a good introduction.  The approach is still in development. I like the idea of using rules for assets and the ability to trust in how they will be enforced.  Back to smart contracts?  Event ticket sales, resales, and the management of data about your purchasing behavior are often used as examples.  Worth a close look.

Digital Assets-focused blockchain protocol

Discussed in the FLOSS Weekly podcast:  https://twit.tv/shows/floss-weekly/episodes/514 

Records live every Wednesday at 12:30pm Eastern / 9:30am Pacific / 17:30 UTC.
Guests: Riccardo Spagni, Naveen Jain

Tari is a new open source, decentralized protocol that reimagines the future of digital assets.
Built for Builders.

With Tari, you are in control; the Tari platform allows anyone to program complex rules for digital assets and trust that they will be enforced. Tari enables the management, use and transfer of nearly any kind of digital asset you can imagine — from tickets to loyalty points to virtual goods and more — and offers unparalleled monetization opportunities for creators.  

Highly Useful
By leveraging scalability technologies like payment channels, transaction cut-through, and more, the Tari network will support nearly instantaneous peer-to-peer transfer of digital assets and many thousands of transactions per second.

Security Powered by Monero
Monero is one of the most secure and decentralized cryptocurrencies in the world. Tari is being architected as a merge-mined sidechain of Monero and will inherit its security model.
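What "program complex rules for digital assets" might look like in practice; this is an invented sketch, not Tari's actual API: the asset carries its rules, and every transfer is validated against them before any state changes.

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    """Hypothetical digital asset whose transfer rules travel with it."""
    owner: str
    face_value: int
    max_resale_markup: float = 0.10   # rule: resale capped at +10% over face
    transfers: list = field(default_factory=list)

    def transfer(self, new_owner: str, price: int):
        # Rules are checked at transfer time. A protocol like Tari aims to
        # make this enforcement trustless rather than relying on one server.
        if price > self.face_value * (1 + self.max_resale_markup):
            raise ValueError("resale price exceeds allowed markup")
        self.transfers.append((self.owner, new_owner, price))
        self.owner = new_owner

t = Ticket(owner="alice", face_value=100)
t.transfer("bob", 105)        # allowed: within the 10% cap
try:
    t.transfer("carol", 200)  # rejected: scalping above the cap
except ValueError as e:
    print(e)
print(t.owner)  # → bob
```

The point of a decentralized protocol is that the cap holds even when the seller controls the server; here it is only as trustworthy as the Python process running it.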

Converting Wifi to Electricity

Converting Wi-Fi signals to electricity with new 2-D materials
Device made from flexible, inexpensive materials could power large-area electronics, wearables, medical devices, and more.

By Rob Matheson | MIT News Office 

Imagine a world where smartphones, laptops, wearables, and other electronics are powered without batteries. Researchers from MIT and elsewhere have taken a step in that direction, with the first fully flexible device that can convert energy from Wi-Fi signals into electricity that could power electronics.

Devices that convert AC electromagnetic waves into DC electricity are known as “rectennas.” The researchers demonstrate a new kind of rectenna, described in a study appearing in Nature today, that uses a flexible radio-frequency (RF) antenna that captures electromagnetic waves — including those carrying Wi-Fi — as AC waveforms. .... " 

Monday, January 28, 2019

Laser Created Audio by MIT

Cures for deafness, secret communications?  The future for audiophiles?  A kind of whisper into your ear.

deliver secret messages directly to your ears   By Luke Dormehl in Digital Trends

What’s the best way to communicate with a person when they’re outside of normal speaking distance? Call them on their phone, obviously. But what if the person either doesn’t have a phone with them — or you’re just looking for a high-tech communication method that’s a little more James Bond in its nature? Researchers from Massachusetts Institute of Technology have you covered with a new invention that makes it possible to literally beam an audible message to a specific person across the room using a laser. Because, you know, science.

“We have demonstrated that an eye-safe laser will heat the water molecules in the air, via the well-known photoacoustic effect, to create a local sound,” Ryan Sullenberger, a researcher who worked on the project, told Digital Trends. “We have also leveraged the special behaviors that occur at the speed of sound to help us amplify and localize the sound. If we rotate our laser beam, we can localize the sound not only along the laser path, but also isolate a specific range — the one at which the beam is moving at the speed of sound — at which the signal is amplified to an audible level.” ... "

Data Preprocessing with NVIDIA Dali

Came across DALI in the NVIDIA developers blog.    Nice idea; below is the intro, with much more technical detail at the link, including coding usage examples.  As in all data-hungry methods, it's often about how to get the data ready and verified, first for testing and then for ongoing delivery.    Especially if the data resides in an edge application.   The approach here is intriguing.

Fast AI Data Preprocessing with NVIDIA DALI  By Joaquin Anton Guirao, Krzysztof Łęcki, Janusz Lisiecki, Serge Panev, Michał Szołucha, Albert Wolant and Michał Zientkiewicz    Tags: DALI, data preprocessing, Deep Learning, featured, machine learning and AI, Storage

 Training deep learning models with vast amounts of data is necessary to achieve accurate results. Data in the wild, or even prepared data sets, is usually not in a form that can be directly fed into a neural network. This is where NVIDIA DALI data preprocessing comes into play. .... 

 ... DALI To the Rescue
NVIDIA Data Loading Library (DALI) is a result of our efforts to find a scalable and portable solution to the data pipeline issues mentioned above. DALI is a set of highly optimized building blocks plus an execution engine to accelerate input data pre-processing for deep learning applications, as diagrammed in figure 1. DALI provides performance and flexibility for accelerating different data pipelines. .... "
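DALI's own API is GPU-specific, but the core idea it accelerates, keeping the training loop fed by doing preprocessing concurrently rather than inline, can be sketched framework-free. This is illustrative only, not DALI code:

```python
import queue
import threading

def prefetching_loader(samples, preprocess, buffer_size=4):
    """Run `preprocess` on a background thread so the consumer (the training
    loop) overlaps data preparation with its own work - the pattern DALI
    implements with optimized GPU kernels."""
    q = queue.Queue(maxsize=buffer_size)
    DONE = object()  # sentinel marking the end of the stream

    def producer():
        for s in samples:
            q.put(preprocess(s))  # blocks when the buffer is full
        q.put(DONE)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is DONE:
            return
        yield item

# Toy usage: "preprocessing" here is just normalization.
batches = list(prefetching_loader(range(5), lambda x: x / 10.0))
print(batches)  # → [0.0, 0.1, 0.2, 0.3, 0.4]
```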

Alibaba Future Hotel

Somewhat unusual player in the hotel space, but it will help test other AI-related interactions.  Note recent indications that players in the hospitality space have decreased their use of guest-interacting robotics.

At Alibaba's Futuristic Hotel, Robots Deliver Towels and Mix Cocktails   Reuters  By Cate Cadell

A hotel that Chinese multinational conglomerate Alibaba opened in China last month utilizes robots and a suite of other high-tech tools that the company says drastically reduces the hotel's cost of human labor and eliminates the need for guests to interact with others. The 290-room "FlyZoo" hotel is an incubator for technology Alibaba wants to sell to the hotel industry in the future; it also serves as an opportunity to showcase Alibaba's proficiency in artificial intelligence (AI). FlyZoo's guests can check in via face scanners; visitors with a Chinese national ID can scan their faces using their smartphones for advance check-in. Projects like the hotel are designed to develop AI and other high-tech expertise to propel Alibaba's e-commerce revenue growth, and also foster new areas of business at a time when e-commerce revenue growth rates are slowing... "

State of AI as Technology in 2019

What looks to be a very good piece is starting in The Verge.  As a practitioner of analytics, business AI, and consumer augmentation, all now making claims of intelligence, I am very interested in this view.  Following.

How artificial intelligence and machine learning are affecting technology right now

The term ‘artificial intelligence’ was coined fairly recently in 1955, but the idea of smart machines that do our bidding has far deeper roots, going back to the ancient myths of Greece, India, and China. Perhaps that’s why the idea of AI has such a hold on our imagination, and why, in recent years, there’s been so much hype surrounding the technology.

But AI is not a myth, and neither is it a magical machine. It’s a technology like any other, that, after decades of research, has reached a new plateau of productivity. Cheap processing power and abundant data have made this possible, and AI and machine learning are now useful tools in a diverse range of fields, from astronomy to healthcare, transportation to music.

After Years of Promise AI is Finally Becoming Useful

After years of promise AI is finally becoming useful, but what usually happens to useful technologies is that they disappear. We forget about the things that Just Work, and we shouldn’t let that happen to AI. Any technology destined to change the world needs scrutiny, and AI, with its combination of huge imaginative presence and very real, very dangerous failings, needs that scrutiny more than most.

So, for The AI Issue at The Verge, we’re taking a closer look at some of the ways that artificial intelligence and machine learning are affecting technology right now. Because it’s too late to understand something after it’s changed the world.  .... "

Japan Hacks its own Population to Promote Security

In preparation for the Olympics, to promote security.  Or will it just guide black hats to the best places to go?
 
Japan plans to hack into millions of its citizens’ connected devices  In Technology Review

The Japanese government will try to hack into internet-connected devices in homes and offices around the country starting from next month, as part of efforts to improve cyber security, NHK World-Japan reports.

First of its kind: The program, which could last for up to five years, was approved on Friday. It will be conducted by Japan’s National Institute of Information and Communications Technology (NICT). From mid-February, the institute will use default passwords and password dictionaries to try to break into about 200 million devices, starting with webcams and routers. When they successfully gain access to a device, the owner will be contacted and advised to improve security measures.  ... " 
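The NICT approach amounts to a credential audit: try a list of factory-default passwords against each device and flag any that accept one. In skeleton form, with a hypothetical `try_login` standing in for a real protocol handshake:

```python
DEFAULT_PASSWORDS = ["admin", "password", "12345", "root"]  # abbreviated list

def audit_device(device, try_login, candidates=DEFAULT_PASSWORDS):
    """Return the first default credential the device accepts, else None.
    `try_login(device, password)` is a placeholder for the real check."""
    for pw in candidates:
        if try_login(device, pw):
            return pw   # vulnerable: under the plan, the owner is notified
    return None

# Simulated device that still uses its factory password:
fake_device = {"password": "admin"}
accepts = lambda dev, pw: dev["password"] == pw
print(audit_device(fake_device, accepts))  # → admin
```

Scanning 200 million devices this way is legally notable precisely because, absent the special law Japan passed, the same dictionary check would be unauthorized access.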

New Retail is Here

Good piece from Forrester that sums up and charts retail progress in China:

Future of Retail is Here   Xiaofeng Wang, Senior Analyst  Forrester

The future of retail is already here: Alibaba’s New Retail is reshaping the retail landscape in China and beyond. JD.com, Tencent, and traditional retailers like Starbucks and Walmart are all jumping on the bandwagon. My latest report, New Retail Is Transforming Commerce, shows retailers what they can learn from early adopters and how they can accelerate their omnichannel transformation.

eCommerce has tremendous momentum in China: As a percentage of total retail sales, online retail sales have risen from less than 2% in 2009 to nearly 25% in 2018. However, offline retail still takes the lion’s share and will remain the key growth engine for Alibaba and JD. As the first to recognize the trend, Alibaba responded with New Retail (a term coined by Jack Ma), its strategy to redefine commerce by blending online and offline ecosystems to create seamless customer experiences and improve retail efficiency. Alibaba’s New Retail models are transforming key retail sectors:... "

What exactly is Artificial Intelligence?

I believe I mentioned this piece before, but worth pointing to again.

What Exactly is Artificial Intelligence and Why is it Driving me Crazy

Posted by William Vorhies 

Summary:  Advanced analytic platform developers, cloud providers, and the popular press are promoting the idea that everything we do in data science is AI.  That may be good for messaging but it’s misleading to the folks who are asking us for AI solutions and makes our life all the more difficult.

Arrgh!  Houston, we have (another) problem.  It’s the definition of Artificial Intelligence (AI) and specifically what’s included and what isn’t.  The problem becomes especially severe if you know something about the field (i.e. you are a data scientist) and you are talking to anyone else who has read an article, blog, or comic book that has talked about AI.  Sources outside of our field, and a surprising number written by people who say they are knowledgeable, are all over the place on what’s inside the AI box and what’s outside.

This deep disconnect and failure to have a common definition wastes a ton of time as we have to first ask literally everyone we speak to “what exactly do you mean by that when you say you want an AI solution”.

Always willing to admit that perhaps the misperception is mine, I spent several days gathering definitions of AI from a wide variety of sources and trying to compare them.  ... " 

Cable Cutting Skill

Direct competition between companies using skills that include AI?  Interesting, but can this lead to some legal entanglement?

DirecTV encourages cable cutting with Alexa skill    Creative Works By Kyle O'Brien

For all the potential cable cutters who have been discouraged by the runaround they get from their cable companies when they try to cut the cord, DirecTV has come up with an easy, voice-activated alternative – the DirecTV Cable Cutter skill for Amazon Alexa.  ... "

Sunday, January 27, 2019

IBM and VMWare

Hybrid clouds.

Also see IBM's info on this:

    https://www.ibm.com/cloud/vmware

A question I would ask:  How does Watson Assistant link to this for a company?  What value does it provide in what industry?

Ops 4.0 Continuous Improvement Cycle

Transformation that does not stop, and has to react to fast-changing contexts of many types.

In McKinsey: 

Ops 4.0 helps businesses understand their greatest challenges, and solve ones that previously seemed beyond reach. It also means committing to a transformation that never entirely ends.

The Operations 4.0 podcast: A new continuous-improvement cycle
By Yogesh Malik and Rafael Westinner

In this final episode of the Ops 4.0 podcast series, McKinsey partners Yogesh Malik and Rafael Westinner start by discussing how Ops 4.0 enables greater predictability and precision in identifying the root causes of problems. Consequently, a company can address questions that previously seemed too resource-intensive to consider—in 80/20 terms, shifting them from the “20” category (not worth tackling) to the “80” category (worth the effort). The conversation concludes with a summary of the cultural factors that help organizations succeed in adopting Ops 4.0.  ... "  

Efficiency of Urban Farming

A topic of interest to me; still following.

Urban farms could be incredibly efficient—but aren’t yet
Casual farmers overwork, buy fertilizer, and use municipal water.
   By John Timmer  in Ars Technica ....

Saturday, January 26, 2019

Scale Free Networks

Technical but interesting view here.  We examined and experimented with the Santa Fe Institute on using consumer networks to market and advertise, to some success, but not as good as we wanted.  Hints and previous writings on this are in the tags below.  Are there further capabilities to drive advantage in Barabasi's work?  This is about training, but consider an advertising angle.
Q&A: Albert-László Barabási on the diversity of networks
The author of Network Science talks about his foundational work in that field and developing resources for online learning.    By Melinda Baldwin  in Physics Today

Network science is a new and rapidly growing field, with applications ranging from protein interactions to the internet. However, the sheer breadth of areas in which network analysis might be useful presents a daunting challenge to potential textbook authors.

In Network Science (Cambridge University Press, 2016), Albert-László Barabási tackles that challenge. The result, writes reviewer Zoltán Toroczkai in the April issue of Physics Today, is “a hands-on and engaging textbook” that also makes good use of accompanying online resources, such as animations and other pedagogical tools.

Physics Today recently talked to Barabási about his pioneering contributions to network science, the challenges of writing a textbook in an interdisciplinary field, and his approach to developing online resources. ... "

Modern Elders in a Multi-Generational Workforce

Am one myself.  Also a kind of diversity, but hardly emphasized these days.  Here, a podcast and transcript on the topic of inter-generational workforces and 'Modern Elders'.

Wisdom at Work: Why the Modern Elder Is Relevant   in K@W 

Airbnb executive Chip Conley discusses the benefits of having an inter-generational workforce.

Historically, the boss typically has been older than the staff. But in the last few decades, several trends converged that made it more common for employees to have younger managers. One catalyst is the shift from seniority-based promotions toward those based on merit, according to a research article in the Journal of Organizational Behavior. Also, as the pace of technology innovation increases, companies promote more tech-savvy younger workers into supervisory jobs. Meanwhile, older workers are staying employed longer due to such things as the disappearance of early retirement schemes.

Entrepreneur Chip Conley knows first-hand what it feels like to have a younger boss. The former hotelier works at Airbnb, where the CEO is two decades younger, as its head of global hospitality and strategy. But Conley believes that diversity of ages makes for a better workplace. His new book, Wisdom At Work: The Making of the Modern Elder, argues the case for this “intergenerational potluck” where everyone brings something to the table. He recently joined the Knowledge@Wharton radio show on SiriusXM to explain how one generation can learn from another.

An edited transcript of the conversation follows. .... " 

Japan's Rush to Consumer Cryptocurrency

A serious step forward, heading towards the Olympics.  Premature?

Will People Ditch Cash for Cryptocurrency? Japan Is About to Find Out 
Technology Review by Mike Orcutt

Japan's prime minister Shinzo Abe wants 40% of payments in that country to be cashless by 2025; in August, the Japanese government announced plans to offer tax breaks and subsidies for companies that embrace cashless systems. Methods like credit card payments and quick response codes would qualify under Abe's cashless plan, but some of Japan's financial leaders think the way to move Japan away from cash is through blockchain. Researchers from Mitsubishi UFJ Financial Group (Japan’s largest bank) and Akamai are working to build a blockchain-based consumer payment network before the 2020 Summer Olympics. In testing, the researchers claimed their system can handle more than 1 million transactions per second, with each transaction confirmed in two seconds or less; eventually, they said, it could achieve 10 million transactions per second.  .... '

Seeing Around the Corner

More examples of seeing normally hidden objects.

Program Allows Ordinary Digital Camera to See Round Corners 
The Guardian   by Ian Sample

Boston University researchers have demonstrated a computer program that turns a normal digital camera into a periscope, allowing the camera to see details of objects hidden from view around a corner by analyzing shadows they cast on a nearby wall. The program relies on algorithms that use the combination of light and shade at different points on the wall to reconstruct what lies around the corner. In testing, the program pieced together hidden images of video game characters, as well as colored strips and the letters "BU." Said Martin Laurenzis, an imaging expert at the French–German Research Institute of Saint-Louis, “The technique could be used by vehicles to avoid collisions, and by firefighters and first responders to look into burning or collapsed structures.”  .... "

Ensemble Models for Deep Learning

Ensemble models are now commonly used in all sorts of analytics.  You use the results of multiple models and combine the results.  Jason Brownlee shows how this can be done for deep learning methods.  Good tutorial explanation.

How to Create a Random-Split, Cross-Validation, and Bagging Ensemble for Deep Learning in Keras  by Jason Brownlee in Better Deep Learning

Ensemble learning refers to methods that combine the predictions from multiple models.

It is important in ensemble learning that the models that comprise the ensemble are good, making different prediction errors. Predictions that are good in different ways can result in a prediction that is both more stable and often better than the predictions of any individual member model.

One way to achieve differences between models is to train each model on a different subset of the available training data. Models are trained on different subsets of the training data naturally through the use of resampling methods such as cross-validation and the bootstrap, designed to estimate the average performance of the model generally on unseen data. The models used in this estimation process can be combined in what is referred to as a resampling-based ensemble, such as a cross-validation ensemble or a bootstrap aggregation (or bagging) ensemble.

In this tutorial, you will discover how to develop a suite of different resampling-based ensembles for deep learning neural network models.   ... " 
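The resampling idea can be sketched without any deep learning framework. Below is a minimal, illustrative bagging ensemble in plain Python, using simple least-squares lines as the member models (Brownlee's tutorial uses Keras networks, but the combination logic is the same: train each member on a bootstrap resample, then average their predictions):

```python
import random

def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b on one bootstrap sample.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    if var == 0:
        return 0.0, my  # degenerate resample (all x equal): fall back to the mean
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    return a, my - a * mx

def bagging_ensemble(xs, ys, n_models=25, seed=0):
    # Train each ensemble member on a different bootstrap resample
    # (sampling with replacement) of the training data.
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        models.append(fit_line([xs[i] for i in idx], [ys[i] for i in idx]))
    return models

def predict(models, x):
    # Combine members by averaging their predictions.
    return sum(a * x + b for a, b in models) / len(models)

xs = [0, 1, 2, 3, 4, 5]
ys = [0.1, 1.9, 4.2, 5.8, 8.1, 9.9]   # roughly y = 2x
models = bagging_ensemble(xs, ys)
print(predict(models, 3.0))           # close to 6
```

With a Keras model, the same loop would call `fit` on each resampled set and average the `predict` outputs, exactly as the tutorial describes.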

Friday, January 25, 2019

Customer Experience Metrics

Good metrics matter.    A good recent survey.

Does it matter which customer experience metric you choose?   By Alyona Medelyan in CustomerThink

Are you responsible for measuring the progress in improving customer experience? If yes, I’m sure you needed to come up with a rationale on which metrics to choose for this: Is it an all ubiquitous Net Promoter Score (NPS), the traditional customer satisfaction CSAT, or a more recent invention Customer Effort Score (CES)?

Is one enough or should you implement several metrics? Does it actually matter? Here, we discuss the two arguments: Pro and against.

Why choosing the right metric matters

Choosing the right metric matters to the extent that the metric must be meaningful to the specific customer touchpoint you’re wanting to analyze. ... "

Car Language by Jaguar

Intriguing idea.  Talking?  Conversation?  In one way it sounds absurd, but keep the design simple.

Jaguar is creating a language so autonomous cars can talk to pedestrians
By Ronan Glon in Digital Trends.

Autonomous cars will not be able to use hand gestures to communicate with pedestrians, which could lead to confusing situations at busy intersections. Jaguar Land Rover is working on creating a simple language driverless vehicles can use to tell pedestrians whether they’re stopping, turning, or cruising.

Instead of letters and numbers, the language is made up of a series of light bars projected onto the ground. The amount of space between each bar gradually shrinks as the car brakes, and it expands again as it starts accelerating. The bars fan out left or right as the car prepares to turn in either direction. These signals are easy to understand anywhere in the world. It’s a better solution than the truly creepy eyes Jaguar tested in 2018 to allow pedestrians to make eye contact with autonomous cars.   .... " 
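As a rough illustration of the signal just described, the mapping from speed to bar spacing might look like the sketch below. The function and its constants are invented for illustration; Jaguar has not published its actual mapping:

```python
def bar_spacing(speed_mps, min_gap=0.2, gap_per_mps=0.1):
    # Illustrative only: the gap between projected bars grows with speed,
    # so a steadily narrowing gap signals braking to a pedestrian.
    return min_gap + gap_per_mps * max(speed_mps, 0.0)

# As the car decelerates, successive frames show a narrowing gap.
speeds = [8.0, 6.0, 4.0, 2.0, 0.0]
gaps = [bar_spacing(v) for v in speeds]
print(gaps)   # monotonically decreasing
```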

Passive Bluetooth Sensor

Intriguing.   More details at the link.

Wiliot Unveils Passive Bluetooth Sensor

The innovative semiconductor firm has also announced that it has raised an additional $30 million in funding from Avery Dennison, Amazon Web Services and Samsung Venture Investment.
By Mark Roberti in RFID Journal

Jan 25, 2019—Wiliot, a fabless semiconductor company launched in 2017, has unveiled the world's first passive Bluetooth sensor. The device, which is still a prototype, can harvest ambient RF energy from Wi-Fi access points, as well as smartphones connecting to cell towers and other Bluetooth signals. It is also able to detect temperature, pressure and movement, then relay that information to any Bluetooth transceiver. The firm has raised an additional $30 million in funding from Amazon Web Services, Avery Dennison and Samsung Venture Investment Corp.

"This is the third iteration of our chip, and this one is actually transmitting," says Steve Statler, Wiliot's senior VP for marketing and business development. "It's sort of our first call across the Atlantic. This year will be about going from proving it's possible to create a passive Bluetooth sensor to scaling it. Having the investment from Avery Dennison, Amazon Web Services and Samsung will help us do that."


Lessons from Corporate Nudging

Mixing Behavioral Science, Economics and Psychology.  It's likely that this concept will influence AI methods as implemented in assistant technologies.  But there are lots of issues in such methods.  Ultimately it's a test of how machines manage people.  Important challenges hinted at here.

Lessons from the front line of corporate nudging  By Anna Güntner, Konstantin Lucks, and Julia Sperling-Magro in McKinsey

Executives setting up a behavioral-science unit should start by challenging themselves with six questions.

If you’re serious about setting up a behavioral-science team—or a nudge unit, as we’ll more colloquially refer to it in this article—you need to ask yourself some tough questions, such as what it should do, where it should sit, how you’ll know it’s succeeding, and whether you’re ready for the ethical tensions it may raise.

Lessons from the front line of corporate nudging

Subtle interventions to help people make better decisions are hardly new. Since the 1950s, behavioral scientists, using a mix of economics and psychology, have studied human irrationality and devised ways both to improve the choices made by consumers and influence how employees react in the workplace. Increasingly, over the past two decades, companies have used the insights of behavioral science to reduce bias in boardrooms, improve strategic decision making, provide benefits for customers, enhance the effectiveness of marketing campaigns, and avoid making bad bets on major acquisitions or investments.  ... " 

Kraft Heinz Does Innovation Shuffle

Case study for CPG food and advancing technology.

How Kraft Heinz's CIO avoids getting lost in the shuffle of innovation

Francesco Tinto posed the question, "are we really changing the way we're operating, or just operating the same way and we just changed the technology?"

By Samantha Ann Schwartz  @SamanthaSchann

Since 1937, many a child has sat in the kitchen individually spearing Kraft Macaroni and Cheese noodles on the prong of a fork, creating warm, cheesy memories — it's one of the reasons Kraft Heinz is a household name.

In 2015, Kraft Foods Group and H.J. Heinz Co. announced their merger and created the fifth-largest global food company. Two iconic brands, under one roof, was a deal that shook the food industry. It also moved Francesco Tinto from CIO of Kraft Foods Group to global CIO of Kraft Heinz.

Just as Kraft perfected the recipe for its packets of powdered cheese, Tinto found the sweet spot between company culture and innovation — a recipe CIOs struggle to perfect. ... "

Thursday, January 24, 2019

IOTA Wallet Theft Solved?

Last year's theft of some 11 million Euros from IOTA wallets, using insecure random number choices, has reportedly been solved.  Some of the security details have leaked, which could lead to better user choices.  The IOTA system uses a novel DAG architecture, a permissionless distributed ledger, different from blockchains but with similar goals.  I continue to follow.

Europol arrests UK man for stealing €10 million worth of IOTA cryptocurrency   Suspect operated the iotaseed.io portal that generated and secretly logged passwords for IOTA wallets.  By Catalin Cimpanu in ZDNet ... "

IOTA is also now offering a considerable cash bounty for anyone who can hack their hash function.    Here are the details in Nextweb.
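The root cause here, predictable random number generation, is easy to avoid. As a minimal sketch (IOTA seeds are 81 characters drawn from A–Z and 9), this generates a seed from the operating system's cryptographic randomness via Python's `secrets` module, rather than from a PRNG an attacker could replay:

```python
import secrets

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ9"   # the 27 tryte characters IOTA seeds use

def generate_seed(length=81):
    # Draw every character from the OS CSPRNG. A predictable generator,
    # like the one the compromised seed site reportedly logged and reused,
    # lets an attacker enumerate candidate seeds offline.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

seed = generate_seed()
print(len(seed))   # 81
```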

Figure 8 Talks Active Learning

Have now been a 'guest' in many machine learning efforts.  My experience is that up-front consideration of how you effectively learn, and continue to learn, is rare.  This leads back to a truth we discovered early on: maintenance of your effort is crucial.  Most of our failures ended there.  Also that data is an asset that needs to be labeled, measured and secured.  It's often a too-late afterthought.  Here Figure Eight offers a free e-book on the topic of 'Active Learning', at the link.

An Introduction to Active Learning

The increased availability of computer resources and the prevalence of high-quality training data, combined with smart learning schemas, have resulted in a rise in successful AI deployments. However, many organizations simply have too much data, posing a challenge for data scientists: unless at least some of that data is labeled, it's essentially useless for any ML approach that relies on supervised or semi-supervised learning. So, which data needs to be labeled? How much of a dataset needs to be labeled for an ML application to be viable? How can we solve the problem of having more data than we can reasonably analyze?

One promising answer is active learning. Active learning is unique in that it can both solve this data labeling crisis and train models to be more accurate with less data overall. Download this eBook to learn:

The pros and cons of active learning as an approach
The three major categories of active learning
How your active learner should decide which rows need labeling
How to tell if active learning is appropriate for your ML project  ... 
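One of the simplest active learning strategies, uncertainty sampling, can be sketched in a few lines. This toy example (the probabilities are invented) selects the unlabeled rows whose current model is least confident in its top prediction, which are the rows most worth sending to a human labeler:

```python
def least_confident(probabilities, budget=2):
    # Uncertainty sampling: rank unlabeled rows by the model's confidence
    # in its most likely class, and request labels for the least confident.
    ranked = sorted(range(len(probabilities)),
                    key=lambda i: max(probabilities[i]))
    return ranked[:budget]

# Predicted class probabilities for five unlabeled rows (toy model output).
probs = [
    [0.98, 0.02],   # confident
    [0.55, 0.45],   # uncertain -> worth labeling
    [0.90, 0.10],
    [0.51, 0.49],   # most uncertain
    [0.80, 0.20],
]
print(least_confident(probs))   # [3, 1]
```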

Amazon Wants the Last Mile

Pretty clear they are experimenting everywhere on this.  Good roundup here.

Amazon takes multi-pronged approach to owning the last mile  by George Anderson in Retailwire with expert comments. 

Forget drones for home deliveries of online orders. That was the consensus of a variety of industry experts I spoke with at the recent NRF Big Show in New York. Many however do see opportunities for direct-to-consumer sellers to use alternative service options outside of FedEx and UPS, as well as technologies such as autonomous vehicles and delivery robots to conquer the challenges of the last mile.

A few, not surprisingly, said to look to Amazon.com as the leading indicator for where the industry is headed. A couple of breaking stories in recent days may provide some insight.  ... " 

Apple's Finger Controllers

Apple has interesting patents in this area.  How will gesture successfully be used as a user interface?  By definition it's not hands-free like voice.  Still have yet to see gesture commonly used in a business environment.  Why?  Is it because people are afraid of being seen waving their hands about?

Apple’s finger controllers are a glimpse at mixed reality’s future
For mixed-reality experiences to take off, input will be as crucial as output. An Apple patent suggests that the company is already at work on that challenge.  ... "

Third Party Risk Models

Recorded Future has been a long-time correspondent.  We tested their methods to understand competitive actions.  Always liked their highly visual approaches, easy to use with executives.  "  ... Third-Party Risk helps you quantify the threat environments of your business partners. ... ".  An intriguing add-on to their current capabilities.

Quantify Third-Party Risk in Real Time With Our New Module    By Matt Kodama

At Recorded Future, our mission has been to empower our users to defend themselves against cyber threats at the speed and scale of the internet. Empowerment means giving you the capabilities necessary to understand and manage your own risk environment — and the Recorded Future® Platform helps you measure and understand your own risk environment in real time, with full transparency to original sources of risk data. First-party risk reduction remains our first and foremost goal, and in today’s world, that means managing third-party risk, as well.

Leading companies in every industry today are undergoing digital transformation. They are driving more online and mobile access, more transparency, more interconnection of processes across their businesses, all with faster cycle times. These changes further blur the lines between an organization and its partners, suppliers, vendors, and other third parties. Interconnection creates advantages but also expands attack surfaces. Now more than ever before, the state of our security is only as strong as the weakest link.

That’s why Recorded Future is introducing our new Third-Party Risk module. An add-on to our core platform, Third-Party Risk helps you quantify the threat environments of your business partners. It is a powerful complement to traditional risk management processes focused on compliance frameworks, reviews, and audits. For organizations where risk management and security teams work together to identify and reduce risks, the Third-Party Risk module generates the threat intelligence they need to understand the risks stemming from their third-party associates. .... '

Analytics and Construction

Interesting example, which relates to lots of complex operational decisions to finish a project.  Worth a look to see what is being done here.

In McKinsey: 

How analytics can drive smarter engineering and construction decisions

Three applications illustrate how companies are beginning to embrace data-driven solutions while establishing a foundation for future initiatives. ... '

Technical Model Tuning

Nice technical piece which points at some of the 'art' of deep learning.  These are the kinds of near-intuitive things that would have to be embedded in completely autonomous systems.  Sometimes we crowd-sourced these methods with multiple practitioners, when we sought other measures of variability.  Also, you are not always looking for optimization, given other forms of model variability.  So when I see the word optimize used, I am cautious, since it always exists only in some context.  In the article, some good graphical systems provide intuitive directions.

An introduction to high-dimensional hyper-parameter tuning
Best practices for optimizing ML models     By Thalles Silva

If you ever struggled with tuning Machine Learning (ML) models, you are reading the right piece.

Hyper-parameter tuning refers to the problem of finding an optimal set of parameter values for a learning algorithm.

Usually, the process of choosing these values is a time-consuming task.

Even for simple algorithms like Linear Regression, finding the best set for the hyper-parameters can be tough. With Deep Learning, things get even worse.

Some of the parameters to tune when optimizing neural nets (NNs) include:  ... "
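A common baseline the article builds on is random search. Here is a minimal sketch with a stand-in objective function; in a real run, each trial would train a model and return its validation loss, and the learning rate is sampled on a log scale as is conventional:

```python
import random

def objective(lr, width):
    # Stand-in for validation loss; a real run would train and score a model.
    return (lr - 0.01) ** 2 + (width - 64) ** 2 / 10000.0

def random_search(trials=50, seed=1):
    # Random search samples each hyper-parameter independently; in high
    # dimensions it typically beats grid search for the same trial budget.
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        lr = 10 ** rng.uniform(-4, -1)    # learning rate on a log scale
        width = rng.randrange(16, 257)    # hidden-layer width
        loss = objective(lr, width)
        if best is None or loss < best[0]:
            best = (loss, lr, width)
    return best

loss, lr, width = random_search()
print(loss < objective(0.1, 16))   # better than a bad corner of the space
```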

Google Donates to Wikipedia

Makes sense for all the assistant players to do this, since keeping the information up to date, quickly accessible and maintained is an important resource for assistants.  Support for the future of Wikimedia, and how its data resources link to AI capabilities, is also important.

Google.org donates $2 million to Wikipedia’s parent org   By Megan Rose Dickey in Techcrunch

Google, as well as many other companies, has long relied on Wikipedia for its content. Now, Google and Google.org are giving back.

Google.org President Jacquelline Fuller today announced a $2 million contribution to the Wikimedia Endowment. An additional $1.1 million donation went to the Wikimedia Foundation, courtesy of a campaign where Google employees decided where to direct Google’s donation dollars. The Wikimedia Foundation is the nonprofit organization behind Wikipedia, while the Endowment is the fund.

“Google and Wikimedia each play a unique role in an internet that works for and reflects the diversity of its users,” the Wikimedia Foundation wrote in a blog post. “We look forward to continuing our work with Google in close collaboration with our communities around the world.”

In addition to the donation, Google and Wikipedia are expanding Project Tiger, an initiative to expand the content on Wikipedia into additional languages. The pilot program has already increased the amount of locally relevant content in 12 Indic languages. With the expansion, the goal is to include 10 more languages. .... " 

See also in the Wikimedia foundation:

Google and Wikimedia Foundation partner to increase knowledge equity online   By Lisa Gruwel Wikimedia Foundation.

Wednesday, January 23, 2019

Amazon Prime Tests Rolling Delivery

Sidewalk drones they are calling them in this test.   Will they survive suburbia?

Amazon begins testing deliveries with sidewalk drones
Robots are "the size of a small cooler and roll along at a walking pace."
By Timothy B. Lee  in ArsTechnica

Amazon said on Wednesday that it will start delivering packages using a six-wheeled sidewalk robot called Amazon Scout.

"Starting today, these devices will begin delivering packages to customers in a neighborhood in Snohomish County, Washington," the company's announcement says—that's just north of Seattle. Amazon says that its robots "are the size of a small cooler and roll along sidewalks at a walking pace."... ' 

Considering Generation Z

Always somewhat suspicious of such simplistic classifications.  Simply defining them (Generation Z, born after 1995) can create a narrative with a bias.  Does the narrative march on?  At least this piece admits that.

Make Way for Generation Z in the Workplace      in K@W

As a group, they are “sober, industrious and driven by money,” reports the Wall Street Journal, but also “socially awkward and timid about taking the reins.” They are risk-averse and more diverse, says Inc. magazine. Forbes says they “want to work on their own and be judged on their own merits rather than those of their team.”

Generation Z is arriving, and they are different than previous generations – or at least that’s how this young cohort is being portrayed as it begins to enter the workforce. After the traditionalists, baby boomers, Generation X and Generation Y/millennials, we have Generation Z – that group born after 1995 now starting to graduate college.

But is Generation Z really different, and if so, how? When it comes to ascribing characteristics and accepting advice about a particular generation, caveat emptor. Over-generalizing about any group is a slippery business.

“We have to be careful that we are seeing people for the complex beings that they are,” says Wharton assistant management professor Stephanie Creary. Generational categories, she notes, might help us to understand commonalities. “But people are also going to behave in ways that are consistent with their multiple other identities. We want to make sure we are not creating biases.”   .... "

Intel Camera with Deep Learning for Vision

Suggesting it can be used for providing vision for robots or drones, or providing an AR experience in retail.  $199, available by end of February.

Intel debuts AI-powered camera system for robots and AR applications   By Maria Deutscher in SiliconAngle

Intel Corp. today debuted a camera system for robots, drones and augmented reality applications that uses a built-in deep learning chip to process visual information.

The new RealSense Tracking Camera T265 (pictured) sports two wide-angle fisheye lenses that can each capture footage in a more than 160-degree field of view. They’re powered by a Movidius Myriad 2 visual processing unit specifically optimized for running deep learning algorithms.

The chip, which is based on technology that Intel obtained through a 2016 acquisition, can be found in millions of connected devices worldwide. It sports 12 processing cores that provide more than 100 gigaflops of performance. One gigaflop equals a billion floating-point operations per second, a standard unit of computing power.  ... '

Pattern Based Thinking

Podcast and Transcript regarding a new book.   Knowledge@Wharton:

Wharton's Eric K. Clemons explains why even the newest business models echo a pattern from successful companies in the past.

 Today’s technology giants, such as Uber and Google, are successful because they introduced something new and innovative to the market, according to conventional wisdom. But Wharton professor of operations, information and decisions Eric K. Clemons thinks that’s too simplistic. Patterns repeat throughout history, and one can find glimpses of today’s new business models in the most successful companies of yore, he says.

 Mastering “pattern-based thinking” will help today’s companies get ahead, Clemons argues. He joined the Knowledge@Wharton radio show on SiriusXM to talk about this mindset, which he encapsulates in his book, New Patterns of Power and Profit: A Strategist’s Guide to Competitive Advantage in the Age of Digital Transformation.


Knowledge@Wharton: Why did you decide to write a book about this topic?

Eric K. Clemons: It’s actually a memoir. It’s the history of a great adventure. About mid-1980s, I realized that economics really understood big industry and [Harvard professor] Michael Porter had said just about everything that needed to be said about strategy for traditional manufacturing, transportation and retailing companies. But traditional industry wasn’t where things were happening. Economics had started to look at the power, the value of information.  ... " 

Lenses for Dog Recognition

Have also been seeing this for livestock applications like recognizing horses and pigs.  Imagine other applications.

Object recognition taken beyond humans.  Dogs first, with some indication that cats are next.

Snapchat lenses now officially work on dogs
Because pets can't take selfies on their own. ....

By AJ Dellinger, @ajdell

Future of Beauty shown at CES

The Future of Beauty Was on Display at CES
in CNN   By   Ahiza Garcia; Kaya Yurieff

Last week's CES 2019 showcased tech beauty products from Procter & Gamble (P&G), L'Oreal, and Neutrogena. P&G announced Olay Future You Simulation, an addition to its online Olay Skin Advisor tool that uses an algorithm to show users what their skin and face will look like in the future under different conditions. L'Oreal unveiled a wearable adhesive skin sensor that tracks skin pH in real time, to help people with conditions like eczema and acne test and monitor those conditions. The sensor can reportedly furnish an accurate pH reading within 15 minutes, using micro-channels to capture trace amounts of perspiration. Neutrogena's MaskiD app measures the size of a user's face and generates a custom mask with color patterns adjusted for the user’s skin needs, along with personalized treatments. Said Forrester Research retail analyst Sucharita Kodali, "People want products that are going to work for them. I don't think that they care whether or not it's personalized." ... " 

Tuesday, January 22, 2019

Labeled Datasets for Use AND Value

Note how this links to other goals, like understanding the value of datasets as an asset.  Why not use labeling as a means to attach value analyses as well?  Labels are usually assigned for business purposes that do not work when linking to a specific analytic approach.

The Data Scientist’s Holy Grail — Labeled Data Sets  #ODSC  in Medium

The Holy Grail for data scientists is the ability to obtain labeled data sets for the purpose of training a supervised machine learning algorithm. An algorithm’s ability to “learn” is based on training it using a labeled training set — having known response variable values that correspond to a number of predictor variable values.

There are a number of common and maybe not-so-common methods for labeling a data set. In this article, we’ll run down a short list of such methods and then you can choose the best for your specific circumstances.

Readily Available Labeled Data Sets: 

Sometimes, labeled datasets are readily available as a byproduct of on-going business operations. For example, if a company is trying to predict customer churn (a very common classification problem), the company’s data assets will likely contain the label values: “churned,” or “not-churned” based on the customer’s account history. The company knows when the customer canceled their account, thus generating a churn transaction.

Sometimes, the label is not readily available and must be acquired or derived. For example, in a real estate application that wishes to predict the monthly rental value of a residential apartment building, the desired label may only come from a laborious process conducted by problem domain experts who can determine the value based on their industry knowledge. Sometimes finding label values can be time-consuming and labor-intensive, especially if a large amount of labeled data is needed for the project. ... " 
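The churn example above, where labels fall out of ongoing business operations, can be sketched in a few lines. This is a hypothetical illustration, not any particular company's pipeline: the `label_churn` function, the account fields, and the cutoff date are all assumptions made for the example.

```python
from datetime import date

def label_churn(accounts, cutoff=date(2019, 1, 1)):
    """Derive "churned" / "not-churned" labels from account history.

    An account is labeled "churned" if it has a cancellation date
    before the cutoff; otherwise "not-churned". The resulting labeled
    records can feed a supervised classifier directly.
    """
    labeled = []
    for acct in accounts:
        canceled = acct.get("canceled_on")
        label = "churned" if canceled and canceled < cutoff else "not-churned"
        labeled.append({**acct, "label": label})
    return labeled

# Toy account records standing in for a company's data assets.
accounts = [
    {"id": 1, "monthly_spend": 40.0, "canceled_on": date(2018, 6, 1)},
    {"id": 2, "monthly_spend": 95.0, "canceled_on": None},
]
labels = [a["label"] for a in label_churn(accounts)]
print(labels)
```

The point of the sketch is only that the response variable here costs nothing extra to obtain, in contrast to the expert-labeled real-estate case that follows.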

Blockchain Network Seeks to Rival VisaNet

Though cryptocurrencies have seen considerable hype and slipping value, blockchain architectures are still getting lots of serious-minded attention.  Good to stay connected.  Note in particular the transaction speeds sought.  Here the latest:

MIT, Stanford and others to build blockchain payments network to rival VisaNet

Seven universities are collaborating to create a blockchain-based online payment system that will solve issues of scalability, privacy, security and performance, enabling up to 10,000 transactions per second.
           
 By Lucas Mearian in Computerworld  .... 

Retail and AI

Thoughts from NRF.    How technologies of many kinds will improve shopping.

Tech lets shoppers say ‘Optimize Me’ when ordering groceries   by Al McClain in Retailwire and further expert opinion.

The line between digital and physical shopping continues to get fuzzier.

At NRF’s “Big Show” last week, Cisco showcased a concept called “Optimize Me” within its fictitious Stop, Shop & Go store.

Rather than the shopper buying everything online and picking it up in store (BOPIS), this scenario allows the shopper to save items in a grocer’s smartphone app as they go about their regular activities. They add items to a virtual shopping list as they think of what they need and then select “optimize me” within the app when they are ready to head to the store.

Using artificial intelligence (AI) technology, items purchased before and marked as favorites in the shopper’s profile are picked by store associates and placed at a designated in-store pickup area, while those products never purchased before remain in the shopping list. For those, customers go to the shelf to browse and select the items they prefer. The physical locations of these items are displayed on the shopper’s smartphone via a store map using location-based services. The customer still goes through the purchase process at the store (scan and go, self-checkout, traditional register, etc.), but they don’t have to trek through the store to pick up refill items.  ... "
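The flow described above is, at its core, a partition of the shopping list by purchase history. A minimal sketch of that split, purely illustrative and not Cisco's actual implementation (the function name and data shapes are assumptions):

```python
def optimize_list(shopping_list, purchase_history):
    """Split a shopping list the way the "Optimize Me" concept describes.

    Items the shopper has bought before go to a pre-pick list for store
    associates; items never purchased before stay on the list for
    in-aisle browsing.
    """
    prepick = [item for item in shopping_list if item in purchase_history]
    browse = [item for item in shopping_list if item not in purchase_history]
    return prepick, browse

# A shopper's saved list against their (hypothetical) purchase history.
prepick, browse = optimize_list(
    ["milk", "bread", "kombucha"], purchase_history={"milk", "bread"}
)
print(prepick, browse)
```

Everything interesting in the real concept — location-based services, favorites, the store map — layers on top of this simple split.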

Predicting What you Will Say

A kind of predictive classification assumption.      Not exactly what you will say, but rather that what you will say is readily classifiable.  Implications for marketing and advertising?    Speaking and writing are only part of your behavior.

Social media can predict what you’ll say, even if you don’t participate

On Twitter, your words are predictable using the words of your network.  By John Timmer in Ars Technica

There have been a number of high-profile criminal cases that were solved using the DNA that family members of the accused placed in public databases. One lesson there is that our privacy isn't entirely under our control; by sharing DNA with you, your family has the ability to choose what everybody else knows about you.

Now, some researchers have demonstrated that something similar is true about our words. Using a database of past tweets, they were able to effectively pick out the next words a user was likely to use. But they were able to do so more effectively if they simply had access to what a person's contacts were saying on Twitter. .... "
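The core idea — that a user's likely words can be ranked from the words their contacts use — can be sketched with a simple frequency model. This is a toy illustration of the principle only; the researchers' actual model was far more sophisticated, and the function and data below are assumptions.

```python
from collections import Counter

def predict_likely_words(contact_tweets, n=3):
    """Rank the words a user is most likely to use, based solely on
    the text of their contacts' tweets (no data from the user).

    A crude stand-in for the study's predictive model: just the most
    frequent words across the contact network.
    """
    counts = Counter()
    for tweet in contact_tweets:
        counts.update(tweet.lower().split())
    return [word for word, _ in counts.most_common(n)]

# Hypothetical tweets from a user's contacts.
contacts = ["good morning team", "good luck", "good morning"]
print(predict_likely_words(contacts, n=2))
```

Even this crude version makes the privacy point concrete: the target user contributed no data at all, yet their likely vocabulary is partially recoverable from the network alone.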

Survey of the Use of Face Recognition in Business

Despite the issues, it's in use and expanding.  It is inevitable globally, and on net it may protect everyone; expect more.  Here an outline of uses.

The most exciting facial recognition use cases in business
By Steven Van Belleghem  in CustomerThink

Efficiency & the era of Pig Data

When a new or a matured software enters the market, it’s typically used to increase the speed and efficiency of certain processes. So it should not come as a surprise that organizations are looking into facial recognition to make them more effective, secure and to improve the quality of their products. This is not the cool or ‘sexy’ part where the customer experience is enhanced to nearly magical levels, but it’s where processes are sped up and made cheaper. In these continued price wars times, that’s a real competitive advantage.

Target, Walmart, and Lowe’s: they have all run trials of facial recognition technology. One of the most common applications of facial recognition here is catching shoplifters. FaceFirst – one of the more popular technologies amongst retailers – can scan a shopper’s face up to 50 to 100 feet away and it has reduced theft in stores by as much as 30 percent. The TSA (Transportation Security Administration) – the agency of the U.S. Department of Homeland Security in charge of the security of travellers in the United States – began testing facial recognition technology for international travelers at Los Angeles International Airport (LAX) earlier last year and is planning to expand the use of biometrics technology. Even entertainers are starting to adopt the software for reasons of security. Taylor Swift, for instance, monitored a concert with a facial recognition system in California last year in order to record and recognize the faces of her too many stalkers. And the most obvious example of them all, of course: China’s very controversial social credit system which monitors and rewards, or punishes the behavior of its population, and ranks them based on their “social credit.”

One of the most advanced applications in the matter comes from China, too, where the e-commerce platform JD.com is using pig facial recognition to revolutionize farming. These types of projects can help reduce the cost of rearing pigs by as much as 50%, but it could also greatly enhance the quality of the meat, by monitoring how much each pig moves and eats. That’s a really nice case of Pig Data, if you ask me.  ... " 

AWS Challenges the Blockchain

See Amazon's writeup on their QLDB (Quantum Ledger Database).   Do note that their approach has nothing to do with quantum computing.  A bad choice of name, which can only confuse people, perhaps riding another hype stream.

Amazon’s QLDB challenges Permissioned Blockchain   by Avivah Litan in the Gartner Blog

Amazon Web Services recently announced the preview of Quantum Ledger Database (QLDB), promising a centrally administered immutable data ledger within AWS.  We predict that QLDB and other competitive centralized ledger technology offerings that will eventually emerge will gain at least twenty percent of permissioned blockchain market share over the next three years.

My colleague Nick Heudecker and I just published a research note on the AWS announcement, "Amazon QLDB Challenges Permissioned Blockchain," analyzing the challenges and benefits of permissioned blockchain vs. QLDB (which, by the way, has no quantum computing technology built inside it). (We will soon publish a follow-up research note that analyzes decentralized ledger technology vs. centralized ledger technology vs. blockchain used for a ‘single version of truth’).

As noted in our research, Gartner is witnessing four common denominators in promising multi-company or consortia-led blockchain projects, of which AWS QLDB satisfies the second and third:

1. The majority of industry (or consortia) participants need a distributed ledger where every participant has access to the same (single) source of truth.

2. Once written to the ledger, the data is immutable and cannot be deleted or updated.

3. A cryptographically and independently verifiable audit trail is needed to satisfy the use case, for example to prove the provenance or state of an asset.

4. The various participants in the blockchain consortia all have a vested interest in its success; and there is no single entity in direct control of all activities.  .... " 
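The immutability and verifiable-audit-trail properties that QLDB advertises come from hash chaining, which can be sketched in a few lines. This is a minimal illustration of the technique, not QLDB's API or internals; the class and its methods are assumptions made for the example.

```python
import hashlib
import json

class Ledger:
    """Minimal append-only, hash-chained ledger sketch.

    Each entry stores the hash of the previous entry, so any edit to
    a past record breaks the chain and is detectable on verification.
    """

    def __init__(self):
        self.entries = []

    def append(self, record):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev, "hash": digest})

    def verify(self):
        """Recompute every hash; return False if anything was tampered with."""
        prev = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

ledger = Ledger()
ledger.append({"event": "ship", "qty": 5})
ledger.append({"event": "receive", "qty": 5})
print(ledger.verify())
```

The contrast with permissioned blockchain is then easy to state: here a single administrator holds the chain, whereas a blockchain distributes copies so that no one party can rewrite history even with full access.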

The Electronic Pill

Not a new idea; it was presented to us in the '80s.   Advanced monitoring takes this further.

Electronic Pill Slowly Delivers Drugs, Monitors Health    by Kenny Walter  in R&D Mag

The hassle of taking medication every day could someday be eliminated, thanks to an ingestible electronic pill that lasts in the stomach for close to a month and releases medication only when necessary.

A research team from the Massachusetts Institute of Technology (MIT) has developed the capsule, which could be designed to treat a variety of diseases and disorders and also enables physicians to monitor and control dosages using Bluetooth wireless technology and sensors.

Giovanni Traverso, a visiting scientist in MIT's Department of Mechanical Engineering, where he will be joining the faculty later in 2019, explained in an exclusive interview with R&D Magazine that a major problem for patients is that they do not always adhere to their medication regime, particularly when they begin to start feeling better.

“One of the major focuses of our group is how we can make it easier for patients to take medication,” Traverso said. “That really is grounded on the observation that if one is given medication to take more infrequently that the patient is more likely to continue to take that medication. We developed some technologies that really enabled the oral delivery of systems that can stay in the stomach for long periods of time and stay there safely.”  ... " 

Monday, January 21, 2019

Reverse Deep Learning Engineering in Pharma

If workable, quite a coup.  It's back to the idea that it is all patterns.

A.I. finds non-infringing ways to copy drugs pharma spends billions developing  By Luke Dormehl  in DigitalTrends

Drug companies spend billions developing and protecting their trademark pharmaceuticals. Could artificial intelligence be about to shake things up? In a breakthrough development, researchers have demonstrated an A.I. which can find new methods for producing existing drugs in a way that doesn’t infringe on existing patents.

Called Chematica, the software platform does something called “retrosynthesis,” similar to the kind of reverse engineering that takes place when an engineer dissects an existing product to see how it works. In the case of Chematica, this process is based on a deep knowledge of how chemical interactions take place. It has around 70,000 synthetic chemistry “rules” coded into its system, along with thousands of additional auxiliary rules prescribing when particular reactions occur and with which molecules they’re compatible. An algorithm then inspects the massive number of possible reaction sequences in order to find another way to the same finish line.

“They effectively walk on enormous trees of synthetic possibilities, so it is a graph search problem they are trying to solve,” Bartosz Grzybowski at the Ulsan National Institute of Science and Technology (UNIST) in South Korea, told Digital Trends. “Akin to chess, every synthetic move these algorithms make is evaluated by scoring functions, which we developed over the years to tell the program whether it is navigating in the right synthetic direction.’” .... " 
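The "walking on enormous trees of synthetic possibilities, scored at every move" description above is a best-first graph search. Here is a toy sketch of that search pattern, purely illustrative: the `expand` and `score` callbacks stand in for Chematica's 70,000 coded reaction rules and its scoring functions, which are of course far richer.

```python
import heapq

def retrosynthesis_search(target, expand, score, max_steps=1000):
    """Best-first search over synthetic routes (illustrative only).

    expand(state) yields precursor states, one retro "move" each;
    score(state) estimates distance to purchasable starting materials
    (lower is better, 0 means done). Returns the route found, or None.
    """
    frontier = [(score(target), target, [target])]
    seen = set()
    while frontier and max_steps:
        max_steps -= 1
        s, state, path = heapq.heappop(frontier)
        if s == 0:  # reached known starting materials
            return path
        if state in seen:
            continue
        seen.add(state)
        for nxt in expand(state):
            heapq.heappush(frontier, (score(nxt), nxt, path + [nxt]))
    return None

# Toy problem: states are integers, each retro step subtracts one,
# and the score is just the remaining distance to zero.
route = retrosynthesis_search(3, lambda n: [n - 1] if n > 0 else [], lambda n: n)
print(route)
```

The chess analogy in the quote maps directly onto the structure: `expand` generates legal moves, `score` evaluates positions, and the heap keeps the search pointed in the most promising synthetic direction.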

Can a Neural Net Catch a Virus?

The question struck me, so follow the link below for more.  Yes, in one general sense, but probably not in the way they are used today.  A good, basically non-technical discussion below:

Can a neural network catch a “virus”?
By Samuel Troper in Medium

Neural networks are one of the most powerful tools in the present-day machine learning toolbox, but could they be manipulated or damaged with a “virus”?  ... "

Nike Store of the Future

Good example of in store retail experimentation.

Inside Nike's Store of the Future
By Eben Novy-Williams in Bloomberg

Nike has opened a new flagship store in New York City that focuses on personalized, experiential shopping as a model for similar stores worldwide. The venue is designed to work seamlessly with the Nike app, allowing shoppers to check out items via phone, request deliveries to dressing rooms, and schedule appointments with a stylist. Shoppers are automatically designated NikePlus members after downloading the app, entitling them to various perks; the app employs geo-fencing technology, becoming aware of when shoppers enter the store and instantly changing the home page to showcase new offers and content. Said Nike's Adam Sussman, "We want to create a seamless connection between the physical and digital experience." Both the phone integration and additional in-store benefits for members aim to improve the customer experience and generate data about customers and their preferences, which is fed into product design and inventory decisions. ... " 

Holoforge for Historical Experience

Saw a number of examples early on of using VR to effectively show historical data.  A good example of its use, both for teaching and for interacting with complex data.

The Mont Saint-Michel: Digital perspectives on the model
Celebrating French art, history, and culture through mixed reality and artificial intelligence.

The Musée des Plans-Reliefs is bringing to life the historic Mont-Saint-Michel relief map—an example of the 17th century’s most advanced mapping technology—via a mixed reality experience that uses current mapping innovations to immerse viewers in a vital piece of French history and culture.

For centuries, technology has been influencing the way people engage with the world and shape the course of history. In 17th and 18th-century France, large-scale 3D maps—painstakingly built by hand down to the most intricate details—were the most advanced mapping technology of their time. They were considered such valuable strategic tools that leaders like Napoleon and King Louis XIV considered them military secrets and hid them from public view.

The Musée des Plans-Reliefs in Paris is home to more than 100 of these historic relief maps that have withstood the test of time. But the crown jewel of the collection is the model of Mont-Saint-Michel—a rocky headland off the Normandy coast with a Benedictine abbey that’s an architectural marvel in its own right—presented by a monk to Louis XIV in 1701. A new HoloLens experience is bringing these technological feats, both the relief map and Mont-Saint-Michel itself, to life for a new generation.

The museum is partnering with Microsoft, HoloForge, and Iconem to create “The Mont Saint-Michel: Digital perspectives on the model,” a HoloLens experience celebrating French culture and innovation. The goal of the exhibit is to use mixed reality technology in a way that empowers the Musée des Plans-Reliefs to unlock a more vital kind of storytelling.  ....  " 

Sunday, January 20, 2019

Cortana as a Skill in other Assistants

Good, as opposed to having assistants competing, have them work together by addressing particular contexts.   Not sure how this will work with Cortana exactly; it would make sense for it to provide assistance for its office suite.  Will be following.

In new consumer push, Microsoft looks to put Cortana into rival voice assistants  By Maria Deutscher in SiliconAngle

Microsoft Corp. is looking to make its Cortana voice assistant available as a native “skill” for Amazon.com Inc.’s Alexa and Google Assistant as part of a new push to court consumers.

Microsoft Chief Executive Satya Nadella shared the plan during a press event earlier this week. His remarks, which were made public today, indicate that Microsoft has more or less given up trying to compete directly with Alexa and Google Assistant.

It’s not hard to see the reasoning behind the decision. The two services boast a massive market share advantage over Cortana. According to data from analyst firm Canalys, Amazon’s Alexa powered nearly 75 percent of the roughly 20 million smart speakers that were sold during the third quarter of 2018, while Google Assistant accounted for practically all the rest.

Nadella said that Cortana could extend the services’ capabilities to make them more useful for users.

“Would it be better off, for example, to make Cortana a valuable skill that someone who is using Alexa can call? Or should we try to compete with Alexa? We, quite frankly, decided that we would do the former. Because Cortana needs to be that skill for anyone who is a Microsoft Office 365 subscriber,” Nadella was quoted as saying.  .... " 

Democratizing AI in Health Care

Perhaps surprising because of the particularly technical aspect of healthcare data.   Interpretation of results is also most important.

Democratizing artificial intelligence in health care  in MIT News

Leo Celi, a researcher at MIT and a physician at Beth Israel Deaconess Medical Center, leads a global series of health care hackathons, or datathons, to bring doctors and data scientists together and encourage hospitals to make use of their electronic medical records.

Hackathons promote doctor-data scientist collaboration and expanded access to electronic medical-records to improve patient care.

By Kim Martineau | MIT Quest for Intelligence 

An artificial intelligence program that’s better than human doctors at recommending treatment for sepsis may soon enter clinical trials in London. The machine learning model is part of a new way of practicing medicine that mines electronic medical-record data for more effective ways of diagnosing and treating difficult medical problems, including sepsis, a blood infection that kills an estimated 6 million people worldwide each year. 

The discovery of a promising treatment strategy for sepsis didn’t come about the regular way, through lengthy, carefully-controlled experiments. Instead, it emerged during a free-wheeling hackathon in London in 2015. ... "