"The Seven Tools of Causal Inference, with Reflections on Machine Learning," by ACM A.M. Turing Award recipient Judea Pearl, describes tools that overcome obstacles to human-level machine intelligence. Pearl delivers a message to machine-learning and AI experts in an original video at bit.ly/2GUEyJW.
An excerpt from the longer paper, ultimately positioning our challenge:
Key insights:
- Data Science is a two-body problem, connecting data and reality, including the forces behind the data.
- Data Science is the art of interpreting reality in the light of data, not a mirror through which data sees itself from different angles.
- The ladder of causation is the double helix of causal thinking, defining what can and cannot be learned about actions and about worlds that could have been. ... "
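To make the ladder concrete, here is a minimal sketch of my own (not from Pearl's paper): a toy structural causal model in Python where simply conditioning on X (rung one, "seeing") gives a different answer than intervening on X (rung two, "doing"), because a confounder Z drives both X and Y.

```
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy structural causal model (illustrative only):
#   Z -> X, Z -> Y, X -> Y   (Z is a confounder)
Z = rng.normal(size=n)
X = (Z + rng.normal(size=n) > 0).astype(int)       # X depends on Z
Y = 2.0 * X + 3.0 * Z + rng.normal(size=n)         # Y depends on X and Z

# Rung 1 ("seeing"): observational difference E[Y | X=1] - E[Y | X=0]
observational = Y[X == 1].mean() - Y[X == 0].mean()

# Rung 2 ("doing"): simulate do(X=1) and do(X=0) by overriding X,
# leaving the rest of the model intact
Y_do1 = 2.0 * 1 + 3.0 * Z + rng.normal(size=n)
Y_do0 = 2.0 * 0 + 3.0 * Z + rng.normal(size=n)
interventional = Y_do1.mean() - Y_do0.mean()

print(f"E[Y|X=1] - E[Y|X=0]         (seeing): {observational:.2f}")  # well above 2: biased by Z
print(f"E[Y|do(X=1)] - E[Y|do(X=0)] (doing):  {interventional:.2f}") # close to 2: the causal effect
```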
Thursday, February 28, 2019
Beyond Worst-Case Analysis
A look at worst-case analysis of performance. This deals mostly with optimization methods, those that find a specific, provably correct answer. Sorting a file is given as a common example: there is a precise correct result, the sorted file. Now, how long does it take to sort very large files, with many sort keys? This is different for deep learning methods, where the result depends on some chosen solution architecture and is not expected to be optimal or provably best (though, as the paper suggests, it often is!). A number of interesting problems are addressed, such as classifiers, but I don't usually expect a classifier to produce optimal answers.
Video intro by the author.
Paper is technical.
Beyond Worst-Case Analysis By Tim Roughgarden
Communications of the ACM, March 2019, Vol. 62 No. 3, Pages 88-96 10.1145/3232535
Comparing different algorithms is hard. For almost any pair of algorithms and measure of algorithm performance like running time or solution quality, each algorithm will perform better than the other on some inputs.a For example, the insertion sort algorithm is faster than merge sort on already-sorted arrays but slower on many other inputs. When two algorithms have incomparable performance, how can we deem one of them "better than" the other? ....... "
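An informal timing sketch of that insertion sort example, my own illustration rather than the paper's: on an already-sorted list insertion sort does a single linear pass, while merge sort pays its full n log n cost on every input.

```
import random
import time

def insertion_sort(a):
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # on sorted input this loop never runs
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def time_it(fn, data):
    t0 = time.perf_counter()
    fn(data)
    return time.perf_counter() - t0

sorted_input = list(range(5_000))
random_input = random.sample(sorted_input, len(sorted_input))

# Insertion sort wins on sorted input, loses badly on random input.
for name, data in [("sorted", sorted_input), ("random", random_input)]:
    print(name,
          f"insertion={time_it(insertion_sort, data):.3f}s",
          f"merge={time_it(merge_sort, data):.3f}s")
```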
The author also talks about current 'Deep learning':
" ... To illustrate some of the challenges, consider a canonical supervised learning problem, where a learning algorithm is given a dataset of object-label pairs and the goal is to produce a classifier that accurately predicts the label of as-yet-unseen objects (for example, whether or not an image contains a cat). Over the past decade, aided by massive datasets and computational power, deep neural networks have achieved impressive levels of performance across a range of prediction tasks.25 Their empirical success flies in the face of conventional wisdom in multiple ways. First, most neural network training algorithms use first-order methods (that is, variants of gradient descent) to solve nonconvex optimization problems that had been written off as computationally intractable. Why do these algorithms so often converge quickly to a local optimum, or even to a global optimum?q Second, modern neural networks are typically over-parameterized, meaning that the number of free parameters (weights and biases) is considerably larger than the size of the training dataset. Over-parameterized models are vulnerable to large generalization error (that is, overfitting), but state-of-the-art neural networks generalize shockingly well.40 How can we explain this? The answer likely hinges on special properties of both real-world datasets and the optimization algorithms used for neural network training (principally stochastic gradient descent) ....
... With algorithms increasingly dominating our world, the need to understand when and why they work has never been greater. The field of beyond worst-case analysis has already produced several striking results, but there remain many unexplained gaps between the theoretical and empirical performance of widely used algorithms. With so many opportunities for consequential research, I suspect the best work in the area is yet to come. .... "
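The nonconvex-optimization point can be illustrated with a toy of my own (not the author's): plain gradient descent on a simple nonconvex function, started from random points, still settles quickly into minima.

```
import numpy as np

def f(x):                      # simple nonconvex function with several local minima
    return np.sin(3 * x) + 0.1 * x ** 2

def grad_f(x):                 # its derivative
    return 3 * np.cos(3 * x) + 0.2 * x

rng = np.random.default_rng(1)
results = []
for _ in range(10):            # gradient descent from random starting points
    x = rng.uniform(-5, 5)
    for _ in range(500):
        x -= 0.01 * grad_f(x)
    results.append((x, f(x)))

for x, fx in sorted(results, key=lambda r: r[1]):
    print(f"converged to x={x:+.3f}, f(x)={fx:+.3f}")
```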
Further thinking about the implications of this. It does make us think about how such algorithms should be used, and about their inherent risk.
Robots with Sense of Self?
We often found it useful, when modeling a conversation, to introduce a 'self' aspect for concepts like memory, context and goals. These were not for guiding physical robots, but robotic processes. Still, it was useful for noting and evaluating competing goals and options, and for elements of risk to individual 'selves'. Might it also be used in 'digital twin' concepts when modeling human behavior? The article below made me think of how the concept could be used.
Can robots ever have a true sense of self? Scientists are making progress by Vishwanathan Mohan, in TechExplore
Having a sense of self lies at the heart of what it means to be human. Without it, we couldn't navigate, interact, empathise or ultimately survive in an ever-changing, complex world of others. We need a sense of self when we are taking action, but also when we are anticipating the consequences of potential actions, by ourselves or others.
Given that we want to incorporate robots into our social world, it's no wonder that creating a sense of self in artificial intelligence (AI) is one of the ultimate goals for researchers in the field. If these machines are to be our carers or companions, they must inevitably have an ability to put themselves in our shoes. While scientists are still a long way from creating robots with a human-like sense of self, they are getting closer.
Researchers behind a new study, published in Science Robotics, have developed a robotic arm with knowledge of its physical form – a basic sense of self. This is nevertheless an important step. ... "
TurboTrack for RFID Robotics and More
Recalling an application where we might have used this.
MIT Media Labs Creates Highly Precise UHF RFID for Robotics
The TurboTrack system employs a standard tag and interrogator, as well as a "helper" antenna device that pulses short signals to pinpoint the locations of even fast-moving tags at the sub-centimeter level. By Claire Swedberg in RFID Journal
Feb 28, 2019—The Massachusetts Institute of Technology (MIT)'s Media Lab has completed its testing of a radio frequency identification system known as TurboTrack. The lab's researchers say the solution could enable a new level of flexibility and autonomy for robots in manufacturing processes, as well as in applications such as search-and-rescue.
The TurboTrack system is designed to pinpoint a passive UHF RFID tag's location at the sub-centimeter level, even if it is moving at fairly high-speed. Such a system could make it possible for a robot to understand where a tagged item was located and to respond accordingly—even one flying overhead, as in the case of a swarm of drones. In the long run, the technology is intended to offer a more effective option for managing robotics than computer vision. The group will present a paper on the technology today at the USENIX Symposium on Networked Systems Design and Implementation. .... "
Robotic Process Automation
An update on the advance of RPA. I don't often hear this excellent idea mentioned, and many seem not to have heard of it. Does it have a marketing problem, squeezed between analytics and AI?
Robotic Process Automation Gains Momentum
Robotic process automation software is growing fast in enterprises. Here's why it can be an attractive option for businesses racing into digital transformation.
Does your ERP system talk to the rest of the systems in your enterprise? Do your call center representatives need to manually enter data from one system into another system to close a call? Are your workers performing a lot of repetitive tasks?
These are some of the problems that are fueling the rapid rise of robotic process automation, or RPA, in the enterprise. The technology is growing at a fast clip -- 57% year over year, according to Gartner. That pace will continue. Gartner has said that global spending on RPA software is on pace to reach $2.4 billion in 2022, up from an estimated $680 million in 2018. ....
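A quick back-of-the-envelope check of those figures, my arithmetic rather than Gartner's: growing from $680 million in 2018 to $2.4 billion in 2022 implies a compound annual growth rate of roughly 37 percent, so the forecast already assumes growth slowing from the current 57 percent pace.

```
# Implied compound annual growth rate from the quoted Gartner figures
start, end, years = 680e6, 2.4e9, 2022 - 2018
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR 2018-2022: {cagr:.1%}")   # roughly 37%
```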
Small Coffee Farmers Seek Trust
And another example of integrity-type usage. I used to work in the coffee procurement and blending space, sometimes with small growers. Might this be an interesting place to look for small agricultural suppliers?
Coffee Farmers Bet on Blockchain to Boost Business By Reuters
The United Nations Food and Agriculture Organization said in a recent report that blockchain technology could potentially address challenges faced by smallholder coffee farmers by "reducing uncertainty and enabling trust among market players."
Blockchain facilitates shared access to data maintained by a computer network, and can rapidly trace the myriad parties involved in food production and distribution.
The Agriculture Alliance of the Caribbean (AACARI) has embarked on a blockchain project, which entails auditing by accredited professionals to ensure farmers comply with Global GAP (good agricultural practices) standards, and a digital marketplace where buyers can find information about the produce.
Vijay Kandy of AcreCX, the company building AACARI's blockchain platform, said farmers could bypass intermediaries and deal directly with buyers via the auditing process. Buyers would no longer need to rely on middlemen to ensure farmers are adhering to Global GAP. ...
Strategy for Storing Clinical Trial Data with Blockchains
Here the goal for using blockchains is the integrity of results stored among multiple parties, assuming some parties have an incentive to tamper with the data. So why not use a database ledger with assured strong passwords and encryption?
The argument being made is that the blockchain cannot be altered without leaving evidence of tampering, using a combination of hashing and proof-of-work strategies. Yet there have been a number of cases lately where a blockchain was not as strong as theoretically expected, due to bad design and operational choices.
The approach below uses a centralized authority to enforce choices, similar to enforcing better passwords in computer systems. Since hashing is used, the strength of the hashing and signing scheme is key, and we assume the central authority enforces that.
Researchers Create Method to Ensure Integrity of Clinical Trials Data With Blockchain
News-Medical.net
By James Ives
University of California, San Francisco (UCSF) researchers have developed a proof-of-concept method for ensuring the integrity of clinical trials data, using blockchain. The prototype system produces an inflexible audit trail in which tampering can be easily flagged. The system is designed to run through a Web portal, so every time new data is entered on a given trial participant, the sender, receiver, timestamp, and file attachment containing the data, as well as the hash of the previous block of data relating to the patient, are recorded onto a new block with its own unique signature. A regulator with centralized authority must operate the portal, register all parties, and maintain a ledger of the blockchain's hashes. Real-time reporting of data to the regulator could augment the safety and effectiveness of clinical trials. Said UCSF's Atul Butte, "We think it could someday be useful for pharma companies running clinical trials." .... "
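A minimal sketch of the underlying idea, my own illustration and not the UCSF system: each record carries a hash of the previous record, so altering any earlier entry invalidates every hash that follows.

```
import hashlib
import json
import time

def make_block(prev_hash, sender, receiver, payload):
    """Append-only record: any change to an earlier block breaks the chain."""
    block = {
        "prev_hash": prev_hash,
        "sender": sender,
        "receiver": receiver,
        "timestamp": time.time(),
        "payload": payload,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute each hash and check the prev_hash links."""
    for i, block in enumerate(chain):
        expected = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(expected, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, "site_A", "regulator", {"patient": 17, "bp": 120})]
chain.append(make_block(chain[-1]["hash"], "site_A", "regulator",
                        {"patient": 17, "bp": 118}))

print(verify(chain))                 # True
chain[0]["payload"]["bp"] = 140      # tamper with an earlier record
print(verify(chain))                 # False: tampering is evident
```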
Wednesday, February 27, 2019
History of Data Visualization and its Future
A good, detailed and lengthy piece. I remember many of the transitions. Now I would like to see more and better analytics/AI visualization added to this, and better ways to understand the deeper context and potential applications of the data.
The 3 waves of data visualization: A brief history and predictions for the future By Elijah Meeks, Senior Data Visualization engineer, Netflix
This post is based on Elijah Meeks’ keynote from the 2018 Tapestry Conference. Elijah is a Senior Data Visualization Engineer at Netflix. A version of this post originally appeared on Medium.com.
Fifteen years ago, there was no D3, no Tableau, no ggplot or even Prefuse/Flare. If you wanted to do network visualization you might use the newly published Cytoscape, though it was focused on bioinformatics—the science of collecting and analyzing complex biological data like genetic codes. Geospatial options were more advanced, with ArcGIS providing more and more cartographic functionality in its many red toolboxes. .... "
D-Wave Announces Next Gen of Its Unique Quantum Computing
I continue to follow this, notably for its possible use on certain kinds of problems.
D-Wave announces its next-gen quantum computing platform By Frederic Lardinois in TechCrunch
D-Wave, the well-funded quantum computing company, today announced its next-gen quantum computing platform with 5,000 qubits, up from 2,000 in the company’s current system. The new platform will come to market in mid-2020.
The company’s new so-called Pegasus topology connects every qubit to 15 other qubits, up from six in its current topology. With this, developers can use the machine to solve larger problems with fewer physical qubits — or larger problems in general.
It’s worth noting that D-Wave’s qubits are different from those of the company’s competitors like Rigetti, IBM and Google, with shorter coherence times and a system that mostly focuses on solving optimization problems. To do that, D-Wave produces lots of qubits, but in a relatively high-noise environment. That means that you can’t compare D-Wave’s qubit count to that of its competitors (with D-Wave claiming the superiority of its machine for certain problems), which are building universal quantum computers. .... "
Datafication is a Process
OK, the specific terminology is new to me, but I can see the point. Being data-driven is important, but getting there is also a process: datafication. But does it need a new term? A very nicely done, non-technical piece on the topic by Mark van Rijmenam:
" ... Transforming your organisation into a data organisation requires data above all. To achieve that, the first step is to datafy your organisation. Datafication is the process of making a business data-driven, by transforming social action into quantified data. It involves collecting (new) data from various sources and processes using IoT devices or creating detailed customer profiles. Datafying your organisation starts by making your office, your workplace, your processes and your products smart. This will make previously ‘invisible’ processes traceable so that they can be monitored, analysed and optimised. ... "
Why Datafication is Key for the Organisation of Tomorrow
February 27, 2019 | Big Data Blog Internet of Things | Mark van Rijmenam .... "
" ... Transforming your organisation into a data organisation requires data above all. To achieve that, the first step is to datafy your organisation. Datafication is the process of making a business data-driven, by transforming social action into quantified data. It involves collecting (new) data from various sources and processes using IoT devices or creating detailed customer profiles. Datafying your organisation starts by making your office, your workplace, your processes and your products smart. This will make previously ‘invisible’ processes traceable so that they can be monitored, analysed and optimised. ... "
Why Datafication is Key for the Organisation of Tomorrow
February 27, 2019 | Big Data Blog Internet of Things | Mark van Rijmenam .... "
Fedex Experiments with Autonomous Pods
I think we can assume we will soon see multiple autonomous delivery pods crawling (and perhaps later flying) all around the landscape. Waiting for the unintended consequences. Not too different from the automated mail-delivery bots we had in the office as early as the '70s, but these will be out in the open, sometimes competing with traffic.
FedEx unveils autonomous delivery robot Trials of the robot, which has a top speed of 10 mph, will begin later this year By James Vincent in The Verge
Startups do it. Amazon does it. And now even Fedex is doing it — experimenting with robots for short-range deliveries. Today, the company officially announced its new FedEx SameDay Bot, which it says could help make “last mile” deliveries more efficient.
The SameDay Bot is battery-powered, has a top speed of 10 mph, and is autonomous, meaning it can steer itself around pedestrians and traffic using a combination of LIDAR sensors like those found in self-driving cars and regular cameras.
FedEx says it will initially use the bot to courier packages between the company’s offices in its headquarters in Memphis (pending approval from local government). But if these trials are successful it wants to expand the service to other companies and retailers, eventually making robots a standard part of its same-day delivery service. ... "
Alexa Socialbot Reports on Grand Challenge
Following this effort; I have experimented with the bots to date. We still need more substantial results to make these approaches more powerful. True, meaningful, contextual conversation is the goal: not just fooling someone into thinking they are talking to a knowledgeable human, but really helping. I would like to see more results from the challenge emerge on the platform.
AI Tools Let Alexa Prize Participants Focus on Science By Anu Venkatesh
March 4 marks the kickoff of the third Alexa Prize Socialbot Grand Challenge, in which university teams build socialbots capable of conversing on a wide range of topics and make them available to millions of Alexa customers through the invitation “Alexa, let’s chat”. Student teams can begin applying to the competition on March 4, and in the subsequent six weeks, the Alexa Prize team will make a series of roadshow appearances at tech hubs in the U.S. and Europe to meet with students and answer questions about the program.
As we gear up for the third Alexa Prize Socialbot Grand Challenge, the Alexa science blog is reviewing some of the technical accomplishments from the second, which were reported in a paper released in late 2018. This post examines contributions by Amazon’s Alexa Prize team; a second post will examine innovations from the participating university teams.
To ensure that Alexa Prize contestants can concentrate on dialogue systems — the core technology of socialbots — Amazon scientists and engineers built a set of machine learning modules that handle fundamental conversational tasks and a development environment that lets contestants easily mix and match existing modules with those of their own design. ... "
Alexa Science Blog
Paper on Socialbot work to date.
People and Process
Thoughts on process in retail, where it is not often effectively utilized.
Intersection of technology, processes and people by Jim Frome
From the front end to the backend: the people, technology, and the processes that empower them must all align towards a genuinely unified retail experience. From brick and mortar stores to e-commerce, to social, to the marketplace––and everything in between––the ability to buy has to be fast, easy, and reliable. But for all of that to work, there also has to be a seamless, “omnichannel” unified retail experience throughout your organization to live up to these changing shopper expectations.
Today, the departments, systems and resources (everyone and everything) must all work together seamlessly to drive the overarching goal of capturing consumer attention to successfully deliver on promises, gain customer loyalty and accumulate market share. ... "
Tuesday, February 26, 2019
Pre Learning to Address Data Gap
Brought to my attention: the common problem of inadequate data. Pre-training? Priming the neural pump? Worth a good look.
New AI approach bridges the 'slim-data gap' that can stymie deep learning approaches by Tom Rickey, Pacific Northwest National Laboratory
PNNL's deep learning network tackles tough chemistry problems with the aid of some pre-training. Credit: Timothy Holland/PNNL
Scientists have developed a deep neural network that sidesteps a problem that has bedeviled efforts to apply artificial intelligence to tackle complex chemistry—a shortage of precisely labeled chemical data. The new method gives scientists an additional tool to apply deep learning to explore drug discovery, new materials for manufacturing, and a swath of other applications.
Predicting chemical properties and reactions among millions upon millions of compounds is one of the most daunting tasks that scientists face. There is no source of complete information from which a deep learning program could draw upon. Usually, such a shortage of a vast amount of clean data is a show-stopper for a deep learning project.
Scientists at the Department of Energy's Pacific Northwest National Laboratory discovered a way around the problem. They created a pre-training system, kind of a fast-track tutorial where they equip the program with some basic information about chemistry, equip it to learn from its experiences, then challenge the program with huge datasets. .... "
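PNNL's method is domain-specific, but the general pre-training idea can be sketched with a toy example of my own, here using scikit-learn's warm_start option to reuse learned weights: train first on a large related dataset, then continue training on the small, precisely labeled one. On toy data the benefit may be modest; the point is the mechanics.

```
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Large "related" dataset for pre-training (cheap, approximate labels)
X_pre = rng.uniform(-3, 3, size=(20_000, 1))
y_pre = np.sin(X_pre).ravel() + 0.1 * rng.normal(size=20_000)

# Small, precisely labeled target dataset (the "slim data" problem)
X_small = rng.uniform(-3, 3, size=(60, 1))
y_small = np.sin(X_small).ravel() + 0.3 * X_small.ravel()   # related but shifted task

X_test = np.linspace(-3, 3, 500).reshape(-1, 1)
y_test = np.sin(X_test).ravel() + 0.3 * X_test.ravel()

def mse(model):
    return np.mean((model.predict(X_test) - y_test) ** 2)

# Baseline: train from scratch on the small dataset only
scratch = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
scratch.fit(X_small, y_small)

# Pre-train on the large related dataset, then fine-tune on the small one;
# warm_start=True makes the second fit() continue from the learned weights.
pretrained = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                          warm_start=True, random_state=0)
pretrained.fit(X_pre, y_pre)       # pre-training
pretrained.fit(X_small, y_small)   # fine-tuning

print(f"test MSE from scratch:      {mse(scratch):.4f}")
print(f"test MSE with pre-training: {mse(pretrained):.4f}")
```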
What are Knowledge Graphs?
Some good introductory resources from the recent Webinar:
From TopQuadrant:
Thank you for attending our webinar: "What are Knowledge Graphs? Why are they key to Successful Data Governance?" We hope you enjoyed the event. ...
The recording and slides from the webinar are available here (Plus other resources):
Audio recording.
Slides.
Augmented Reality in an Empty Store
Clever idea. Makes some strong assumptions about how many people are outfitted for AR.
Lego brings AR to an empty store by Tom Ryan in Retailwire
On February 13, for London Fashion Week, Lego opened a “virtually” empty pop-up store for one day to sell a limited-edition apparel range that could only be bought through Snapchat.
The only thing in “The Missing Piece” pop-up in London’s Soho district was a Snapcode, a QR code for Snapchat, displayed on a plinth. Scanning the Snapcode transported the shopper via their smartphone screens into an augmented-reality (AR) fashion boutique.
Visitors were then able to explore the AR space that featured a DJ booth, arcade machines and bouncer, all made of Legos. Lego mannequins showcased the streetwear range that could be bought online through an integrated “Shop Now” feature on Snapchat. ... "
Nestle Combines Retail Channels with Tesco
This should also yield new kinds of data.
Nestlé connects in-store and online media in retailwire, with expert comments. by Dale Buss
Through a special arrangement, presented here for discussion is a summary of a current article from the bi-monthly e-zine, CPGmatters.
Nestlé partnered with Tesco to help launch the dunnhumby media platform that promises to bring continual, consistent and contextual communications with shoppers within and outside the store.
Nestlé worked with dunnhumby media in the U.K. to promote the launch of a new on-pack promotion to Tesco customers across owned and paid channels, including in-store, mobile, online and out-of-home. The insights-driven media plan reached 5.8 million customers and drove an 11 percent increase in sales. ... "
Healthcare Hospital Assistant
Had previously mentioned an IBM Watson-powered hospital assistant system. Care was taken there not to deal with any patient data; it seems the same is true here with a test called Aiva. Note also the linkage to workflow: how can we make hospitals more efficient by leveraging people with an RPA (Robotic Process Automation) approach, overlaying key detected patterns of use? Many more examples in the links below.
An LA hospital will put Alexa in over 100 patients' rooms
It provides a hands-free way to call for healthcare providers and to control the TV.
By Mariella Moon, @mariella_moon in Engadget ... "
Meet Aiva, the world’s first voice-powered care assistant. Hands-free communication for happier patients and better workflow. .... " On AivaHealth.
See more on Hospital Virtual Assistants.
Time Series and Deep Learning
A considerable look at deep learning and time series problems. We did many of these kinds of problems in the enterprise, and we did not consider DL because it seemed inefficient. In O'Reilly.
3 reasons to add deep learning to your time series toolkit
The most promising area in the application of deep learning methods to time series forecasting is in the use of CNNs, LSTMs, and hybrid models.
By Francesca Lazzeri:
The ability to accurately forecast a sequence into the future is critical in many industries: finance, supply chain, and manufacturing are just a few examples. Classical time series techniques have served this task for decades, but now deep learning methods—similar to those used in computer vision and automatic translation—have the potential to revolutionize time series forecasting as well.
Due to their applicability to many real-life problems—such as fraud detection, spam email filtering, finance, and medical diagnosis—and their ability to produce actionable results, deep learning neural networks have gained a lot of attention in recent years. Generally, deep learning methods have been developed and applied to univariate time series forecasting scenarios, where the time series consists of single observations recorded sequentially over equal time increments. For this reason, they have often performed worse than naïve and classical forecasting methods, such as exponential smoothing (ETS) and autoregressive integrated moving average (ARIMA). This has led to a general misconception that deep learning models are inefficient in time series forecasting scenarios, and many data scientists wonder whether it’s really necessary to add another class of methods—such as convolutional neural networks or recurrent neural networks—to their time series toolkit.
In this post, I'll discuss some of the practical reasons why data scientists may still want to think about deep learning when they build time series forecasting solutions. ... "
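A minimal sketch of the data preparation that makes deep learning applicable to a univariate series, my own illustration assuming Keras/TensorFlow is available: slice the series into fixed-length windows and fit a small LSTM to predict the next value.

```
import numpy as np
import tensorflow as tf

# Synthetic univariate series: trend + seasonality + noise
rng = np.random.default_rng(0)
t = np.arange(1000, dtype="float32")
series = 0.01 * t + np.sin(0.1 * t) + 0.1 * rng.normal(size=t.size).astype("float32")

# Frame as supervised learning: each window of 24 points predicts the next point
window = 24
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]            # LSTM expects (samples, timesteps, features)

split = 800
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=10, batch_size=32, verbose=0)

mse = model.evaluate(X[split:], y[split:], verbose=0)
print(f"held-out MSE: {mse:.4f}")
```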
Math: Truth vs Beauty?
Is beauty the same as truth? Or are we seduced by cognitive and group bias? Is the same true in computer science?
Lost in Math? By Moshe Y. Vardi Communications of the ACM, March 2019, Vol. 62 No. 3, Page 7
When I was 10 years old, my math teacher started a Math Club. It was not popular enough to last more than a few weeks, but that was long enough for me to learn about matrices and determinants. When I came home, my mother asked me how the club had been. "Beautiful," I answered. "Do you mean, 'interesting'?" she inquired. "No," I said, "Beautiful!" While some people find mathematics befuddling, others find it elegant and beautiful. The mathematician Paul Erdős often referred to "The Book" in which God keeps the most beautiful proofs of each mathematical theorem. The philosopher Bertrand Russell said, "Mathematics, rightly viewed, possesses not only truth, but supreme beauty." The beauty can be compelling; something so beautiful must be true!
But the seductive power of mathematical beauty has come under criticism lately. In Lost in Math, a book published earlier this year, the theoretical physicist Sabine Hossenfelder asserts that mathematical elegance led physics astray. Specifically, she argues that several branches of physics, including string theory and quantum gravity, have come to view mathematical beauty as a truth criterion, in the absence of experimental data to confirm or refute these theories. The theoretical physics community, she argues, is falling victim to group thinking and cognitive bias, seduced by mathematical beauty. .... "
Monday, February 25, 2019
Wikidata and Assistants
This was new and interesting to me. We looked at Wikidata early on, and there was little of interest to us then, and it seemed it was not being updated. Now here is a connection to assistants. Fascinating details about how Wikidata is being used by assistants. Exploring some possible points of leverage.
Inside the Alexa-Friendly World of Wikidata in Wired
HUMANS PRICKED BY info-hunger pangs used to hunt and peck for scraps of trivia on the savanna of the internet. Now we sit in screen-glow-flooded caves and grunt, “Alexa!” Virtual assistants do the dirty work for us. Problem is, computers can’t really speak the language.
Many of our densest, most reliable troves of knowledge, from Wikipedia to (ahem) the pages of WIRED, are encoded in an ancient technology largely opaque to machines—prose. That’s not a problem when you Google a question. Search engines don’t need to read; they find the most relevant web pages using patterns of links. But when you ask Google Assistant or one of its sistren for a celebrity’s date of birth or the location of a famous battle, it has to go find the answer. Yet no machine can easily or quickly skim meaning from the internet’s tangle of predicates, complements, sentences, and paragraphs. It requires a guide.
Wikidata, an obscure sister project to Wikipedia, aims to (eventually) represent everything in the universe in a way computers can understand. Maintained by an army of volunteers, the database has come to serve an essential yet mostly unheralded purpose as AI and voice recognition expand to every corner of digital life. “Language depends on knowing a lot of common sense, which computers don’t have access to,” says Denny Vrandečić, who founded Wikidata in 2012. A programmer and regular Wikipedia editor, Vrandečić saw the need for a place where humans and bots could share knowledge on more equal terms. ... "
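A small example of the kind of structured lookup Wikidata enables; my own sketch, assuming the public SPARQL endpoint at query.wikidata.org and the Python requests library: fetch a date of birth as data rather than scraping prose.

```
import requests

# Ask Wikidata's public SPARQL endpoint for Douglas Adams' (Q42) date of birth (P569).
query = """
SELECT ?dob WHERE {
  wd:Q42 wdt:P569 ?dob .
}
"""
resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "wikidata-example/0.1"},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["dob"]["value"])     # an ISO timestamp, e.g. 1952-03-11T00:00:00Z
```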
Procter to Roll Out Laundry
Apparently profitable; I have never used the service, though several locations are nearby. A nice leverage of the brand-name equity.
Tide to roll out laundry cleaning service nationwide in Retailwire by Tom Ryan plus expert discussion.
Procter & Gamble announced a commitment to double the size of its current out-of-home laundry footprint by the end of 2020, making Tide Cleaners’ services available in more than 2,000 locations nationwide.
The expansion builds on Tide’s entry nearly a decade ago into dry cleaning that has since included several acquisitions in wash and fold services to meet the needs of the 26 million American households currently using shared facilities or outsourcing laundry. ... "
Judea Pearl on Causal Inference
Back to the need for better integrated inference. It's not enough to just find patterns; we need to insert them into our cognitive work.
The full paper in the Communications of the ACM is technical, but it contains excellent overview pieces that are essential to understanding the future of AI beyond deep learning. And it makes my point above.
Ultimately this makes the case of connecting any kind of analytics (like machine learning) to human augmentation and interaction.
Qualcomm Snapdragon to Connect Automotive
Looking further at this: what kinds of connections will be most useful, even essential, to the driver? How will the home be connected? What kinds of channels, and who will provide them? What connection speeds? What metadata will the car need to perform the tasks we want?
Qualcomm draws a road map to the self-driving car of the future By Jeremy Kaplin, DigitalTrend
Ultra-high definition video. GPS that’s even more precise than GPS. Massively multiplayer online gaming. 5G connectivity. All the buzzwords that define modern technology? Qualcomm has a single chip for it — designed for your car.
On Tuesday, Qualcomm unveiled its second-generation Connected Car Reference Platform and the QCA6696 chip, which brings next-gen Wi-Fi 6 connectivity to automobiles and enables an enormous array of technologies that promise to bridge the gap between the ordinary cars of today and the self-driving entertainment centers of the future. And at the center of all of those technologies is the new Wi-Fi 6 standard and — you guessed it — 5G cellular connectivity.
“We believe our new Snapdragon Automotive Platforms will help launch the connected vehicle into the 5G era, offering multi-Gigabit low latency speeds, lane-level navigation accuracy, and an integrated and comprehensive C-V2X solution for increased road safety for cars and transportation infrastructure,” said Nakul Duggal, senior vice president of product management for Qualcomm. “With these new wireless solutions, we are excited to support our automaker, Tier-1 and roadside infrastructure customers as they develop faster, safer, and differentiated products for the next-generation of the connected car.” .... '
BMW Wants Natural Conversations with your Car
As I read it, this will not be built on existing assistant technologies, which I recall BMW has supported; see for example an integration with Alexa. I would think it would be important to integrate with the smart home as well. Will it? Looking further.
BMW wants to make interacting with a car as natural as talking to a friend By Ronan Glon
BMW predicts poking a screen to get directions will soon become as outdated as a flip phone. The German automaker traveled to MWC2019 to demonstrate its new, artificial intelligence-powered Natural Interaction technology, which empowers drivers with three onboard means of communications that make interacting with a car as straightforward as talking to a friend.
Natural Interaction builds on technologies such as voice commands and gesture recognition that are already available in select series-produced BMW models, like the 7 Series, and it adds a forward-looking feature (pun intended) called gaze recognition that tracks the driver’s eyes. Drivers don’t need to tell the car how they want to communicate; the software automatically detects instructions, and executes them immediately. Someone driving alone can say “I’m cold” to turn the heater up. If four passengers are having a conversation, or if the radio is on full blast, the driver will likely prefer to turn the heat up with a hand gesture.
BMW noted Natural Interaction lets the passengers perform a variety of functions including opening or closing the windows and the sunroof, adjusting the air vents, or selecting an icon on the screen that displays the infotainment system. They can also point to a button and ask the car what it does. Artificial intelligence helps the car learn each user’s habits. This technology promises to make driving more convenient, and it paves the way for the lounge-like interiors that designers often create for autonomous concept cars.
“People shouldn’t have to think about which operating strategy to use to get what they want. They should always be able to decide freely — and the car should still understand them. BMW Natural Interaction is also an important step for the future of autonomous vehicles, when interior concepts will no longer be geared solely toward the driver’s position and occupants will have more freedom,” said Christoph Grote, the senior vice president of BMW Group Electronics. .... "
Google Assistant Everywhere
Some of the earliest AI work we examined was in using 'assistant' tech to augment typical work: when an employee messaged or wrote documents, they would get a separate 'column' of searches, suggestions, links, and supporting ideas. The early capabilities made this annoying to many, but now we can do much better. Seems Google is at it again. And note the fundamental connection with the idea of a 'conversation' as a natural place for assistance. Here is just an example; there is much more at the link.
Building the Google Assistant on phones for everyone, everywhere
Manuel Bronstein, Vice President of Product, Google Assistant, in Google Blog
In most countries around the world, phones are the primary way people interact with the Google Assistant. Whether you’re using an Android phone, an iPhone or a phone that’s running Android 9 Pie (Go edition) or KaiOS, the Assistant takes advantage of our phones’ capabilities so you can get help while you’re on the go.
This week, we’re at Mobile World Congress, announcing more phones with a dedicated Google Assistant button and sharing new ways people around the globe can use phones to get help from the Assistant where they want it.
Assistance where you want it, Bringing the Assistant to mobile apps
There are lots of ways the Assistant can help right within some of the apps we use every day. For example, since we launched the Assistant in Google Maps, drivers in the U.S. are getting hands-free help with directions, making calls and listening to music. We’re also seeing more than 15 times the number of queries asking for the Assistant’s help to send messages and read incoming texts out loud compared to before, when you could only use your voice for a few things. In the coming weeks, we’ll bring the Assistant to Google Maps in all Assistant phone languages.
Accessing the Google Assistant in Messages
Conversations are another place where the Google Assistant can lend a helping hand. Over the coming months for English users around the globe, Messages will start showing suggestions so you can get more information from the Google Assistant about movies, restaurants and weather. The Messages app uses on-device AI to offer suggestion chips relevant to your conversation, helping you easily find and share information as you chat one-on-one with your best friend, or in a group chat with your entire family. You can tap on the suggestion chip to learn more from your Assistant, and if you find the info is helpful, you can decide if you want to share that information back into your conversation. If you don’t share that information, the other person won’t see it. ... "
Digital Shelf Tags Influential
A method we studied for years in our innovation centers.
Test Shoppers Find Kroger's New Digital Shelves 'Influential' on Purchase Decisions By Randy Hofbauer, 02/01/2019, in Progressive Grocer
Test Shoppers Find Kroger's New Digital Shelves are "Influential" on Purchase Decisions
Kroger's new digital shelves at a QFC store in Redmond, Wash. (photo courtesy of Field Agent)
A test using eight secret shoppers at Kroger's QFC test store in Redmond, Wash., for the grocer's new EDGE digital shelf technology found the majority of patrons involved favoring the innovation over traditional shelf tags, Fayetteville, Ark.-based market researcher Field Agent has reported.
Part of the grocery giant's new Retail as a Service (RaaS) platform developed in partnership with Redmond-based technology company Microsoft, the new shelves were preferred by seven out of eight mystery shoppers, whereas the remaining one would rather see traditional tags on shelves. ... "
Sunday, February 24, 2019
Best Analysis with Ensembles
Promoting multiple (aka ensemble) methods. New book. I have for years used what are now called ensemble methods, and this piece gives good motivation for their use. Passing it along. Podcast and transcript follow; a small code illustration of the ensemble idea appears after the excerpt.
Ensemble Models
How to Get the Best Results from Big Data Analysis
Author Scott E. Page, a complex systems expert, explains how applying multiple data analysis models greatly enhances decision making.
Scott E. Page, professor of complex systems, political science and economics at the University of Michigan, doesn’t want people to limit themselves to linear thinking. In his new book, The Model Thinker: What You Need to Know to Make Data Work for You, he explains how taking a multi-paradigm approach puts more power into solving problems, innovating and understanding the full range of consequences to complex actions. He believes using many models is the best way to make sense out of the reams of data available in today’s digital world. Page recently spoke on the Knowledge@Wharton radio show on Sirius XM about why it’s important to widen your data lens.
An edited transcript of the conversation follows.
Knowledge@Wharton: What is multi-model thinking?
Scott Page: We live in this time where there are two fundamental things going on. One is, there’s just a firehose or hairball of data, right? Tons of data out there. At the same time, we have this recognition that the problems and challenges that we confront are complex. And by that, I mean high-dimensional, lots of interdependencies, difficult to understand. So, what do we do? How do we use that data to confront the complexity?
The philosophy I’m putting forward goes as follows: You have to arrange that data on some sort of model. You want to think of a model as Charlie Munger, the famous investor, describes it — a latticework of understanding on which you can array the data.
But models by definition are simple, so there’s a disconnect. I’m trying to understand something complex with something that’s simple. What I’ve bought with that simplicity is logical coherence. But what I’ve lost in that simplicity is any notion of coverage because there’s too much stuff I’ve got to leave out.
Instead, what I propose you do is bring an ensemble of models to bear. This is a thing. People in machine learning have been doing this; all the fancy stuff’s going on in AI. If you really unpack what’s going on in those sophisticated algorithms, they really are ensembles of little algorithms and little rules. The idea is, any one model is going to be wrong, but many models are going to be not only a lot of coverage, but also a collection of coherent understandings of a complex phenomenon.
Knowledge@Wharton: Is this multi-model approach common in the business world? ...
"
Good Overview of AI in Life Sciences
From Nathan.AI, nice look at a number of things I had not seen. Subscribe.
6 Impactful Applications of AI to the life Sciences
Nathan.AI Newsletter
A market intelligence newsletter covering AI in the technology industry, research lab and venture capital market.
Life sciences and healthcare are now in the limelight for technologists, in part because AI technologies are well suited to make a positive impact to key workflows. In this essay, I’ll explore 6 areas of life sciences that offer fruitful applications of AI. I hope this will serve as a resource and point of inspiration for those of you who are interested to work in this field.
As usual, just hit reply if you’d like to share thoughts/critique/your work! We’ll resume our regular market coverage in the next issue. -Nathan ....
Two years ago I wrote a piece describing 6 areas of machine learning to watch closely. In this post, I’ll describe 6 areas of life science research where AI methods are making an impact. I describe what they are, why they are important, and how they might be applied in the real world (i.e. outside of academia or industrial research groups). I’ll start from the small scale (molecules) and work up to the larger scale (cells). I look forward to hearing your comments and critique 👉@nathanbenaich. ... "
Mail Processing with Deep Learning
This should be carefully noted: sometimes the perceived sexiest technologies should be applied to the most mundane things first. Note the spread of RPA as an example. Good look here:
Mail Processing with Deep Learning: A Case Study
Businesses increasingly delegate simple, boring, and repetitive tasks to artificial intelligence. In a case study, Alexandre Hubert — lead data scientist of software company Dataiku’s U.K. operations — worked on a team of three to automate mail processing with deep learning.
At ODSC Europe 2018, Hubert detailed how his team created a fairly successful mail processing software for a young insurance company. The deep learning system successfully processed two-thirds of all mail it received at a 1,000 letter-per-hour rate. This marked an improvement over the third-party sorting service the company used before.
Hubert and his team followed a four-part development procedure, detailed here. ... "
Advances in GANs
To me, one of the most interesting examples of neural networks: a kind of mixture between nets and game simulation. We did something similar by generating process examples to test Monte Carlo simulation. Have never implemented one, but worth knowing about. A minimal training-loop sketch follows the excerpt.
Advances in Generative Adversarial Networks
A summary of the latest advances in Generative Adversarial Networks
Written by Bharath Raj with feedback from Rotem Alaluf
Generative Adversarial Networks are a powerful class of neural networks with remarkable applications. They essentially consist of a system of two neural networks — the Generator and the Discriminator — dueling each other.
Given a set of target samples, the Generator tries to produce samples that can fool the Discriminator into believing they are real. The Discriminator tries to resolve real (target) samples from fake (generated) samples. Using this iterative training approach, we eventually end up with a Generator that is really good at generating samples similar to the target samples.
GANs have a plethora of applications, as they can learn to mimic data distributions of almost any kind. Popularly, GANs are used for removing artefacts, super resolution, pose transfer, and literally any kind of image translation, as shown below: .... "
And also CycleGANs, a variation that has been used to generate art.
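To make the generator/discriminator duel described above concrete, here is a minimal PyTorch sketch of the training loop. It is my own toy illustration, not code from the article: the target distribution, network sizes, and hyperparameters are arbitrary assumptions.

import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to a candidate sample (here a single scalar).
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how likely a sample is to be real (target) rather than generated.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_samples(n):
    # Stand-in "target" data: draws from a normal distribution centered at 4.
    return torch.randn(n, 1) + 4.0

for step in range(2000):
    n = 64
    # Train the discriminator: push real samples toward label 1, generated toward 0.
    real = real_samples(n)
    fake = G(torch.randn(n, latent_dim)).detach()
    loss_d = bce(D(real), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator label generated samples as real.
    fake = G(torch.randn(n, latent_dim))
    loss_g = bce(D(fake), torch.ones(n, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# After training, G(torch.randn(1, latent_dim)) should produce values near 4.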
Assistant Infrastructure
Good piece on assistant infrastructure, much broader than just the question asked in the title. From my own experience, going back to nearly the beginning of assistants: start simple with one, and build others in depending on need or your interest in the concept. Read about skills/actions to extend capabilities. Use IFTTT. Read about the experience of others. Good to note that the investment in a single device works, since it can provide a vast variety of streamed music more efficiently than any other way.
How many smart speakers do you need in your home?
As always, it depends.
in Wirecutter, @wirecutter via Engadget
Whether you're already one of the 20 percent of American adults who own a voice-controlled smart speaker or you're still on the fence about investing in an Amazon Alexa or Google Assistant device or an Apple HomePod, you might be wondering just how many of these intelligently attentive devices you'll need. A smart speaker can offer voice-controlled convenience throughout your home—but only if it can hear you.
The number of speakers you should buy depends on what kind of home you live in, and where and when you'll need your voice assistant to hear you. We have a few suggestions, depending on whether you want to build a smart-home setup, to listen to music and podcasts, to keep in touch with family and friends, or to use a digital assistant to boost your productivity. If you plan to buy multiple smart speakers, we recommend staying in the same family—although you could set up an Alexa-based speaker in one part of the house and a Google Assistant device somewhere else, you'd probably end up forgetting which platform has your to-do lists and which one controls the lights, or you'd have to do lots of redundant setup to get your smart home working with both platforms. .... "
Hololens 2 to have Eye Movement Detection
A good addition: detecting, and I assume controlling with, eye movement. Control by gesture has never been good enough for many applications. More general applications are needed to get people used to the interaction.
Microsoft HoloLens 2 augmented reality headset unveiled By Leo Kelion in BBC Tech.
Microsoft has a new version of its augmented reality headset, which now detects where its users are looking and tracks the movements of their hands.
It said that HoloLens 2 wearers would find it easier to touch and otherwise interact with graphics superimposed over their real-world views.
Other improvements include filling more of a user's view and automatically recognising who they are.
The firm is pitching the kit as being ready for use in business environments.
Many experts believe mixing together graphics and real-world views has greater potential than virtual reality, which removes the user from their immediate environment. .... "
Saturday, February 23, 2019
Skill Sample: Sauces
How will this drive us towards the future of real assistance?
New Alexa Skill Sample: Learn Multimodal Skill Design with Sauce Boss By Franklin Lobb
ASK Alexa Multimodal Tips and Tutorials
When learning something new, I find that starting simple yet practical is great way to go. The new Sauce Boss sample skill does one thing – provide simple recipes – in a voice-first manner that is also multimodal. If you’re looking to learn more about the Alexa Presentation Language (APL) and multimodal skills, check out the Sauce Boss Sample Skill on GitHub.
Because Sauce Boss is an update to our venerable How To skill samples that you may have used to provide brief recipes and “how-to” skills, it’s useful for both new and experienced skill developers learning about multimodal design and APL. If you already have a skill based on the How To sample skill, then you can use the Sauce Boss sample to add multimodal features like images, lists, TouchWrappers, and speech synchronization.
Here’s what you’ll find when you check out the Sauce Boss sample skill: ... "
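For readers new to multimodal skills, here is a minimal sketch of the kind of response such a skill returns: it speaks a recipe and also sends an APL document for screen-equipped devices. This is my own illustration, not the Sauce Boss code; the recipe text, token, and layout are placeholder assumptions.

# Shape of an Alexa skill response that pairs speech with a simple APL document.
recipe_document = {
    "type": "APL",
    "version": "1.0",
    "mainTemplate": {
        "items": [
            {"type": "Text", "text": "Marinara: tomatoes, garlic, basil, olive oil."}
        ]
    },
}

response = {
    "version": "1.0",
    "response": {
        "outputSpeech": {"type": "PlainText", "text": "Here is the marinara recipe."},
        "directives": [
            {
                # Tells screen-equipped devices to render the APL document above.
                "type": "Alexa.Presentation.APL.RenderDocument",
                "token": "recipeToken",
                "document": recipe_document,
            }
        ],
        "shouldEndSession": True,
    },
}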
Humanity and AI are Better Together
People will work with AI much as they have worked with all forms of automation.
In Andreessen Horowitz
Humanity + AI: Better Together By Frank Chen
This is a written version of a presentation I gave live at the a16z Summit in November 2018. You can watch a video version on YouTube.
Skynet is coming for your children — or is it?
In July 2017, I published a primer about artificial intelligence, machine learning and deep learning. Since then, I’ve been obsessively reading the headlines about machine learning, and in general, you will see two broad categories of articles on the front page. One category of headlines is “Robots are Coming for Your Jobs,” which predicts that we are headed inexorably towards mass unemployment. Even sober organizations like McKinsey seem to be forecasting doom-and-gloom scenarios in which one-third of workers are out of a job due to automation by 2030: ... "
Friday, February 22, 2019
Virtual Makeovers Improving
Worked in this space for some time. L'Oreal has been leading the pack.
Virtual Makeovers Are Better Than Ever. Beauty Companies Are Trying to Cash In By Rachel Metz
Augmented reality (AR) is being pushed to the mainstream by apps in the beauty industry that serve as a consumer tool for trying on makeup. One example is an iPhone app from Ulta Beauty subsidiary GlamST, aided by innovations in underlying technology such as facial-feature and finger tracking. More powerful front-facing cameras on modern smartphones also have helped boost AR's appeal to beauty companies. AR companies said virtual makeup makes sense, given the deeply entrenched consumer mindset of trying on products before purchase. Advocates like L'Oréal chief digital officer Lubomira Rochet said AR is improving sales, with shoppers typically spending more time on an app or website that has AR makeup or skin-care features; Rochet also saw a 10% greater likelihood that those who virtually try on products will buy them, compared to those who do ... "
Trends that Defined Google's Year in Search
A look at last year. Notably interesting is the way people have changed how they search.
The search trends that defined Google’s Year in Search
Natalie Zmuda December 2018 Search, Consumer
For marketers looking to understand the people, events, and cultural moments that defined 2018, search data provides a snapshot. From Stephen Hawking to the royal wedding to “Black Panther,” it’s clear what people cared about in the past year.
Search has always offered a window into what people need, want, and intend to do. That’s become even more true as the words that people type into a search box become increasingly conversational and personal.
In 2018, Google Search turned 20 years old. And while the way people search has changed dramatically in that time, the reasons why people search have not. People turn to search for information and inspiration, to learn how to do something, and to sate their curiosity. And 2018 was no different.
This year, people turned to search to locate their local polling place, to find out Prince Harry’s last name, and to understand why everyone was talking about something called “Fortnite.” They asked, “who won mcgregor vs khabib” and “where is hurricane michael.” And they also explored “how to vote,” “how to buy bitcoin,” and “how to get boogie down emote.” ... '
DeRisking Analytics?
Risk exists in all kinds of analytics, from regression to optimization to 'AI'. But it also increases with a number of factors: investments made, management expectations, exposure of decisions to the public, and lots more. All of these change with context constantly. McKinsey suggests 'validation frameworks', usefully stated in the article below. We never used that term, and were usually not that formal, but the downside risk of complex, non-transparent methods that can look like dangerous bias may require it. Regulations may soon specify it. 'Reducing risk' sounds more accurate than 'de-risking'; risk is always there.
Derisking machine learning and artificial intelligence
The added risk brought on by the complexity of machine-learning models can be mitigated by making well-targeted modifications to existing validation frameworks. in McKinsey. .... "
M13, P&G to Partner on Consumer Innovation
M13 and P&G to Partner on Consumer Innovation Incubator
PR Newswire
LOS ANGELES, Feb. 21, 2019 /PRNewswire/ -- P&G Ventures, an internal startup studio within The Procter & Gamble Company (NYSE:PG) and M13, a full-service venture firm, have announced a partnership in creating a new build studio within M13. This collaboration will leverage expertise in P&G's consumer-inspired innovation and concepts and M13's robust brand expertise, incubation capabilities, and funding to help accelerate the growth of selected consumer businesses. ... "
Thursday, February 21, 2019
Beyond the Supply Chain: Global Value Chains
More ways to think about how value is generated. Supply chains are being heavily influenced by new technologies now; it's a good time to think about how they can be improved.
Globalization in transition: The future of trade and value chains in McKinsey By Susan Lund, James Manyika, Jonathan Woetzel, Jacques Bughin, Mekala Krishnan, Jeongmin Seong, and Mac Muir
Global value chains are being reshaped by rising demand and new industry capabilities in the developing world as well as a wave of new technologies.
Even with trade tensions and tariffs dominating the headlines, important structural changes in the nature of globalization have gone largely unnoticed. In Globalization in transition: The future of trade and value chains (PDF–3.7MB), the McKinsey Global Institute analyzes the dynamics of global value chains and finds structural shifts that have been hiding in plain sight.
Although output and trade continue to increase in absolute terms, trade intensity (that is, the share of output that is traded) is declining within almost every goods-producing value chain. Flows of services and data now play a much bigger role in tying the global economy together. Not only is trade in services growing faster than trade in goods, but services are creating value far beyond what national accounts measure. Using alternative measures, we find that services already constitute more value in global trade than goods. In addition, all global value chains are becoming more knowledge-intensive. Low-skill labor is becoming less important as factor of production. Contrary to popular perception, only about 18 percent of global goods trade is now driven by labor-cost arbitrage.
Three factors explain these changes: growing demand in China and the rest of the developing world, which enables these countries to consume more of what they produce; the growth of more comprehensive domestic supply chains in those countries, which has reduced their reliance on imports of intermediate goods; and the impact of new technologies. .... "
Knowledge Graphs, Governance, Learning and Much More
Have now been involved in a number of efforts in this area; it is worth understanding, since Google has made an impressive run at this. The article tells a historical journey I have traveled as well, and we might finally be getting to real enterprise value. Regulations like GDPR are also forcing us to take notice of the need to really understand our data. The article is long, but has good points to make.
The Semantic Zoo - Smart Data Hubs, Knowledge Graphs and Data Catalogs By Kurt Cagle Contributor in Forbes
COGNITIVE WORLDContributor Group
Sometimes, you can enter into a technology too early. The groundwork for semantics was laid down in the late 1990s and early 2000s, with Tim Berners-Lee’s stellar Semantic Web article, debuting in Scientific American in 2004, seen by many as the movement’s birth. Yet many early participants in the field of semantics discovered a harsh reality: computer systems were too slow to handle the intense indexing requirements the technology needed, the original specifications and APIs failed to handle important edge cases, and, perhaps most importantly, the number of real world use cases where semantics made sense were simply not at a large enough scope; they could easily be met by existing approaches and technology.
Semantics faded around 2008, echoing the pattern of the Artificial Intelligence Winter of the 1970s. JSON was all the rage, then mobile apps, big data came on the scene even as Javascript underwent a radical transformation, and all of a sudden everyone wanted to be a data scientist (until they discovered the fact that data science was mostly math). Meanwhile, from the dim recesses of the troughs of despair, semantics was readying itself for its own metamorphosis. Several semantic standards, including the SPARQL query language along with a new update language began seeing implementations by 2015. Servers became faster and cheaper, and a rise of graphics processor units (GPUs) fueled by the gaming and entertainment industry provided tools for a new class of graph databases.
Meanwhile, the Big Data initiatives that had marked the early part of the 2010s was facing some real problems. The original promise of Hadoop as a map / reduce framework had ended up creating large numbers of data lakes that aggregated content but that sat under-utilized. Data scientists struggled to deal with dirty data that was really no cleaner for having been put in data lakes. JSON databases had grown in popularity, but they were proving hard to query in a consistent fashion, and all too many Hadoop projects ended up becoming large, slow, but cheap data graveyards for regulatory data (the kind of data that must be retained for five years). ..... "
Google Makes more Speech Services Available
Impressive array of cognitive speech services, in 120 languages! Now broadly available with demonstrations at the link.
Cloud Speech-to-Text
Speech-to-text conversion powered by machine learning and available for short-form or long-form audio.
Powerful speech recognition
Google Cloud Speech-to-Text enables developers to convert audio to text by applying powerful neural network models in an easy-to-use API. The API recognizes 120 languages and variants to support your global user base. You can enable voice command-and-control, transcribe audio from call centers, and more. It can process real-time streaming or prerecorded audio, using Google’s machine learning technology.
Some of the betas in particular are indicative of the future direction of capabilities (a short usage sketch follows the feature list below):
Cloud Speech-to-Text features
Speech-to-text conversion powered by machine learning.
Automatic Speech Recognition
Automatic Speech Recognition (ASR) powered by deep learning neural networking to power your applications like voice search or speech transcription.
Global Vocabulary
Recognizes 120 languages and variants with an extensive vocabulary.
Phrase Hints
Speech recognition can be customized to a specific context by providing a set of words and phrases that are likely to be spoken. This is especially useful for adding custom words and names to the vocabulary and in voice-control use cases.
Real-time Streaming or Prerecorded Audio Support
Audio input can be streamed from an application’s microphone or sent from a prerecorded audio file (inline or through Google Cloud Storage). Multiple audio encodings are supported, including FLAC, AMR, PCMU, and Linear-16.
Auto-Detect Language BETA
When you need to support multilingual scenarios, you can now specify two to four language codes and Cloud Speech-to-Text will identify the correct language spoken and provide the transcript.
Noise Robustness
Handles noisy audio from many environments without requiring additional noise cancellation.
Inappropriate Content Filtering
Filter inappropriate content in text results for some languages.
Automatic Punctuation BETA
Accurately punctuates transcriptions (e.g., commas, question marks, and periods) with machine learning.
Model Selection BETA
Choose from a selection of four pre-built models: default, voice commands and search, phone calls, and video transcription.
Speaker Diarization BETA
Know who said what - you can now get automatic predictions about which of the speakers in a conversation spoke each utterance.
Multichannel Recognition BETA
In multiparticipant recordings where each participant is recorded in a separate channel (e.g., phone call with two channels or video conference with four channels), Cloud Speech-to-Text will recognize each channel separately and then annotate the transcripts so that they follow the same order as in real life. .... "
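A minimal usage sketch, assuming the google-cloud-speech Python client: it transcribes a prerecorded file from Cloud Storage with automatic punctuation enabled. Class and field names can vary a little across client-library versions, and the bucket path is a placeholder.

# Transcribe a prerecorded FLAC file stored in Google Cloud Storage.
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    language_code="en-US",
    enable_automatic_punctuation=True,  # the Automatic Punctuation feature noted above
)
# Prerecorded audio referenced by URI; streaming input is also supported.
audio = speech.RecognitionAudio(uri="gs://your-bucket/your-audio.flac")

response = client.recognize(config=config, audio=audio)
for result in response.results:
    # Print the top transcription alternative for each recognized segment.
    print(result.alternatives[0].transcript)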
EU and Copyrights
Had not heard of this particular topic. Of interest.
From the Electronic Frontier Foundation (EFF), which has been around a long time and is a good follow.
The Final Version of the EU's Copyright Directive Is the Worst One Yet By Cory Doctorow
Despite ringing denunciations from small EU tech businesses, giant EU entertainment companies, artists' groups, technical experts, and human rights experts, and the largest body of concerned citizens in EU history, the EU has concluded its "trilogues" on the new Copyright Directive, striking a deal that—amazingly—is worse than any in the Directive's sordid history.
Goodbye, protections for artists and scientists
The Copyright Directive was always a grab bag of updates to EU copyright rules—which are long overdue for an overhaul, given that it's been 18 years since the last set of rules were ratified. Some of its clauses gave artists and scientists much-needed protections: artists were to be protected from the worst ripoffs by entertainment companies, and scientists could use copyrighted works as raw material for various kinds of data analysis and scholarship.... "
Wednesday, February 20, 2019
Explaining Facts Over Knowledge Graphs
Of general interest, linking explanation and knowledge graphs. Technical.
ExFaKT: a framework for explaining facts over knowledge graphs and text in Acolyer
ExFaKT: a framework for explaining facts over knowledge graphs and text Gad-Elrab et al., WSDM’19
Last week we took a look at Graph Neural Networks for learning with structured representations. Another kind of graph of interest for learning and inference is the knowledge graph.
Knowledge Graphs (KGs) are large collections of factual triples of the form ⟨subject predicate object⟩ (SPO) about people, companies, places etc.
Today’s paper choice focuses on the topical area of fact-checking : how do we know whether a candidate fact, which might for example be harvested from a news article or social media post, is likely to be true? For the first generation of knowledge graphs, fact checking was performed manually by human reviewers, but this clearly doesn’t scale to the volume of information published daily. Automated fact checking methods typically produce a numerical score (probability the fact is true), but these scores are hard to understand and justify without a corresponding explanation. ... "
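As a toy illustration of SPO triples (not the ExFaKT system itself), the sketch below builds a tiny knowledge graph with rdflib and checks whether a candidate fact can be supported by a simple hand-written rule; the entities, predicates, and rule are made-up assumptions.

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
# A tiny knowledge graph of ⟨subject predicate object⟩ triples.
g.add((EX.Alice, EX.worksFor, EX.Acme))
g.add((EX.Acme, EX.headquarteredIn, EX.Berlin))

# Candidate fact to check: ⟨Alice basedIn Berlin⟩.
# Toy explanation rule: worksFor(x, c) AND headquarteredIn(c, p) supports basedIn(x, p).
query = """
PREFIX ex: <http://example.org/>
SELECT ?company WHERE {
  ex:Alice ex:worksFor ?company .
  ?company ex:headquarteredIn ex:Berlin .
}
"""
for row in g.query(query):
    # Each binding of ?company is an "explanation" for the candidate fact.
    print("Supported: Alice works for", row.company, "which is headquartered in Berlin.")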
Cutting Through 5G Hype
Seems to be a major issue, especially with regard to how smart homes and connected automobiles will be implemented. McKinsey reports on the hype, with predictions about what we might get, and when:
" .... Cutting through the 5G hype: Survey shows telcos’ nuanced views
Operators see a marginally positive business case, expect rollout at scale to take until 2022, and don’t think the increase in capital-expense-to-sales ratio will be as big as skeptics claim. ... "
" .... Cutting through the 5G hype: Survey shows telcos’ nuanced views
Operators see a marginally positive business case, expect rollout at scale to take until 2022, and don’t think the increase in capital-expense-to-sales ratio will be as big as skeptics claim. ... "
Qualcomm Goes All-in with Voice
This was surprising; I had a chat with Qualcomm recently and it did not come up. I am examining what this looks like. Are there test examples out there? Send them along.
Qualcomm goes all-in on Amazon Alexa voice control with a new development kit in Digitaltrend
The new kit is enabled by the ClearVoice far-field voice enhancement software solution from Silicon Valley’s Meeami Technologies, which makes it easy for mesh networking manufacturers to layer voice control capabilities onto devices and networks that are powered by Qualcomm’s Wi-Fi mesh platforms. As with Amazon’s Echo or Dot devices, customers can manage, automate, and monitor smart home devices, as well as play music, ask questions, and access tens of thousands of skills.
“Mesh networks have become the new standard to ensure the best possible connected experience in the home. By adding Amazon’s advanced voice capabilities through the Alexa Voice Service, we are unlocking new opportunities for customers to enable exciting new smart home experiences controlled with the simplicity of voice,” said Nick Kucharewski, vice president and general manager of Wireless Infrastructure and Networking for Qualcomm Technologies. “By integrating our mesh platform with Alexa, we create a powerful development kit that enables device manufacturers to quickly and economically bring innovative new products to market and meet the development speed of this fast-growing market.” ... "
Streaming Applications
Streaming is a means of processing requests as they arrive rather than in batches, usually at large scale. The only system I have worked with that does this is Splunk; Flink is also well known, and many other approaches exist. It has been very successful in systems like Netflix's.
Patterns of Streaming Applications in InfoQ
Monal Daxini presents a blueprint for streaming data architectures and a review of desirable features of a streaming engine. He also talks about streaming application patterns and anti-patterns, and use cases and concrete examples using Apache Flink.
Bio
Monal Daxini is the Tech Lead for Stream Processing platform for business insights at Netflix. He helped build the petabyte scale Keystone pipeline running on the Flink powered platform. He introduced Flink to Netflix, and also helped define the vision for this platform. He has over 17 years of experience building scalable distributed systems. .... "
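As a companion to the talk, here is a minimal sketch of one common streaming pattern, tumbling-window aggregation, written in plain Python rather than against the Flink API. The event stream, keys, and 10-second window size are illustrative assumptions only.

```python
# A sketch of tumbling-window counting, the kind of pattern a streaming
# engine such as Flink provides. Plain Python, not the Flink API.
from collections import defaultdict

WINDOW_SECONDS = 10

def window_start(ts):
    """Map an event timestamp to the start of its tumbling window."""
    return ts - (ts % WINDOW_SECONDS)

def count_per_window(events):
    """events: iterable of (timestamp_seconds, key) arriving in time order.
    Yields (window_start, key, count) each time a window closes."""
    counts = defaultdict(int)
    current = None
    for ts, key in events:
        w = window_start(ts)
        if current is not None and w != current:
            for k, c in sorted(counts.items()):
                yield current, k, c
            counts.clear()
        current = w
        counts[key] += 1
    for k, c in sorted(counts.items()):
        yield current, k, c

stream = [(1, "play"), (3, "pause"), (7, "play"), (12, "play"), (14, "play")]
for w, key, count in count_per_window(stream):
    print(f"window starting at t={w}: {key} -> {count}")
# window starting at t=0: pause -> 1
# window starting at t=0: play -> 2
# window starting at t=10: play -> 2
```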
Wal-Mart Testing Digital Shelf Tags
An idea that has been around for a long time; we tested it extensively, and cost versus value often got in the way. Now both Kroger and Wal-Mart appear serious, with broader tests.
Walmart testing digital shelf tags at 2 locations in Progressive Grocer
Walmart is testing two types of digital shelf tags in the baked goods section at two stores in Rogers, Ark. Test shoppers sent by market research firm Field Agent preferred the labels to standard ones, but were split in their opinions between the long, full-color LED strips and the smaller digital tags. ... "
Tuesday, February 19, 2019
Stats on Skills and Usage on Alexa vs Google Assistants
Considerable Details at the Link
Google Assistant Actions up 2.5x in 2018 to reach 4,253 in the US By Sarah Perez in TechCrunch
In addition to competing for smart speaker market share, Google and Amazon are also competing for developer mindshare in the voice app ecosystem. On this front, Amazon has soared ahead — the number of available voice skills for Alexa devices has grown to top 80,000 the company recently announced. According to a new third-party analysis from Voicebot, Google is trailing that by a wide margin with its own voice apps, called Google Assistant Actions, which total 4,253 in the U.S. as of January 2019.
For comparison, 56,750 of Amazon Alexa’s total 80,000 skills are offered in the U.S.
The report notes that the number of Google Assistant Actions have grown 2.5 times over the past year — which is slightly faster growth than seen on Amazon Alexa, whose skill count grew 2.2 times during the same period. But the total is a much smaller number, so growth percentages may not be as relevant here. .... "
Smart Headlights: Adaptive Driving Beams
Good sensor and adaptive example. Made me think of the problem more abstractly. Shining and focusing on data more precisely?
Smart Headlights Inch Closer to American Roads
The New York Times By Eric A. Taub
Adaptive driving beam (A.D.B.) headlights use sensors and cameras to continuously shape a vehicle's high beams to illuminate only areas without oncoming traffic, while sending light elsewhere down the road. Car manufacturers such as Audi, BMW, Mercedes, and Toyota already offer this type of lighting, but ADB lamps currently are illegal in the U.S. The U.S. National Highway Traffic Safety Administration (NHTSA) currently requires vehicles to have distinct high and low beams, disallowing lights that can dynamically adjust. However, in October the NHTSA issued a notice of proposed rule-making that, if approved, would allow these headlamps in the U.S. In anticipation of this, Audi is already selling cars in the U.S. that feature “matrix-designed” LED headlamps, which need only a software upgrade to operate in an adaptive way. ... "
Questions to Ask to Scope AI Methods
A good look below that is worth a read. I would add that these questions should be asked of any analytical problem. To take it further, I would also ask whether AI would provide a better answer than simpler methods. I would test the harder methods comparatively on a restricted problem, getting expert opinion on their applicability in context; a sketch of that kind of comparison follows the excerpt below. And sometimes an exploratory effort is worthwhile just to test the requirements and results of new methods.
How to identify an AI opportunity: 5 questions to ask
Could AI solve that problem? Speed that process? Five important things you should ask to unearth AI opportunities in your organization
By Kevin Casey in Enterprise Project ... "
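The comparison suggested above can be prototyped quickly. The sketch below assumes scikit-learn and uses a synthetic dataset with two stand-in models; it simply asks whether the heavier method beats a plain baseline on a restricted version of the problem, and is an illustration of the workflow, not a recommendation of these particular algorithms.

```python
# Compare a simple baseline against a heavier method on a restricted,
# synthetic stand-in for the real business problem.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "simple baseline (logistic regression)": LogisticRegression(max_iter=1000),
    "heavier method (small neural net)": MLPClassifier(hidden_layer_sizes=(50,),
                                                       max_iter=1000,
                                                       random_state=0),
}

# If the heavier method does not clearly beat the baseline here, it is
# unlikely to justify its extra cost and complexity in production.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```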
Deloitte on AI Getting More Pervasive
A sponsored piece by Deloitte in the HBR on how AI is getting more pervasive. Well done:
AI Is Not Just Getting Better; It’s Becoming More Pervasive
Advances in artificial intelligence (AI) software and hardware are giving rise to a multitude of smart devices that can recognize and react to sights, sounds, and other patterns—and do not require a persistent connection to the cloud. These smart devices, from robots to cameras to medical devices, could well unlock greater efficiency and effectiveness at organizations that adopt them.
But that’s only part of the story. In some industries, smart machines may well help expand existing markets, threaten incumbents, and shift the way revenue and profits are apportioned among industry players.
Rapid strides in technology and the growing investment in AI innovation signal how fast AI deployment is moving. Advances in software and hardware are propelling AI outside of the data center into devices and machines we use in our work and our everyday lives. .... "
Eye Tracking Important
Good to see the mention of Zaltman, whose expertise we used extensively. I am not quite sure what the 95% figure means here, but it is certainly important.
Why is Eye-Tracking really important for Market Research? in DSC
Posted by Ayush Srivastava in DSC
“95% of the purchase decisions happen in the subconscious”
Yes, you read it right! 95% of the purchase decisions happen in our sub-conscious — according to research performed by Harvard Business School professor Gerald Zaltman. For the uninitiated, or someone new to the consumer insights industry, this is a revelation. Indeed, when I think of it, I cannot give a reasonable explanation to why I chose to drink orange juice over apple juice at a party last night. Both the options were in front of me, some neurons fired in my brain and my hand just steered towards the orange juice.
This research from Prof. Gerald also made me think about how companies can understand what’s happening in consumers’ subconscious in order to derive insights. And it is important to figure this out because asking consumers (surveys, interviews, etc.) can only yield ‘conscious’ reasons behind their actions, which explains only 5% of the decisions as per Prof. Gerald. People usually decide in their subconscious and then give a logical explanation according to their “conscious brain” ... "
Vendor Shortlists
I would further ask: is there a way to update a shortlist automatically, or to add candidates that should be evaluated separately?
What is a Shortlist Anymore? by Hank Barnes In the Gartner blog
As Gartner continues to explore the world of B2B buying, we’ve noticed a phenomenon that is perplexing, compelling, and informative.
For years, people have talked about shortlists. “We’ve built our shortlist of vendors to consider.” “We’re on the shortlist.” In short (apologies), the shortlist was a signal that buyers were closing in on a decision.
Well, that concept, at least in its simple, traditional form is gone. In most cases, while the shortlist may exist; it isn’t what it used to be.
Late in 2017, in a survey of people involved in significant B2B technology purchases, we asked if, after creating a shortlist, they ever added vendors to it. The responses were surprising:
Monday, February 18, 2019
Governance, Oversight and Auditing AI Systems
A good overview of the idea. Governance of this kind will increasingly be proposed and required. My own connection to legal AI systems will start to embrace this as well. I am adding this to my own investigation of regulation and liability aspects.
Governance and Oversight Coming to AI and Automation: Independent Audit of AI Systems By Ryan Carrier in CACM
Governance and independent oversight on the design and implementation of all forms of artificial intelligence (AI) and automation is a cresting wave about to break comprehensively on the field of information technology and computing.
If this is a surprise to you, then you may have missed the forest for the trees on a myriad of news stories over the past three to five years. Privacy failures, cybersecurity breaches, unethical choices in decision engines and biased datasets have repeatedly sprung up as corporations around the world deploy increasing numbers of AIs throughout their organizations.
The world, broadly speaking, combined with legislative bodies, regulators and a dedicated body of academics operating in the field of AI Safety, have been pressing the issue. Now guidelines are taking hold in a practical format.
IEEE's Ethically Aligned Design is the Gold Standard for drawing together a global voice, using open source crowdsourcing techniques to assert some core ethical guidelines. Additionally, the standards body is deeply into the process of creating 13 different sets of standards covering areas from child and student data governance to algorithmic bias. ..... "
Labels: AI, Automation, Bias, CACM, ethics, Governance, IEEE, Law, Oversight
High Order Optimization Queries
Package queries; technical. I am having this looked at by some practitioners.
Scalable Computation of High-Order Optimization Queries
By Matteo Brucato, Azza Abouzied, Alexandra Meliou
Communications of the ACM, January 2019, Vol. 62 No. 2, Pages 108-116
10.1145/3299881
Constrained optimization problems are at the heart of significant applications in a broad range of domains, including finance, transportation, manufacturing, and healthcare. Modeling and solving these problems has relied on application-specific solutions, which are often complex, error-prone, and do not generalize. Our goal is to create a domain-independent, declarative approach, supported and powered by the system where the data relevant to these problems typically resides: the database. We present a complete system that supports package queries, a new query model that extends traditional database queries to handle complex constraints and preferences over answer sets, allowing the declarative specification and efficient evaluation of a significant class of constrained optimization problems—integer linear programs (ILP)—within a database. .... "
Also Related:
https://cacm.acm.org/magazines/2019/2/234345-technical-perspective-to-do-or-not-to-do/abstract
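To make the notion of a package query concrete: the goal is to select a set of tuples that jointly satisfy constraints and optimize an objective, rather than filtering tuples one at a time. The toy sketch below enumerates packages by brute force over a made-up table; the paper's system instead compiles such queries to integer linear programs and solves them scalably inside the database, so this is only meant to illustrate the semantics.

```python
# Toy illustration of package-query semantics: choose a *set* of tuples,
# not individual tuples, maximizing an objective under a constraint that
# applies to the set as a whole. Data is invented.
from itertools import combinations

# (name, calories, protein_grams) -- illustrative rows only
meals = [
    ("oatmeal", 300, 10),
    ("chicken salad", 450, 35),
    ("protein shake", 200, 25),
    ("pasta", 600, 20),
    ("yogurt", 150, 12),
]

CALORIE_BUDGET = 1000  # constraint over the whole package

best_package, best_protein = None, -1
for r in range(1, len(meals) + 1):
    for package in combinations(meals, r):
        calories = sum(m[1] for m in package)
        protein = sum(m[2] for m in package)
        if calories <= CALORIE_BUDGET and protein > best_protein:
            best_package, best_protein = package, protein

print("best package:", [m[0] for m in best_package], "protein:", best_protein)
# best package: ['chicken salad', 'protein shake', 'yogurt'] protein: 72
```

Brute force explodes combinatorially with table size, which is exactly why the paper's declarative, ILP-based approach matters.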
Amazon and Outside US Cashier Free Stores
The idea continues to spread. How soon will it become expected? We did one of the earliest tests of this technology in our laboratory store.
Amazon may be close to opening its first cashier-free store outside U.S. By Trevor Mogg in Digitaltrends
Amazon could be on the verge of opening its first cashier-free store outside of the U.S. after a report claimed the company has secured retail space for a location in central London.
We’ve known for several months that the company is looking to take its Amazon Go store beyond American shores, with a report last year suggesting it was looking at sites in the U.K.
But The Grocer’s report, published over the weekend, suggests the company has now chosen at least one spot in the heart of the capital city, though the specific location and opening date remain unknown. ... "
Cisco Looks at the Connected Car
Even before cars become self-driving, they will be very connected. Cisco is starting a series of pieces on the implications, with thought-provoking stats.
Connected Car – The Driven Hour
By Joel Obstfeld in the Cisco Blog
It’s time to get real about Connected Cars and the volume of data that they will generate.
Connected vehicles are often mentioned as being a major driver for 5G, especially with respect to the provision of low-latency communications for safety-related use-cases. With the 5G capability to provide ‘network slicing’, it is likely that automotive manufacturers will be attracted to such offerings with a view to improved security, resource allocation and service differentiation.
While viable business models for the support of safety-related use-cases are still to be defined, modern vehicles are already connected to service provider networks using cellular technology for a range of applications. ... "
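To see why "the driven hour" is a useful unit, a back-of-envelope calculation helps. Every per-sensor rate in the sketch below is an assumption chosen purely for illustration; the Cisco post does not supply these figures.

```python
# Back-of-envelope estimate of data generated per driven hour. All rates
# here are assumptions for illustration, not figures from the Cisco post.
assumed_rates_mb_per_hour = {
    "telematics / CAN bus summary": 25,
    "GPS + map updates": 10,
    "driver-assistance camera metadata": 500,
    "infotainment / diagnostics": 100,
}

total_mb = sum(assumed_rates_mb_per_hour.values())
print(f"assumed total: {total_mb} MB per driven hour (~{total_mb / 1024:.2f} GB)")

# Scaled to a small fleet, still under the same assumptions:
fleet_size = 10_000
hours_per_day = 2
daily_tb = total_mb * fleet_size * hours_per_day / 1024 / 1024
print(f"{fleet_size} vehicles x {hours_per_day} h/day: ~{daily_tb:.1f} TB/day")
```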
Sunday, February 17, 2019
Replication Crisis in Science
I have been following this for some time. Reproducing results is often context dependent. It is a serious problem, especially in the social sciences, but elsewhere too. That is one reason you need to consider context and related biases carefully. Of course, if the push to reproduce results is itself driven by confirmation bias, that is another problem; I have seen both. And by the way, having worked with DARPA, I disagree with calling it a 'mad-science' wing; they are very serious and rational there.
DARPA Wants to Solve Science's Replication Crisis With Robots
By Wired, reprinted in CACM
At the same instant that a significant chunk of policymakers seem to disbelieve the science behind global warming, a bunch of scientists come along and point out that vast swaths of the social sciences don't stand up to scrutiny. They don't replicate—which is to say, if someone else does the same experiment, they get different (often contradictory) results.
Researchers are trying to fix the problem. They're encouraging more sharing of data sets and urging each other to preregister their hypotheses. The idea is to cut down on the statistical shenanigans and memory-holing of negative results that got the field into this mess.
And self-appointed teams are even going back through old work, manually, to see what holds up and what doesn't. That means doing the same experiment again, or trying to expand it to see if the effect generalizes. To the Defense Advanced Research Projects Agency, the Pentagon's mad-science wing, the problem demands an obvious solution: Robots. .... "
How do the Great Winemakers Sell?
Intriguing view of selling. Works if you already have the established brand equity: real, bought or imagined.
Should You Ignore What Your Customers Want? The Great Winemakers Do.
Rather than follow consumer taste, they push it in a new direction.
Based on the research of:
Ashlee Humphreys, Gregory Carpenter at Kellogg
Among French wines, Château Pétrus is legendary. Consumers pay over $1,000 for a single bottle. Talking with Christian Moueix, the owner and long-time winemaker of Pétrus, Kellogg’s Gregory Carpenter asked an innocent question: When crafting a wine, how do you think about the consumer?
Taken aback, the vintner paused, leaned back, and opened his eyes wide. “He said, ‘I don’t! I make what pleases me,’” recalls Carpenter, a professor of marketing at the Kellogg School.
That may come as a surprise to those who think that winning customers requires exhaustive surveys and precise analytics to discover what people want. Yet this consumer-skeptic attitude is common among winemakers. “They suspect that consumers don’t really appreciate and respect wine,” says Carpenter, “so there’s no point asking them what they think.”
But from a business point of view, that presents a challenge: How do you create devoted customers and turn a profit if you essentially ignore what customers want?
Winemakers are not the only ones facing this quandary. Marketing scholars have a term—“market-driving firms”—for businesses that, rather than reacting to consumer tastes, attempt to influence those tastes to their advantage. But prior research on market-driving firms has focused on high-tech innovators like Apple and Tesla. These companies shape consumer preferences by introducing unprecedented products and services, which often render the competition obsolete. As Steve Jobs famously stated: “Our job is to figure out what [customers] are going to want before they do.”
Carpenter wanted to know how a company can influence consumers without a disruptive new technology to offer. So, working with Ashlee Humphreys, an associate professor of integrated marketing communications at Northwestern’s Medill School, he turned to the wine industry. “Winemaking hasn’t changed in thousands of years,” Carpenter says. .... "
Great Finishes
Simple point, made well. From Queue ACM.
The Importance of a Great Finish
You have to finish strong, every time.
By Kate Matsudaira
Have you ever felt super excited about the start of a project, but as time went on your excitement (and motivation) started to wane?
Unfortunately, not all work is created equal. It is often the work through the bulk of a project that is not remembered or recognized.
The work that tends to be remembered from any given project is the work that happened last. It is the final step that most people will think of, because it happened most recently. This is especially true of the people who have the most power over your promotions and future opportunities, who don't see what you accomplish day to day. They just see the results.
I have worked with hundreds of engineers during my career, and I have seen this happen over and over again. Projects start with a bang and end with a whimper, and the people on the team are surprised when their hard work isn't viewed as positively as they think it should be. .... "
Saturday, February 16, 2019
Speak, Spell, Language Training?
An instructive article in the Penn Language Lab on the 'Speak and Spell' toy reminded me of using it as a model for training. I could still see this as possible. The toy was first produced by Texas Instruments in 1978. A new version is coming out, with some perhaps limiting aspects:
" ... Where the new Speak & Spell differs from the original—and this could be a deal-breaker for some nostalgia-seekers—is its voice. Instead of using a synthesizer that generates spoken words from a bunch of coded instructions, Basic Fun!’s Speak & Spell uses voice recordings that have been processed to sound like they’re being generated by a computer. The monotonous, stilted delivery sounds very close to the original version, but it’s definitely different. ... " From Gizmodo.
No mention of assistants in this article, but there is still an opening for a training skill there.
" ... Where the new Speak & Spell differs from the original—and this could be a deal-breaker for some nostalgia-seekers—is its voice. Instead of using a synthesizer that generates spoken words from a bunch of coded instructions, Basic Fun!’s Speak & Spell uses voice recordings that have been processed to sound like they’re being generated by a computer. The monotonous, stilted delivery sounds very close to the original version, but it’s definitely different. ... " From Gizmodo.
No mention of assistants in this article, but still there is a skill there for training. demo: