
Sunday, July 21, 2019

State of Competitive Intelligence

Get the full report at the link. For a long time this was part of my work at the enterprise.

2019 State of Competitive Intelligence

The Latest Best Practices, Challenges, and Opportunities in the Market and Competitive Intelligence Field

In an increasingly crowded marketplace, how are companies getting intelligence on their competitors, customers, and thought leaders? Crayon's State of Competitive Intelligence report dives into exactly that—to share the best practices, challenges, and opportunities in the field of market and competitive intelligence.

Download the full report to browse 50+ pages, 45+ graphs, and hundreds of statistics on market and competitive intelligence.  .... " 

Advanced Packaging Solutions

One of our favorite topics in the innovation space: enhancement of both protection and consumer engagement.

Packaging solutions: Poised to take off?  in McKinsey

Major trends are reshaping packaging solutions—and that could open new opportunities for players that are prepared to move fast.

By Paolo Baldesi, David Feber, Nick Santhanam, Paolo Spranzi, Abir Tewari, and Shekhar Varanasi 

In the early 19th century, the French government offered a prize of 12,000 francs to the inventor who could create the best container for preserving food for Napoleon’s army and navy. That contest gave rise to the tin can. The packaging innovations that have surfaced since that time are too numerous to count. Just a few notable examples are gable-topped milk ... 

Packaging originally served as just a container, but its role has evolved to include more sophisticated functions, such as advertising. Some packaging is so iconic and easily recognizable that consumers automatically look for it when shopping. Recently companies have begun investigating smart packaging equipped with sensors. The packaging industry has evolved as well. Packaging solutions (PS) is now a $900 billion market and an important segment within the broader industrial sector. Within PS, players are classified as either packaging converters, which produce or provide materials for packaging, or equipment companies. (For an insider view of the industry and its evolution, see our video with WestRock CEO Steve Voorhees). .... "

Saturday, July 20, 2019

Responsible, Ethical AI?

Thought provoking.  Responsibility should be linked back to the business outcomes that emerge.  But you could argue that AI and analytics methods allow these to emerge more readily and in stronger context, so perhaps that aspect does need to be 'responsible'.  So if I am now able to instantaneously recognize someone via AI methods, what ethical issues does that create?  Still being considered, responsibility delegated or otherwise.

Want Responsible AI? Think Business Outcomes  In K@W:

Mala Anand, author of this opinion piece, is president of intelligent enterprise solutions and industries at SAP.

The rising concerns about how AI systems can embody ethical judgments and moral values are prompting the right questions. Too often, however, the answer seems to be to blame the technology or the technologists.

Delegating responsibility is not the answer.
Creating ethical and effective AI applications requires engagement from the entire C-suite. Getting it right is both a critical business question and a values statement that requires CEO leadership.

The ethical concerns AI raises vary from industry to industry. The dilemmas associated with self-driving cars, for instance, are nothing like the question of bias in facial recognition or the privacy concerns associated with emerging marketing applications. Still, they share a problem: Even the most thoughtfully designed algorithm makes decisions based on inputs that reflect the world view of its makers.

AI and its sister technologies, machine learning and RPA (robotic process automation), are here to stay. Between 2017 and 2018, research from McKinsey & Company found the percentage of companies embedding at least one AI capability in their business processes more than doubled. In a different study, McKinsey estimates the value of deploying AI and analytics across industries to be between $9.5 trillion and $15.4 trillion a year. In our own work we have seen leaders in industry after industry embrace the technology to both find new efficiencies in their current businesses and to test opportunities with new business models.

There is no turning back. .... " 

Rewarding Autonomous AIs

Thoughtful piece.  Can a human just provide some sort of reward function?  Or is creating that alone very hard?  Especially if we include some measures of risk as well.  The latter we found in practice, at least if you are honest about risk.  This also clouds some of the 'the future is unsupervised' claims I have heard recently.  What exactly does 'unsupervised' mean?  More than just a simple lack of a measure of success, we discovered.

Stanford researchers teach robots what humans want
Researchers are developing better, faster ways of providing human guidance to autonomous robots.

By Taylor Kubota

Told to optimize for speed while racing down a track in a computer game, a car pushes the pedal to the metal … and proceeds to spin in a tight little circle. Nothing in the instructions told the car to drive straight, and so it improvised.

Researchers are trying to make it easier for humans to tell autonomous systems, such as vehicles and robots, what they want them to do. 

This example – funny in a computer game but not so much in life – is among those that motivated Stanford University researchers to build a better way to set goals for autonomous systems.

Dorsa Sadigh, assistant professor of computer science and of electrical engineering, and her lab have combined two different ways of setting goals for robots into a single process, which performed better than either of its parts alone in both simulations and real-world experiments. The researchers presented the work June 24 at the Robotics: Science and Systems conference.

“In the future, I fully expect there to be more autonomous systems in the world and they are going to need some concept of what is good and what is bad,” said Andy Palan, graduate student in computer science and co-lead author of the paper. “It’s crucial, if we want to deploy these autonomous systems in the future, that we get that right.”

The team’s new system for providing instruction to robots – known as reward functions – combines demonstrations, in which humans show the robot what to do, and user preference surveys, in which people answer questions about how they want the robot to behave.

“Demonstrations are informative but they can be noisy. On the other hand, preferences provide, at most, one bit of information, but are way more accurate,” said Sadigh. “Our goal is to get the best of both worlds, and combine data coming from both of these sources more intelligently to better learn about humans’ preferred reward function.”  .... " 
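As a rough illustration of the idea in the excerpt, here is a minimal Python sketch of seeding a linear reward function from demonstrations and then sharpening it with pairwise preference answers. The features, the Bradley-Terry-style update, and all the numbers here are my own assumptions for illustration, not the Stanford team's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -0.5])          # hidden "human" preference weights

def features(traj):
    """Feature vector of a trajectory, e.g. [speed, wobble]. Invented."""
    return np.asarray(traj, dtype=float)

# 1) Demonstrations: noisy examples of good behavior give an initial,
#    informative-but-noisy estimate of the reward weights.
demos = [true_w + rng.normal(0, 0.3, 2) for _ in range(5)]
w = np.mean([features(d) for d in demos], axis=0)
w /= np.linalg.norm(w)

# 2) Preference queries: "do you prefer trajectory A or B?" Each answer
#    carries little information but is accurate; a logistic
#    (Bradley-Terry) update nudges w toward the preferred trajectory.
def preference_update(w, preferred, other, lr=0.5):
    diff = features(preferred) - features(other)
    p_pref = 1.0 / (1.0 + np.exp(-w @ diff))     # model's current belief
    return w + lr * (1.0 - p_pref) * diff        # gradient of log-likelihood

for _ in range(200):
    a, b = rng.normal(0, 1, 2), rng.normal(0, 1, 2)
    if true_w @ a < true_w @ b:                  # simulated human answer
        a, b = b, a
    w = preference_update(w, a, b)

w /= np.linalg.norm(w)
alignment = float(w @ (true_w / np.linalg.norm(true_w)))
print(round(alignment, 2))  # near 1.0: learned reward matches the human's
```

The combination mirrors the quote: demonstrations provide a rich but noisy starting point, and preferences, each worth roughly one bit, refine it.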

Brain Simulation with 3D games Hardware

Perhaps, but there is still considerable work to do in understanding how intelligence works in the brain.  Exact replication still seems to be far away.

Computer Hardware for 3D Games Could Hold the Key to Replicating the Brain
University of Sussex (United Kingdom)
By Neil Vowles

Researchers at the University of Sussex in the U.K. have developed what they describe as the fastest, most energy-efficient simulation of part of a rat brain, using off-the-shelf computer hardware. The researchers' model beat a top 50 supercomputer by running brain simulations using their GeNN (GPU-enhanced Neuronal Networks) software and graphics processing units (GPUs). The researchers’ goals were to increase understanding into brain function, and to identify how damage to particular structures in neurons can lead to deficits in brain function. The researchers used the GeNN software to implement and test two established computational neuroscience models: one of a cortical microcircuit consisting of eight populations of neurons, and a balanced random network with spike-timing dependent plasticity. The team achieved energy savings of 10 times compared to either the SpiNNaker or supercomputer simulations.  ... "
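GeNN itself generates GPU code from model descriptions, but the reason graphics hardware fits so well is that each neuron's update is independent and can run in parallel. As a purely illustrative sketch (not GeNN's API, and with made-up parameters), here is the kind of vectorized leaky integrate-and-fire update such simulators execute every timestep:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, steps = 1000, 1.0, 200                  # neurons, ms per step, duration
tau, v_rest, v_thresh, v_reset = 20.0, -65.0, -50.0, -65.0

v = np.full(n, v_rest)                         # membrane potentials (mV)
spike_count = 0
for _ in range(steps):
    i_in = rng.normal(1.2, 0.5, n)             # noisy input current
    v += dt / tau * (-(v - v_rest)) + i_in     # leaky integration, all neurons at once
    fired = v >= v_thresh                      # vectorized threshold check
    spike_count += int(fired.sum())
    v[fired] = v_reset                         # reset the neurons that spiked

print(spike_count > 0)                         # the population is spiking
```

On a GPU, the per-neuron arithmetic inside the loop maps onto thousands of cores, which is what lets off-the-shelf gaming hardware compete with supercomputer simulations.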

Patrick Winston, AI Pioneer, Dies at 76

We met Patrick Winston at MIT, and worked with many of his students and colleagues.   Also used his book 'Artificial Intelligence',  as a basic text during our early experiments with AI.

Professor Patrick Winston, former director of MIT’s Artificial Intelligence Laboratory, dies at 76
Beloved professor conducted pioneering research on imbuing machines with human-like intelligence, including the ability to understand stories.

Adam Conner-Simons and Rachel Gordon | CSAIL, July 19, 2019
Patrick Winston, a beloved professor and computer scientist at MIT, died on July 19 at Massachusetts General Hospital in Boston. He was 76.

A professor at MIT for almost 50 years, Winston was director of MIT’s Artificial Intelligence Laboratory from 1972 to 1997 before it merged with the Laboratory for Computer Science to become MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

A devoted teacher and cherished colleague, Winston led CSAIL’s Genesis Group, which focused on developing AI systems that have human-like intelligence, including the ability to tell, perceive, and comprehend stories. He believed that such work could help illuminate aspects of human intelligence that scientists don’t yet understand.

“My principal interest is in figuring out what’s going on inside our heads, and I’m convinced that one of the defining features of human intelligence is that we can understand stories,” said Winston, the Ford Professor of Artificial Intelligence and Computer Science, in a 2011 interview for CSAIL. “Believing as I do that stories are important, it was natural for me to try to build systems that understand stories, and that shed light on what the story-understanding process is all about.”

He was renowned for his accessible and informative lectures, and gave a hugely popular talk every year during the Independent Activities Period called “How to Speak.” 

“As a speaker he always had his audience in the palm of his hand,” says MIT Professor Peter Szolovits. “He put a tremendous amount of work into his lectures, and yet managed to make them feel loose and spontaneous. He wasn’t flashy, but he was compelling and direct.”

Winston’s dedication to teaching earned him many accolades over the years, including the Baker Award, the Eta Kappa Nu Teaching Award, and the Graduate Student Council Teaching Award.

“Patrick’s humanity and his commitment to the highest principles made him the soul of EECS,” MIT President L. Rafael Reif wrote in a letter to the MIT community. “I called on him often for advice and feedback, and he always responded with kindness, candor, wisdom and integrity.  I will be forever grateful for his counsel, his objectivity, and his tremendous inspiration and dedication to our students.”  ... " 

Enterprise AI Strategy

Useful overview of what it takes.  Integrate it with real process, real goals, real results,  not just small tests.

Anatomy of an enterprise-scale AI strategy
Looking to move beyond point solutions and proofs of concept? Here’s what it takes to develop a holistic AI strategy honed for business results.
By Maria Korolov in CIO

When it comes to AI, companies typically test the waters with proofs of concept or small-scale use cases, taking advantage of vendor offerings, such as new features in their existing SaaS platforms.

If things go well, they pursue another project, then another — and soon they’re relying on a sprawl of incompatible systems, competing data lakes, problems with cost overruns, duplication of efforts, and an inability to scale, not to mention privacy, compliance or ethics problems.


At some point, the benefits of AI become obvious enough, and the pain of continuing on their present path so acute, that companies step back to develop a cohesive strategy for an enterprise-wide AI-powered transformation.

"The tendency to get overwhelmed in individual technologies is not only drowning organizations in technical debt but discouraging them because they don't see a path forward to sustainable and scalable AI," said Traci Gusher, partner in data, analytics and artificial intelligence practice at KPMG.

Here’s a look at how organizations can ensure the shift from pilot projects to full-scale AI fluency goes right.  .... " 

Augmented Reality Changes Real-World Behavior

I would expect so, but will it change specific, useful goal behaviors in the real world?

Augmented reality can change your behavior in the real world
Even after you take the goggles off.

By Holly Brockwell, @holly

A new study from Stanford's School of Humanities and Sciences has found that augmented reality (AR) experiences significantly affect people's behavior in the real world, even after they've taken the headset off.

Using 218 participants and a pair of AR goggles, researchers led by Professor Jeremy Bailenson conducted three experiments.

The first showed a realistic 3D person called Chris sitting on a real chair in the room (AR layers digital images over the physical world, rather than creating a whole new world like VR). Participants had to complete anagram tasks while Chris watched, and as with the presence of a real person in the room, his presence meant they found hard puzzles more difficult than without 'someone' watching them.

The second experiment looked at whether participants would sit in the chair previously occupied by Chris. Even though he was no longer there, none of the participants still wearing the AR headset sat in that chair. Without the headset, 72% still avoided Chris's chair and sat in the one next to it instead. ... "

Friday, July 19, 2019

European Public Procurement Knowledge Graph

Quite an interesting view of how this kind of knowledge can be represented and exercised in semantic representations.  Driving key decisions in context.  Also a good asset measurement?  Note it's open source.

Researchers Launch First Knowledge Graph on European Public Procurement     University of Southampton     July 15, 2019

A newly released knowledge graph was designed by open data specialists to boost procurement data analytics and decision-making capabilities across Europe. Researchers working with the "TheyBuyForYou" project integrated tender and company data to develop the open source knowledge graph for public procurement. Said University of Southampton's Elena Simperl, "Knowledge graphs bring together data from a variety of sources into a common format that can be easily extended and reused by organizations. By releasing the graph open source, we hope to encourage developers to use it in their own products and give us feedback on how we could improve it." ... ' 

Kroger Data Platform for Brands

A means to get value from data.

Kroger's 84.51 Launches Customer Data Platform for Brands
Stratum could be an additional revenue stream for the retailer
By Rebekah Marcarelli

As The Kroger Co. continues to seek alternative revenue streams on the back end, its customer data, analytics and personalized marketing subsidiary 84.51 has launched a new paid analytics solution to help brands position their products both in-store and online. 

Dubbed Stratum, the new platform utilizes data from both brick-and-mortar and e-commerce transactions and aims to "draw conclusions that are representative of consumer behavior nationally," according to company officials. 

Mike Donnelly, Kroger's EVP and chief operating officer, said the retailer is excited about this next iteration of customer insights, calling Stratum a "science-powered insight tool that is designed with the end user in mind. Stratum will be an accelerator for our brands and CPG partners alike."  .... " 

Missing in Action

A former colleague wrote this excellent book about his Civil War ancestor, based on decades of research.  Just finished it.  A very good read.

MISSING IN ACTION, 1863: Lieutenant Andrew Jackson Lacy and Tennessee's Confederate Cavalry    By Mark E. Lacy  | Mar 22, 2019

Lieutenant Andrew Jackson Lacy served in Tennessee's Confederate cavalry under Nathan Bedford Forrest and disappeared during the Civil War. His great-great-grandson took it upon himself to tackle this 150-year-old missing persons case. Drawing on a large number of family letters from the Civil War, Mark Lacy has reconstructed the story of his ancestor's service to the Confederacy as well as the chaotic circumstances under which the young lieutenant vanished behind enemy lines, only twenty-five miles from home. The result is a detailed account that brings the war down to a personal level and highlights the hardships brought on by guerrillas attacking families and their farms.  ... " 

Apple Crypto Wallet

Have not heard any more of this.  Considerable detail at the link.

Apple may be prepping to turn your iPhone into a crypto wallet
Apple's CryptoKit is likely the first step in enabling the exchange of private and public keys that will unlock the ability to make purchases using bitcoin and other cryptocurrencies stored on your iPhone. ... " 
Lucas Mearian By Lucas Mearian
Senior Reporter, Computerworld 

ICub Robot Imagines itself in the World of Others

New directions and capabilities of AI in robotics.

The iCub is the humanoid robot developed at IIT as part of the EU project RobotCub and subsequently adopted by more than 20 laboratories worldwide. It has 53 motors that move the head, arms & hands, waist, and legs. It can see and hear, it has the sense of proprioception (body configuration) and movement (using accelerometers and gyroscopes). We are working to improve on this in order to give the iCub the sense of touch and to grade how much force it exerts on the environment. .... 

AI passes theory of mind test by imagining itself in another's shoes
By Donna Lu in Newscientist

Artificial intelligence has passed a classic theory of mind test used with chimpanzees. The test probes the ability to perceive the world from the view of another individual and so AIs with this skill could be better at cooperating and communicating with humans and each other.

AIs with theory of mind are key to building machines that can understand the world around them. In recent years, the skill has emerged in a robot whose memories are modelled on human brains and in DeepMind’s ToM-net, which understands that others can have false beliefs. .... " 

Will Malware Use AI?

Very likely.  Should we also be looking for the traces of AI online to identify malware?  Fraud?  Scams?  While this example is about malware, consider that malware is just code that exploits a context, data, architecture and results to gain an advantage.  Ultimately this will be AI vs. AI.  We need to be ready for that.

Report sees peril in cybercriminals’ looming use of AI
By Paul Gillen in SiliconAngle

A new report this week by anti-malware vendor Malwarebytes Inc. paints an ominous picture of the potential impact of artificial intelligence technologies such as machine learning and deep learning once criminals have the skills and incentive to use them.
That hasn’t happened yet, but the report’s authors suggest it could be as little as a year or two before AI-powered malware makes its way into the wild.

“Almost by definition, cybercriminals are opportunistic,” the report noted. “You only need one smart cybercriminal to develop malicious AI in an attack for this method to catch on."

Malwarebytes Lab Director Adam Kujawa drew an analogy to ransomware, which was detected as early as 2010 but was considered only a screen-locking nuisance until 2013, when Cryptolocker debuted with the ability to encrypt files. “Suddenly we saw a lot of variants emerging,” Kujawa said. “For the most part we don’t see a move by criminals en masse until one version completely destroys its target.”

In the short term, the advantage is to the good guys, who are using AI to supplement human labor. In the field of malware, for example, machine learning can be used to create “smart detections that can capture future versions of the same malware, or other variants in the same malware family,” the report’s authors note.  ... " 
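To make the "smart detections" point concrete, here is a tiny, hypothetical sketch of the idea: learn coarse behavioral features from known samples of a malware family, then flag unseen variants by feature similarity rather than exact byte signatures. The feature names, thresholds, and sample data are all invented for illustration; real products use far richer features and trained models.

```python
def feature_vector(sample: dict) -> tuple:
    # Coarse behavioral features: packed or not, count of suspicious
    # API calls, bucketed entropy. All hypothetical.
    return (sample["packed"], sample["susp_calls"], sample["entropy"] // 2)

# "Training": known samples of one malware family.
known_family = [
    {"packed": 1, "susp_calls": 12, "entropy": 7},
    {"packed": 1, "susp_calls": 14, "entropy": 7},
]
signatures = {feature_vector(s) for s in known_family}

def looks_like_family(sample: dict) -> bool:
    fv = feature_vector(sample)
    # Match on coarse features with tolerance, not exact hashes, so a
    # new variant with the same behavior but different bytes still hits.
    return any(fv[0] == k[0] and abs(fv[1] - k[1]) <= 3 and fv[2] == k[2]
               for k in signatures)

new_variant = {"packed": 1, "susp_calls": 13, "entropy": 6}
print(looks_like_family(new_variant))  # True: caught despite differing bytes
```

A byte-hash signature would miss this variant entirely; generalizing over features is what lets one detection capture a whole family, which is the advantage the report describes.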

Thursday, July 18, 2019

Getting Data to Drive Decisions

Some of the earliest work we did was to engineer ways we could get data to decision makers.  Visually if possible.  And to groups of people that could formulate key decisions.  And get feedback to these same people/groups to tailor a decision, and track its outcome usefully.  There is still lots of work to do to link process, decisions and driving data and analytics.

Data Engineers: The C-Suite’s Savior       By Richa Dhanda in DataNami

Today’s competitive marketplace requires companies to be data-driven. Why? Because data has become the fuel for organizations to deliver better and faster decisions, quickly respond to customers, and analyze, understand and act on new opportunities (or threats) ahead of the competition. However, becoming a data-driven organization is not easy.  As highlighted in a recent Gartner report,  “despite massive investments in data and analytics initiatives, nearly 50% of organizations report difficulties in bringing them into production.”

That’s where data engineers come in. Their role is to help the company make the best use of data to accomplish business objectives. And, as demonstrated by a recent survey of Fortune 1,000 decision makers, this role is absolutely critical as established companies are being disrupted by technology-savvy new entrants.

Surveys say 80% of big data C-suite executives acknowledge the potential threats of technology disruption and displacement, yet only 7.3% of them feel confident that they were well-prepared for the future.  .... " 

More Experiments with Checkout-Free

Experiments continue.  Will this be inevitable in the near future?  Certainly it will be less stressful, once consumers get used to the approach.

Giant Eagle piloting checkout-free experience

Giant Eagle has teamed with Grabango to pilot checkout-free technology at a trial store. The partnership marks the first of its kind for a large-format grocery store in the US, with Giant Eagle's Dan Donovan noting, "Giant Eagle is committed to advancing technologies that create an improved, stress-free shopping experience for our customers while still protecting their privacy."  .... " 

Nestle and Enterra for Intelligent Daily Business Decisions

We had talked to Enterra regarding supply chain applications.  Intriguing company and solutions.  Nestle's decision to automate complex decision making for CPG is also interesting.  Like the idea of emphasizing key processes and decisions.  Good move towards the intelligent enterprise. 

Press Release.    Supporting Video.

Nestlé Selects AI-Driven Analytics Firm Enterra To Build Platform for Marketing, Autonomous Sales

David Bloom Senior Contributor in Forbes

Nestlé USA and Pennsylvania cognitive-computing company Enterra Solutions are working together to use artificial-intelligence tools to transform the way the consumer-packaged-goods giant makes daily business decisions.

The companies are deploying two Enterra software packages that provide advanced sales and marketing insights and automate complex decision-making. The companies also are creating and staffing a center for advanced analytics on the Nestlé campus.

Enterra’s cognitive computing platform will be a crucial part of Nestlé USA's Intelligent Enterprise project. The shift to advanced artificial-intelligence applications that can transform decision-making and business processes is happening with companies across sectors such as entertainment, transportation and manufacturing.  

Enterra is also building the Center for Advanced Analytics and Insights at Nestlé’s Virginia headquarters, and staffing it with an advanced-analytics team of mathematicians, artificial intelligence and data scientists, consumer-packaged-goods experts and data-management and -visualization specialists.

The new initiatives will focus on gathering a wide range of pertinent data and integrating it to make smarter, faster decisions and break silos between Nestlé, its suppliers and customers.  ... "

Etsy Uses an Algorithm for Style

Interesting example, with forthcoming details.  Note its alignment with something almost everyone does today: e-commerce.  I look forward to seeing the details of the paper.

How Etsy taught style to an algorithm  By Harry McCracken in FastCompany

From “romantic” to “rustic,” the marketplace for handcrafted goods that express distinctive aesthetics is teaching its search engine to know what’s what.

 ... After about a year of work, Fisher says, Etsy has trained a machine-learning model to effectively suss out the styles of items on the site, based on both textual and visual cues. The company is about to start testing results based on this new algorithm on the Etsy site. But it also believes that the technology it’s developed could have applications well beyond making e-commerce more relevant. Which is why three of Etsy’s data scientists have written a paper about it—coauthored with a Twitter employee—which they’ll present at the Association for Computing Machinery’s KDD (Knowledge Discovery and Data Mining) conference in August. .... " 

Future of AI is Tiny

In O'Reilly, a short excerpt from a recent conference talk.  Not too different from the use of small drones or tiny sensors.

The future of machine learning is tiny
Pete Warden digs into why embedded machine learning is so important, how to implement it on existing chips, and shares new use cases it will unlock.

By Pete Warden of Google in O'Reilly Media.  

Wednesday, July 17, 2019

Gartner Publishes first Magic Quadrant for RPA

Have followed RPA since it emerged, as a way to install logic in process to efficiently automate in-context tasks.  Should be used in combination with AI and big data analytics to transparently improve process.  Not sure if that is common, but it should be.  Reminds me of the work we did with Prolog-based knowledge systems.

Gartner publishes first Magic Quadrant for Robotic Process Automation market  By Mike Wheatley in SiliconAngle

Gartner Inc. this week published its first Magic Quadrant for the robotic process automation software market, shining a light on the leading players and key trends in the rapidly growing technology segment.

RPA relates to the use of software with artificial intelligence and machine learning capabilities to handle high-volume, repeatable tasks that previously required humans to perform. These tasks can include queries, calculations and maintenance of records and transactions. RPA software relies on robots that can mimic a human worker by logging into an application, entering data or calculating and completing tasks and then logging out once the task is done.

The technology is believed to save companies huge amounts of time and money, so it’s not much of a surprise to see Gartner estimating the market for this type of software will reach $2.4 billion a year by 2024, from just $850 million today. Gartner also said RPA is the fastest-growing subsegment of enterprise software it tracks, with an annual growth rate of 63% in 2018.   .... "  (Quadrant at Link) 

Unsupervised Learning is the AI Future

An outline of Yann LeCun's recent talk.  See the LeCun tag below for links to the talk and slides.

The AI technique that could imbue machines with the ability to reason
Yann LeCun, Facebook’s chief AI scientist, believes unsupervised learning will bring about the next AI revolution.  .... 

by Karen Hao in Technology Review

Unilever: Virtual Plants via Digital Twins

Followed arch-rival Unilever for many years, always impressed by their tech expertise.  We did some similar things, creating an experimental supply chain that could be tested via crowd-sourced management (ask me for a paper copy, or a consult).  But they seem to have taken it much further, and I can see why.  Adding direct sensors, for example.  Why not always have a supply chain you can test?

Unilever Uses Virtual Factories to Tune Up Its Supply Chain 
The Wall Street Journal
By Jennifer Smith

Unilever is using data streaming from sensor-equipped machines to create virtual versions of its factories that can track physical conditions and allow for testing of operational changes. The "digital twin" technique uses machine learning and artificial intelligence to analyze massive amounts of information from connected devices in an effort to make production more efficient and flexible. The technology, which Unilever developed with the help of Microsoft, lets the company make real-time changes to optimize output and use materials more precisely, helping to limit waste. Unilever now has eight such digital twins of plants in North America, South America, Europe, and Asia. ...  "

Alexa Determining Skills for Customer Needs

It's ultimately a concierge-type problem.  How do we determine the best augmenting skill?  Specified need, context, technical ability, past interactions .... Lots can be at play.  Ultimately a component of all intelligent conversations with a goal.

Rank: How Alexa Determines What Skill Can Best Meet a Customer’s Need    By Young-Bum Kim  Amazon Developer

Amazon Alexa currently has more than 40,000 third-party skills, which customers use to get information, perform tasks, play games, and more. To make it easier for customers to find and engage with skills, we are moving toward skill invocation that doesn’t require mentioning a skill by name (as highlighted in a recent post).

To enable name-free skill interaction, Alexa currently uses a two-step, scalable, and efficient neural shortlisting-reranking approach. (I described our approach to shortlisting in a post yesterday). The shortlisting step uses a scalable neural model to efficiently find the optimal (k-best) candidate skills for handling a particular utterance; the re-ranking step uses rich contextual signals to find the most relevant of those skills. We use the term “re-ranking” since we improve upon the initial confidence score provided by the shortlisting step.

This week, at the Human Language Technologies conference of the North American chapter of the Association for Computational Linguistics (NAACL 2018), my colleagues and I presented a paper, “A Scalable Neural Shortlisting-Reranking Approach for Large-Scale Domain Classification in Natural Language Understanding,” that describes our approach.

A high-level flow of the two-step shortlisting-reranking approach

The Challenge
The problem here is essentially a domain classification problem over the k-best candidate skills returned by the shortlisting system, which we call Shortlister. The goal of Shortlister is to achieve high recall — to identify as many pertinent skills as possible — with maximum efficiency. On the other hand, the goal of the reranking network, HypRank, is to use rich contextual signals to achieve high precision — to select the most pertinent skills. Designing HypRank comes with its own challenges:

•    Hypothesis representation: It needs to use available contextual signals to produce an effective hypothesis representation for each skill in the k-best list;
•    Cross-hypothesis feature representation: It needs to efficiently and automatically compare features, such as a skill’s intent confidence, to those of other candidate skills in the k-best list; 
•    Generalization: It needs to be language-agnostic; and
•    Robustness: It needs to be able to accommodate changes, such as independent modifications to Shortlister or to the natural-language-understanding models that provide skill-specific semantic interpretation of utterances. .... "
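The two-step flow above can be sketched in miniature. The real Shortlister and HypRank are neural models; the toy scoring functions, skill names, and contextual features below are my own assumptions, meant only to show the shape of a high-recall shortlisting pass followed by a high-precision contextual rerank.

```python
# Hypothetical skill catalog with cheap lexical features and one
# contextual signal (popularity). All invented for illustration.
skills = {
    "weather": {"keywords": {"weather", "rain", "forecast"}, "popularity": 0.9},
    "music":   {"keywords": {"play", "song", "music"},       "popularity": 0.8},
    "recipes": {"keywords": {"cook", "recipe", "dinner"},    "popularity": 0.4},
}

def shortlist(utterance: str, k: int = 2) -> list:
    """Step 1 (Shortlister's role): cheap, high-recall scoring to find
    the k-best candidate skills for an utterance."""
    words = set(utterance.lower().split())
    scored = sorted(((len(words & s["keywords"]), name)
                     for name, s in skills.items()), reverse=True)
    return [name for score, name in scored[:k] if score > 0]

def rerank(candidates: list, context: dict) -> list:
    """Step 2 (HypRank's role): high-precision reranking of the k-best
    list using richer contextual signals."""
    def score(name):
        s = skills[name]
        recency = 0.5 if name in context["recent_skills"] else 0.0
        return s["popularity"] + recency
    return sorted(candidates, key=score, reverse=True)

context = {"recent_skills": {"recipes"}}
cands = shortlist("what should I cook for dinner")
print(rerank(cands, context)[0])  # "recipes"
```

The division of labor matches the paper's framing: the first pass must be fast enough to scan tens of thousands of skills, while the second pass can afford expensive per-candidate features because it only sees the k-best list.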

Alphabet's Wing is Doing Drone Traffic Control

Could this mean we will see many more drones in the sky, now more safely managed and directed to tasks?  An informative site:

Empowering everyone to safely access the sky.

Flying is complex. Through automation and data, our OpenSky platform empowers you to take flight with confidence—whether that means flying a single drone or an entire fleet.

We handle the journey, so you can focus on the destination.
Whether you’re a hobbyist who loves to fly, or a business that uses unmanned aircraft to survey land or deliver goods, OpenSky makes it easy to find out where and how to fly, tailored to your operation.

A version of the OpenSky app is approved by CASA in Australia and available now.  .... " 

Also comments in Bloomberg

Micro Eye Movement Identification

We tested iris-based identification.  Here is another biometric approach I had not seen, which claims better accuracy:

DeepEyedentification: identifying people based on micro eye movements   by Ingrid Fadelli  in TechExplore

Past cognitive psychology research suggests that eye movements can differ substantially from one individual to another. Interestingly, these individual characteristics in eye movements have been found to be relatively stable over time and largely independent of what one is looking at. In other words, people present different patterns in the way they move their eyes and these unique 'eye movements' could be used as a means for identification.  ....  " 

Tuesday, July 16, 2019

Power and Limits of Deep Learning By Yann LeCun

I mentioned this talk just recently.  Here are the links to slides and talk.

The Power and Limits Of Deep Learning
ACM TechTalk

Yann LeCun
New York University
Facebook AI Research

Talk:   https://event.on24.com/wcc/r/2014818/04C58DF355DF00190DE4F046CE243077?
Slides:  https://drive.google.com/file/d/1f0sPHv7ozHafASPwIOfuvF_RvP3FDPY0/view


Artificial Intelligence / Machine Learning
The AI technique that could imbue machines with the ability to reason

Yann LeCun, Facebook’s chief AI scientist, believes unsupervised learning will bring about the next AI revolution.
by Karen Hao  in Technology Review

Combinatoric Design

What to do when there are too many precisely specified choices, and a way to evaluate them.  This has some interesting possibilities.

Automated system generates robotic parts for novel tasks
When designing actuators involves too many variables for humans to test by hand, this system can step in.

By Rob Matheson | MIT News Office 

An automated system developed by MIT researchers designs and 3-D prints complex robotic parts called actuators that are optimized according to an enormous number of specifications. In short, the system does automatically what is virtually impossible for humans to do by hand.  

In a paper published today in Science Advances, the researchers demonstrate the system by fabricating actuators — devices that mechanically control robotic systems in response to electrical signals — that show different black-and-white images at different angles. One actuator, for instance, portrays a Vincent van Gogh portrait when laid flat. Tilted an angle when it’s activated, however, it portrays the famous Edvard Munch painting “The Scream.” The researchers also 3-D printed floating water lilies with petals equipped with arrays of actuators and hinges that fold up in response to magnetic fields run through conductive fluids.

The actuators are made from a patchwork of three different materials, each with a different light or dark color and a property — such as flexibility and magnetization — that controls the actuator’s angle in response to a control signal. Software first breaks down the actuator design into millions of three-dimensional pixels, or “voxels,” that can each be filled with any of the materials. Then, it runs millions of simulations, filling different voxels with different materials. Eventually, it lands on the optimal placement of each material in each voxel to generate two different images at two different angles. A custom 3-D printer then fabricates the actuator by dropping the right material into the right voxel, layer by layer.
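The voxel-by-voxel search described above can be caricatured at toy scale. This is a minimal sketch under heavy assumptions: a four-voxel grid, three materials, and a fake "simulation" that just scores matches against a target pattern, where the real system optimizes millions of voxels against physical simulations.

```python
# Toy sketch of the voxel/material search idea: enumerate material
# assignments for a tiny grid and keep the one the (pretend) simulation
# scores best. The objective below is a made-up stand-in.

from itertools import product

MATERIALS = ("rigid", "flexible", "magnetic")
TARGET = ("rigid", "magnetic", "rigid", "flexible")  # desired pattern

def simulate(assignment):
    """Pretend simulation: score = number of voxels matching the target."""
    return sum(a == t for a, t in zip(assignment, TARGET))

# Brute force over all 3^4 = 81 assignments; the real system needs
# smarter optimization because the space explodes combinatorially.
best = max(product(MATERIALS, repeat=len(TARGET)), key=simulate)
```

Even this tiny example shows why the article calls it a "combinatorial explosion": the search space grows as materials^voxels, which is why exhaustive enumeration stops being an option almost immediately.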

“Our ultimate goal is to automatically find an optimal design for any problem, and then use the output of our optimized design to fabricate it,” says first author Subramanian Sundaram PhD ’18, a former graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We go from selecting the printing materials, to finding the optimal design, to fabricating the final product in almost a completely automated way.”

The shifting images demonstrate what the system can do. But actuators optimized for appearance and function could also be used for biomimicry in robotics. For instance, other researchers are designing underwater robotic skins with actuator arrays meant to mimic denticles on shark skin. Denticles collectively deform to decrease drag for faster, quieter swimming. “You can imagine underwater robots having whole arrays of actuators coating the surface of their skins, which can be optimized for drag and turning efficiently, and so on,” Sundaram says.

Joining Sundaram on the paper are: Melina Skouras, a former MIT postdoc; David S. Kim, a former researcher in the Computational Fabrication Group; Louise van den Heuvel ’14, SM ’16; and Wojciech Matusik, an MIT associate professor in electrical engineering and computer science and head of the Computational Fabrication Group.

Navigating the “combinatorial explosion”

Robotic actuators today are becoming increasingly complex. Depending on the application, they must be optimized for weight, efficiency, appearance, flexibility, power consumption, and various other functions and performance metrics. Generally, experts manually calculate all those parameters to find an optimal design.      .... "  

Intel's Neuromorphic Chips

Meaning chips that are closer in structure to neural networks, which themselves are just considerable simplifications of networks of biological neurons.   The result is being able to do such neural computation, key to deep learning, much faster.   No quantum computing here.   Below what Intel Corp writes about this, and then a piece by Technology Review.

Intel's Neuromorphic Computing

The emergent capabilities in artificial intelligence being driven by Intel Labs have more in common with human cognition than with conventional computer logic.

Neuromorphic computing research emulates the neural structure of the human brain.

The Loihi research chip includes 130,000 neurons optimized for spiking neural networks.

Intel Labs is making Loihi-based systems available to the global research community.

Probabilistic computing addresses the fundamental uncertainty and noise of natural data.

Collaborations on next-generation AI extend to worldwide industry and academic researchers.

What Is Neuromorphic Computing
The first generation of AI was rules-based and emulated classical logic to draw reasoned conclusions within a specific, narrowly defined problem domain. It was well suited to monitoring processes and improving efficiency, for example. The second, current generation is largely concerned with sensing and perception, such as using deep-learning networks to analyze the contents of a video frame.

A coming next generation will extend AI into areas that correspond to human cognition, such as interpretation and autonomous adaptation. This is critical to overcoming the so-called “brittleness” of AI solutions based on neural network training and inference, which depend on literal, deterministic views of events that lack context and commonsense understanding. Next-generation AI must be able to address novel situations and abstraction to automate ordinary human activities.

Intel Labs is driving computer-science research that contributes to this third generation of AI. Key focus areas include neuromorphic computing, which is concerned with emulating the neural structure and operation of the human brain, as well as probabilistic computing, which creates algorithmic approaches to dealing with the uncertainty, ambiguity, and contradiction in the natural world.  .... " 

Also from Technology Review:
Intel’s new AI chips can crunch data 1,000 times faster than normal ones  ......  "

Marketing Creativity

Creativity via AI is always an interest.  Certainly creativity can be augmented by AI.  But any augmentation also has the potential to replace.

Agencies’ creative perspective, the very currency of the business, is at risk and can only be realized by shifting billions from tech to fund creative differentiation.  in Forrester

“The value of agency creativity is at risk of disappearing.”
The marketing industry is woefully out of balance, from agency/client relationships to new business requirements and compensation. The healthy tension of creativity that once balanced the needs of the brand with the needs of its customers; the commercial effectiveness of the work versus its cultural impact; and the needs of agency economics versus the client’s growth is all eroding. These are now one-sided issues. The tension is no longer healthy. Nowhere is this more evident than in agency economics. Agencies today barely grow at the current rate of inflation in the United States. Insourcing, margin compression, cost-cutting, new competitors, and tech priorities threaten the existence of agencies and undermine their value.

“Customer experience has stagnated.”
Strong evidence of creativity’s languish is already underway. Customer experience has stagnated. Forrester’s Customer Experience Index (CX Index™), a study of 100,000 consumers and 300 brands that has been run for more than a decade and acts as a barometer for CX performance, is flat for the fourth consecutive year. Most brands are stuck in the middle, struggling to improve over competitors. Zero brands are rated to have an excellent experience. Forrester determined that there are four types of CX performance — the Languishers, Lapsers, Locksteppers, and Laggards. No brand is performing well. Worse still, for every 1-point drop in CX Index score, companies lose 2% of their returns. It’s only a matter of time before companies’ growth is impacted..... "

How Much Knowledge Has Been Created?

We explored this early on with image tagging in the enterprise.  And while we developed lots of specific use cases, nothing as broadly usable as we wanted.

The data that trains AI increasingly calls into question AI
After 10 years of ImageNet, AI researchers are digging into the details of test sets and some are asking just how much knowledge has really been created with machine learning.
By Tiernan Ray in ZDNet

It's been 10 years since two landmark data sets appeared in the world of machine learning, ImageNet and CIFAR10, collections of pictures that have been used to train untold numbers of models of computer vision deep learning neural networks. The venerable nature of the data has prompted some AI researchers to ask what goes on with those data sets, and what their longevity means about machine learning in the bigger picture.

As a result, 2019 may mark the year the data indicted some of the fundamental beliefs about AI.

Researchers in machine learning are getting much more specific and rigorous about understanding how the choice of data affects the success of neural networks.

And the results are somewhat unsettling. Recent work suggests at least some of the success of neural networks, including state-of-the-art deep learning models, is tied to small, idiosyncratic elements of the data used to train those networks.

Exhibit A is a study put out in February and revised in June by Benjamin Recht and colleagues at UC Berkeley, with the amusing title "Do ImageNet Classifiers Generalize to ImageNet?"

They tried to reconstruct ImageNet, in a sense, by duplicating the process of gathering images from Flickr and curating them, having people on Amazon's Mechanical Turk service look at the images and assign labels.   

The original screen from back in 2009 instructing Amazon Mechanical Turk workers to pick images that fit with labels. It kicked off a decade of development of more and more advanced computer vision neural networks.

The goal was to create a new "test" set of images, a set that's like the original group of pictures, but never seen before, to see how well all the models that have been developed on ImageNet in the past decade generalize to new data.

The results were mixed. The various deep learning image recognition models that followed one another in time, such as the classic "AlexNet" and, later, more-sophisticated networks such as "VGG" and "Inception," still showed improvement from generation to generation. In fact, on this new test set, levels of improvement were actually amplified.  .... " 

Data Privacy in the Hands of Users

Another privacy play of interest; I have been exploring the intricacies and implications of several recently.  Here, metadata tags indicate allowed uses and compliance rules.

Putting Data Privacy in the Hands of Users
MIT News
By Rob Matheson

Researchers at the Massachusetts Institute of Technology and Harvard University have developed a platform to ensure Web services comply with users' explicit preferences for retaining and sharing their data in the cloud. Riverbed is engineered so a Web browser or smartphone app communicates with the cloud using a proxy, which operates on a user's device. When the service attempts to upload user data to a remote service, the proxy tags the data with a set of permissible uses for their data, or "policies." Users can choose any number of predefined restrictions, and the proxy tags all data with the preferred policies. In the data center, Riverbed assigns the uploaded data to a partitioned cluster of software components, with each cluster processing only data tagged with the same policies; Riverbed also tracks the server-side code so it adheres to user policies, and terminates service if compliance is not met. ....  "
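The proxy-tagging idea can be sketched simply. This is an illustrative mock-up, not Riverbed's actual API: the policy names and functions here are assumptions to show the shape of the mechanism (tag on upload, enforce or terminate on the server side).

```python
# Minimal sketch of policy tagging: a client-side proxy attaches the
# user's chosen policies to every upload, and the server-side runtime
# refuses any operation the tags don't permit.

def tag_upload(data, policies):
    """Proxy step: bundle user data with its permissible-use tags."""
    return {"data": data, "policies": frozenset(policies)}

def server_process(upload, requested_use):
    """Server step: terminate (raise) if the use isn't in the tags."""
    if requested_use not in upload["policies"]:
        raise PermissionError(f"use '{requested_use}' not permitted")
    return f"processed {upload['data']} for {requested_use}"

upload = tag_upload("photo.jpg", {"storage", "ml-training-excluded"})
server_process(upload, "storage")          # allowed by the user's policy
# server_process(upload, "ad-targeting")   # would raise PermissionError
```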

Monday, July 15, 2019

Zappos Uses Genetic Algorithms

A surprising application already in place, and an indication of multiple algorithms run in parallel.

At Zappos, Algorithms Teach Themselves 
The Wall Street Journal  (with paywall)
Jared Council

Online shoe and clothing retailer Zappos sees promise in a self-learning algorithm's ability to address the problem of its search engine producing irrelevant results. Zappos' chief data scientist Ameen Kazerouni said several years ago his team began testing a genetic algorithm, which has since become critical to boosting the search engine's relevancy. Genetic algorithms generate various solutions to a problem, using natural-selection principles like reproduction and mutation to return the optimal or "fittest" solution. The algorithms were designed to parse out the intent of a search phrase, with those that perform best on an internal "relevance test," which models how users engage with search results, having the greatest odds of having their traits inherited by the next generation. Zappos uses three genetic algorithm engines in parallel to generate better search results.   .... " 
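The mechanics described (reproduction, mutation, survival of the fittest on a relevance test) are easy to show in miniature. The fitness function below is a stand-in for Zappos' internal relevance test, which is not public; everything here is a generic genetic-algorithm sketch, not their implementation.

```python
# Toy genetic algorithm: candidate "solutions" reproduce via crossover,
# mutate, and the fittest on a relevance test seed the next generation.

import random
random.seed(0)

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]   # stand-in for ideal search behavior

def fitness(genome):
    """Pretend relevance test: count positions matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                    # selection: fittest survive
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(10)]

best = max(pop, key=fitness)
```

Running three such engines in parallel, as the article says Zappos does, amounts to maintaining independent populations and comparing their champions.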

Serious Games

Have long looked for games that were serious enough, and fun enough.   Here at least the data is serious enough.  But how do we connect very different goals?  Perhaps in the creative space.

Google Maps adds a city-themed 'Snake' game

Elements of serious games?  Probably not, but looking for that angle.    Ideas?

Expectation Influences Perception

Known for some time.  Now how do we best make use of this in AI interactions?  Can our brains be primed with signals to make them ready for interaction?  Can we measure what is needed using Bayesian methods?

How expectation influences perception

Neuroscientists find brain activity patterns that encode our beliefs and affect how we interpret the world around us.

MIT neuroscientists have identified patterns of brain activity that underlie our ability to interpret sensory input based on our expectations and past experiences.

By Anne Trafton | MIT News Office 

For decades, research has shown that our perception of the world is influenced by our expectations. These expectations, also called “prior beliefs,” help us make sense of what we are perceiving in the present, based on similar past experiences. Consider, for instance, how a shadow on a patient’s X-ray image, easily missed by a less experienced intern, jumps out at a seasoned physician. The physician’s prior experience helps her arrive at the most probable interpretation of a weak signal.

The process of combining prior knowledge with uncertain evidence is known as Bayesian integration and is believed to widely impact our perceptions, thoughts, and actions. Now, MIT neuroscientists have discovered distinctive brain signals that encode these prior beliefs. They have also found how the brain uses these signals to make judicious decisions in the face of uncertainty.
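Bayesian integration of a prior belief with uncertain evidence has a clean closed form in the Gaussian case, which is worth seeing numerically. This is a textbook two-Gaussian update, not the MIT team's neural model; the numbers are arbitrary.

```python
# Minimal numeric sketch of Bayesian integration: a Gaussian prior belief
# combined with a noisy measurement. The posterior is a precision-weighted
# average, pulled toward the prior in proportion to the measurement noise.

def bayes_combine(prior_mean, prior_var, obs, obs_var):
    """Posterior of two Gaussians."""
    w = prior_var / (prior_var + obs_var)      # weight on the observation
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# Experienced observer: tight prior around 10.0; noisy reading of 14.0.
mean, var = bayes_combine(prior_mean=10.0, prior_var=1.0, obs=14.0, obs_var=3.0)
# The estimate lands nearer the prior than the reading, and the posterior
# variance is smaller than either input's -- the evidence still helped.
```

This is the seasoned-physician effect in the article: the noisier the signal, the more the judgment leans on prior experience.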

“How these beliefs come to influence brain activity and bias our perceptions was the question we wanted to answer,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study. .... "

Data Independence

A look at the value of the data that depends on how it can be used.   Came up in a conversation about data valuation versus risk just the other day.   Sometimes we are forced to make a choice based on connected industries that we work with.

Celebrating Data Independence
By Alex Woodie in Datanami

Every company wants the independence to do what they wish with their data. That’s one of the first assumptions underlying this whole big data movement. But depending on where and how a business stores its data — such as in proprietary formats, whether on-prem or the cloud – users may inadvertently limit their data freedom going forward.

Enticed by cheap and abundant storage and the flexibility to scale compute resources as needed, customers are moving exabytes of data from on-prem systems to object storage systems in the cloud. Hadoop, as the preeminent on-premise big data storage system, stands to lose a good chunk of mindshare and marketshare in the process, while the three major cloud platforms — AWS, Microsoft Azure, and Google Cloud – are capitalizing on the trend and growing very fast.

In addition to minimizing Hadoop’s influence, this great migration of data to the cloud is helping to shake up the analytics market too. Instead of writing their software to run on Hadoop, vendors are now forced to be much more agnostic about where the data lives. That means supporting not just Hadoop, but multiple cloud and hybrid deployments that include cloud and on-premise systems.

“Every Fortune 2000 company has 10 different vendors that have data that they want to access holistically, but they can’t because of the vendors,” says Chris Lynch, the CEO of analytics software vendor AtScale. “You can’t have big data unless you have all the data. That’s the most important asset that any company has.”

A typical bank might house retail data on one vendor’s system and loan origination data on another system, Lynch says. “And that data isn’t easily aggregated to analyze. How archaic is that? And only because it runs on two different vendors’ systems,” he says. .... "

Graphs and Machine Learning

The following talk looks interesting, from Neo4j.  I like that this deals with an important aspect of risk in machine learning.  I plan to attend.  I note they also have several excellent free introductory ebooks at their link.

Hi Franz!
I want to share some ideas on how graphs are used to improve Machine Learning accuracy, precision, and recall. In this webinar on July 23, my colleague, Jennifer Reif, and I will walk through a process combining Spark and Neo4j with tips on how we dealt with our biggest mistakes. In particular, we’ll focus on feature engineering using graph algorithms.

Sneak peek below. Join us for the details!   https://go.neo4j.com/190723-register.html

Password Inventor Dies

Had not known of this particular invention claim.  But when we first used time-sharing computers, they were not password protected.  I remember using a form of non-digital key which acted as a password.

Computer password inventor dies aged 93


Dr Corbato became interested in computers while studying physics
Computer pioneer Fernando Corbato, who first used passwords to protect user accounts, has died aged 93.

Dr Corbato introduced the basic security measure while developing methods that let more people use a computer at the same time.

He developed a technique, called time-sharing, that divided up the processing power of a computer so it could serve more than one person at once.  .... " 

Language and Conversation

Straightforward statements about language and conversation.  We are having more conversations with machines these days, but still not complex ones.

Using Language 1st Edition   by Herbert H. Clark via Amazon

Herbert Clark argues that language use is more than the sum of a speaker speaking and a listener listening. It is the joint action that emerges when speakers and listeners, writers and readers perform their individual actions in coordination, as ensembles. In contrast to work within the cognitive sciences, which has seen language use as an individual process, and to work within the social sciences, which has seen it as a social process, the author argues strongly that language use embodies both individual and social processes. .... " 

Sunday, July 14, 2019

Argonne Makes Biggest File Transfer

Impressive, as suggested, useful for very large combinatorial problems with related data.

Argonne Team Makes Largest Single File Transfer in History
By Oliver Peckham in DataNami

A team of scientists at Argonne National Laboratory has broken a data transfer record by moving a staggering 2.9 petabytes of data for a research project.

The data – from three large cosmological simulations – was generated and stored on the Summit supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), which is currently rated as the world’s fastest supercomputer on the Top500 list at nearly 149 Linpack petaflops.

“We carried out three different simulations on Summit (each simulation resulted in a file transfer of 2-3PB) to model three different scenarios of the makeup of the Universe,” said Dr. Katrin Heitmann, a physicist at Argonne and lead researcher on the project. “We are trying to understand the subtle differences in the distribution of matter in the Universe when we change the underlying model slightly.”  ... '  

The Revolution of AI will be Unsupervised

I attended the ACM Webinar by Yann LeCun mentioned below.  Well done, covering the history and future of AI.  I will point to the slides and audio when they are announced this week.  A great, although often technical, talk on Deep Learning and AI.  Too hurried, but still very good if you are willing to go back through it.  A view of the history and predicted future of deep learning.  I think there will be some breakthrough in the addition of 'deep logic' to get reasonably general AI, and also better transparency.  Yes, we may get the logic of babies, but ultimately we need to break beyond that to the reasoning of adults.  Continue to watch this thread.

Also watching the evolution of intelligence in my granddaughter at age 2, and right, there is lots that is unsupervised.   But you can readily insert so much supervised learning.   And logic does emerge very early and often.

Artificial Intelligence / Machine Learning in Technology Review

The AI technique that could imbue machines with the ability to reason
Yann LeCun, Facebook’s chief AI scientist, believes unsupervised learning will bring about the next AI revolution.   by Karen Hao

At six months old, a baby won’t bat an eye if a toy truck drives off a platform and seems to hover in the air. But perform the same experiment a mere two to three months later, and she will instantly recognize that something is wrong. She has already learned the concept of gravity.

“Nobody tells the baby that objects are supposed to fall,” said Yann LeCun, the chief AI scientist at Facebook and a professor at NYU, during a webinar on Thursday organized by the Association for Computing Machinery, an industry body. And because babies don’t have very sophisticated motor control, he hypothesizes, “a lot of what they learn about the world is through observation.”  ... "


The Power and Limits Of Deep Learning
ACM TechTalk

Yann LeCun
New York University
Facebook AI Research
Talk:   https://event.on24.com/wcc/r/2014818/04C58DF355DF00190DE4F046CE243077?

Slides:  https://drive.google.com/file/d/1f0sPHv7ozHafASPwIOfuvF_RvP3FDPY0/view

Samsung Reality Glasses

Having been involved in early looks at 'smart glasses' for the enterprise, I am still intrigued by when they might become more common as a replacement for smartphones.  In particular, what design will make them work best?  What interaction mode: voice, gesture, glance?  Could they completely replace significant amounts of smartphone use?  Or are they just for specialty applications?  Will they need a 'fashion' component?  Assistance dimensions?

Below, Samsung has a new patent.

Samsung may develop foldable augmented reality glasses
The tech giant has filed a patent application for a pair.

By Mariella Moon, @mariella_moon ....   in Engadget

Saturday, July 13, 2019

Blockstack for Digital Rights

Blockstack, brought to my attention, has been around for some time.

 .... Decentralized computing network and app ecosystem
Blockstack apps protect your digital rights and are powered by the Stacks blockchain.
Secure your data with Blockstack and get a universal login
We provide private data lockers and a universal login with blockchain-based security and encryption — protecting your data from big internet companies. .... 

FAQ of uses.

Fill the Data Moats?

Had never encountered the idea formally, but after reading the description here I understand the caution.  In how many domains can you be assured of having the data that forms a moat?  Perhaps by having a transitional algorithm to create it.

In Andreessen Horowitz:

The Empty Promise of Data Moats   by Martin Casado and Peter Lauten

Data has long been lauded as a competitive moat for companies, and that narrative’s been further hyped with the recent wave of AI startups. Network effects have been similarly promoted as a defensible force in building software businesses. So of course, we constantly hear about the combination of the two: “data network effects” (heck, we’ve talked about them at length ourselves).

But for enterprise startups — which is where we focus — we now wonder if there’s practical evidence of data network effects at all. Moreover, we suspect that even the more straightforward data scale effect has limited value as a defensive strategy for many companies. This isn’t just an academic question: It has important implications for where founders invest their time and resources. If you’re a startup that assumes the data you’re collecting equals a durable moat, then you might underinvest in the other areas that actually do increase the defensibility of your business long term (verticalization, go-to-market dominance, post-sales account control, the winning brand, etc).  ... " 

Cloud Kitchens

Been a close watcher, but no practitioner, of the restaurant world.  Most recently seen: cloud kitchens.  Not new, but their digital implementation could be.  Further automation of selection, cooking, assembly, delivery?

Are cloud kitchens the next evolution of food delivery?  in Retailwire with expert comments:   By Tom Ryan

Cloud kitchens, which offer shareable cooking facilities to support food delivery, are just getting started but are already seen by some as a threat to restaurants and the overall grocery channel.

Also called ghost, virtual and dark kitchens, cloud kitchens are basically delivery-only restaurants. Similar to co-working spaces, the centralized cooking spaces gain efficiencies by being lined up assembly-like side by side. The sole on-the-go focus allows the businesses to build in further workflow efficiencies. .... " 

Google on Design of Data Viz

Useful view.  It's good to set up a standard and teach it within an enterprise.  Consider also its application to mobile devices.

Six Principles for Designing Any Chart
An introduction to Google’s new data visualization guidelines   By Manuel Lima from Medium

In August 2017, a group of passionate designers, researchers, and engineers at Google came together to create a comprehensive set of data visualization guidelines — covering everything from color, shape, typography, iconography, interaction, and motion. The success of this collaboration sparked the formation of Google’s first fully dedicated Data Visualization team, which kicked off in May 2018.

Over the last year, we’ve continued to work on understanding the needs, requirements, and desires shaping how people visualize and interact with information. Now, we want to share our insights with creators everywhere. We’ve launched detailed public guidelines for creating your own data visualizations, and distilled our top principles and considerations. Below, six strategies for designing any chart.  ..... "

Google Goes CallJoy for Small Business

Google brings AI-automated phone handling to small businesses: blocking unwanted calls and providing basic information about your services and hours.  A concierge-like service with a virtual agent to save you time and staff resources.  Recording and quality control via search are facilitated.  Nicely thought through.  I can also imagine that they might enhance this with more general chatbots.  Can this be abused, as was the worry with the Duplex system they demonstrated?

Helping small business phones get smart with CallJoy
By Bob Summers,   Google Blog
General Manager, CallJoy
Published May 1, 2019

Think about how many times you’ve called a small business lately. I call local businesses near my home and office all the time—just last weekend, I was on the phone with the nearest exotic pet store to see if they had food in stock for our family's pet lizard.

My team within Area 120, Google’s workshop for experimental projects, conducted testing and found that small businesses receive an average of 13 phone calls every day. If you apply that average to America's 30.2 million small businesses, that would equal roughly 400 million incoming daily calls to local businesses from consumers placing a to-go order, booking an appointment, inquiring about inventory and more. That’s why we built CallJoy, a cloud-based phone agent that enables small business owners to measure, improve and automate customer service.

Meet CallJoy

With CallJoy, small businesses have access to the same customer service options that have historically only been available to larger corporations. If you’re associated with small business using CallJoy, here’s how it works: After a quick setup, you’ll receive a local phone number. CallJoy will immediately begin blocking unwanted spam calls so you receive the calls that matter—the ones from customers. Then, when the phone rings, the automated CallJoy agent answers, greets callers with a custom message and provides basic business information (like hours of operation).

If the customer calling would like to complete a task which can be done online, like place a to-go order or book an appointment, CallJoy’s virtual agent will send the customer an SMS text message containing a URL. Whether the customer speaks with you, talks to an employee, or just interacts directly with the CallJoy agent, the call will be recorded and transcribed for quality purposes. This allows small business owners to tag and search each conversation based on topic. For example, a hair salon owner can search how many times a day callers ask about “men’s haircut pricing” or “wedding hairstyles.” From here, CallJoy compiles your data in an online dashboard and emails you a daily update, which includes metrics like call volume and new versus returning callers.  .... " 

Quantum Computing and Chemical Industry

Was unaware of this particular connection.  Description below.  Improved Modeling.

The next big thing? Quantum computing’s potential impact on chemicals
The chemical industry is poised to be an early beneficiary of the vastly expanded modeling and computational capabilities of quantum computing. Companies must act now to capture the benefits.

Sent from McKinsey Insights .... 

Friday, July 12, 2019

Storing Data in Music

Intriguing idea; in theory not that hard. Why might it be used? A kind of steganography?

Storing Data in Music 
ETH Zurich
By Fabio Bergamin

Researchers at ETH Zurich in Switzerland have developed a method for embedding data in music in a way that is imperceptible to the human ear, and transmitting it to a smartphone. The researchers found that under ideal conditions, the technique can transfer up to 400 bits per second without the average listener noticing. The researchers used the dominant notes in a piece of music, overlaying each of them with two marginally deeper and two marginally higher notes that are quieter than the dominant note. The team also used the harmonics of the strongest note, inserting slightly deeper and higher notes there as well. The data is stored in these additional notes. Said ETH’s Simon Tanner, “What we’re doing is embedding the data in the music itself; transmitting data from the loudspeaker to the mic.”  .... ' 
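A toy sketch of the embedding idea described above: mix quiet tones slightly above and below a dominant tone, and let the presence of those sidebands carry bits. This is my own illustration; the sideband spacings and amplitudes are invented, and the real ETH Zurich system involves psychoacoustic masking and a proper decoder:

```python
import math

# Toy illustration: overlay a dominant tone with much quieter sidebands
# whose presence encodes data bits. Parameters are invented, not from
# the ETH Zurich paper.
RATE = 44100  # samples per second

def tone(freq, n, amp):
    """Generate n samples of a sine tone at the given frequency/amplitude."""
    return [amp * math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

def embed(dominant_freq, bits, n=4410):
    """Mix a quiet sideband pair (one below, one above the carrier) into
    the dominant tone for each 1-bit; absence of the pair encodes 0."""
    samples = tone(dominant_freq, n, 1.0)
    for k, bit in enumerate(bits):
        if bit:
            offset = 40 + 10 * k  # hypothetical per-bit sideband spacing (Hz)
            for f in (dominant_freq - offset, dominant_freq + offset):
                side = tone(f, n, 0.02)  # far quieter than the carrier
                samples = [s + x for s, x in zip(samples, side)]
    return samples

signal = embed(440.0, [1, 0, 1])
print(len(signal))  # 4410 samples = 0.1 s of audio
```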

Amazon Wants a Rolling Home Assistant

Have now seen a number of rolling smart home assistants fail; will this one take it forward by using the Alexa infrastructure?

Amazon continues work on mobile home robot as it preps new high-end Echo, says report
Prototypes for the wheeled robot can be summoned using voice commands
By James Vincent in TheVerge

 Amazon is still working on a mobile home robot, according to a report from Bloomberg’s Mark Gurman. It’s also planning to add a high-end Echo to its lineup of Alexa devices.

We first heard about Amazon’s plans to build a wheeled home robot in April last year. The project is reportedly codenamed “Vesta” (after the Roman goddess of the hearth), and rumors suggest it’s a sort of “mobile Alexa” that’s able to follow users around their homes.

Today’s report doesn’t add significantly to this picture, but it seems Amazon is still keen to build the mobile device. It was apparently slated to launch this year but wasn’t ready for mass-production. Engineers have reportedly been pulled from other projects to work on Vesta, and Gurman reports that prototypes are “waist-high and navigate with the help of an array of computer-vision cameras.” They can also be summoned using voice commands.   ....  "

Humanoid Sign Language

Robotic Sign Language

UC3M Programs a Humanoid Robot to Communicate in Sign Language
Carlos III University of Madrid (Spain)
July 8, 2019

Researchers at Carlos III University of Madrid (UC3M) in Spain have programmed a humanoid robot, named TEO, to communicate in sign language. The team used simulations to indicate the specific position of each phalanx (finger bone) in depicting specific signs from Spanish Sign Language. The researchers then reproduced each position with the robot’s hand, trying to make the movements similar to those a human hand would make. To date, TEO has mastered the fingerspelling alphabet of sign language, as well as a very basic vocabulary related to household tasks. Said UC3M researcher Jennifer J. Gago, "The deaf people who have been in contact with the robot have reported 80% satisfaction, so the response has been very positive."
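Representing a fingerspelling pose as per-phalanx joint targets might look like the sketch below. The angle values and the pose table are illustrative placeholders, not UC3M's actual data or TEO's control interface:

```python
# Hypothetical representation of fingerspelling hand poses as flexion
# angles (degrees) for each phalanx, loosely following the article's
# description. Values are invented placeholders, not real Spanish Sign
# Language measurements.
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

POSES = {
    # Each letter maps each finger to angles for its three phalanges.
    "A": {"thumb": (0, 0, 0), "index": (90, 90, 45),
          "middle": (90, 90, 45), "ring": (90, 90, 45),
          "pinky": (90, 90, 45)},
    "B": {"thumb": (60, 30, 0), "index": (0, 0, 0),
          "middle": (0, 0, 0), "ring": (0, 0, 0),
          "pinky": (0, 0, 0)},
}

def joint_targets(letter):
    """Flatten a letter's pose into the ordered list of joint angles a
    hand controller could step through."""
    pose = POSES[letter]
    return [angle for finger in FINGERS for angle in pose[finger]]

print(joint_targets("A")[:3])  # thumb phalanges: [0, 0, 0]
```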

Conversation on Future of US Manufacturing

Podcast and transcript from K@W in a realm of advanced automation and robotics.

How the U.S. Can Regain Its Manufacturing Edge

BCG's Justin Rose discusses what the U.S. can do to regain its manufacturing edge.

Manufacturing in the U.S. has often been assumed to be in long-term decline, with the competitive advantage moving to low-cost countries such as Mexico and China. However, with advanced technologies likely to automate as much as 60% of factory tasks, in the future low-cost countries may no longer enjoy a competitive advantage, and the U.S. could well regain lost ground.

Still, the U.S. needs to be more aggressive in developing and adopting robotic technologies, according to Justin Rose, a partner and managing director in the Boston Consulting Group’s Chicago office. “If we want to remain a manufacturing powerhouse, I truly see this as the only choice. And the U.S. needs to lead here. It’s not enough to just let it happen naturally over time,” says Rose.

In a conversation with Knowledge@Wharton, Rose, who leads the operations and digital clients practice for BCG’s industrial clients globally, talks about the future of U.S. manufacturing and other related issues.

An edited transcript of the conversation follows:

Microsoft Teams Surges Ahead of Slack

Microsoft shows the numbers, somewhat unexpectedly. Have used Teams since its rollout. A quite fast pickup. It's well done for basic chat and beyond. Probably because it's integrated with the commonly installed Office.

Microsoft Teams Surges ahead with 13M users, likely surpassing Slack   By Duncan Riley in SiliconANGLE

Microsoft Corp. has finally revealed user numbers for its Teams service, likely confirming what was first reported in a survey in December: It’s now more popular than rival Slack Inc.

More than 13 million people were using Microsoft Teams on a daily basis as of June, with more than 19 million people using the service weekly, Microsoft said today. By comparison, Slack reported it had 10 million daily users in January.  ... " 

IOTA and Mobility

A further look at uses of IOTA in the automotive industry.

IOTA in the mobility sector: a complete rundown
in IOTAarchive

The days of assembling “dumb” parts into something that moves are (nearly) over. Vehicle sales for personal ownership are dwindling while new mobility concepts are on a steady rise.
In addition, a shift from combustion to electric engines and leaps in autonomous driving technology require a major overhaul of existing manufacturing processes, designs, infrastructure and business models.

The coming decade will bring the biggest disruption this century-old industry has ever seen. The New York Times recently ran the headline “The Car Industry is under siege”:
“It’s going to be the biggest change we’ve seen in the last 100 years, and it’s going to be really expensive even for the biggest companies” […] Major auto companies will spend well over $400 billion during the next five years […]

They must retool factories, retrain workers, reorganize their supplier networks and rethink the whole idea of car ownership. […] this upfront investment is a matter of survival. If they don’t adapt, they could become obsolete.

The electric carmaker Tesla, despite all its problems, is still worth more on the stock market than either Fiat Chrysler or Renault. Uber is worth much more than the two combined, even after reporting a $1 billion quarterly loss.

In essence, software is given a much higher priority, not only in terms of research and development for future mobility solutions, electric or autonomous vehicles but also in order to establish or streamline near-term mobility solutions.

Overall connectivity, automation, the reduction of friction between individual transportation solutions and processes, security, electric charging solutions and infrastructure, convenience features and new revenue models are only a few of the monumental challenges mobility providers face.

With the above comes an unfathomable amount of data, for which manufacturers and service providers have to decide who owns it, who to share it with and how to secure it.

Even though it is a very young technology that hasn't been declared production-ready yet, IOTA, as an immutable, feeless, permissionless and open-source communication and value-transfer protocol, has seen vast interest from mobility providers — in fact much more than any other existing cryptocurrency protocol. .... "

(Intro, considerably more at the link)

Thursday, July 11, 2019

Shopping Malls and Face Recognition

Not unexpected. How will shoppers react? Will the use be clearly posted? There are already lots of security cameras. We did lots of work tracking traffic in stores, and tested technical methods. Many malls used the same methods.

Shopping Centers Exploring Facial Recognition in Brave New World of Retail    By The Wall Street Journal 

Facial recognition is showing up in malls and shopping centers.

U.S. mall owners and retailers are ramping up their use of facial recognition to learn about shoppers' traffic patterns, employees' performance, and consumer responses to displays and marketing.

Said Sandy Sigal, CEO of shopping center owner NewMark Merrill, "We definitely at the minimum want features that accurately identify who your customers are, where in the shopping centers they go, and how long they spend there."

Data/intelligence provider Springboard Research sells tracking technology that plugs into security cameras, to anonymously monitor individuals and vehicles in malls.

Meanwhile, Remark Holdings' products allow clients to use facial-recognition data to track down customers and enroll them in loyalty programs.

From The Wall Street Journal  .... "

Journalism AI

Interesting to see what Google believes journalism to be. Most importantly, what are the goals of the idea? Truth or readership? Another example where the behavior of consumers, as it relates to those goals, needs to be better understood.

How AI could shape the future of journalism
By Mattia Peretti

Manager, Journalism AI
Published Jul 11, 2019

Editor’s note: What impact can AI have on journalism? That is a question the Google News Initiative is exploring through a partnership with Polis, the international journalism think tank at the London School of Economics and Political Science. The following post is written by Mattia Peretti, who manages the program, called Journalism AI. ... "

Machine Learning Interpretability: Free eBook

It's the ultimate need for any problem-solving method: interpreting the results you get from complex methods.

Just scanned this new O'Reilly eBook. 30+ pages, nicely done, includes links and technical details, but could still be utilized by engineering-oriented management. Good simple graphics, and emphasis on readily understood methods like decision trees. Could have used some more industry-specific examples, to indicate the breadth of needs. Link to a download below.

The latest data science superpower—interpreting ML
Amp up your superpowers with this free ebook  ... 

Machine learning algorithms are incredibly useful and increasingly complex. But when the complexity outpaces interpretability, human trust suffers, leading to stalled adoption, regulation, and difficulties with model documentation.

That's why the latest data science superpower is the ability to interpret machine learning—amp up your powers with this free ebook.

An Introduction to Machine Learning Interpretability takes you through the basics and provides a set of machine learning techniques, algorithms, and models that will help you improve the accuracy of your predictive models while maintaining interpretability.  ..... "
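One widely used interpretability technique the field relies on is permutation importance: shuffle one feature and measure how much the model's accuracy drops. A minimal pure-Python sketch on toy data (my own illustration, not an example from the book; real work would use a library implementation such as scikit-learn's):

```python
import random

# Toy permutation importance: the label depends only on feature 0, so
# shuffling feature 0 should hurt accuracy while shuffling the noise
# feature 1 should not.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

def model(row):
    """Stand-in 'trained model' that thresholds feature 0."""
    return 1 if row[0] > 0.5 else 0

def accuracy(X, y):
    return sum(model(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Accuracy drop after shuffling one feature column."""
    base = accuracy(X, y)
    col = [r[feature] for r in X]
    random.shuffle(col)
    X_shuf = [r[:feature] + [v] + r[feature + 1:] for r, v in zip(X, col)]
    return base - accuracy(X_shuf, y)

print(permutation_importance(X, y, 0) > permutation_importance(X, y, 1))  # True
```

The shuffled-feature accuracy drop is a model-agnostic signal: it works on any black box, which is exactly why it shows up in interpretability toolkits.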

Integrating Metadata

In the latest ACM, an interesting piece on transferring data, notable for mentioning the metadata ultimately involved ....

Extract, Shoehorn, and Load  By Pat Helland
Communications of the ACM, July 2019, Vol. 62 No. 7, Pages 32-33, 10.1145/333113

A lot of data is moved from system to system in an important and increasing part of the computing landscape. This is traditionally known as ETL (extract, transform, and load). While many systems are extremely good at this process, the source for the extraction and the destination for the load frequently have different representations for their data. It is common for this transformation to squeeze, truncate, or pad the data to make it fit into the target. This is really like using a shoehorn to fit into a shoe that is too small. Sometimes it's a needed step. Frequently it's a real pain!

Two major parts of ETL are the extraction and the load. These processes are where the rubber meets the participating data stores.

Extraction pulls data out of a source system. This may be relational data kept in a database. If so, it may be converted to an object relational format where each object transforms the join of multiple relational rows into a cohesive thing. Data is frequently organized as messages when it is sucked out. It's also common for data to be extracted from key-value stores where it is kept in a semi-structured representation.

Load happens when the data is placed into the target system. The target will have its own metadata describing the shape and form of the data in its belly. If the target is an analytics system, then its data will likely be loaded into a relational form.

While it may be counterintuitive, it is frequently useful to take relational data out of a system as objects; convert, massage, and shoehorn the data from one object representation to another; and load it into the target system in relational form.  .... "
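The shoehorn step Helland describes can be caricatured in a few lines: extracted records rarely match the target's metadata, so the load truncates, pads, or defaults fields to fit. The schema and field rules below are invented for illustration:

```python
# Toy ETL "shoehorn": force extracted records into a target schema by
# truncating, padding, or defaulting fields. Schema is hypothetical.
TARGET_SCHEMA = {
    "name":    {"type": str, "max_len": 10},
    "country": {"type": str, "max_len": 2},
    "age":     {"type": int, "default": -1},
}

def extract(rows):
    """Extract: source rows already converted to object (dict) form."""
    return [dict(r) for r in rows]

def shoehorn(record):
    """Transform: squeeze each field into the target's shape, losing
    data where the target is 'a shoe that is too small'."""
    out = {}
    for field, spec in TARGET_SCHEMA.items():
        value = record.get(field, spec.get("default"))
        if spec["type"] is str:
            value = (value or "")[: spec["max_len"]]  # truncate to fit
        out[field] = value
    return out

source = [{"name": "Maximiliano", "country": "Argentina"}]
loaded = [shoehorn(r) for r in extract(source)]
print(loaded[0])  # {'name': 'Maximilian', 'country': 'Ar', 'age': -1}
```

Note the silent loss: "Maximiliano" and "Argentina" both get clipped, which is exactly the pain the column warns about.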

Nestle Open Blockchain Pilot

Always impressed by Nestle's efforts in tech. Note the consumer-facing direction. Again, another example of seriousness in CPG, retail, and the supply chain. Here transparency to the consumer seems to be the goal.

Nestle Embarks on Open Blockchain Pilot  By Jacqueline Barba  in ConsumerGoods
Source: Nestle

Nestle is teaming up with blockchain platform OpenSC to launch a pilot that will trace milk back to its origins as a way of bringing more transparency to the packaged goods company's supply chain.

“This open blockchain technology will allow anyone, anywhere in the world to assess our responsible sourcing facts and figures,” said Benjamin Ware, global head of responsible sourcing for Nestle S.A., in a press release. ..... " 
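The traceability idea can be sketched as a hash-chained provenance log: each supply-chain step commits to the previous one, so tampering anywhere breaks verification. This is illustrative only, not OpenSC's actual platform or data model:

```python
import hashlib
import json

# Toy hash-chained provenance log for tracing a batch through supply-
# chain steps. Illustrative sketch, not the OpenSC platform.
def add_record(chain, data):
    """Append a record whose hash commits to the previous record."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"data": data, "prev": prev}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    chain.append({"data": data, "prev": prev, "hash": digest})
    return chain

def verify(chain):
    """Recompute each link; any tampering breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        payload = json.dumps({"data": rec["data"], "prev": prev},
                             sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(
                payload.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, {"step": "farm", "batch": "MILK-001"})
add_record(chain, {"step": "processing", "batch": "MILK-001"})
print(verify(chain))  # True
```

An open ledger adds the "anyone, anywhere" property on top of this: the records are published rather than held privately.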

Originating Press release:

Wednesday, July 10, 2019

Wal-Mart and the Workforce of the Future

In HBS Working Knowledge, an interesting view of changes in the workplace once it is digitized and moves toward automation, with Walmart as the example.

Walmart's Workforce of the Future     by Julia Hanna
A case study by William Kerr explores Walmart's plans for future workforce makeup and training, and its search for opportunities from digital infrastructure and automation.

Any discussion of the future of retail—or how we work—has to include Walmart. As of 2017, 90 percent of the US population lived within 10 miles of a Walmart store; with 11,766 locations worldwide and $514 billion in annual revenues, the discount store also has the distinction of being the largest private employer in the United States, with 1.5 million workers (2.2 million worldwide).

But that size and dominance doesn’t make Walmart immune to pressures faced by any other retail operation. In the second-year Harvard Business School course Managing the Future of Work, Professor William Kerr explores how technology and demographics are changing the way companies like Walmart, and their workers, operate.

“The pace of change in the retail sector is truly extraordinary,” says Kerr, the D’Arbeloff Professor of Business Administration and co-director of Harvard’s Managing the Future of Work initiative. “That requires a lot of reskilling of employees and hard choices, in an uncertain environment, in terms of how to deploy capital.”


Children's Intelligence in Connections and Flexibility

I have a very young granddaughter, so I am alert to examples of intelligence and how it can come about. It is an old AI idea: if you could build a very flexible learning model, with the correct sensors to understand its environment in context, you could learn your way to AI, like a human baby. Still, we have yet to do this for general and flexible situations. Here is something different yet: looking at the number of connections and their flexibility over time and learning.

A Separate Kind of Intelligence
A Talk By Alison Gopnik

It looks as if there’s a general relationship between the very fact of childhood and the fact of intelligence. That might be informative if one of the things that we’re trying to do is create artificial intelligences or understand artificial intelligences. In neuroscience, you see this pattern of development where you start out with this very plastic system with lots of local connections, and then you have a tipping point where that turns into a system that has fewer connections but much stronger, more long-distance connections. It isn’t just a continuous process of development. So, you start out with a system that’s very plastic but not very efficient, and that turns into a system that’s very efficient and not very plastic and flexible.

It’s interesting that that isn’t an architecture that’s typically been used in AI. But it’s an architecture that biology seems to use over and over again to implement intelligent systems. One of the questions you could ask is, how come? Why would you see this relationship? Why would you see this characteristic neural architecture, especially for highly intelligent species?
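The dense-then-pruned pattern can be caricatured in a few lines: begin with many weak, short-range links, then keep only a few, strengthened, with some long-range connections added. This is a toy illustration of the architecture pattern, not a model from the talk:

```python
import random

# Toy caricature of the developmental pattern: dense/plastic early,
# sparse/strong/long-range later. Parameters are invented.
random.seed(1)
N = 30  # number of nodes

# Early phase: dense short-range connections (|i - j| <= 3) with small
# random weights, i.e. plastic but inefficient.
early = [(i, j, random.uniform(0, 0.3))
         for i in range(N) for j in range(N)
         if 0 < abs(i - j) <= 3]

# Later phase ("tipping point"): prune to the strongest links and
# strengthen them, then add a few long-distance connections.
later = [(i, j, w * 4) for (i, j, w) in early if w > 0.25]
later += [(i, (i + N // 2) % N, 1.0) for i in range(0, N, 5)]

print(len(early), len(later))  # many weak links -> fewer strong ones
```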

ALISON GOPNIK is a developmental psychologist at UC Berkeley. Her books include The Philosophical Baby and, most recently, The Gardener and the Carpenter: What the New Science of Child Development Tells Us About the Relationship Between Parents and Children... "