Saturday, February 29, 2020

Replacing Data Scientists With AutoML?

Excerpt from a current KDnuggets article, which links further to a poll that asks the question of practitioners. My answer is yes: AutoML will replace the current needs for data science analysis within a decade. Of course the needs are likely to expand as well, so there will always be research and new requirements emerging, and interpretation for specific contexts will still be needed, just as there are needs for statisticians and analytics specialists for the same purposes.

When Will AutoML (Automated Machine Learning) Replace Data Scientists (if ever)?

Soon after tech giants Google and Microsoft introduced their AutoML services to the world, the popularity and interest in these services skyrocketed. We first review AutoML, compare the platforms available, and then test them out against real data scientists to answer the question: will AutoML replace us?

Introduction of AutoML:
One cannot introduce AutoML without mentioning the machine learning project’s life cycle, which includes data cleaning, feature selection/engineering, model selection, parameter optimization, and finally, model validation. As advanced as technology has become, the traditional data science project still incorporates a lot of manual processes and remains time-consuming and repetitive. ... "
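
The life-cycle steps above are exactly what AutoML tools aim to automate. As a minimal sketch of the idea (not any vendor's product), here is a scikit-learn grid search that automates two of those steps, model selection and parameter optimization; the dataset and search space are illustrative assumptions.

```python
# Sketch of what AutoML automates: trying several model families and
# hyperparameters, then validating the winner on held-out data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([("scale", StandardScaler()), ("model", LogisticRegression())])

# The search space: two model families, each with its own parameter grid.
param_grid = [
    {"model": [LogisticRegression(max_iter=5000)], "model__C": [0.1, 1.0, 10.0]},
    {"model": [RandomForestClassifier(random_state=0)],
     "model__n_estimators": [100, 300]},
]

search = GridSearchCV(pipe, param_grid, cv=5)  # parameter optimization
search.fit(X_train, y_train)                   # model selection
print(search.best_params_)
print("held-out accuracy:", search.score(X_test, y_test))
```

A real AutoML service layers feature engineering, smarter search strategies, and ensembling on top of this same basic loop.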

Extraterrestrial Protein Found

Proteins have been found in an extraterrestrial source for the first time, with differences from terrestrial proteins.

For the first time, scientists found a complete protein molecule in a meteorite — and they’re pretty sure it didn’t come from Earth.  in Futurism.com

Technical paper: https://arxiv.org/abs/2002.11688

After analyzing samples from the meteorite Acfer 086, a team of researchers from Harvard University and the biotech companies PLEX Corporation and Bruker Scientific found that the protein’s building blocks differed chemically from terrestrial protein. As they write in their research, which they shared on ArXiv on Saturday, “this is the first report of a protein from any extra-terrestrial source.”  .... "

'Autonomous' Translation at Hand?

A number of mobile devices now have instantaneous translation, so we have the promise of using them in any multilingual environment. I have a Google Assistant that will translate phrases. How is this done? Are we there yet, and what more do we need? A good review of the current tech. How has the barrier been removed?

Across the Language Barrier
By Keith Kirkpatrick
Communications of the ACM, March 2020, Vol. 63 No. 3, Pages 15-17
10.1145/3379495

"The greatest obstacle to international understanding is the barrier of language," wrote British scholar and author Christopher Dawson in November 1957, believing that relying on live, human translators to accurately capture and reflect a speaker's meaning, inflection, and emotion was too great of a challenge to overcome. More than 60 years later, Dawson's theory may finally be proven outdated, thanks to the development of powerful, portable real-time translation devices.

The convergence of natural language processing technology, machine learning algorithms, and powerful portable chipsets has led to the development of new devices and applications that allow real-time, two-way translation of speech and text. Language translation devices are capable of listening to an audio source in one language, translating what is being said into another language, and then translating a response back into the original language.

About the size of a small smartphone, most standalone translation devices are equipped with a microphone (or an array of microphones) to capture speakers' voices, a speaker or set of speakers to allow the device to "speak" a translation, and a screen to display text translations. Typically, audio data is captured by the microphones, processed using a natural language processing engine mated to an online language database located either in the cloud or on the device itself, and then the translation is output to the speakers or the screen. Standalone devices, with their dedicated translation engines and small portable form factors, are generally viewed as being more powerful and convenient than accessing a smartphone translation application. Further, many of these devices offer the ability to access translation databases stored locally on the device or access them in the cloud, allowing their use in areas with limited wireless connectivity.

Instead of trying to translate speech using complex rules based on syntax, grammar, and semantics, these language processing algorithms employ machine learning and statistical modeling. These initial models are trained on huge databases of parallel texts, or documents that are translated into several different languages, such as speeches to the United Nations, famous works of literature, or even multinational marketing and sales materials. The algorithms identify matching phrases across sources and measure how often and where words occur in a given phrase in both languages, which allows translators to account for differences in syntax and structure across languages. This data is then used to construct statistical models that link phrases in one language to phrases in the second, which allows for accurate and fast translation.

In practice, this means devices can translate between languages more quickly than ever before by using such modeling. Incorporating high-powered processors, quality microphones, and speakers into the device, a person can carry on a real-time, two-way conversation with someone who speaks an entirely different language. These devices represent a significant increase in accuracy and functionality above manual, text-based translation applications such as Google Translate. ... "
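
The phrase-matching mechanic described above is easy to sketch. Below is a toy phrase-based translator in Python; the phrase table is hand-made for illustration, standing in for the statistics a real system learns from the parallel corpora the article mentions.

```python
# Toy phrase-based translation. A real system learns phrase pairs and their
# probabilities from parallel corpora (UN speeches, literature, etc.) and
# scores many alternative segmentations; here we just take the longest match.
phrase_table = {
    ("guten", "morgen"): "good morning",
    ("guten",): "good",
    ("morgen",): "tomorrow",
    ("wie", "geht", "es", "dir"): "how are you",
}

def translate(words):
    out, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):  # try the longest phrase first
            if tuple(words[i:j]) in phrase_table:
                out.append(phrase_table[tuple(words[i:j])])
                i = j
                break
        else:
            out.append(words[i])            # unknown word: pass it through
            i += 1
    return " ".join(out)

print(translate("guten morgen".split()))    # -> good morning
print(translate("wie geht es dir".split())) # -> how are you
```

Note how matching the whole phrase ("guten morgen") avoids the wrong word-by-word reading ("good tomorrow"); that is the difference in syntax and structure across languages that the statistics capture.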

Technological Advances

Technology Review put out its annual review of tech that will be important in the coming year. Not just the tech areas, but how they will be specifically used. Most definitions are obvious; some are developed further in the piece. More detail at the link.

Here is our annual list of technological advances that we believe will make a real difference in solving important problems. How do we pick? We avoid the one-off tricks, the overhyped new gadgets. Instead we look for those breakthroughs that will truly change how we live and work.

Unhackable internet
Hyper-personalized medicine
Digital money
Anti-aging drugs
AI-discovered molecules
Satellite mega-constellations
Quantum supremacy
Tiny AI
Differential privacy
Climate change attribution  ... "

Goodbye Freeman Dyson

Physicist, mathematician and well-known writer Freeman Dyson has passed away. I followed his work for years. A powerful, widely thinking, iconoclastic mind.

Freeman Dyson: 1923 - 2020

Most recently see his work in The Edge:

And his Wikipedia entry.  https://en.wikipedia.org/wiki/Freeman_Dyson

 ".... Freeman John Dyson FRS (15 December 1923 – 28 February 2020) was an English-born American theoretical physicist and mathematician known for his work in quantum electrodynamics, solid-state physics, astronomy and nuclear engineering.[7][8] He was professor emeritus in the Institute for Advanced Study in Princeton, a member of the Board of Visitors of Ralston College[9] and a member of the Board of Sponsors of the Bulletin of the Atomic Scientists.[10]

Dyson originated several concepts that bear his name, such as Dyson's transform, a fundamental technique in additive number theory,[11] which he developed as part of his proof of Mann's theorem;[12] the Dyson tree, a hypothetical genetically-engineered plant capable of growing in a comet; the Dyson series, a perturbative series where each term is represented by Feynman diagrams; the Dyson sphere, a thought experiment that attempts to explain how a space-faring civilization would meet its energy requirements with a hypothetical megastructure that completely encompasses a star and captures a large percentage of its power output; and Dyson's eternal intelligence, a means by which an immortal society of intelligent beings in an open universe could escape the prospect of the heat death of the universe by extending subjective time to infinity while expending only a finite amount of energy. .... " 

China Digital Currency and Virus

The origin of these comments makes this interesting: a means of regulating otherwise hard-to-control financial results coming from less controllable events like epidemics?

Coronavirus outbreak could accelerate China's digital currency issuance, says former central bank president    by Celia Wan  in TheBlock

Efforts to combat the coronavirus could accelerate the Chinese central bank's plans to issue a digital currency, according to a former president of the People's Bank of China.

In a February 16 interview with China Daily, Lihui Li argued that a digital currency's efficiency, cost-effectiveness, and convenience make it especially desirable during an epidemic. Li previously helmed the People's Bank of China and now serves as the blockchain lead for the state-run National Internet Finance Association. .... " 

Friday, February 28, 2020

The Hutter Prize

Just informed of this work, via a podcast referenced below.  Still aiming to make some further sense of this in general.  Technical.

The Hutter Prize Site
Being able to compress well is closely related to intelligence as explained below. While intelligence is a slippery concept, file sizes are hard numbers. Wikipedia is an extensive snapshot of Human Knowledge. If you can compress the first 1GB of Wikipedia better than your predecessors, your (de)compressor likely has to be smart(er). The intention of this prize is to encourage development of intelligent compressors/programs as a path to AGI. ... 

Interview with Lex Fridman (26.Feb'20) (Video, Audio, Tweet)  ... " 

In the Wikipedia (The Hutter Prize)
"... The goal of the Hutter Prize is to encourage research in artificial intelligence (AI). The organizers believe that text compression and AI are equivalent problems. Hutter proved that the optimal behavior of a goal seeking agent in an unknown but computable environment is to guess at each step that the environment is probably controlled by one of the shortest programs consistent with all interaction so far.[4] However, there is no general solution because Kolmogorov complexity is not computable. Hutter proved that in the restricted case (called AIXItl) where the environment is restricted to time t and space l, a solution can be computed in time O(t2l), which is still intractable.

The organizers further believe that compressing natural language text is a hard AI problem, equivalent to passing the Turing test. Thus, progress toward one goal represents progress toward the other.[5] They argue that predicting which characters are most likely to occur next in a text sequence requires vast real-world knowledge. A text compressor must solve the same problem in order to assign the shortest codes to the most likely text sequences.  .... " 
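
The compression-prediction equivalence the organizers describe can be demonstrated with nothing but the standard library. A sketch (the sample strings are my own): score candidate continuations by how many extra compressed bytes each adds to its context; with such tiny strings the effect is small, but the predictable continuation should cost fewer bytes.

```python
import zlib

def extra_bytes(context: str, continuation: str) -> int:
    """Additional compressed bytes the continuation costs, given the context."""
    base = len(zlib.compress(context.encode()))
    both = len(zlib.compress((context + continuation).encode()))
    return both - base

context = "the cat sat on the mat. the cat sat on the "
for candidate in ["mat.", "quantum."]:
    print(candidate, extra_bytes(context, candidate))
# The repetitive, predictable continuation compresses into fewer extra bytes;
# in effect the compressor assigns it a higher probability.
```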

Tackling Flu with Virus Simulations

We worked with the San Diego Supercomputer Center. Quite a global challenge is emerging here, and the emerging approaches will be good to track for reapplication. What's the best process for future reapplications?

Researchers Tackle the Flu with Virus Simulations
UC San Diego News Center
Cynthia Dillon
February 25, 2020

University of California, San Diego (UCSD) and University of Pittsburgh researchers have developed new molecular virus simulations to help fight influenza. The researchers used the Influenza A H1N1 2009 pathogen to analyze two binding sites in the virus' molecular environment, yielding an all-atom, solvated, experimentally based integrative model. Assembling the simulation required combining different types of experimental data at different resolutions, and the researchers utilized the Blue Waters supercomputer at the University of Illinois at Urbana-Champaign to model atomic movements in the viral envelope. UCSD's Rommie Amaro said the research could enable a new approach to anti-flu drug development, through its finding "that an often-overlooked so-called 'secondary site' may be the first place the natural substrate of the flu binds."  ... '

AI and Aerospace

Model-based systems engineering is a long-time approach in this space. We did much of it in the enterprise. AI/machine learning can be used to focus subtasks with broader process or resource implications.

Modeling and simulation: Achieve next-level results with AI
Artificial intelligence-based approaches are key to realizing model-based systems engineering benefits.

Aerospace executives can now optimize manufacturing processes by leveraging artificial intelligence (AI) with high-performance computing (HPC) technologies and the digital thread. A digital thread follows the lifecycle of a product from design inception through engineering and product lifecycle management, to manufacturing instructions, supply chain management, and through to service events. You'll be able to enhance the aerospace design process to protect budgets, avoid static production rates, and nudge your business ahead of competitors.

Even better, as aerospace design becomes more complex, AI can help keep your business ahead of the innovation curve.... '

Thursday, February 27, 2020

P&G Launches Brand that kills some Coronaviruses

A surface cleaner, and it's unclear if it is useful for the particular kind of virus involved. The article talks about that later. Overall NOT a claim of efficacy. It is good to see companies looking at the possibilities.

P&G launches brand that kills some coronaviruses, other germs   By Barrett J. Brunsman  – Staff reporter, Cincinnati Business Courier

Feb 27, 2020, 5:15am EST Updated 6 hours ago

Procter & Gamble Co. has launched a new line of home sanitizing products formulated to kill bacteria as well as cold and flu viruses – including some forms of the human coronavirus.

The new brand’s name, Microban 24, refers to the product’s ability to protect surfaces against germs for an entire day.

“Microban 24 provides a protective shield that keeps killing bacteria for a full 24 hours, even when the surface is touched or contacted multiple times,” stated the Cincinnati-based maker of consumer goods (NYSE: PG). ..... ' 

Next Level of Police Enabling Technology

An obvious next step. I see this being done with our local police nearby; I will look into this and see if I can get a demonstration.

Axon Rolls Out the Next Level of Police Technology: Live-Streaming Body Cameras   The Washington Post     By Tom Jackman

Police body-camera supplier Axon has deployed live-streaming cameras to the Cincinnati Police Department, allowing officers to show dispatchers or commanders crises as they unfold in real time, and helping rescuers find officers in trouble. The system automatically turns the camera on when a gun is drawn, emergency lights are activated, or a Taser is powered up. While the cameras will not include facial recognition software, they will have face-detection capabilities so police can quickly find video segments with people and react faster when footage is required for wider dissemination. Said Barry Friedman, a New York University law professor and founder of the Policing Project, “Body cameras go into sensitive places. With streaming, it won’t just be the officer, but somebody else. There have to be serious limits as to whom the video is streamed.”....'

Using Google Earth on More Browsers

In its earlier days we built some training systems that used Google Earth as a hack for backgrounds to show how we manufactured, shipped, and sold goods worldwide. But they would only run on some browsers. Google has expanded the range of possibilities for such applications. I consider Google Earth to be an amazing thing for geography understanding.

It took Google three years to add Firefox, Edge and Opera support to Google Earth    by Martin Brinkmann in gHacks

When Google unveiled the new Google Earth back in 2017, it switched Google Earth from being a desktop application to a web application. The company made Google Earth Chrome-exclusive at the time, stating that the company's own Chrome browser was the only browser to support Native Client (NaCl) technology at the time and that the technology "was the only way we [Google] could make sure that Earth would work well on the web".

The emergence of new web standards, WebAssembly in particular, allowed Google to switch to a standard supported by other browsers. The company launched a beta of Google Earth for browsers that support WebAssembly six months ago; Firefox, Edge and Opera were mentioned specifically.

Today, Google revealed that it has made Google Earth available officially for the web browsers Mozilla Firefox, Microsoft Edge (Chromium-based), and Opera. ... "

Robots as our Bosses

This does not necessarily mean there will soon be android devices wandering the aisles of your workplace. Most workplaces are already measured by goals attained, measured and analyzed digitally. But the complete systemization of that has never been there, and we are moving that way. Will we see this more ominously?
 
In warehouses, call centers, and other sectors, intelligent machines are managing humans, and they’re making work more stressful, grueling, and dangerous

By Josh Dzieza@joshdzieza  in TheVerge
 
On conference stages and at campaign rallies, tech executives and politicians warn of a looming automation crisis — one where workers are gradually, then all at once, replaced by intelligent machines. But their warnings mask the fact that an automation crisis has already arrived. The robots are here, they’re working in management, and they’re grinding workers into the ground.

The robots are watching over hotel housekeepers, telling them which room to clean and tracking how quickly they do it. They’re managing software developers, monitoring their clicks and scrolls and docking their pay if they work too slowly. They’re listening to call center workers, telling them what to say, how to say it, and keeping them constantly, maximally busy. While we’ve been watching the horizon for the self-driving trucks, perpetually five years away, the robots arrived in the form of the supervisor, the foreman, the middle manager. .... "

Wednesday, February 26, 2020

Should All Children Learn Code?

Nothing wrong with learning coding.   But should it be emphasized for everyone?   Required?   Better to learn more about math and logic.   As coding gets more complex, it gets closer to the use of math.  At lower levels it's more like logic combined with being able to pay attention to detail.  And knowing how to find useful resources.  That part, the automation of coding, is quickly progressing.   Will we need hand-coding in a decade?   Coding is a good way to learn that you have to pay close attention to detail to get complex things done.   Not for the 'Vocational vs Liberal Ed' reasons mentioned below.

Should All Children Learn to Code by the End of High School?
The Wall Street Journal
By Robert Sedgewick; Larry Cuban

Princeton University's Robert Sedgewick and Stanford University's Larry Cuban disagree on whether computer coding should be a graduation requirement for high school students. Sedgewick feels incorporating coding skills into the K-12 curriculum benefits students and society, while Cuban said it threatens to turn public schools into job-training sites for technology companies. Sedgewick sees coding as critical to cultivating logical thinking, creativity, and problem-solving in students. Cuban sees no clear evidence that coding skills are transferable to cognitive domains like math, English, history, and science, and he warned that imposing such vocational training would undermine public schools' wider mission to foster social mobility, individual development, and civic engagement. ... '

Gartner Magic Quadrant for Data Science and Machine Learning

KDnuggets publishes and analyzes the most recent Gartner quadrant analysis. While I am skeptical of this approach, it does have a useful list of participants, which can fill in the gaps. Click the link below to get to the 'Magic Quadrant'. Some of the included analysis by KDN is more interesting, with short, general, non-technical descriptions of what many companies are doing.

The Gartner 2020 Magic Quadrant for Data Science and Machine Learning Platforms has the largest number of leaders ever. We examine the leaders and changes and trends vs previous years.
By Gregory Piatetsky, KDnuggets.

Last week Gartner released its highly anticipated report and magic quadrant (MQ) for Data Science and Machine Learning Platforms (DSML), and you can get copies from several vendors - see a list at the bottom of this blog. In previous years, the MQ name kept changing but the 4 leaders remained the same. Now the name has remained the same as in the 2019 MQ and 2018 MQ reports, reflecting a more mature understanding of the DSML field, but the contents, especially the leader quadrant, have changed dramatically, reflecting accelerating progress and competition in the field.

The 2020 MQ report went back to evaluating 16 vendors (down from 17 last year), placed as usual in 4 quadrants, based on completeness of vision (vision for short) and ability to execute (ability for short).

We note that the report included only vendors with commercial products, and did not consider open-source platforms like Python and R, even though those are very popular with Data Scientists and Machine Learning professionals. ... "

Wharton on Global Economic Impact of Coronavirus

A considerable piece here from Wharton, notable for its broader implications about epidemics and their influence on economic systems, and the embedding of uncertainty in large-scale effects. Note the mention of the SARS example, now two decades ago.

Containing the Coronavirus: What’s the Risk to the Global Economy?

On Monday, February 24, stock indices tumbled, spooked by reports that the coronavirus outbreak that emerged in China is spreading to countries including Italy, Iran and South Korea. A day later, trading in stocks across world markets remained choppy, reflecting hope that the economic fallout might be manageable — just as damage from the SARS epidemic was some two decades ago — but also fear that the economic impact could be significant and linger longer.

The markets’ movements mirror the uncertainty that prevails and persists not just in the U.S. but all over the world. Several weeks into the coronavirus outbreak that has brought the world’s second largest economy to its knees, some of the most basic aspects of the virus remain unknown. It’s not yet clear how widely beyond China COVID-19 will spread; this week, numbers of infected individuals have surged outside China. Still, exactly how it is transmitted, how easily, and how lethal it might be are aspects of this coronavirus that remain to be uncovered, according to University of Pennsylvania scientists.

As the human toll mounts, so does the economic damage. The business realm, of course, tends to shudder in the face of uncertainty, and right now, with reports on the seriousness of the coronavirus evolving each day if not each hour, the eyes of commerce are on epidemiology.  ... " 

Google Updates DialogFlow

Seeing augmented capabilities for virtual agents. This might also be a means to better connect such agents to human agents. Makes sense in many kinds of service systems. See ISSIP, which I am part of.

Google updates Dialogflow AI engine to help customers create better virtual agents   By Mike Wheatley, SiliconAngle 

Google LLC today debuted some important updates to its Dialogflow, the main technology that powers its Contact Center AI service for automating interactions with customers in call centers.

Dialogflow is a conversational artificial intelligence engine used to create virtual agents that can understand and respond to all manner of queries from callers, using both voice and text as a medium.

The main update today is a new “Dialogflow Mega Agent” in beta test mode that increases the number of “intents” available to virtual agents by up to 10 times, to 20,000 in total.   ... "

Building Biometrics National IDs

Despite the pushback on some of these ideas, I think it inevitable that they will be widely implemented.  Identity is a key aspect of conversational understanding that leads to intelligent interaction.

Countries Debate Openness of Future National IDs
IEEE Spectrum
By Lucas Laursen

More than half of African countries are developing some form of biometric or digital national identification (ID) in response to major international calls to establish legal IDs for the nearly 1 billion people who currently lack them. However, this ID boom often moves faster than data protection laws. For countries that move forward with digital ID laws, opportunistic vendors can lock them into their products. For example, Kenya is using software that is only accessible to government agencies and contractors, a fact that is concerning to some critics. Meanwhile, India's Modular Open Source Identity Platform (MOSIP) may not solve all the security issues associated with early national ID ecosystems, but it could empower governments to expect more from the vendors that support future national IDs. ...."

Amazon Continues Work on Cashierless Systems

Despite some reports that Amazon was decreasing work on cashierless systems:

Amazon Opens Cashierless Supermarket in Latest Push to Sell Food in the WSJ

The e-commerce giant is also looking into licensing the checkout-free technology to rival retailers
By Sebastian Herrera and Aaron Tilley 

Amazon.com Inc. rolled out its checkout-free “Go” technology in a large grocery store and plans to license the cashierless system to other retailers.

Amazon Go Grocery opened in Seattle on Tuesday. It uses an array of cameras, shelf sensors and software to allow shoppers to pick up items as varied as organic produce and wine and walk out without stopping to pay or scan merchandise. Accounts are automatically charged through a smartphone app once shoppers leave the store..... " 

...

When Your AI Learns to Lie

Some very important questions as these methods evolve.

AI Deception: When Your Artificial Intelligence Learns to Lie

We need to understand the kinds of deception an AI agent may learn on its own before we can start proposing technological defenses

This piece was written as part of the Artificial Intelligence and International Stability Project at the Center for a New American Security, an independent, nonprofit organization based in Washington, D.C. Funded by Carnegie Corporation of New York, the project promotes thinking and analysis on AI and international stability. Given the likely importance that advances in artificial intelligence could play in shaping our future, it is critical to begin a discussion about ways to take advantage of the benefits of AI and autonomous systems, while mitigating the risks. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

In artificial intelligence circles, we hear a lot about adversarial attacks, especially ones that attempt to “deceive” an AI into believing, or to be more accurate, classifying, something incorrectly. Self-driving cars being fooled into “thinking” stop signs are speed limit signs, pandas being identified as gibbons, or even having your favorite voice assistant be fooled by inaudible acoustic commands—these are examples that populate the narrative around AI deception. One can also point to using AI to manipulate the perceptions and beliefs of a person through “deepfakes” in video, audio, and images. Major AI conferences are more frequently addressing the subject of AI deception too. And yet, much of the literature and work around this topic is about how to fool AI and how we can defend against it through detection mechanisms.

I’d like to draw our attention to a different and more unique problem: Understanding the breadth of what “AI deception” looks like, and what happens when it is not a human’s intent behind a deceptive AI, but instead the AI agent’s own learned behavior. These may seem somewhat far-off concerns, as AI is still relatively narrow in scope and can be rather stupid in some ways. To have some analogue of an “intent” to deceive would be a large step for today’s systems. However, if we are to get ahead of the curve regarding AI deception, we need to have a robust understanding of all the ways AI could deceive. We require some conceptual framework or spectrum of the kinds of deception an AI agent may learn on its own before we can start proposing technological defenses.  ... " 
------
" ... I'm not worried about the machine that passes the Turing Test, I'm worried about the machine smart enough to pretend to fail it. ..."

Tuesday, February 25, 2020

Using Business Rules and Expertise

Via DSC, what looks to be a good podcast on this topic. It has been a favorite approach of mine since the beginning. Narrow machine learning methods can be very valuable, but to deliver them they have to be part of existing or proposed tasks or businesses. Operationally embedded. That requires real-life decision rules. Access information for the podcast at the link below. More on this topic to follow.

Data Science Fails: Ignoring Business Rules & Expertise 

Nowadays, we have unprecedented access to data, plus the computing power and advanced algorithms to find correlations. We look at a cautionary case study of a cancer center that embarked on an ambitious plan to use AI to eradicate cancer. When AI is being asked to make decisions with significant consequences, such as life and death healthcare recommendations, it needs to be trustworthy. But if you don't follow best practices, if you don't include the knowledge of subject matter experts, and if you don't enforce business rules, your AI project will not be successful.

In this latest Data Science Central podcast, learn four AI governance practices that can help you achieve AI success.

Speaker: Colin Priest, VP of AI Strategy - DataRobot
Hosted by: Sean Welch, Host and Producer - Data Science Central .... 

Magnetoencephalography

This approach was just brought again to my attention.  Have been asked how it could be applied to the same kinds of diagnostic queries as EEG.  And if it has ever been applied to the same problems that have been addressed by Neuromarketing.  Links to such studies?  The article below describes and compares the two methods.

Intro in the Wikipedia:

Magnetoencephalography (MEG) 

Magnetoencephalography (MEG) is a functional neuroimaging technique for mapping brain activity by recording magnetic fields produced by electrical currents occurring naturally in the brain, using very sensitive magnetometers. Arrays of SQUIDs (superconducting quantum interference devices) are currently the most common magnetometer, while the SERF (spin exchange relaxation-free) magnetometer is being investigated for future machines.[1][2] Applications of MEG include basic research into perceptual and cognitive brain processes, localizing regions affected by pathology before surgical removal, determining the function of various parts of the brain, and neurofeedback. This can be applied in a clinical setting to find locations of abnormalities as well as in an experimental setting to simply measure brain activity.[3]... '

Pepsico Takes Analytics Closer to Consumer

A fascinating view of what a big, close-to-the-consumer company is doing to figure out how its marketing works. A relatively rare mention of ROI as it measures efforts.

PepsiCo getting closer to the consumer with data analytics in FoodBusiness

BOCA RATON, FLA. — PepsiCo, Inc. is investing in its data analytics capabilities to stay ahead of the evolving consumer marketplace, said Hugh F. Johnston, chief financial officer, during the company’s Feb. 20 presentation at the Consumer Analyst Group of New York conference. As an example, he said PepsiCo is digitizing its marketing and consumer insights efforts.

“By enhancing our ecosystem of data and consumer insights, we can increase the efficiency of our promotions and become more dynamic with our A&M (advertising and marketing) spending, in effect, getting more growth for the same spend,” he said. “This includes leveraging our in-house automation capabilities to deploy targeted higher R.O.I. (return on investment) marketing programs in a manner that's both efficient and scalable.”

With algorithms the company continues to “train,” Mr. Johnston said the company is improving marketing R.O.I. on a continuous basis.

“We can tailor and target ads with greater precision, optimizing the R.O.I. of individual campaigns or of the digital elements of the media mix and building a single view of the consumer by integrating consumer data from various sources, such as C.R.M. data, brand sites, cookie and I.D. data as well as second-party and third-party assets,” Mr. Johnston said. “By capturing and analyzing more granular consumer-level data, we can understand the consumer in a more individualized way to both customize communication and execute in every store with precisely the right products in the right location at the right price.” .... ' 

On Editing Your Own Self Image

This came to my attention recently when I was asked for a simple self-image to use on a startup web site. How much should we edit? We have the tools now, at the very least by taking lots of images in many contexts. But in the past, or now, we could go to a professional portraitist and get advice on how to produce an image. Below an intro; much more at the link.

Editing Self-Image
By Ohad Fried, Jennifer Jacobs, Adam Finkelstein, Maneesh Agrawala
Communications of the ACM, March 2020, Vol. 63 No. 3, Pages 70-79
10.1145/3326601

Self-portraiture has become ubiquitous. Once an awkward feat, the "selfie"—a picture of one's self taken by one's self, typically at arm's length—is now easily accomplished with any smartphone, and often shared with others through social media. A 2013 poll indicated selfies accounted for one-third of photos taken within the 18-to-24 age group. Google estimated in 2014 that 93 million selfies were taken per day just by Android users alone. More recently, selfie taking has begun to influence human behavior in the physical world. Museums have started to develop environments that cater specifically to Instagram and Snapchat users. Even facial plastic surgeons have observed an increase in the number of patients that seek plastic surgery specifically to look better in selfies (55% of surgeons had such patients in 2017, up 13% from 2016).2 Perhaps most strikingly, plastic surgeons have begun reporting a new phenomenon termed "Snapchat dysmorphia," where patients seek surgery to adjust their features to correspond to those achieved through digital filters ... "

Framework to Predict Success of Big Data

In the HBR,  thoughts on predicting the success of Big Data.

Use This Framework to Predict the Success of Your Big Data Project
By Carsten Lund Pedersen, Thomas Ritter

Big data projects that revolve around exploiting data for business optimization and business development are top of mind for most executives. However, up to 85% of big data projects fail, often because executives cannot accurately assess project risks at the outset. We argue that the success of data projects is largely determined by four important components — data, autonomy, technology, and accountability — or, simply put, by the four D.A.T.A. questions. These questions originate from our four-year research project on big data commercialization.  ... " 

Simulation Systems for Manufacturing

We broadly did simulations for many applications in manufacturing and beyond; see the many posts here under the tag below.

More Manufacturers Bet on Simulation Software
The Wall Street Journal
By Angus Loten

Manufacturers increasingly are using simulation software to test new or revamped production lines prior to operation. Market research firm ABI Research estimated roughly 110,000 companies worldwide will employ simulation software within the next five years. ABI's Michael Larner said manufacturers use simulation software to assess how planned production-line changes will likely impact production. The software integrates computer-aided design apps, business process management software, and other systems to test changes before implementation. Said Jodi Euerle Eddy with medical-devices manufacturer Boston Scientific, "Simulation software allows us to improve our capabilities and optimize spend, enabling faster problem-solving and continuous improvement."  .... ' 

Monday, February 24, 2020

Algorithm Predicts Corn Yields

What seems to be some direct use of CNNs for yield prediction.

AI Algorithm Better Predicts Corn Yield
By Illinois ACES
February 24, 2020
  
An interdisciplinary research team at the University of Illinois at Urbana-Champaign has developed a convolutional neural network (CNN) that generates crop yield predictions, incorporating information from topographic variables such as soil electroconductivity, nitrogen levels, and seed rate treatments.

The team worked with data captured in 2017 and 2018 from the Data Intensive Farm Management project, in which seeds and nitrogen fertilizer were applied at varying rates across 226 fields in the Midwest U.S., Brazil, Argentina, and South Africa.

In addition, on-ground measurements were combined with high-resolution satellite images from PlanetLab to predict crop yields.

Said Illinois's Nicolas Martin, while "we don’t really know what is causing differences in yield responses to inputs across a field … the CNN can pick up on hidden patterns that may be causing a response.”    ... '
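
As a rough illustration of the kind of model described (not the Illinois team's actual network), here is a minimal convolutional network in PyTorch that maps a multi-channel field raster to a scalar yield estimate; the channel choices, grid size, and architecture are assumptions.

```python
import torch
import torch.nn as nn

class YieldCNN(nn.Module):
    """Toy CNN: multi-channel field map in, one yield prediction out."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to 1x1, so any field size works
        )
        self.head = nn.Linear(32, 1)  # regression head: predicted yield

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One fake "field": three layers (say seed rate, nitrogen, electroconductivity)
# on a 64x64 grid. Real inputs would come from trial data and satellite imagery.
field = torch.randn(1, 3, 64, 64)
print(YieldCNN()(field).shape)  # torch.Size([1, 1])
```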

Chess Grandmaster Says AI will Destroy Most Jobs

Kasparov is hardly an expert on jobs, but major changes are likely to occur.

The chess grandmaster who was beaten by a computer predicts that AI will 'destroy' most jobs
By Aaron Holmes in BusinessInsider

- Chess grandmaster Garry Kasparov, who lost to IBM's Deep Blue computer in 1997, predicts that AI will 'destroy' most jobs in the US.
- Kasparov gave an interview with WIRED's Will Knight last week at an AI summit in New York.
- "For several decades we have been training people to act like computers, and now we are complaining that these jobs are in danger," Kasparov said. "Of course they are."  ... '

Drones for Disaster Relief

Here looking at how drones can react and recover from disaster disruptions to provide ongoing support.

Could Nearly Invincible Drone Be the Future of Disaster Relief?
USC Viterbi News
By Caitlin Dawson

At the University of Southern California Viterbi School of Engineering, researchers have developed autonomous drones that can recover from collisions and other disruptions. Using reinforcement learning to train the controller that stabilizes the drones in simulation mode, the researchers presented randomized challenges to the controller until it learned to navigate them. The researchers tested the controller by subjecting them to disruptions like kicking and pushing; the drones were able to recover 90% of the time. Viterbi’s Gaurav Sukhatme said the research resolves two important issues in robotics: robustness (“if you’re building a flight control system, it can’t be brittle and fall apart when something goes wrong”), and generalization (“sometimes you might build a very safe system, but it will be very specialized”). ... ' 
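
The training recipe described, randomized challenges in simulation until the controller learns to cope, is commonly called domain randomization. Here is a self-contained toy version, not the USC controller: a 1-D hover task with random wind and a random "kick" each episode, and a simple random-search learner tuning two controller gains.

```python
import random

def rollout(gains, steps=200):
    """Simulate a 1-D hover task with a random wind and a random impulse
    'kick'; return the negative accumulated squared position error."""
    kp, kd = gains
    wind = random.uniform(-2.0, 2.0)
    kick_step = random.randrange(steps)
    pos, vel, cost, dt = 1.0, 0.0, 0.0, 0.05
    for t in range(steps):
        thrust = -kp * pos - kd * vel  # linear feedback controller
        vel += (thrust + wind + (20.0 if t == kick_step else 0.0)) * dt
        pos += vel * dt
        cost += pos * pos
    return -cost

def score(gains, trials=20):
    """Average performance across many randomized disturbances."""
    return sum(rollout(gains) for _ in range(trials)) / trials

# Random-search "training": keep gain tweaks that survive the randomized
# disturbances better on average. Randomization forces a robust controller.
best = (1.0, 1.0)
best_score = score(best)
for _ in range(300):
    cand = (best[0] + random.gauss(0, 0.3), best[1] + random.gauss(0, 0.3))
    cand_score = score(cand)
    if cand_score > best_score:
        best, best_score = cand, cand_score
print("learned gains:", best)
```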

MS Azure Sphere Aims to Secure IoT

The security of many more IoT devices is of increasing concern. Microsoft advances its solution.

Microsoft Azure Sphere launches in general availability
  By Kyle Wiggers in Venturebeat

In April 2018, nearly two years ago, Microsoft announced Azure Sphere, a program to better secure the 41.6 billion internet of things (IoT) devices expected to be connected to the internet by 2025. Now, following a lengthy preview, the tech giant is this week launching Azure Sphere in general availability.

Eligible customers will be able to sign up in the coming days. Azure Sphere doesn’t have ongoing fees associated with it, but there’s a one-time cost for a chip (as little as $8.65) that includes access to all of Sphere’s components, plus OS updates for the lifetime of the chip. Alternatively, developers can license Visual Studio and Microsoft’s Azure IoT services to develop apps for Sphere “more efficiently,” according to Azure IoT CVP Sam George.  .... "

Google Changes its Terms of Service

Intriguing look at how Google has changed its terms of service.    Attributable in part to some pressure by consumers on the operation of big tech.    Below a summary, then links to much more detail.

SUMMARY OF CHANGES TO GOOGLE’S TERMS OF SERVICE

This summary should help you understand the key updates we made to our Terms of Service. We hope this page is helpful, but we urge you to read the Terms in full.

What’s covered in these terms
This section describes the purpose of the Terms and provides an overview of key sections.

This section is new
We added links to documents that help shape the Terms, including certain things we’ve always believed to be true and a new overview of the way Google’s business works  ..... '

Improving Machine Learning

Treatment and analysis of outliers and noise are key to improving these methods. The claim here is that this approach does exactly that; it remains to be seen how well it will work. Technical.

Mathematicians propose new way of using neural networks to work with noisy, high-dimensional data   by RUDN University 

Mathematicians from RUDN University and the Free University of Berlin have proposed a new approach to studying the probability distributions of observed data using artificial neural networks. The new approach works better with so-called outliers, i.e., input data objects that deviate significantly from the overall sample. The article was published in the journal Artificial Intelligence.  ... "

Sunday, February 23, 2020

Design Thinking, Improv

Upcoming short free talk.

You are invited to attend:

ISSIP Service Design Speaker Series

Radical cross-collaboration at Cisco with Design Thinking & Improv
on Wednesday, February 26th 2020, 11:00 AM - 11.45 AM US Eastern Standard Time

Plants need nourishment from the soil, water, and sun. Humans need authentic connection, empathy, understanding, being seen and heard, a sense of direction, and an understanding of what comes next. How Might We Achieve Radical Collaboration with Design Thinking & Improv?

Speaker:   Asli Ors – Cisco

About the speaker:

Asli Ors is a design thinker, innovation catalyst, and improviser who helps teams create empathic solutions, built on intelligent networks, that solve customers’ challenges with design thinking. She believes that when play and work tangle, magic happens. By magic, she means meeting business goals by aligning on core requirements, design principles, and project priorities in a timely manner.
Service Design Series Chaired by - Payal Vaidya and Daniela Sangiorgi

Zoom information provided on registration. 

Register

Game Based Learning and the Power of the Human Element

Of interest to my readers,  following up:

Franz,

My new, co-authored book has been launched and may be of interest to members of your community.

Emotify!: The Power of the Human Element in Game-Based Learning, Serious Games and Experiential Education is available at https://www.amazon.com/gp/product/1704604680/
or AMAZON UK: https://www.amazon.co.uk/dp/product/1704604680/

You may find the following links of interest:

https://www.linkedin.com/pulse/death-lecture-michael-sutton-phd-cmc-fbei-mit/
https://www.seriousgamemarket.com/2019/11/new-book-emotify-dynamics-of-successful.html
https://www.shortsims.com/single-post/2019/11/29/Episode-29-With-Dr-Michael-Sutton-on-The-Use-of-Quests-and-Other-Activities-to-Develop-Emotional-Intelligence-from-the-New-Book-Emotify

Please contact me at michaeljdsutton@gmail.com with any questions, comments, or constructive criticism.   Have fun reading!         Michael Sutton

Google Power Generating Kites to be Axed

I remember reading the early reports on this; the possibilities seemed very interesting, but the cost and risk were unclear.

Google parent pulls the plug on power-generating kite project
First moonshot project to be axed since Google cofounders stepped back from management.  ....

By  Dave Lee, Financial Times, ArsTechnica

Training Autonomous AI Combatants

Expect this, at all sorts of levels of autonomy, offensive and defensive.

How to Train Your AI Soldier Robots (and the Humans Who Command Them)  in The RAND Blog by Thomas Hamilton 

This article was submitted in response to a call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Robert Work and Eric Schmidt. You can find all of RAND's submissions here.

Artificial intelligence (AI) is often portrayed as a single omnipotent force—the computer as God. Often the AI is evil, or at least misguided. According to Hollywood, humans can outwit the computer (“2001: A Space Odyssey”), reason with it (“Wargames”), blow it up (“Star Wars: The Phantom Menace”), or be defeated by it (“Dr. Strangelove”). Sometimes the AI is an automated version of a human, perhaps a human fighter's faithful companion (the robot R2-D2 in “Star Wars”).

These science fiction tropes are legitimate models for military discussion—and many are being discussed. But there are other possibilities. In particular, machine learning may give rise to new forms of intelligence; not natural, but not really “artificial” if the term implies having been designed in detail by a person. Such new forms of intelligence may resemble that of humans or other animals, and we will discuss them using language associated with humans, but we are not discussing robots that have been deliberately programmed to emulate human intelligence. Through machine learning they have been programmed by their own experiences. We speculate that some of the characteristics that humans have evolved over millennia will also evolve in future AI, characteristics that have evolved purely for their success in a wide range of situations that are real, for humans, or simulated, for robots.…

The remainder of this commentary is available at warontherocks.com......

Thomas Hamilton is a senior physical scientist at the nonprofit, nonpartisan RAND Corporation. He has a Ph.D. in physics from Columbia University and was a research astrophysicist at Harvard, Columbia, and Caltech before joining RAND. At RAND he has worked extensively on the employment of unmanned air vehicles and other technology issues for the Defense Department.

This commentary originally appeared on War on the Rocks on February 21, 2020. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis. .... '

Autonomous Cars

Comment, with more detail:

' ... My name is Jennifer Bowen, and I'm working with a site called PartCatalog.com.  We just published an article that answers a question that's getting asked a lot these days: When will we have fully autonomous cars?  I noticed you shared a related article from Wired.com on this page.

I like that piece -- but ours adds a lot to the story.  We start by carefully reviewing the six levels of autonomy.  Then we point out some of the really useful autonomous features that have already arrived.  Finally, we cover "expert" opinions about when full autonomy will arrive.  It's a good read, and I think it would be a great addition to your page. .... " 

Saturday, February 22, 2020

Google Moving UK Data

Note that 'where' data is held may start to become an important issue.  Note also the specific considerations that Google is indicating here, as part of Brexit and other data uses. Also related to spatial web designs?

Google Moves U.K. User Data to U.S. to Avert Brexit Risks 
Financial Times  (Paywall)    By Madhumita Murgia

Google will move all data related to U.K.-based users of its services—including Gmail, YouTube, and the Android Play store—from Ireland to the U.S. as it aims to avoid legal issues following Brexit. If the U.K. and the EU fail to agree on a data-sharing deal by the end of this year, it will be illegal to transfer and process data between Britain and the European bloc. Google is likely skeptical of the U.K.'s ability to retain its "adequacy" status with the EU, which would allow free flow of data. Said Michael Veale at University College London Faculty of Laws, “Google would be exposed to considerable risk of illegality in relation to data transfers between Ireland and the U.K., as it would have to find another way to legalize the processing—and these ways are fast disappearing.” ... "

The Spatial Web

Brought back to my attention, though dated (2018): The Spatial Web. Still relevant?

The Spatial Web Will Map Our 3D World—And Change Everything In the Process  By Peter H. Diamandis, MD

The boundaries between digital and physical space are disappearing at a breakneck pace. What was once static and boring is becoming dynamic and magical.

For all of human history, looking at the world through our eyes was the same experience for everyone. Beyond the bounds of an over-active imagination, what you see is the same as what I see.

But all of this is about to change. Over the next two to five years, the world around us is about to light up with layer upon layer of rich, fun, meaningful, engaging, and dynamic data. Data you can see and interact with.

This magical future ahead is called the Spatial Web and will transform every aspect of our lives, from retail and advertising, to work and education, to entertainment and social interaction.

Massive change is underway as a result of a series of converging technologies, from 5G global networks and ubiquitous artificial intelligence, to 30+ billion connected devices (known as the IoT), each of which will generate scores of real-world data every second, everywhere.

The current AI explosion will make everything smart, autonomous, and self-programming. Blockchain and cloud-enabled services will support a secure data layer, putting data back in the hands of users and allowing us to build complex rule-based infrastructure in tomorrow’s virtual worlds.

And with the rise of online-merge-offline (OMO) environments, two-dimensional screens will no longer serve as our exclusive portal to the web. Instead, virtual and augmented reality eyewear will allow us to interface with a digitally-mapped world, richly layered with visual data.

Welcome to the Spatial Web. Over the next few months, I’ll be doing a deep dive into the Spatial Web (a.k.a. Web 3.0), covering what it is, how it works, and its vast implications across industries, from real estate and healthcare to entertainment and the future of work. In this blog, I’ll discuss the what, how, and why of Web 3.0—humanity’s first major foray into our virtual-physical hybrid selves (BTW, this year at Abundance360, we’ll be doing a deep dive into the Spatial Web with the leaders of HTC, Magic Leap, and High-Fidelity).

Let’s dive in.   .... "

Books by Stephen Few

A look at all of Stephen Few's books on the viz and use of data, including his latest:

The Data Loom, Stephen Few, $15.95 (U.S.), Analytics Press, May 15, 2019

Data, in and of itself, isn't valuable. It only becomes valuable when we make sense of it. Weaving data into understanding involves several distinct but complementary thinking skills. Foremost among them are critical thinking and scientific thinking. Until information professionals develop these capabilities, we will remain in the dark ages of data. If you're an information professional and have never been trained to think critically and scientifically with data, this book will set your feet on the path that will lead to an Information Age worthy of the name. ... " 

Simple innovations are the Most Used

Anyone in this space for a while knows the drill.

Larry Tesler: Computer Scientist Behind Cut, Copy, and Paste Dies at Age 74     BBC News

Larry Tesler, inventor of the "cut," "copy," and "paste" commands, recipient of ACM SIGCHI's Lifetime Practice Award in 2011 and inducted to the CHI Academy in 2010, has died at the age of 74. Tesler's innovations helped make personal computers simple to learn and use. After New York-born Tesler graduated from Stanford University, he specialized in user-interface design, the process of making computer systems more user-friendly. He worked for a number of technology firms during his long career, including the Xerox Palo Alto Research Center (Parc), Apple, Amazon, and Yahoo. Tesler "combined computer science training with a counterculture vision that computers should be for everyone," according to Silicon Valley's Computer History Museum.  ... '

AI and Clinical Trials

General overview of the state of the technical advances.

Artificial Intelligence Ushers in a New Era of Cost-Effective Clinical Trials 
Contributed Commentary by James Streeter, Global Vice President Life Sciences Product Strategy, Oracle Health Sciences 

Clinical trials have changed significantly over the past several years. As drugs and devices—and the conditions they are trying to impact—have become increasingly complex, so have the design and structure of clinical trials. But protocols are costly to change, and identifying and enrolling the right patient cohorts is also no easy feat—especially when rare diseases are the target. So, how are study teams keeping up with this rapid pace of change?

Pharmaceutical companies, biotechs, and CROs have been incorporating technology at various stages of the trial process to address these challenges; but, ironically, some of these technologies have introduced new challenges such as the sheer volume of data that are being generated. ... "

Friday, February 21, 2020

Tag of Everything to Protect Supply Chain

Tagging everything uniquely.  Authenticity.  Devil in the details.  " ... Tiny, battery-free ID chip can authenticate nearly any product to help combat losses to counterfeiting. .. "

Cryptographic 'Tag of Everything' Could Protect Supply Chain
MIT News
Rob Matheson

Massachusetts Institute of Technology (MIT) researchers have created a cryptographic identity tag that can be attached to virtually any product in order to verify its authenticity. The millimeter-sized "tag of everything" operates on low levels of power from photovoltaic diodes, and transmits data via a power-free backscatter technique. The tag employs algorithm optimization to run an elliptic curve cryptography scheme to ensure secure communications that requires little power. MIT's Mohamed I. Ibrahim said, "We think we can have a reader as a central hub that doesn't have to come close to the tag, and all these chips can beam-steer their signals to talk to that one reader."  ... ' 
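
The tag's elliptic-curve scheme runs under far tighter power constraints than anything on a desktop, but the authentication idea can be sketched with standard tooling. Assuming Python's widely used cryptography package, a tag-style challenge-response looks roughly like this (keys and challenge are illustrative):

```python
# Sketch of ECC-based authentication: the tag holds a private key and signs
# a reader's fresh challenge; the reader verifies against the public key
# registered for that product.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Provisioning: the tag's key pair; the public half goes into a registry.
tag_private = ec.generate_private_key(ec.SECP256R1())
tag_public = tag_private.public_key()

# Authentication: the reader sends a random challenge, the tag signs it.
challenge = os.urandom(16)
signature = tag_private.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The reader verifies; this raises InvalidSignature for a counterfeit tag.
tag_public.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("tag authenticated")
```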

A Short History of Ethereum

Ethereum provides an instructive example of blockchains, including a more generalized form of smart contracts that is useful.

Blockchain: A Very Short History Of Ethereum Everyone Should Read
in Forbes   By Bernard Marr

Even those who are not familiar with blockchain are likely to have heard about Bitcoin, the cryptocurrency and payment system that uses the technology. Another platform called Ethereum, that also uses blockchain, is predicted by some experts to overtake Bitcoin this year.

What is Ethereum?

Ethereum is an open-source, public service that uses blockchain technology to facilitate smart contracts and cryptocurrency trading securely without a third party. There are two accounts available through Ethereum: externally owned accounts (controlled by private keys influenced by human users) and contract accounts. Ethereum allows developers to deploy all kinds of decentralized apps. Even though Bitcoin remains the most popular cryptocurrency, it’s Ethereum’s aggressive growth that has many speculating it will soon overtake Bitcoin in usage.

How is Ethereum different than Bitcoin?
While there are many similarities between Ethereum and Bitcoin, there are also significant differences. Here are a few:
Bitcoin trades in cryptocurrency, while Ethereum offers several methods of exchange including cryptocurrency (Ethereum’s is called Ether), smart contracts and the Ethereum Virtual Machine (EVM).

They are based on different security protocols: Ethereum uses a ‘proof of stake’ system as opposed to the ‘proof of work’ system used by Bitcoin.

Bitcoin allows only public (permissionless or censor-proof) transactions to take place; Ethereum allows both permissioned and permissionless transactions.
The average block time for Ethereum is significantly shorter than Bitcoin’s: 12 seconds versus 10 minutes. This translates into more block confirmations, which allows Ethereum’s miners to complete more blocks and receive more Ether.

It is estimated that by 2021 only half of the Ether coins will be mined (a supply of more than 90 million tokens), but the majority of Bitcoins already have been mined (its supply is capped at 21 million).

For Bitcoin, the computers (called miners) running the platform and verifying the transactions receive rewards: basically, the first computer that solves each new block gets bitcoins (or a fraction of one) as a reward. Ethereum miners likewise earn a block reward in Ether, plus the transaction fees ("gas") for the transactions they include. ... "
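For readers who want to poke at these differences directly, here is a minimal sketch using the web3.py library. The node URL is a placeholder you would replace with your own endpoint, and method names vary across web3.py versions (snake_case shown, per v6).

    from web3 import Web3

    # Connect to an Ethereum node; the URL below is a placeholder.
    w3 = Web3(Web3.HTTPProvider("https://your-node-endpoint.example"))

    if w3.is_connected():
        # Block times of roughly 12 seconds can be observed directly by
        # comparing timestamps of consecutive blocks.
        latest = w3.eth.get_block("latest")
        previous = w3.eth.get_block(latest.number - 1)
        print("block interval (s):", latest.timestamp - previous.timestamp)

        # Ether balances are stored in wei (10**18 wei per Ether).
        zero_addr = "0x0000000000000000000000000000000000000000"
        balance = w3.eth.get_balance(zero_addr)
        print("balance (Ether):", w3.from_wei(balance, "ether"))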

Stephen Few on Logarithms

Data viz expert Stephen Few on logarithms. I recall, very early in my enterprise experience, having to work with execs on this concept and how it could improve their understanding of visual measures, but also confuse them. Here is a considerable and interesting view. 'Sensemaking' is a good term here.

Logarithms Unmuddled  by Stephen Few

I often write about topics that I myself have struggled to understand. If I’ve struggled, I assume that many others have struggled as well. Over the years, I’ve found several mathematical concepts confusing, not because I’m mathematically disinclined or disinterested, but because my formal training in mathematics was rather limited and, in some cases, poorly taught. My formal training consisted solely of basic arithmetic in elementary school, basic algebra in middle school, basic geometry in high school, and an introductory statistics course in undergraduate school. When I was in school, I didn’t recognize the value of mathematics—at least not for my life. Later, once I became a data professional, a career that I stumbled into without much planning or preparation, I learned mathematical concepts on my own and on the run whenever the need arose. That wasn’t always easy, and it occasionally led to confusion. Like many mathematical topics, logarithms can be confusing, and they’re rarely explained in clear and accessible terms. How logarithms relate to logarithmic scales and logarithmic growth isn’t at all obvious. In this article, I’ll do my best to cut through the confusion.

Until recently, my understanding (and misunderstanding) of logarithms stemmed from limited encounters with the concept in my work. As a data professional who specialized in data visualization, my knowledge of logarithms consisted primarily of three facts:

Along logarithmic scales, each labeled value that typically appears along the scale is a consistent multiple of the previous value (e.g., multiples of 10 resulting in a scale such as 1, 10, 100, 1,000, 10,000, etc.).

Logarithmic scales make it easy to compare rates of change in line graphs because equal slopes represent equal rates of change.

Logarithmic growth exhibits a pattern that goes up by a constantly decreasing amount.

If you, like me, became involved in data sensemaking (a.k.a. data analysis, business intelligence, analytics, data science, so-called Big Data, etc.) with a meagre foundation in mathematics, your understanding of logarithms might be similar to mine—similarly limited and confused. For example, if you think that the sequence of values 1, 10, 100, 1,000, 10,000, and so on is a sequence of logarithms, you’re mistaken, and should definitely read on.

Before reading on, however, I invite you to take a few minutes to write a definition for each of the following concepts:   ... " 
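Few's correction is easy to verify for yourself. A quick sketch: the scale labels 1, 10, 100, ... are not themselves logarithms; the logarithms are the evenly spaced exponents underneath them.

    import math

    # The labels on a base-10 logarithmic scale, and their actual logarithms.
    for value in [1, 10, 100, 1000, 10000]:
        print(f"{value:>6} -> log10 = {math.log10(value):.0f}")

    # Equal ratios map to equal distances, which is why equal slopes on a
    # log-scaled line graph represent equal rates of change.
    print(math.log10(200) - math.log10(100))    # 0.3010...
    print(math.log10(2000) - math.log10(1000))  # identical: 0.3010...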

Superforecasting Short Course from The Edge

Quite interesting, and free; I have followed the concept for some time but have yet to apply it. Videos are embedded at the link. Good to see this follow-up. Is enough said about embedded risk analysis? Following.

A Short Course in Superforecasting—Philip Tetlock: An EDGE Master Class
(ED. NOTE: In 2015, Edge presented "A Short Course in Superforecasting" with political and social scientist Philip Tetlock. Superforecasting is back in the news this week thanks to the UK news coverage of comments by Boris Johnson's chief adviser Dominic Cummings, who urged journalists to "read Philip Tetlock's Superforecasters [sic], instead of political pundits who don't know what they're talking about.")

PHILIP E. TETLOCK, political and social scientist, is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, author of Expert Political Judgment, co-author of Counterfactual Thought Experiments in World Politics (with Aaron Belkin), and co-author of Superforecasting: The Art & Science of Prediction (with Dan Gardner). Further reading on Edge: "How to Win at Forecasting: A Conversation with Philip Tetlock" (December 6, 2012). Philip Tetlock's Edge Bio Page.

CLASS I — Forecasting Tournaments: What We Discover When We Start Scoring Accuracy
It is as though high status pundits have learned a valuable survival skill, and that survival skill is they've mastered the art of appearing to go out on a limb without actually going out on a limb. They say dramatic things but there are vague verbiage quantifiers connected to the dramatic things. It sounds as though they're saying something very compelling and riveting. There's a scenario that's been conjured up in your mind of something either very good or very bad. It's vivid, easily imaginable.

It turns out, on close inspection they're not really saying that's going to happen. They're not specifying the conditions, or a time frame, or likelihood, so there's no way of assessing accuracy. You could say these pundits are just doing what a rational pundit would do because they know that they live in a somewhat stochastic world. They know that it's a world that frequently is going to throw off surprises at them, so to maintain their credibility with their community of co-believers they need to be vague. It's an essential survival skill. There is some considerable truth to that, and forecasting tournaments are a very different way of proceeding. Forecasting tournaments require people to attach explicit probabilities to well-defined outcomes in well-defined time frames so you can keep score.   ... " 
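Keeping score is the crux. The standard accuracy measure in these tournaments, including the Good Judgment Project, is the Brier score: the squared gap between the probability you assigned and what actually happened, lower being better. A minimal sketch of one common single-outcome formulation:

    def brier_score(forecast_prob, outcome):
        """Squared error between a probability forecast and a 0/1 outcome.
        0.0 is a perfect score; a permanent 50/50 hedge earns 0.25."""
        return (forecast_prob - outcome) ** 2

    print(brier_score(0.9, 1))  # 0.01: confident and right
    print(brier_score(0.5, 1))  # 0.25: the vague pundit's hedge
    print(brier_score(0.9, 0))  # 0.81: confident and wrong is punished hardest

This is exactly the pressure that vague verbiage escapes: a pundit who never states a probability, condition, or time frame can never be scored.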

Thursday, February 20, 2020

Marketing Automation

Been thinking about the topic of late. Goals, risks, results?

Automation Maturity Still Woeful?  
Howard Sewell in Customerthink

Almost 5 years ago, our agency conducted a survey to determine whether B2B companies were getting maximum value from their investment in marketing automation.  The conclusion: most B2B companies were failing to follow even the most basic lead management best practices, even in areas that one would assume were a primary business case for purchasing marketing automation in the first place.

In the succeeding five years, the marketing technology landscape has exploded, marketing automation has become a cornerstone of the martech stack, and demand for marketing operations professionals has never been higher.  You’d assume, therefore, that marketing automation maturity – the level of sophistication at which companies employ the technology – has improved.

Not in the least, according to a recent UK survey, which reported, amongst other findings, that just 2% of B2B marketers use marketing automation to its full capacity. Equally striking, the percentage of respondents who reported that their usage was only “basic” – defined as not using many of the available features – was unchanged from a prior 2016 survey, at a mere 28%.

Why does marketing automation as a whole continue to suffer from gross underutilization?  Because our firm services more than forty marketing automation clients, we have a first-hand view of the issues and challenges underlying findings like this most recent survey.  My suspicions are these:

A lack of experienced marketing ops practitioners.

Nowhere is the current marketing talent crunch felt more keenly than in marketing operations, where companies can spend months finding or replacing even the most entry-level marketing automation users.  Anecdotally, the average tenure of a power user, admin, or Marketing Operations Manager has also shortened, as high demand causes people to job hop to higher-paying positions.

Consultants, agencies, and other service providers can help fill this void, but they don’t necessarily replicate the institutional knowledge that an experienced MA user can build into a system.  Moreover, many consultants and consulting firms are more adept at managing a marketing automation platform than they are at recommending meaningful change, best practices, or lead management strategy.

This knowledge gap means that many companies either 1) implement marketing automation in only the most basic manner, or 2) are stalled indefinitely in their pursuit of a more robust, sophisticated use of the system.  .... "

Coronavirus Influences P&G China Business

My former employer reports. It would now seem that the prediction of epidemics and their influence needs to be added to supply chain demand forecasting.

Some in-store demand has shifted to the Internet, but delivery capability is limited, P&G says.
P&G warns coronavirus is disrupting China business in FoxBusiness

The country is P&G's second-largest market

Procter & Gamble warned the coronavirus outbreak is curbing in-store sales and limiting the ability of its digital operations to meet demand.

Shares of the Cincinnati-based consumer goods maker were little changed following the update.
“China is our second-largest market -- sales and profit. Store traffic is down considerably, with many stores closed or operating with reduced hours,” Jon Moeller, chief financial officer, said in a U.S. Securities and Exchange Commission filing on Thursday. “Some of the demand has shifted online, but supply of delivery operators and labor is limited.”  ... '

Personalization of Cyberattacks

Disconcerting news in this one.

The New Attack Surface is Your Life   in Andreessen Horowitz
by Martin Casado; a 19-minute talk with supporting deck and notes

From business email compromise to SIM ports, cyberattacks have shifted from networks to you. And it’s been an incredibly profitable pivot, with cyberhackers like GandCrab claiming earnings of $2.5M per week. How can you protect yourself when the new attack surface is your life and phishing attacks are more sophisticated than ever?  In the never-ending game of cybersecurity cat-and-mouse, what trends are in the good guys’ favor? And how might both software and hardware work together to protect you and your company? .... " 

Advances in Event-Based Cameras

This was brought to my attention based on a project we addressed some time ago, in which the term 'event-based camera' came up as a potential solution for the significant monitoring demands of lab work. Here, advances in the idea are described. I am passing this on directly to those who were involved in that work; it may be relevant.

Prophesee’s Event-Based Camera Reaches High Resolution
Embedded vision startup Prophesee teams with Sony to shrink its pixel size to less than 5 micrometers    In IEEE Spectrum  By Samuel K. Moore

There’s something inherently inefficient about the way video captures motion today. Cameras capture frame after frame at regular intervals, but most of the pixels in those frames don’t change from one to the other, and whatever is moving in those frames is only captured episodically.

Event-based cameras work differently; their pixels only react if they detect a change in the amount of light falling on them. They capture motion better than any other camera, while generating only a small amount of data and burning little power.

Paris-based startup Prophesee has been developing and selling event-based cameras since 2016, but the applications for their chips were limited. That’s because the circuitry surrounding the light-sensing element took up so much space that the imagers had a fairly low resolution. In a partnership announced this week at the IEEE International Solid-State Circuits Conference in San Francisco, Prophesee used Sony technology to put that circuitry on a separate chip that sits behind the pixels.
... " 
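The pixel behavior described above is easy to emulate in software. A hypothetical sketch that turns a pair of ordinary frames into DVS-style events, firing only where log intensity changes beyond a threshold; the threshold value here is an arbitrary choice for illustration.

    import numpy as np

    def frames_to_events(prev_frame, frame, threshold=0.2):
        """Emulate an event-based pixel array: emit +1/-1 events only where
        log intensity changed by more than `threshold` since the last frame.
        Inputs are 2-D arrays of intensities in [0, 255]."""
        log_prev = np.log(prev_frame.astype(float) + 1.0)
        log_curr = np.log(frame.astype(float) + 1.0)
        diff = log_curr - log_prev
        events = np.zeros_like(diff, dtype=np.int8)
        events[diff > threshold] = 1    # brightness increased
        events[diff < -threshold] = -1  # brightness decreased
        return events  # mostly zeros: little data, little power

    # A static scene produces no events; only change does.
    a = np.full((4, 4), 100, dtype=np.uint8)
    b = a.copy()
    b[1, 2] = 200  # one pixel brightened
    print(frames_to_events(a, b))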

Microsoft Defender Expands

Been a user of Defender for some time now, as long as it has been available. It seems to work, at least in that no malware that I know of has gotten through recently. Good to have multiple capabilities at work.

Microsoft’s Defender security software is coming to iOS and Android
It’s available for Linux today.    By  Christine Fisher, @cfisherwrites in Engadget

Despite Apple and Google's best efforts, malware and malicious apps are still a big concern on iOS and Android. So today, Microsoft announced that it's bringing its Defender Advanced Threat Protection (ATP) to the mobile operating systems. In other words, Microsoft is stepping in to fix a problem that Apple and Google can't seem to resolve.   .... " 

Autonomous Cars and the Drive Thru

A considerable discussion and walk-through example of the potential interactions of drive-thrus and autonomous cars, with detailed consideration of the business implications. I had given some thought before to how the interaction might work. Makes the point that drive-thrus are very big business.

AI Trends: Drive-Thrus and Autonomous Cars in AITrends

By Dr. Lance Eliot, CEO, Techbrium Inc. - techbrium.com - and is a regular contributor as our AI Trends Insider, and serves as the Executive Director of the Cybernetic AI Self-Driving Car Institute and has published 11 books on the future of driverless cars ... '

Wednesday, February 19, 2020

Self Checkout is Harder than it Should be

More on the use of the automated checkout process and related issues, with links to a survey and review:

Shoppers have a love/hate relationship with self-checkouts    by Tom Ryan in Retailwire

Consumers are increasingly favoring self-checkout because of the perception that it can be faster than using a cashier. Frustrations, however, with the technology persist.

A recent Wall Street Journal article — “Stores and Shoppers Agree: Self-Checkout Is Hard” — details how Walmart has quietly disabled or removed the weight sensors used to detect miss-scanned items because too many “wait for assistance” messages were being triggered, to the annoyance of shoppers. Walmart is making use of cameras in some cases as a solution.

Theft also remains an issue at self-service registers, with tricks such as scanning a less expensive item in place of a pricier one becoming popular. Retailers, however, typically have an aversion to having staff confront shoplifters directly.

A recent survey sponsored by weighing technology firm Shekel Brainweigh found that nearly ... "

Faster Lab Results with Phone to Lab Chip Connect

Faster lab test results for infectious diseases.

UC Smartphone Lab Delivers Test Results in 'Spit' Second
University of Cincinnati
Michael Miller

Researchers at the University of Cincinnati (UC) have developed a portable lab device that plugs into a smartphone, connecting it to a doctor's office through a custom app. The device uses a specialized plastic lab chip to diagnose infectious diseases with only a single drop of blood or saliva. A patient simply places a single-use plastic lab chip into his or her mouth, then plugs it into a slot for testing; the phone provides the power and test protocol for the lab chip. Said UC professor Chong Ahn, "The performance is comparable to laboratory tests. The cost is cheaper. And it's user-friendly."

New Combinatorial Optimization Algorithm

Combinatorics is of particular interest to me; it is part of any kind of complex process-choice problem. Note the use of annealing, which is also used in some quantum methods. Here is a new advance, which I am examining.

Optimization Algorithm Sets Speed Record for Solving Combinatorial Problems
IEEE Spectrum
John Boyd
February 10, 2020

Researchers at Toshiba Corp. in Japan have developed a quantum-inspired heuristics algorithm that is 10 times faster than competing technologies. In October, the researchers announced a prototype device implementing the algorithm that can detect and execute optimal arbitrage opportunities from among eight currency combinations in real time. The researchers claim the likelihood of the algorithm finding the most profitable arbitrage opportunities is greater than 90%. The team implemented the Simulated Bifurcation Algorithm on a single field-programmable gate array (FPGA) chip, and was able to run 8,000 operations in parallel to solve a 2,000-spin problem. In a separate test using eight GPUs, the system solved a 100,000-spin problem in 10 seconds—1,000 times faster than when using standard optimized simulated annealing software.  .... "
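For context, a "spin problem" here is an Ising model: choose spins s_i in {-1, +1} to minimize the energy E = -sum over pairs of J_ij * s_i * s_j. The sketch below implements the plain simulated annealing baseline the article benchmarks against (not Toshiba's Simulated Bifurcation Algorithm) on a toy instance; the cooling schedule and parameters are arbitrary choices.

    import math
    import random

    def simulated_annealing(J, steps=20000, t_start=2.0, t_end=0.01):
        """Minimize the Ising energy E(s) = -sum_{i<j} J[i][j]*s[i]*s[j]
        by flipping one spin at a time under a geometric cooling schedule."""
        n = len(J)
        s = [random.choice([-1, 1]) for _ in range(n)]
        for step in range(steps):
            t = t_start * (t_end / t_start) ** (step / steps)
            i = random.randrange(n)
            # Energy change from flipping spin i (only its couplings matter).
            delta = 2 * s[i] * sum(J[i][j] * s[j] for j in range(n) if j != i)
            if delta < 0 or random.random() < math.exp(-delta / t):
                s[i] = -s[i]  # accept the flip
        return s

    # Toy 4-spin ferromagnet: all couplings positive, so spins should align.
    J = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
    print(simulated_annealing(J))  # e.g. [1, 1, 1, 1] or [-1, -1, -1, -1]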

EU Rules for AI

Will be interesting to see how such regulations evolve.

Fear of Big Brother Guides EU Rules on AI
Agence France-Presse
February 17, 2020

The artificial intelligence (AI) policy unveiled by the European Union (EU) this week urges authorities and companies to practice caution before rolling out facial recognition technology. The European Commission hopes to address Europeans' concerns about the growing importance of AI in their lives amid reports from China of facial recognition technology being used to suppress dissent. EU Commissioner Margrethe Vestager recommends organizations consider the ramifications of facial recognition—specifically any scenarios in which the technology should be authorized. Vestager says Europe has a desire to be "sovereign" on AI and to shield "the integrity of our grids, of our infrastructure, of our research."  ... " 

MIT Tech Review:  The EU just released weakened guidelines for regulating artificial intelligence  .... 

Classification with Naive Bayes

Good piece, behind a paywall but worth a look.  Technical.

Comparing a variety of Naive Bayes classification algorithms
Comprehensive list of formulas for text classification
Pavel Horbonos (Midvel Corp)

The Naive Bayes algorithm is one of the well-known supervised classification algorithms. It is based on Bayes' theorem, is very fast, and is good enough for text classification. I believe that there is no need to describe the theory behind it; nevertheless, we will cover a few concepts and after that focus on comparing different implementations.  .... "
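The multinomial variant the piece compares is a one-liner in scikit-learn. A minimal sketch on an invented toy corpus:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Toy corpus (made up for illustration).
    texts = ["cheap pills buy now", "meeting moved to noon",
             "win a free prize now", "quarterly report attached"]
    labels = ["spam", "ham", "spam", "ham"]

    # Bag-of-words counts feed the multinomial Naive Bayes model, which
    # applies Bayes' theorem under the "naive" word-independence assumption.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)
    print(model.predict(["free pills now"]))  # likely ['spam']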

Tuesday, February 18, 2020

Reinforcement Learning Improvements

Very interesting item, and quite technical. I like the hint of using pools or teams of agents to solve reinforcement learning problems. Brings to mind the idea of process design and bringing multiple resources to bear. Can it be more directly linked to process optimization methods? Checking it out.

Google Brain and DeepMind researchers attack reinforcement learning efficiency   In Venturebeat by Kyle Wiggers

Reinforcement learning, which spurs AI to complete goals using rewards or punishments, is a form of training that’s led to gains in robotics, speech synthesis, and more. Unfortunately, it’s data-intensive, which motivated research teams — one from Google Brain (one of Google’s AI research divisions) and the other from Alphabet’s DeepMind — to prototype more efficient means of executing it. In a pair of preprint papers, the researchers propose Adaptive Behavior Policy Sharing (ABPS), an algorithm that allows the sharing of experience adaptively selected from a pool of AI agents, and a framework — Universal Value Function Approximators (UVFA) — that simultaneously learns directed exploration policies with the same AI, with different trade-offs between exploration and exploitation.  .... "
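The pool idea is straightforward to caricature in code. A hypothetical epsilon-greedy sketch of the selection loop only (this is not the ABPS algorithm itself, and the environment is a made-up stand-in): keep several behavior policies, usually pick the one with the best recent returns, and occasionally sample another so weaker policies keep getting evaluated.

    import random

    def select_behavior_policy(recent_returns, epsilon=0.1):
        """Pick a policy index from the pool: usually the best recent
        performer, occasionally a random one for continued evaluation."""
        if random.random() < epsilon:
            return random.randrange(len(recent_returns))
        return max(range(len(recent_returns)),
                   key=lambda i: sum(recent_returns[i][-10:]) / 10.0)

    pool_size = 4
    recent_returns = [[0.0] * 10 for _ in range(pool_size)]  # per-policy history

    for episode in range(200):
        i = select_behavior_policy(recent_returns)
        # Stand-in environment: pretend policy i earns a return near i.
        recent_returns[i].append(random.gauss(i, 1.0))

    print("episodes run per policy:",
          [len(r) - 10 for r in recent_returns])  # best policy should dominate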

Robot Comedy for Data?

Ultimately we will have to be able to do a better job of getting data based on reactions to proposals. Is stand-up comedy using robots a model for how to get this kind of data?

What's the Deal With Robot Comedy?
How to teach a robot to be a stand-up comedian
By Naomi Fitter in IEEE Spectrum

Nao robot learning to be a stand-up comedian

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

In my mythical free time outside of professorhood, I’m a stand-up comedian and improviser. As a comedian, I’ve often found myself wishing I could banter with modern commercial AI assistants. They don’t have enough comedic skills for my taste! This longing for cheeky AI eventually led me to study autonomous robot comedians, and to teach my own robot how to perform stand-up.

I’ve been fascinated with the relationship between comedy and AI even before I started doing comedy on my own in 2013. When I moved to Los Angeles in 2017 as a postdoctoral scholar for the USC Interaction Lab, I began performing in roughly two booked comedy shows per week, and I found myself with too good of an opportunity for putting a robot onstage to pass up.

Programming a NAO robot for stand-up comedy is complicated. Some joke concepts came easily, but most were challenging to evoke. It can be tricky to write original comedy for a robot since robots have been part of television and cinema for quite some time. Despite this legacy, we wanted to come up with a perspective for the robot that was fresh and not derivative.

Another challenge was that in my human stand-up comedy, I write almost entirely from real-life experience, and I’ve never been a robot! I tried different thought exercises—imagining myself to be a robot with different annoyances, likes, dislikes, and “life” experiences. My improv comedy training with the Upright Citizens Brigade started to come in handy, as I could play-act being a robot, map classic (and even somewhat overdone) human jokes to fit robot experiences, and imagine things like, “What is a robot family?”, “What is a robot relationship like?”, and “What are drugs for a robot?” ... "

Ground Penetrating Radar for Driving?

This is unexpected. It has been known that weather is a problem for the classic sensing methods; will ground-penetrating radar solve this problem?

MIT’s Ground-Penetrating Radar Looks Down for Perfect Self-Driving    by Bill Howard

Ground-penetrating radar may soon be the sensor that makes your car autonomous in all weather conditions. It turns out that when you scan the 10 feet below the roadway surface, you get a unique identifier that is accurate to an inch or two. Mapping cars would scan the roadways once, then your self-driving car with its own ground-penetrating radar would rescan as you drive, matching its real-time scan to the master map. That would keep your car centered, even if pavement markings are covered by snow or ice, according to WaveSense, an MIT spinoff that has already tested military applications.

Ground-penetrating radar can’t be the only sensor in a self-driving car. An autonomous car still needs surface radar, possibly lidar, and cameras to track other vehicles, pedestrians, animals, blocked lanes, and cars stopped or crashed in travel lanes. But it has the potential to be the breakthrough that allows bad-weather autonomous driving.  .... " 

A Look at Aibo Robot Dog Programming

Have mentioned this a number of times; we saw it demonstrated in Japan when it first came out, and now there is more, specifically about how it can be programmed. Note again that this has been around for 20 years. I liked the idea then, but it was hard to tell how it might be significantly programmed with more capabilities. Good non-technical look. Will look deeper yet.

How to Program Sony's Robot Dog Aibo
We take a look at the new visual programming interface and API for Sony’s adorable robot dog   By Evan Ackerman

The Sony Aibo has been the most sophisticated home robot that you can buy for an astonishing 20 years. The first Aibo went on sale in 1999, and even though there was a dozen year-long gap between 2005’s ERS-7 and the latest ERS-1000, there was really no successful consumer robot over that intervening time that seriously challenged the Aibo .... " 

Monday, February 17, 2020

E-Mail Productivity

Nicely done short look at how to be more productive with too much e-mail.  Tips, tricks and more ... brief too.   In the enterprise, my most read posts were about email productivity.

The Suite Life: 4 tips for a more manageable Gmail inbox

Laura Mae Martin  in the Google Blog
Executive Productivity Advisor

The average person receives 120 emails a day, which means keeping your inbox under control can feel like an impossible task. Fortunately, G Suite gives you the tools you need to stay focused and organized. Welcome to the Gmail edition of The Suite Life, a series that brings you tips and tricks to get the most out of G Suite. In this post, we’ll provide advice to help you save time and get more done—right from your Gmail inbox.  .... "

Google Announces Experimental Means to Detect Altered Images

A useful kind of AI; checking out the details. Still experimental. How often can we be sure of the results? It looks for specific means of alteration, so it will have to be maintained as new ones appear.

How can technology strengthen fact-checking?
As the technology to create realistic fake images, video and audio becomes more sophisticated, fact-checkers and journalists need similarly advanced tools to counter this threat. Assembler is an experimental platform from Jigsaw and Google Research that hopes to make it easier to judge manipulated media and help prevent the spread of disinformation.

Assembler analyzes images using detectors — technology trained to identify specific types of manipulation — and evaluates if and where images may have been altered.  ... " 
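Assembler's detectors are proprietary, but one classic detector of this general type is error level analysis (ELA): resave a JPEG at a known quality and look at where the image differs most, since edited regions often recompress differently. A rough sketch with Pillow; the input filename is a placeholder.

    import io
    from PIL import Image, ImageChops

    def error_level_analysis(path, quality=90):
        """Recompress the image and return the per-pixel residual; regions
        that were pasted in or edited often stand out with higher error."""
        original = Image.open(path).convert("RGB")
        buf = io.BytesIO()
        original.save(buf, "JPEG", quality=quality)
        buf.seek(0)
        resaved = Image.open(buf)
        ela = ImageChops.difference(original, resaved)
        return ela, ela.getextrema()  # per-channel (min, max) of the residual

    # ela_image, extrema = error_level_analysis("suspect_photo.jpg")
    # ela_image.show()  # bright patches warrant closer inspection

Note that, as the post says, each detector targets a specific manipulation type; ELA says nothing about splices that were recompressed uniformly, which is why a platform like Assembler runs an ensemble of detectors.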


More Smart Speaker Share to China in 2019

While smart speakers are not 'AI assistants' in any complete sense, they open the way to such devices in the home and at work. So stats like this should be watched, because they provide an initial framework for assistant capabilities.

Strategy Analytics: Google and Amazon ceded smart speaker market share to Chinese rivals in 2019
By Kyle Wiggers in Venturebeat

In 2019, both Amazon and Google ceded smart speaker market share to Chinese rivals, according to a survey published today by Strategy Analytics. The firm reports that of the 146.9 million units sold in the year 2019 — a 70% uptick compared with 2018 — Amazon Alexa-powered speakers made up 26.2% (roughly 38.5 million units), a dip from 33.7% (49.5 million) in 2018. For their part, Google Assistant smart speakers nabbed a 20.3% share (29.8 million), down from 25.9% (38 million) the year prior. ... "