
Monday, September 16, 2019

Exposure in a Supply Chain

Interesting examination of a massive supply chain and its exposure to threats.   It seems any complex supply chain should get this kind of analysis, repeated as contexts change.

Here are 3 key players in Apple’s massive supply chain
By Ethan Wolff-Mann
Senior Writer at Yahoo Finance 

The trade war with China has thrust global supply chains to the forefront, as companies look to figure out how to weather the disruptive storms of tariffs and threats.

Apple (AAPL), of course, is one of the biggest and most important companies in the world, and has an especially complicated global supply chain that’s tightly linked with China. But for investors, understanding, evaluating and comparing its exposure to supply chain risk is difficult as supply chain disclosure isn’t usually mandated. Furthermore, there are many different ways of looking at supply chain and business relationships.

To help understand those risks and relationships, Yahoo Finance Premium explores five different types of what are called ‘relationship exposure scores,’ by analyzing millions of unstructured documents and filings. Looking at the data, three key players stick out.  ... "

AI Reproducibility Crisis

Science requires enough information to reproduce a claimed result.   But that does not mean a result cannot be consistently useful even if it is never formally 'reproduced'.   And what does 'reproducing' mean here?  A replication will not be exactly the same, so what has been changed or left out?

Notable is a reproducibility checklist from McGill University, which appears to be a useful list of things to consider, or at least a starting point for understanding the question.   Below is a good article on the topic:

Artificial Intelligence Confronts a 'Reproducibility' Crisis from Wired
Machine-learning systems are black boxes even to the researchers that build them. That makes it hard for others to assess the results.

A few years ago, Joelle Pineau, a computer science professor at McGill, was helping her students design a new algorithm when they fell into a rut. Her lab studies reinforcement learning, a type of artificial intelligence that’s used, among other things, to help virtual characters (“half cheetah” and “ant” are popular) teach themselves how to move about in virtual worlds. It’s a prerequisite to building autonomous robots and cars. Pineau’s students hoped to improve on another lab’s system. But first they had to rebuild it, and their design, for reasons unknown, was falling short of its promised results. Until, that is, the students tried some “creative manipulations” that didn’t appear in the other lab’s paper.  .... "
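A minimal sketch of one item that typically shows up on such reproducibility checklists: pinning every source of randomness so a training run can be repeated exactly. The calls below are standard library and framework calls; the specific seed value and the use of PyTorch here are my own illustrative assumptions, not anything taken from the article or the checklist.

import os
import random

import numpy as np
import torch  # assumed framework; the same idea applies to TensorFlow or others

def set_global_seed(seed: int = 42) -> None:
    """Pin every common source of randomness so a run can be repeated."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Deterministic kernels trade a little speed for repeatability.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_global_seed(42)

Seeding alone does not make an experiment reproducible, of course, but it is the kind of concrete, checkable step the checklist tries to encourage.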

New Wi-Fi 6 Standard Released

Detailed piece.  Apple mentioned Wi-Fi 6 support in its latest phones.   The standard claims to be faster and more efficient for multiple users on a single network, which is promising for the smart home.   But you will need new hardware to get the advantages.

Faster Wi-Fi officially launches today   By Jacob Kastrenakes in The Verge

The next generation of Wi-Fi has been trickling out over the past year, but this week, its launch is going to accelerate. The Wi-Fi Alliance, the organization that oversees implementation of the Wi-Fi standard, is launching its official Wi-Fi 6 certification program. That might sound boring, but it means the Wi-Fi 6 standard is truly ready to go, and tech companies will soon be able to advertise their products — mostly brand new ones — as certified to properly support Wi-Fi 6.

Wi-Fi 6 includes a bunch of new technologies that combine together to make Wi-Fi more efficient. This is particularly important because of just how many devices we all have these days — it’s not unusual for a family to have a dozen or more gadgets all connected to a Wi-Fi network at once. “The home scenario today looks like the dense deployment of yesterday,” says Kevin Robinson, marketing leader for the Wi-Fi Alliance.  .... "

Autonomous Planning in Food Retail

Advanced planning using machine learning techniques is suggested for procurement.   We always had advanced planning; the question is how well you could integrate it with predictions of demand, and in particular with the unusual elements of forecasts.   New analytics have emerged, but how well will they deliver?

The invisible hand: On the path to autonomous planning in food retail   from McKinsey

It’s not news to food retailers: sometimes your stocks are too high, sometimes they’re too low. Advanced planning now gives them entirely new options for solving the expensive problem—and cuts costs in the process.  .... 

Procurement planners in food retail today are not to be envied. They have to please customers who have never made more exacting demands on availability, freshness, and range. And they ignore such expectations at their peril: the competition is relentless, driving all market participants to seek out improvements incessantly. Those who stick to their legacy processes can only make comparable progress at the cost of mounting stocks, increasing write-offs, and an increasingly complex supply chain.

Internally, planners are often struggling with outdated IT systems that are isolated from each other, unreliable sources of information, and in some cases, largely manual and poorly coordinated processes. Forecasts are commensurately inaccurate and personnel expenses high. Externally, on the other hand, decision makers are faced with an increasingly unfathomable offering from digital service providers that—although they can process huge volumes of data with their solutions—cannot give retailers any advantages of relevance as long as they leave their operating models unchanged.


The future will likely be very different. A look at online retail already reveals the shape of things to come: leading companies are developing highly integrated planning systems that already use the most advanced analytics and machine-learning solutions available today. These high-tech methods, also referred to as “advanced planning,” will, in the future, take control of steering in food retail as well. And they set exacting requirements on companies: they entail tapping the entire wealth of transaction data along with external parameters as sources. Retailers need a completely different process landscape, new capabilities, more computing power, and advanced algorithms.  ... " 
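As a rough illustration of the kind of demand forecast that sits at the heart of 'advanced planning', here is a minimal sketch that predicts store-level units sold from a handful of features with a gradient-boosted model. The feature names and the synthetic data are my own illustrative assumptions; a real retail system would draw on full transaction histories and external signals, as the McKinsey piece describes.

import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical daily sales history; column names are illustrative only.
rng = np.random.default_rng(0)
n = 1_000
history = pd.DataFrame({
    "day_of_week": rng.integers(0, 7, n),
    "on_promotion": rng.integers(0, 2, n),
    "temperature_c": rng.normal(18, 6, n),
    "last_week_units": rng.poisson(40, n),
})
history["units_sold"] = (
    history["last_week_units"] * (1 + 0.3 * history["on_promotion"])
    + rng.normal(0, 5, n)
)

X = history.drop(columns="units_sold")
y = history["units_sold"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple gradient-boosted regressor and check holdout accuracy.
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("holdout R^2:", round(model.score(X_test, y_test), 3))

The forecast itself is the easy part; the article's point is that the surrounding process landscape and integration are where most retailers fall short.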

Sunday, September 15, 2019

Building Knowledge Graphs

Have been looking at means of continuously and coherently connecting company data sources to analytical and AI methods.    Most recently I have looked at the idea of 'knowledge graphs'.   Of interest is an upcoming webinar by Neo4j on knowledge graphs, here aimed particularly at financial services but applicable beyond that.   Note in particular the mention of 'intelligent metadata', which we proposed as a way to construct understandable and maintainable data sources.   Will be attending.

Financial Services Companies Make Disparate Data Simple with Knowledge Graphs Webinar

Tuesday, September 24    11:00 am PST

Knowledge graphs are driving industry disruption and business transformation by bringing together previously disparate data, using connections for superior decision support, and adding context for more intelligent applications (including AI). In this session, we’ll walk through the fundamental elements of knowledge graphs including contextual relevance, dynamic self-updating, understandability with intelligent metadata, and the combination of heterogeneous data. ....'

More information and register here.
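For a sense of what building such a graph looks like in practice, here is a minimal sketch using Neo4j's official Python driver: merge records from two notionally separate source systems into connected nodes, then query across the relationship. The connection details, node labels, and properties are my own placeholder assumptions, not anything from the webinar.

from neo4j import GraphDatabase  # official Neo4j Python driver

# Placeholder connection details for a local instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def load_and_query(tx):
    # Merge nodes that originate in two different source systems ...
    tx.run(
        "MERGE (c:Customer {id: $cid, name: $name}) "
        "MERGE (a:Account {id: $aid}) "
        "MERGE (c)-[:HOLDS]->(a)",
        cid="C-001", name="Acme Ltd", aid="A-9001",
    )
    # ... then ask a question that spans both of them.
    result = tx.run(
        "MATCH (c:Customer)-[:HOLDS]->(a:Account) "
        "RETURN c.name AS name, a.id AS account"
    )
    return [record.data() for record in result]

with driver.session() as session:
    print(session.write_transaction(load_and_query))
driver.close()

The value, as the webinar blurb suggests, comes from layering context and metadata on top of these connections rather than from the storage model alone.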

Verizon says 5G Will be Available Everywhere Mobile is

Not sure I fully understand this.  I assume your hardware will have to support it, and recent announcements by major players have not mentioned it.    Implications?

Verizon will launch home 5G everywhere mobile service is available in Engadget via Ars Technica

It could be ubiquitous... whenever 5G is actually available near you.

Verizon (Engadget's parent company) may be rolling out 5G at a pokey pace, but at least you won't have to choose which kind of 5G you get. Consumer division chief Ronan Dunne told investors that fixed 5G Home service will "in due course" be available in every market where mobile 5G is available. It's "one network," he said -- there's little stopping Verizon from offering both. The carrier is planning a "full" launch for Home late in 2019 using the official 5G standard, so the synchronicity might begin relatively quickly.   ... " 

More in Ars Technica.  (The Ars Technica piece contains much more about the current test setups, by city.)

Certainty is Unusual

Jason Brownlee does his usual good job of explaining important concepts, here in a largely non-technical way.  And this is perhaps the most important one, often the hardest to explain to decision makers, despite the fact that they deal with the problem every day.  Risk must always be considered.   I heartily recommend you subscribe.  See the 'Brownlee' tag below for other tutorials from Jason I have mentioned.

What Is Probability?  by Jason Brownlee  

Uncertainty involves making decisions with incomplete information, and this is the way we generally operate in the world.

Handling uncertainty is typically described using everyday words like chance, luck, and risk.

Probability is a field of mathematics that gives us the language and tools to quantify the uncertainty of events and reason in a principled manner.

In this post, you will discover a gentle introduction to probability.

After reading this post, you will know:

Certainty is unusual and the world is messy, requiring operating under uncertainty.
Probability quantifies the likelihood or belief that an event will occur.
Probability theory is the mathematics of uncertainty.

Let’s get started. ... 
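To make the "probability quantifies likelihood" point concrete, here is a tiny sketch (my own, not from Brownlee's post) comparing the exact probability of rolling a sum of seven with two dice against a Monte Carlo estimate from simulated rolls.

import random

# Exact probability: 6 of the 36 equally likely outcomes sum to 7.
exact = 6 / 36

# Monte Carlo estimate: simulate many rolls and count the relative frequency.
trials = 100_000
hits = sum(
    1 for _ in range(trials)
    if random.randint(1, 6) + random.randint(1, 6) == 7
)
estimate = hits / trials

print(f"exact = {exact:.4f}, estimated = {estimate:.4f}")

The estimate wobbles around the exact value, which is itself a nice reminder that working under uncertainty is the normal case, not the exception.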

Cooperation is Central Issue of Our Time

It really has been the central issue of all time; it's just that now we have to cooperate with so many people and things, or at least have the opportunity to.   And we now have to cooperate with some new things that threaten to take attention from us, even our jobs.   AI-enabled, still not nearly intelligent, but cleverly attention-seeking.  This started with smartphones, but is leading to many new kinds of things.  Be amazed and cautious.   ....

COOPERATION IS THE CENTRAL ISSUE OF OUR TIME by Steve Omohundro

Cooperation is the most important issue of our time. It is the key to understanding biology, the success of humans, effective business models, social media, and future society based on beneficial AI.

The challenge is that many interactions have the character of the “Prisoner’s Dilemma” or “Tragedy of the Commons,” where selfish actors do better for themselves while harming the group benefit, and cooperative actors help the group but can lose out in individual competition.

A variety of mechanisms that lead to cooperation have been invented and studied in biology, economics, political science, business, analysis of social technologies, and increasingly in analyzing AI.

All of these subjects are grounded in biology and today’s biology exhibits cooperation at every level of the “Major Transitions in Evolution”:  .... '
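A minimal sketch of the Prisoner's Dilemma structure the piece refers to: with the standard textbook payoffs below (my numbers, not Omohundro's), each player does better by defecting no matter what the other does, yet mutual defection leaves both worse off than mutual cooperation.

# Payoffs as (row player, column player) for Cooperate/Defect choices.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # cooperator is exploited
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: worse than mutual cooperation
}

def best_response(opponent_move: str) -> str:
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max("CD", key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

for opp in "CD":
    print(f"If the other player plays {opp}, my best response is {best_response(opp)}")
# Defection dominates, yet (D, D) pays (1, 1) versus (3, 3) for (C, C).

The cooperation mechanisms the article catalogs (reputation, repetition, enforcement, and so on) are, in effect, ways of changing this payoff structure.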

Coaching with AI

Late to noting this, but thought it was interesting.  How might this be more broadly delivered?

2019 U.S. Open gets new 'coach' with IBM's A.I. technology   in Yahoo Finance

The Coach Advisor technology is said to be a "game-changer" for U.S. coaches and their tennis players.   .... "

Generating Text for All

Should we be delighted or worried?

The world’s most freakishly realistic text-generating A.I. just got gamified  By Luke Dormehl  in DigitalTrends

What would an adventure game designed by the world’s most dangerous A.I. look like? A neuroscience grad student is here to help you find out.

Earlier this year, OpenAI, an A.I. startup once sponsored by Elon Musk, created a text-generating bot deemed too dangerous to ever release to the public. Called GPT-2, the algorithm was designed to generate text so humanlike that it could convincingly pass itself off as being written by a person. Feed it the start of a newspaper article, for instance, and it would dream up the rest, complete with imagined quotes. The results were a Turing Test tailor-made for the fake news-infused world of 2019.

Of course, like Hannibal Lecter, Heath Ledger’s Joker, or any other top-notch antagonist, it didn’t take GPT-2 too long to escape from its prison. Within months, a version of it had found its way online (you can try it out here.) Now it has formed the basis for a text adventure game created by Northwestern University neuroscience graduate student Nathan Whitmore. Building on the predictive neural network framework of GPT-2, GPT Adventure promises to rewrite itself every time it’s played. It’s a procedurally generated game experience in which players can do whatever they want within the confines of a world controlled by the fugitive A.I.

And you know what? Not since Sarah and John Connor teamed up with The Terminator to take on Skynet has the world’s most dangerous artificial intelligence been quite so much fun.  ... "
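For anyone who wants to try the underlying model rather than the game, the released (small) GPT-2 weights are available through the Hugging Face transformers library; that library is my suggestion, not something the article mentions. A minimal sketch:

# pip install transformers torch
from transformers import pipeline

# Loads the publicly released small GPT-2 model.
generator = pipeline("text-generation", model="gpt2")

prompt = "You are standing in a dimly lit cavern. To the north,"
result = generator(prompt, max_length=60, num_return_sequences=1, do_sample=True)
print(result[0]["generated_text"])

Sampling makes each continuation different, which is essentially the property the GPT Adventure game leans on to rewrite itself on every playthrough.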

Saturday, September 14, 2019

Improvements for Future Voice Assistants

Survey of future improvements that could be useful.  Not complete, nor entirely specific to voice assistants, but useful thoughts.

5 ways that future A.I. assistants will take voice tech to the next level   By Luke Dormehl in DigitalTrends

“Before Siri, when I talked about [what I do] there were blank stares,” Tom Hebner, head of innovation at Nuance Communications, which develops cutting edge A.I. voice technology, told Digital Trends. “People would say, ‘Do you build those horrible phone systems? I hate you.’ That was one group of people’s only interaction with voice technology.”

That’s no longer the case today. According to eMarketer forecasts, almost 100 million smartphone users will be using voice assistants by 2020. But while A.I. assistants are no longer a novelty, we’re still at the start of their evolution. There’s a long way to go before they fully live up to the promise that voice assistants have as a product category.

Here are five ways in which the technology could improve to make it smarter and more efficient — and help us lead more productive lives as a result. Call them “predictions” or a “wishlist,” these are the challenges that need to be solved.  .... " 

Facebook Proposes Building Assistant with Minecraft

The idea seemed a bit odd at first, but it brings together techniques used in agent modeling, where you build a simulation that has agents interact with other agents (or people) and then use the results of the interactions to train a model of the world.   Minecraft could be used to create such a sim-world, though it's perhaps not the best vehicle.   Recall Facebook's assistant M, mentioned here previously, which I don't think was successful, and which perhaps drives this idea.

Facebook is using Minecraft to build an AI assistant  By Isobel Asher Hamilton

Facebook is hoping it can train an AI assistant to understand a broad range of human commands with a little help from one of the biggest games in the world — Minecraft.       Paper here. 

A group of Facebook researchers published a paper in July explaining why they think Minecraft is the perfect place for an AI to learn about human communication. The key lies in the fact that Minecraft is what's known as a "sandbox" game, where players can roam around with relatively free rein as to what they want to do or build, while also following a set of relatively simple rules.

The researchers also hope that the natural curiosity of Minecraft players will give the AI plenty of humans to practise with. "Since we work in a game environment, players may enjoy interacting with the assistants as they are developed, yielding a rich resource for human-in-the-loop research," the paper says. Minecraft has 91 million monthly active users, so the potential pool of humans who could help train the AI is pretty vast.  ....  " 

Defining Zero Knowledge Proofs

A relatively short, largely non-technical definition of zero-knowledge proofs.  Previously covered here, and under examination.  Useful for many kinds of transactions.

Hacker Lexicon: What Are Zero-Knowledge Proofs?

How do you make blockchain and other transactions truly private? With mathematical models known as zero-knowledge proofs.

In digital security, the less stray information floating around the better. The fewer companies storing your financial records, the less likely they'll be exposed in a breach. But though there are lots of ways to cut down on data sharing and retention, there are some things services just need to know, right? Thanks to the cryptographic method known as “zero-knowledge proofs” that’s not always the case.  ... "  
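To make the definition a little more concrete, here is a toy Schnorr-style identification protocol in Python: the prover convinces the verifier it knows the secret x behind y = g^x mod p without ever revealing x. The tiny parameters are purely illustrative; real deployments use far larger groups, and the zk-SNARKs discussed below are a much more elaborate, non-interactive construction.

import random

# Toy parameters: g generates a subgroup of prime order q in Z_p*.
p, q, g = 23, 11, 2

x = random.randrange(1, q)      # prover's secret
y = pow(g, x, p)                # public key

# One round of the interactive proof.
r = random.randrange(1, q)      # prover's random nonce
t = pow(g, r, p)                # commitment sent to the verifier
c = random.randrange(1, q)      # verifier's random challenge
s = (r + c * x) % q             # prover's response

# The verifier learns nothing about x, but checks g^s == t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted; the secret x was never revealed")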

And here, related, a pointer to a paper and a considerably more technical piece:

You Can Now Prove a Whole Blockchain With One Math Problem – Really   By William Foxley in Coindesk

The Electric Coin Company (ECC) says it discovered a new way to scale blockchains with “recursive proof composition,” a proof to verify the entirety of a blockchain in one function. For the ECC and zcash, the new project, Halo, may hold the key to privacy at scale.

A privacy coin based on zero-knowledge proofs, referred to as zk-SNARKs, zcash’s current underlying protocol relies on “trusted setups.” These mathematical parameters were used twice in zcash’s short history: upon its launch in 2016 and first large protocol change, Sapling, in 2018. ... " 

Stronger Cryptowallet Proposed

Recent incidents have shown that the wallet can be a key area of insecurity.

Researchers Invent Cryptocurrency Wallet That Eliminates 'Entire Classes' of Vulnerabilities
ZDNet   by Charlie Osborne

Massachusetts Institute of Technology (MIT) researchers have created a new cryptocurrency wallet that eliminates entire classes of design vulnerabilities. MIT's Anish Athalye and colleagues developed Notary, a universal serial bus (USB) platform that the team said eradicates "entire classes of bugs that affect existing wallets," and could potentially augment transaction approval security. Notary employs reset-based switching, a strategy which resets the central processing unit, memory, and other hardware elements when users switch between apps. The goal is to remove the threat of vulnerability by ensuring apps are isolated from one another, providing greater protection should an individual app be hacked. Said Athalye, “Being able to build a secure hardware wallet would lead to better security for so many different kinds of applications."  ... " 

Hardening Encryption for Quantum Computing

Our own early look at quantum computing aimed at very highly combinatorial problems, and some believe QC will also provide methods to break encryption.     Now there are efforts underway to address this problem.  I note that all the claims made so far describe their strength as 'resistant', so it seems no general methods exist today.

Companies Explore Encryption That Withstands Quantum Computing
The Wall Street Journal
By Adam Janofsky

Organizations that manage sensitive data are investigating techniques for safeguarding that data from quantum decryption. IBM recently announced a quantum-resistant tape drive. The National Institute of Standards and Technology (NIST) is evaluating the tape drive's two quantum-resistant algorithms—along with 24 other candidates—with the goal of selecting two to six algorithms to be standardized for academia, corporations, and government by 2022. NIST's Dustin Moody is assessing algorithmic tolerance to traditional and quantum-computer-based hacks, operational speed, and support by small devices with low processing power. IBM's Vadim Lyubashevsky said, "Once NIST declares a standard, there will be a steady transition—big companies will transition their browsers, clouds, and storage to quantum-safe [algorithms]."  ... ' 

Friday, September 13, 2019

China About to Issue its own Cryptocurrency

Wondering about the broad implications of this.   Will it embody regulations that determine how it can be traded against other currencies?   Or use smart contracts to determine trade contracts?  Or?  Overall it is very hard to determine how a complex system like this will influence global finance. 

It is not even clear whether it will use a blockchain, or what its format and update method will be.  Private or not?  Worrisome.

Below an excerpt, with many more links:

In the latest Chain Letter, by MIT Technology Review: 

Welcome to Chain Letter! Great to have you. Here’s what’s new in the world of blockchains and cryptocurrencies.   ... 

China is about to launch its own digital currency. Here’s what we know so far. Officials from the People’s Bank of China have hinted in recent weeks that the nation is almost ready to launch a digital version of its currency, the renminbi, to replace physical cash for consumer payments. There are a number of unanswered questions about how it will work, ranging from whether it will use a blockchain to how private the system will be. Despite the unknowns, recent public comments by central bank officials have shed some light on the timeline for and motivation behind the project. Here’s a trio of things we know:  .... " 


Build AI we can Trust

A long-time question.   It's the old problem of 'common sense' reasoning, and it is usually a step above common sense .... maybe call it 'light reasoning', as the article below suggests.  Humans often omit that kind of reasoning too, say the kind you can do in your head or by writing a few lines on a piece of paper.   We sometimes do one-level analogies, but rarely more.   But surely we should expect that capability of a machine.  And often a question requires a determination of risk in context to arrive at a useful and correct answer.

How to Build Artificial Intelligence We Can Trust    By The New York Times


Artificial intelligence has a trust problem. We are relying on A.I. more and more, but it hasn't yet earned our confidence.

Tesla cars driving in Autopilot mode, for example, have a troubling history of crashing into stopped vehicles. Amazon's facial recognition system works great much of the time, but when asked to compare the faces of all 535 members of Congress with 25,000 public arrest photos, it found 28 matches, when in reality there were none. A computer program designed to vet job applicants for Amazon was discovered to systematically discriminate against women. Every month new weaknesses in A.I. are uncovered.

The problem is not that today's A.I. needs to get better at what it does. The problem is that today's A.I. needs to try to do something completely different.

In particular, we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets — often using an approach known as deep learning — and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space and causality.  ....  "

Element AI Financing

Had seen this mentioned in the supply chain space:

Element AI closes financing, securing $200-million backed by the Caisse, Quebec and McKinsey
SEAN SILCOFF, TECHNOLOGY REPORTER

Montreal artificial intelligence startup Element AI Inc. has closed its second large financing round, announcing Friday it had raised US$151.4-million ($200-million), two years after it secured more than US$100-million from a collection of global investors.

While the financing is one of the largest for an early stage technology company in Canada this year, it comes after a challenging period for the startup, co-founded with global fanfare three years ago by one of the pioneers in the deep learning field, Université de Montréal professor Yoshua Bengio and a group of entrepreneurs led by CEO Jean-Francois Gagné.

Element is developing software to help corporations in the financial services and supply chain and logistics sectors to improve their operations using AI-based tools, working with institutions including global bank HSBC and Cambridge, Ont. insurance firm Gore Mutual. Element set out at least 15 months ago on a global search to raise up to US$250-million and at one point last fall was in advanced discussions with Asian investment giant Softbank. By this summer it had scaled back its goal to a range of between US$150-million and US$250-million, sources told the Globe. In the end the deal came together with significant backing from hometown pension giant Caisse de dépôt et placement du Québec and the Quebec government of Francois Legault, which deemed the startup to be “of significant economic interest to Quebec” in a cabinet missive this summer.  ... " 

Finances of AI Research: DeepMind

Are such losses in leading-edge tech significant, or to be expected?

DeepMind's Losses and the Future of Artificial Intelligence by Gary Marcus in Wired

Alphabet’s DeepMind unit, conqueror of Go and other games, is losing lots of money. Continued deficits could imperil investments in AI.  Alphabet’s DeepMind lost $572 million last year. What does it mean?

DeepMind, likely the world’s largest research-focused artificial intelligence operation, is losing a lot of money fast, more than $1 billion in the past three years. DeepMind also has more than $1 billion in debt due in the next 12 months.

Does this mean that AI is falling apart?

Gary Marcus is founder and CEO of Robust.AI and a professor of psychology and neural science at NYU. He is the author, with Ernest Davis, of the forthcoming Rebooting AI: Building Artificial Intelligence We Can Trust.

Not at all. Research costs money, and DeepMind is doing more research every year. The dollars involved are large, perhaps more than in any previous AI research operation, but far from unprecedented when compared with the sums spent in some of science’s largest projects. The Large Hadron Collider costs something like $1 billion per year and the total cost of discovering the Higgs Boson has been estimated at more than $10 billion. Certainly, genuine machine intelligence (also known as artificial general intelligence), of the sort that would power a Star Trek–like computer, capable of analyzing all sorts of queries posed in ordinary English, would be worth far more than that. .... "

Building Supply Chain Roadmaps with AI


View of push-pull models for supply chains.

How Supply-Chain Companies Can Build Roadmaps With AI
September 13, 2019

by Karthik Ramakrishnan and Ben Humphries, SCB Contributors

The supply chain as we know it is on the precipice of tipping from decades of steadily accelerating “push” dynamics to a new “push-and-pull” model.

Four main factors are contributing to this global, industry-wide change:

Today’s increasingly savvy shoppers. Customers live in the digital world and demand a seamless experience. If not, they’ll go elsewhere. This means that the supply chain — which is optimized for “pushing” inventory to customers — needs to add optimization for what customers want to “pull” to themselves.

The current geopolitical climate. Whether it’s the nationalistic tendencies on display globally, tariff disputes between the U.S. and China, Brexit, or the global focus on sustainability issues, supply chains are more than ever exposed to uncertainty and risk.

Uneven advances in Industry 4.0 and digital supply chain. Factories, supply chains, and stores are becoming more connected, enabling different systems to share information and shrink lead times — but only for companies that are able to act. Production is shifting closer to customers, disrupting long-held patterns of regional trade.

Existing supply-chain technology is at the end of its lifecycle. Legacy software solutions, built to solve a specific, isolated problem such as forecasting or factory planning, are no longer fit for the purpose. To put it simply, this software just can’t keep up.

Enter artificial intelligence for the supply chain. Be it through predictive maintenance in the factory, self-driving trucks in the logistics chain, or automation in the store, AI solutions are emerging to improve efficiency and lower operating costs for supply-chain players. Yet there's a disconnect, as in most industries, around how to fully recognize value from AI. .....  " 

Thursday, September 12, 2019

IJCAI: International AI Conference

I was reminded in a meeting today that IJCAI, the International Joint Conferences on Artificial Intelligence Organization, holds international meetings and publishes papers of interest to AI.   In the past I attended many of its meetings.   Its coverage goes beyond deep learning to include the symbolic methods used to produce intelligence.

IJCAI  International Joint Conferences on Artificial Intelligence Organization
Artificial Intelligence Journal Division of IJCAI
IJCAI acts as the official host for the editorial operations of the Artificial Intelligence journal, through its Artificial Intelligence Journal Division.

The journal is run by the Steering Committee whose members are the two editors-in-chief, two IJCAI trustee nominees, and a member nominated by the Editorial Board. The IJCAI Secretary-Treasurer Bernhard Nebel also acts as Secretary-Treasurer to this committee. The current SC is composed of:

Patrick Doherty, Linköping University (Sweden)
Marie desJardins, Simmons University (USA)
Bernhard Nebel, Albert-Ludwigs-Universität Freiburg (Germany)
Sylvie Thiébaux, The Australian National University (Australia)
Michael Wooldridge, University of Oxford (UK)
Shlomo Zilberstein, University of Massachusetts (USA)

Funding Opportunities for Promoting AI Research
Deadline for proposals: July 20, 2019

The Artificial Intelligence Journal (AIJ) is one of the longest established and most respected journals in AI, and since it was founded in 1970, it has published many of the key papers in the field. The operation of the Editorial Board is supported financially through an arrangement with AIJ's publisher, Elsevier. Through this arrangement, the AIJ editorial board is able to make available substantial funds, (of the order of 230,000 Euros per annum), to support the promotion and dissemination of AI research. Most of these funds are made available through a series of competitive open calls (the remaining part of the budget is reserved for sponsorship of studentships for the annual IJCAI conference).   .... " 

N-Shot Learning

Remember, we kind of discovered this accidentally during the early days of using neural networks.  It's a natural thing, having a single example.   And having N shots is next.   But it is often limited in context.

Artificial Intelligence is the new electricity - Andrew Ng

If AI is the new electricity, then data is the new coal.

Unfortunately, just as we’ve seen a hazardous depletion in the amount of available coal, many AI applications have little or no data accessible to them.
New technology has made up for a lack of physical resources; likewise, new techniques are needed to allow applications with little data to perform satisfactorily. This is the issue at the heart of what is becoming a very popular field: N-shot Learning.

N-Shot Learning

You may be asking, what the heck is a shot, anyway? Fair question. A shot is nothing more than a single example available for training, so in N-shot learning, we have N examples for training. With the term “few-shot learning”, the “few” usually lies between zero and five, meaning that training a model with zero examples is known as zero-shot learning, one example is one-shot learning, and so on. All of these variants are trying to solve the same problem with differing levels of training material.

Why N-Shot?
Why do we need this when we are already getting less than a 4% error in ImageNet?

To start, ImageNet’s dataset contains a multitude of examples for machine learning, which is not always the case in fields like medical imaging, drug discovery and many others where AI could be crucially important. Typical deep learning architecture relies on substantial data for sufficient outcomes- ImageNet, for example, would need to train on hundreds of hotdog images before accurately assessing new images as hotdogs. And some datasets, much like a fridge after a 4th of July celebration, are greatly lacking in hotdogs.

There are many use cases for machine learning where data is scarce, and that is where this technology comes in. We need to train a deep learning model which has millions or even billions of parameters, all randomly initialized, to learn to classify an unseen image using no more than 5 images. To put it succinctly, our model has to train using a very limited number of hotdog images. .... "
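As a concrete, if simplified, sketch of the N-shot setting: the nearest-centroid ("prototype") classifier below sees only N labeled embeddings per class and assigns a new example to the class with the closest prototype. The random embeddings are stand-ins of my own; a real few-shot system would use features from a pretrained network.

import numpy as np

rng = np.random.default_rng(0)

def make_support_set(n_classes=3, n_shots=5, dim=16):
    """Fake embedding vectors standing in for features from a pretrained network."""
    centers = rng.normal(0, 3, size=(n_classes, dim))
    support = {
        c: centers[c] + rng.normal(0, 1, size=(n_shots, dim))
        for c in range(n_classes)
    }
    return centers, support

def classify(query, support):
    """Assign the query to the class with the nearest prototype (mean embedding)."""
    prototypes = {c: examples.mean(axis=0) for c, examples in support.items()}
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

centers, support = make_support_set(n_shots=5)   # a 5-shot, 3-way task
query = centers[1] + rng.normal(0, 1, size=16)   # a new example from class 1
print("predicted class:", classify(query, support))

With N = 1 this becomes one-shot learning; with N = 0 you need some other source of class information (descriptions, attributes), which is what zero-shot methods supply.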

Wednesday, September 11, 2019

Most Leadership Skills are Learned

I just started listening more regularly to the K@W podcasts.  Nicely done and easily accessed; for example, they are readily available by voice command from Alexa and Google systems.   .....

Most Leadership Skills Are Learned

Former Aetna CEO Ron Williams draws on years of management experience to offer leadership advice in his new book.

As a longtime business executive, Ron Williams is often asked for advice on management issues. He likes to keep his answers clear and simple. One of his favorite mantras is, “Don’t get stuck in paralysis by analysis.” He also tells young people not to map every step of their careers because, like him, they never know where they may end up. Williams grew up in Chicago’s South Side, where he used to wash cars, and became one of the few African-Americans to lead a Fortune 500 company, serving as chairman and chief executive officer of Aetna. He’s on the board at American Express, Boeing and Johnson & Johnson, and also runs his consultancy, RW2 Enterprises. Williams shares his life lessons and experiences in a new book, Learning to Lead: The Journey to Leading Yourself, Leading Others, and Leading an Organization. He joined the Knowledge@Wharton radio show on Sirius XM to talk about why the best leaders keep an open mind and never stop learning from the people around them. (Listen to the podcast at the top of this page).

An edited transcript of the conversation follows:

Knowledge@Wharton: I read that part of the reason why you wrote this book was because you have often been asked to talk about your upbringing and career, going from Chicago to CEO. ... "

MSoft and IOT with Plug and Play

Somewhat late to this, but of interest.  Making it simple to experiment with is useful.

Microsoft Simplifies And Streamlines IoT With Launch Of Plug And Play  in 7wData

The internet of things (IoT) seems to be everywhere these days—from smart thermostats to video doorbells to connected refrigerators all the way to industrial control systems (ICS) to streamline manufacturing and enable centralized management and monitoring of critical infrastructure. As ubiquitous as IoT seems, though, the skills necessary to design and implement an effective IoT solution are beyond the capabilities of most companies. Microsoft—working with a who’s who of industry partners—just launched IoT Plug and Play to drastically simplify and democratize IoT and make it accessible for everyone. ..

 ... Guthrie talks about automated machine learning, Azure Cognitive Services and more. Guthrie also shared, “In addition, we’re announcing IoT Plug and Play, a new open modeling language to connect IoT devices to the cloud seamlessly, enabling developers to navigate one of the biggest challenges they face — deploying IoT solutions at scale. Previously, software had to be written specifically for the connected device it supported, limiting the scale of IoT deployments. IoT Plug and Play provides developers with a faster way to build IoT devices and will provide customers with a large ecosystem of partner-certified devices that can work with any IoT solution.”  ... ' 

Reprogrammable Colors in Ink Make Chameleons

When I first scanned this it seemed trite, but then I thought of a whole group of possible ideas.  Say a car that could become more visible in darkness.    Or new kinds of art or marketing that could change with their environment.  Or?

Objects Can Now Change Colors Like a Chameleon
MIT News   By Rachel Gordon
September 10, 2019

Researchers from the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a system that uses reprogrammable ink to allow objects to change colors when exposed to ultraviolet (UV) and visible light sources. The PhotoChromeleon system uses a mixture of photochromic dyes that can be sprayed or painted onto the surface of any object; exposure to UV light saturates the colors in the dyes from transparent to full saturation, while exposure to white light desaturates them as desired. Said MIT’s Stefanie Mueller, "By giving users the autonomy to individualize their items, countless resources could be preserved, and the opportunities to creatively change your favorite possessions are boundless."  .... ' 

Operational Universe

Been following the talks and interviews in The Edge for some time.    Recently these have touched on topics like intelligence, but this latest one is about the universe and how it operates in time and space.   You would not think this relates to simulating intelligence, but with the emergence of quantum computing I am inclined to believe there will be more connections than we expect.   So here is a talk that leads you in that direction.

Talk:

JULIAN BARBOUR is a theoretical physicist specializing in the study of time and motion; emeritus visiting professor in physics at the University of Oxford; and author of The Janus Point (forthcoming, 2020) and The End of Time. .... 

The Universe Is Not in a Box
A Conversation with Julian Barbour [9.11.19]  .... "

Nokia, Omron, NTT Factory Floor Trial of 5G

Considerable implications from the use of 5G, in speeding up systems and broadening their application.

Nokia, NTT DOCOMO and OMRON bring 5G to the factory floor in Industry 4.0 trial
Press Release from Nokia

Trial follows increasing demand for wireless communications at manufacturing sites driven by the need for stable connectivity between IoT devices

5G connectivity leveraged to prove the feasibility of layout-free production line with Autonomous Mobile Robots (AMRs) as well as real-time coaching using AI/IoT
10 September 2019

Espoo, Finland – Nokia, NTT DOCOMO, INC. and OMRON Corporation have agreed to conduct joint field trials using 5G at their plants and other production sites. As part of the trial, Nokia will provide the enabling 5G technology and OMRON the factory automation equipment while NTT DOCOMO will run the 5G trial.

The trial follows the increasing demand for wireless communications at manufacturing sites driven by the need for stable connectivity between IoT devices. As background noise from machines and the movement of people have the potential to interfere with wireless communications, the trial will aim to verify the reliability and stability of 5G technology deployed by conducting radio wave measurements and transmission experiments.

During the trial, Nokia, DOCOMO and OMRON will aim to establish the feasibility of the concept of a layout-free production line with Autonomous Mobile Robots (AMRs). As product cycles become shorter due to fast-changing consumer demands, manufacturing sites are under increasing pressure to rearrange production lines at short notice. By taking advantage of 5G's high speed, large capacity, low latency and ability to connect multiple devices, the trial will see AMRs automatically conveying components to the exact spot where they are required based on communication with production line equipment.  .... " 

Telepresence Robotics Acquisition

We looked at telepresence for remote meeting presence, and for interaction with plant systems that benefited from remotely controlled movement, voice interaction, image capture, and control.   Not necessarily automated operation.  At the time the capabilities were inadequate.

Today, Blue Ocean Robotics, a Danish robotics company, is announcing the acquisition of Suitable Technologies’ Beam telepresence robot business. Blue Ocean has been a Beam partner for five years, but now they’re taking things over completely.

The Beam robot began its life as an internal project within Willow Garage. It was spun out in 2012 as Suitable Technologies, which produced a couple different versions of the Beam. As telepresence platforms go, Beam is on the powerful and expensive side, designed primarily for commercial and enterprise customers. 

The most recent news from Suitable was the introduction of the BeamPro 2, which was announced over a year ago at CES 2018. The Suitable Tech website still lists it as “coming soon,” and our guess is that it’s now up to Blue Ocean to decide whether to go forward with this new version. Blue Ocean calls itself a “robot venture factory.” I’m not entirely sure what a “robot venture factory” is but Blue Ocean describes itself thusly:  .... " 

Trends and Themes at IJCAI AI Conference

Invitation to the ISSIP Cognitive Systems Institute Group Webinar
Please join us for the next ISSIP CSIG Speaker Series (see details below, or click here).

"28th International Joint Conference on Artificial Intelligence IJCAI-19  " 

Sarit Kraus, Biplav Srivastava, Francesca Rossi
When:  Thursday, September 12, 10:30am - US Eastern

Backgrounds:

Sarit Kraus (Ph.D. Computer Science, Hebrew University, 1989) is a Professor of Computer Science at Bar-Ilan University. Her research is focused on intelligent agents and multi-agent systems (including people and robots). For her work she received many prestigious awards. She was awarded the IJCAI Computers and Thought Award, the ACM SIGART Agents Research award, the EMET prize and was twice the winner of the IFAAMAS influential paper award. .....

Francesca Rossi is the IBM AI Ethics Global Leader and a Distinguished Research Staff Member at IBM Research. Her research interests focus on artificial intelligence, including constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making.  She has published over 190 scientific articles in journals and conference proceedings, and as book chapters.  .....

Biplav Srivastava is a Distinguished Data Scientist and Master Inventor at IBM's Chief Analytics Office. With over two decades of research experience in Artificial Intelligence, Services Computing and Sustainability, most of which was at IBM Research, Biplav is also an ACM Distinguished Scientist and Distinguished Speaker, and IEEE Senior Member. ..... "

Task Description:
Trends and Themes at 28th International Joint Conference on Artificial Intelligence IJCAI-19

Date and Time : September 12, 2019 - 10:30am US Eastern

Zoom meeting Link: https://zoom.us/j/7371462221
Zoom Calling: (415) 762-9988 or (646) 568-7788 Meeting id 7371462221
Zoom International Numbers: https://zoom.us/zoomconference
(Check the website in case the date or time changes: http://cognitive-science.info/community/weekly-update/ )   ... "

Tuesday, September 10, 2019

McD's to Buy Voice AI Assistant

Another indication of the increasing use of voice interfaces and AI. In my recent experiments with voice in business, the need for accuracy in translation has become clear.

McDonald's Doubles Down on Tech With Voice AI Acquisition in Wired
The Golden Arches will acquire Apprente, a "sound-to-meaning" voice assistant, to speed up its drive-thru.

When McDonald's spent over $300 million on big data crunching startup Dynamic Yield earlier this year, the move came as something of a surprise. The follow-up should not. Today the Golden Arches are announcing the acquisition of Apprente, a voice AI system focused on fast food ordering. It's a niche, but it just paid off.

Specific terms of the deal have not yet been disclosed. But the synergies are at least more immediately understandable. Apprente's speech-based artificial intelligence deals within the relatively narrow confines of quick-service restaurants. As with Dynamic Yield's decision engine, which switches up menu items based on what it thinks consumers want at any given time and location, Apprente's ultimate goal is to increase the speed of any given transaction. Anyone who's had to repeat their order into a squawking speaker knows that pain.  .... " 

Real World RPA

I think RPA, or other process understanding and improvement methods, should always be considered when you are planning AI.     We at least sketched a flow of what we were working on, and there is so much more you can do today.   You need to understand what you are doing, considering, and risking.  Some good thoughts here.

RPA In The Real World: Driving Marketing, Analytics, Productivity and Security
As we continue to move forward in digital transformation, an increasing number of companies are discovering the promise of robotic process automation (RPA). In a nutshell, RPA allows companies to gain efficiencies and (hopefully) save money by automating routine tasks. RPA is what I’d call the low-hanging fruit of artificial intelligence. It’s governed by structured input. Its processes are mundane and rule-based. It doesn’t require the deep, complex system or infrastructure integration that other more substantial AI requires. Best yet, it frees up your employees to work on higher-value projects, rather than repetitive day-to-day tasks. And for that reason, it’s become a hot commodity. Forrester says RPA will be a $1.5 billion business by 2020. This spending is a boon for vendors like UiPath, Automation Anywhere and Blue Prism that are at the forefront of this product offering.

But rather than more "philosophical" use cases for RPA, let's look at how businesses are using RPA in the real-world right now—and how your organization may benefit as well. .... " 

Third Party Risk Analysis

Recorded Future writes about the topic.  We examined them for things like competitive risk.   But this topic, especially in today's realm of many technology bad actors, large and small, makes the issue of particular importance.   Below is just an intro; much more at the link.

Third-Party Risk Intelligence: Past and Present
SEPTEMBER 10, 2019 • THE RECORDED FUTURE TEAM

After months of searching, budgeting, and vetting, you’ve found the perfect vendor to help take your product offering to the next level. You’re excited to start working together and you’ve initiated the onboarding process. The company has provided the requisite new vendor questionnaires and documentation, and your governance, risk, and compliance (GRC) system has assessed the company for risk and found its current risk score to be acceptable. Everything seems in order.

But what you don’t know is that your soon-to-be partner was the target of a highly stealthy and successful malware attack just nine months ago. They may have taken the appropriate steps to resolve the incident, but wouldn’t you still want to be aware of it?  .... " 

Sainsbury's Experiment with No Checkout

The no-checkout format has been changed to allow a standard payment option.   Not unexpected; consumer behavior needs to adjust over time.   Our local Kroger now has three distinct payment options, all with technical implications and complexity.   Still an experiment.

Sainsbury's reinstalls tills in till-free store  in the BBC
Tills have been reinstalled in an experimental till-less shop opened by supermarket Sainsbury's.  It had been totally refurbished to remove the entire checkout area, freeing up shop assistants to help customers on the shop floor.

Customers had to scan their groceries using Sainsbury's Pay & Go app, paying for them as they went around the shop.  But it resulted in long queues at the helpdesk as people attempted to pay for their groceries in the traditional way.  .... " 

Automating and Optimizing Experiment Data Collection

Collecting data from processes, and using some automatic method for choosing, pre-analyzing, cleansing, visualizing, and tagging it, and associating it with metadata, can be very useful.   Here is a more complex example.

SMART Algorithm Makes Beamline Data Collection Smarter
By Lawrence Berkeley National Laboratory

The "data deluge" in scientific research stems in large part from the growing sophistication of experimental instrumentation and optimizing tools — often using machine- and deep-learning methods — to analyze increasingly large data sets. But what is equally important for improving scientific productivity is the optimization of data collection — aka "data taking" — methods.

Toward this end, Marcus Noack, a postdoctoral scholar at Lawrence Berkeley National Laboratory in the Center for Advanced Mathematics for Energy Research Applications (CAMERA), and James Sethian, director of CAMERA and Professor of Mathematics at UC Berkeley, have been working with beamline scientists at Brookhaven National Laboratory to develop and test SMART (Surrogate Model Autonomous Experiment), a mathematical method that enables autonomous experimental decision making without human interaction. A paper describing SMART and its application in experiments at Brookhaven's National Synchrotron Light Source II (NSLS-II) are described in "A Kriging-Based Approach to Autonomous Experimentation with Applications to X-Ray Scattering," published in Scientific Reports.

"Modern scientific instruments are acquiring data at ever-increasing rates, leading to an exponential increase in the size of data sets," says Noack, lead author on the paper. "Taking full advantage of these acquisition rates requires corresponding advancements in the speed and efficiency not just of data analytics but also experimental control."

The goal of many experiments is to gain knowledge about the material that is studied, and scientists have a well-tested way to do this: they take a sample of the material and measure how it reacts to changes in its environment. User facilities such as Brookhaven's NSLS-II and the Center for Functional Nanomaterials offer access to high-end materials characterization tools. The associated experiments are often lengthy, and complicated procedures and measurement time is precious. A research team might only have a few days to measure their materials, so they need to make the most of each step in each measurement.  .... " 
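SMART itself is Kriging-based, i.e., it builds a Gaussian-process surrogate of the measurement surface. The sketch below illustrates that general idea with scikit-learn rather than the authors' code: fit a surrogate to the measurements taken so far, then choose the next measurement where the surrogate is most uncertain. The objective function and parameter range are invented for illustration.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def measure(x):
    """Stand-in for a slow, noisy instrument measurement at position x."""
    return np.sin(3 * x) + 0.1 * np.random.randn()

candidates = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
X = np.array([[0.5], [3.0], [5.5]])            # a few initial measurements
y = np.array([measure(x[0]) for x in X])

for step in range(10):
    # Fit the Gaussian-process surrogate to everything measured so far.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]        # probe where uncertainty is highest
    X = np.vstack([X, x_next])
    y = np.append(y, measure(x_next[0]))
    print(f"step {step}: measuring at x = {x_next[0]:.2f}")

The payoff is the same one the article describes: the experiment spends its limited beam time where it learns the most, without a human choosing each point.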

Monday, September 09, 2019

Accelerating AI with Open Source, and More

Update on MLIR, which we had looked at.   See who has joined the consortium.  Architecture is always a key element of doing anything well, and to do things efficiently it makes lots of sense to share the work.   I would further add that there should be better shared ways to manage varying data 'infrastructures' by problem domain, in both the semantics of the data and its metadata.   Let's make that happen too.

Chris Lattner, Distinguished Engineer, TensorFlow
Tim Davis,  Product Manager, TensorFlow

Machine learning now runs on everything from cloud infrastructure containing GPUs and TPUs, to mobile phones, to even the smallest hardware like microcontrollers that power smart devices. The combination of advancements in hardware and open-source software frameworks like TensorFlow is making all of the incredible AI applications we’re seeing today possible--whether it’s predicting extreme weather, helping people with speech impairments communicate better, or assisting farmers to detect plant diseases. 

But with all this progress happening so quickly, the industry is struggling to keep up with making different machine learning software frameworks work with a diverse and growing set of hardware. The machine learning ecosystem is dependent on many different technologies with varying levels of complexity that often don't work well together. The burden of managing this complexity falls on researchers, enterprises and developers. By slowing the pace at which new machine learning-driven products can go from research to reality, this complexity ultimately affects our ability to solve challenging, real-world problems. 

Earlier this year we announced MLIR, open source machine learning compiler infrastructure that addresses the complexity caused by growing software and hardware fragmentation and makes it easier to build AI applications. It offers new infrastructure and a design philosophy that enables machine learning models to be consistently represented and executed on any type of hardware. And today we’re announcing that we’re contributing MLIR to the nonprofit LLVM Foundation. This will enable even faster adoption of MLIR by the industry as a whole.   .... " 

Voice Applications, Why and How

Good, non device specific look at voice applications.   And an overview of what people are doing and why and where to start. And really not just about AI,  think assistance in context.

Got speech? These guidelines will help you get started building voice applications
Speech adds another level of complexity to AI applications—today’s voice applications provide a very early glimpse of what is to come.     By Ben Lorica, Yishay Carmiel  in O'Reilly Media .... 

ACM Case Studies

Found this service to be of use again, though there needs to be more topic coverage.

ACM Case Studies

Written by leading domain experts for software engineers, ACM Case Studies provide an in-depth look at how software teams overcome specific challenges by implementing new technologies, adopting new practices, or a combination of both. Often through first-hand accounts, these pieces explore what the challenges were, the tools and techniques that were used to combat them, and the solution that was achieved.   ... " 

ERP and all That

Been looking at how ERP can utilize AI and RPA so this was of interest.  Interesting piece though   over acronymed.

Software Delivery Management: ERP for IT redux? in Forrester
by Charles Betz, Principal Analyst

" .... Software Delivery Management helps organizations streamline CI/CD processes and foster meaningful collaboration across all functions involved in software development and delivery. Its purpose is to increase software delivery velocity, quality, predictability and value which consequently results in improved customer and user satisfaction and better business results.

Seems like a reasonable set of goals. SDM as CloudBees describes it is similar to Value Stream Management (VSM) as my colleagues Chris Condo and Diego Lo Giudice have framed it. However, CloudBees really caught my attention when in (on-the-record) briefings for analysts, they compared their SDM vision to Enterprise Resource Planning (ERP) vendor SAP.

SAP? This might be surprising to many in the DevOps community. However, it makes perfect sense to me.

I’ve long wondered: most of the C-suite is well served by ERP vendors like SAP and Oracle. Why not the CIOs of the world? Why are their tools so fragmented? Why does the cobbler go barefoot? It’s not a new question (I wrote a book about it), nor a new branding idea. Various IT service management (ITSM) vendors have experimented with “ERP for IT” messaging, but didn’t get much traction. Why not? And what might be different with SDM?

I think the biggest problem for the ITSM vendors is that they were starting at the end of the digital value stream, at the phase of operations and support, when the harder and more valuable aspect is upstream, in software development.

Before continuous delivery, upstream was a world of project management where (oftentimes) the build and deploy toolchains were unique to each project. Now, the industry has a solidifying vision for a continuous delivery conceptual architecture that spans projects (which are increasingly turning into steady-state products[i]). Story, commit, build, package, provision, deploy, operate – the deepening, DevOps-driven industry consensus here is is a big step forward, and might well be “what’s different this time around.”   ... "

Predicting Severe Weather

I like this experiment because there is so much data gathered and involved, and many current models, so it should be easy to do good comparisons of methods.  I note that the emphasis may be on short-term forecasting.

Machine learning and its radical application to severe weather prediction  by Eric Verbeten, University of Wisconsin-Madison

In the last decade, artificial intelligence ("AI") applications have exploded across various research sectors, including computer vision, communications and medicine. Now, the rapidly developing technology is making its mark in weather prediction.

The fields of atmospheric science and satellite meteorology are ideally suited for the task, offering a rich training ground capable of feeding an AI system's endless appetite for data. Anthony Wimmers is a scientist with the University of Wisconsin–Madison Cooperative Institute for Meteorological Satellite Studies (CIMSS) who has been working with AI systems for the last three years. His latest research investigates how an AI model can help improve short-term forecasting (or "nowcasting") of hurricanes.

Known as DeepMicroNet, the model uses deep learning, a type of neural network arranged in "deep" interacting layers that finds patterns within a dataset. Wimmers explores how an AI system like DeepMicroNet can supplement and support conventional weather prediction systems.  .... " 
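This is not DeepMicroNet itself, whose architecture is described in Wimmers' paper, but a minimal sketch of the general idea the article points at: a small convolutional network mapping a gridded satellite input to an intensity estimate. The input shape, layer sizes, and regression target below are illustrative assumptions only.

# Minimal sketch of a convolutional regressor over gridded satellite imagery.
# This is NOT DeepMicroNet; shapes, layers, and target are illustrative only.
import tensorflow as tf

def build_nowcast_model(grid_size: int = 64, channels: int = 1) -> tf.keras.Model:
    """A tiny CNN mapping a satellite image patch to a single intensity value."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(grid_size, grid_size, channels)),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),  # e.g., an estimated storm-intensity value
    ])

model = build_nowcast_model()
model.compile(optimizer="adam", loss="mse")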

State of AI in the Enterprise

A useful view from two recent surveys conducted by Deloitte, in 2017 and 2018.

Irving Wladawsky-Berger reports on The State of AI in the Enterprise

A few months ago, Babson College professor Tom Davenport gave a talk on the state of AI in the enterprise at the annual conference of MIT’s Initiative on the Digital Economy.  His talk was based on two recent US surveys conducted by Deloitte, the first one in 2017 followed by a second in 2018.  Davenport was a co-author of both reports.

The 2017 survey was focused on the responses of 250 US executives who were leading the applications of AI in their companies.  The larger 2018 survey reached out to 1,100 IT (46%) and line-of-business (54%) executives from US-based companies (64% at the C-level) and 10 different industries.  All of these respondents were early AI adopters compared with their counterparts in an average company, - 90% were directly involved in their company’s AI projects, and 75% said that they had an excellent understanding of AI.

Davenport started his talk by summarizing the key findings in the Deloitte surveys:

20-30% of enterprises are early adopters, having implemented at least one AI prototype or production application;
Many projects are in pilots but some are already in production;
Relatively simple low hanging fruit projects prevail over more ambitious and complex moon shots;
Only 24% cited “reducing headcount through automation” as one of their top AI priorities;
The great majority of respondents believe that AI leads to moderate or substantial changes in job roles and skills;
Implementation, integration, data issues and talent top the list of challenges faced by early adopters;
Further AI growth is inevitable.

Overall, the 2018 survey found that “Early adopters are ramping up their AI investments, launching more initiatives, and getting positive returns.”  Compared to executives in average companies, early adopters have been implementing key AI technologies at a growing rate, including machine and deep learning, natural language processing and computer vision.  63% of respondents had adopted machine learning, an increase of 5% over the 2017 survey and 50% were using deep learning.  62% had adopted natural language processing, compared to 53% in 2017, while 57% were using computer vision.  .... "

Sunday, September 08, 2019

Complex Teleportation Achieved

Practical details are unclear, but more efficient and universal quantum computers are implied; see the linked, still quite technical, paper.

Complex quantum teleportation achieved for the first time
University of Vienna

Austrian and Chinese scientists have succeeded in teleporting three-dimensional quantum states for the first time. High-dimensional teleportation could play an important role in future quantum computers.

Researchers from the Austrian Academy of Sciences and the University of Vienna have experimentally demonstrated what was previously only a theoretical possibility. Together with quantum physicists from the University of Science and Technology of China, they have succeeded in teleporting complex high-dimensional quantum states. The research teams report this international first in the journal Physical Review Letters.

In their study, the researchers teleported the quantum state of one photon (light particle) to another distant one. Previously, only two-level states ("qubits") had been transmitted, i.e., information with values "0" or "1". However, the scientists succeeded in teleporting a three-level state, a so-called "qutrit". In quantum physics, unlike in classical computer science, "0" and "1" are not an 'either/or' – both simultaneously, or anything in between, is also possible. The Austrian-Chinese team has now demonstrated this in practice with a third possibility "2".  ... " 
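In standard notation, the difference between a qubit and a qutrit is simply the number of basis states available to the superposition, which is why the article highlights the extra level "2":

|\psi\rangle_{\text{qubit}} = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

|\psi\rangle_{\text{qutrit}} = \alpha|0\rangle + \beta|1\rangle + \gamma|2\rangle, \qquad |\alpha|^2 + |\beta|^2 + |\gamma|^2 = 1

Teleporting a qutrit therefore transfers more information per particle than teleporting a qubit.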

Future Farming

Worked in agricultural and forestry areas, so this remains an interest:

Rapid adoption of artificial intelligence in agriculture in FutureFarming

The AI in agriculture market was valued at USD 600 million in 2018 and is expected to reach USD 2.6 billion by 2025.

Agriculture is seeing rapid adoption of Artificial Intelligence (AI) and Machine Learning (ML), both in terms of agricultural products and in-field farming techniques.

According to research company MarketsandMarkets, the AI in agriculture market was valued at USD 600 million in 2018 and is expected to reach USD 2.6 billion by 2025, at a CAGR of 22.5% during the forecast period.

Computing the most disruptive technology
The report states that cognitive computing in particular is all set to become the most disruptive technology in agriculture services as it can understand, learn, and respond to different situations (based on learning) to increase efficiency.

The major factors driving the growth of the AI in agriculture market include:

- the growing demand for agricultural production owing to the increasing population
- rising adoption of information management systems and new advanced technologies for improving crop productivity
- increasing crop productivity by implementing deep learning techniques
- growing initiatives by worldwide governments supporting the adoption of modern agricultural techniques. ....
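A quick sanity check on the quoted figures: from USD 0.6 billion in 2018 to USD 2.6 billion in 2025 is a seven-year horizon, so the implied compound annual growth rate is

\mathrm{CAGR} = \left(\tfrac{V_{2025}}{V_{2018}}\right)^{1/7} - 1 = \left(\tfrac{2.6}{0.6}\right)^{1/7} - 1 \approx 0.23,

or roughly 23% per year, consistent with the reported 22.5% CAGR once rounding of the endpoint figures is allowed for.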

(Update) AI Explainability Toolkit Talk and Technology

From last week's talk on the just-released open source explainability toolkit.   This can be seen as a fundamental part of most conversations.   When we interact with colleagues or with professionals and get recommendations, we often have to ask the question 'Why?'.  This is an attempt at preloading AI-originated answers to that question, based on a number of common templates.

http://cognitive-science.info/wp-content/uploads/2019/09/AIX360-CSIG-V1-2019-09-05.pdf  (Slides)

http://cognitive-science.info/community/weekly-update/  Update: Recording: https://www.youtube.com/watch?v=Yn4yduyoQh4

http://aix360.mybluemix.net/   (Technical link, demos)

What does it take to trust AI decisions ? 
AI is now used in many high-stakes decision making applications.

Addressing:
Is it fair?  Is it easy to understand?  Did anyone tamper with it?  Is it accountable?  

Very good talk, lots of great progress shown here,  but still lots more to do.   Everyone doing serious work with AI systems should examine this work and see how their system could link to this capability.  And extend it.   More to follow.

IBM Research AI Explainability 360 Toolkit

By Vijay Arya, Rachel Bellamy, Pin-Yu Chen,Payel Das, Amit Dhurandhar, MaryJo Fitzgerald,Michael Hind, Samuel Hoffman,Stephanie Houde, Vera Liao, Ronny Luss,Sameep Mehta, Saska Mojsilovic, Sami Mourad,Pablo Pedemonte, John Richards,Prasanna Sattigeri, Moninder Singh,Karthikeyan Shanmugam, Kush Varshney,Dennis Wei, Yunfeng Zhang, Ramya Raghavendra .... 
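The toolkit bundles a range of explanation algorithms. As a much simpler stand-in for one kind of 'why' answer, and not the AIX360 API itself, global feature importance can be estimated with scikit-learn's permutation importance; the dataset and model below are just for illustration.

# A minimal stand-in for one kind of "why" answer: which inputs matter most?
# Uses scikit-learn's permutation importance, NOT the AIX360 toolkit itself.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the features whose shuffling hurts accuracy the most.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")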

Saturday, September 07, 2019

A/B Testing for Startups

A/B testing and the value of experimentation for startups. An intriguing paper.

Experimentation and Startup Performance: Evidence from A/B Testing   by Rembrand Koning, Sharique Hasan, and Aaron Chatterji in HBS Working Knowledge

Is experimentation the right strategy for startups? This analysis of the adoption of A/B testing technology by 35,000 global startups provides evidence that a strategy based on repeated experimentation will improve performance over time. However, the benefits of experimentation vary. Experimentation helps younger startups “fail faster,” while older firms may discover new, high-growth products.

Author Abstract
Recent work argues that experimentation is the appropriate framework for entrepreneurial strategy. We investigate this proposition by exploiting the time-varying adoption of A/B testing technology, which has drastically reduced the cost of experimentally testing business ideas. This paper provides the first evidence of how digital experimentation affects the performance of a large sample of high-technology startups using data that tracks their growth, technology use, and product launches. We find that, despite its prominence in the business press, relatively few firms have adopted A/B testing. 

However, among those that do, we find increased performance on several critical dimensions, including page views and new product features. Furthermore, A/B testing is positively related to tail outcomes, with younger ventures failing faster and older firms being more likely to scale. Firms with experienced managers also derive more benefits from A/B testing. Our results inform the emerging literature on entrepreneurial strategy and how digitization and data-driven decision-making are shaping strategy  .... ' 
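The paper studies who adopts A/B testing and what it does for firm performance rather than the mechanics, but for readers new to the technique the core computation is small. A minimal two-proportion comparison in Python, with purely illustrative conversion numbers:

# Minimal two-proportion z-test for an A/B experiment (illustrative numbers only).
import math
from scipy.stats import norm

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * norm.sf(abs(z))

# Hypothetical: variant B converts 120/1000 vs. A's 100/1000.
print(ab_test(100, 1000, 120, 1000))  # p-value around 0.15, not yet conclusive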

Podcast: Can Cybercriminals be Stopped?

Have been seeing increasingly dangerous threats to our technologies:

Can Cybercriminals Be Stopped?
Cybersecurity expert and journalist Kate Fazzini exposes the true nature of cybercriminals in her new book.

Cybercriminals aren’t all young hackers living in dark basements armed with their laptops and quaffing energy drinks. The new generation of cybercriminals have organizations that function much like startups, with CEOs and recruiters, and customer service agents. In her new book, Kate Fazzini, a cybersecurity professional and CNBC journalist, reveals the true nature of these cybercriminals beyond the headlines. She recently joined the Knowledge@Wharton radio show on SiriusXM, to talk about her book, Kingdom of Lies: Unnerving Adventures in the World of Cybercrime. (Listen to the podcast at the top of this page.)

An edited transcript of the conversation follows.

Knowledge@Wharton: Are top-level executives devoting enough resources to cybersecurity within their own companies?

Kate Fazzini: Except for the really large companies — the Fortune 20, Fortune 30 companies — we’re not even close yet. For most companies, the top cybersecurity official is reporting up through a technology organization that then probably reports up through one or two other people to the highest levels of the organization and the board.

That’s very problematic because the technology executive has a bit of a conflict of interest. They’re the ones who are doing the [software] applications for the company. They are the ones who are making the purchases. They want the budget that they’ve allocated to go through, and they don’t want a security person stopping them from doing what they want to do. For most companies, that’s a very old-fashioned way of doing things, and that cybersecurity person still doesn’t have the visibility at the highest C-level that they need to have.

Knowledge@Wharton: We hear stories about hackers in Russia, China and Eastern Europe. How much of this activity is happening inside the United States?

Fazzini: As much as we like to say that we aren’t able to catch these criminals, we have a much more robust law enforcement capability of catching these criminals. What makes us different in the United States is that the people who are doing cybercrime in this country, especially if it involves hands-on activities like going to an ATM or something like that, they’re much deeper underground than they are overseas.

That’s partially because in a lot of Eastern European nations, law enforcement just looks the other way on a lot of these crimes. In countries like Russia and in some Asian countries — not so much China — they will actually recruit criminals who show that they have a really good way of doing certain cyber activities. … That is not something that we do in the United States at all. You will never see the NSA (National Security Agency) recruiting a significant cybercriminal into their organization.  ....  "  

Friday, September 06, 2019

Hand Scanning Payments at Whole Foods

Another payment simplification idea from Amazon.

Amazon Tests Tool That Scans Your Hand to Let You Pay at Whole Foods 
The Telegraph (U.K.)
By James Cook
September 4, 2019

Amazon is reportedly testing a hand-scanning payments system to enable shoppers to pay for groceries. Shoppers using the Orville system hold their hands before a camera; the system measures the size and shape of the hands to confirm their identity and allow the purchase to proceed. Amazon reportedly plans to implement the system in U.S. Whole Foods stores in the coming months (the retail giant bought the supermarket chain in 2017).  ... " 

A History and Future of Computer Hardware Capabilities

A quite interesting talk I attended that presented how hardware, software methods, and algorithms have influenced changes in architecture and speed.   Somewhat technical, but instructive for anyone with an interest in forecasting the future of solving complex computational problems.

" ... Following his talk, "A New Golden Age for Computer Architecture," David Patterson was kind enough to answer some additional questions we were not able to get to during the live event. You'll find the questions and answers (including some interesting pointers) on our Discourse forum page.  

For those of you who were not able to attend live, this webcast can now be viewed on-demand.
Use the link below to enter the webcast at any time: A New Golden Age for Computer Architecture

View the most recent ACM TechTalk, "A New Golden Age for Computer Architecture," on demand. The talk was presented by David Patterson, Distinguished Engineer at Google, Professor Emeritus of Computer Science at UC Berkeley, and 2018 ACM A.M. Turing Award Laureate.  Cliff Young, Software Engineer at Google Brain, moderated the Q&A. Leave comments, questions, and check out further resources on ACM's Discourse page. ... '

Protecting Sensitive Data

Protection made more efficient by isolating software components from each other.

Efficient Protection of Sensitive Data 
Max Planck Gesellschaft
September 3, 2019

Researchers at the Max Planck Institute for Software Systems in Germany have developed a new technology to isolate software components from each other. ERIM allows sensitive data to be protected from hackers when the data is processed by online services. The method has up to five times less computational overhead than the previous best isolation technology, making it more practical for online services to use. The researchers combined the Memory Protection Keys (MPK) hardware feature with instruction rewriting to create an environment in which an attacker is no longer able to get around the "walls" between software components. Said the Institute’s Peter Druschel, “Software developers are in a permanent race against time and cyber criminals, but data protection still has to be practical. This sometimes calls for systematic but unconventional approaches, like the one we pursued with ERIM.”   .... ' 

Are Personal Data Stores Next?

Was pointed to this article.   Even the term itself was new to me.   A nice exploratory piece below; I agree this has to be kept very simple, with clear goals, benefits, and security. As they say, most everything there is immature, and I would not put my own data in an uncertain space.

Are Personal Data Stores about to become the NEXT BIG THING?  In Medium  Written by Irina and Simon Worthington.

Personal Data Store providers we assessed

We’ve heard about the consequences of mass personal data mining — from manipulating elections to exploiting people’s neuroses. Companies keep basing their business models around tracking their users and selling that data. Data breaches and unsavory uses of all this information clearly infringe personal privacy, but what’s the alternative beyond becoming a cave-dwelling hermit?

One that’s come up time after time is the Personal Data Store (PDS) model. This post looks into some leading solutions and assesses the prospects.

Our hypotheses: 

When we started our research, there were a number of concerns and assumptions we wanted to test.

Our worries were that PDSs:
Don’t have a market and adoption figures are low.
Are too much hard work (either for non-technical users or in terms of time and effort to be one’s own data broker).
Don’t provide any advantage to the user over existing models apart from privacy (a.k.a “what new superpower does it give me?”).
Don’t integrate with existing social platforms where people’s network lives.
Are unrealistic and will fail to deliver major privacy and data control shifts.   ... " 

Ring Doorbell Now Takes Over Property Lighting

Have had the Ring Doorbell and door surveillance working for several years now.   It has continued to add more sophisticated options to capture and manage video and security interactions. They have now added a number of battery-powered, Wi-Fi-connected devices to light up your outdoors at night and add to outdoor security and lighting convenience.  All are controllable via motion sensors and Alexa voice commands. Light a path, an entrance, or a whole property.  Remotely access the lights, as well as real-time and recorded video from the cameras.  Lights can be chained together in groups.

The doorbell and cameras can create 'neighborhood watches' by connecting to other nearby users of the system.   I get several alerts there every week.  And somewhat controversially, local police departments can join these neighborhoods to potentially request copies of captured video.  My suburban police district just announced it has joined, so that must be fairly common.    Video examples of how this has been used in practice are here:  https://tv.ring.com/    Considerable detail about how this works is at the link.

All this is fairly easy to install, especially if you already have assistant infrastructure in place, but you can run into situations where you might need some help.

The new lighting options are described here:  https://shop.ring.com/pages/smart-lighting

Here is a review of the added lighting network capabilities in DigitalTrends.

Build Your Own Voice Assistant

A look at building your own voice assistant, from KDnuggets, instructive about how relatively little it takes to set up the basics using Python.   Of course, setting it into a complete ecosystem requires quite a few additional details.   Below is just the intro; more at the link:

Hone your practical speech recognition application skills with this overview of building a voice assistant using Python.    By Nagesh Chauhan, Big data developer at CirrusLabs

Introduction
Who doesn't want to have the luxury to own an assistant who always listens for your call, anticipates your every need, and takes action when necessary? That luxury is now available thanks to artificial intelligence-based voice assistants.

Voice assistants come in somewhat small packages and can perform a variety of actions after hearing your command. They can turn on lights, answer questions, play music, place online orders and do all kinds of AI-based stuff.   

Voice assistants are not to be confused with virtual assistants, which are people who work remotely and can, therefore, handle all kinds of tasks. Rather, voice assistants are technology based. As voice assistants become more robust, their utility in both the personal and business realms will grow as well.

What is a Voice Assistant?

A voice assistant or intelligent personal assistant is a software agent that can perform tasks or services for an individual based on verbal commands, i.e., by interpreting human speech and responding via synthesized voice. Users can ask their assistants questions, control home automation devices and media playback via voice, and manage other basic tasks such as email, to-do lists, and opening or closing applications with verbal commands.

Let me give you the example of Braina (Brain Artificial) which is an intelligent personal assistant, human language interface, automation and voice recognition software for Windows PC. Braina is a multi-functional AI software that allows you to interact with your computer using voice commands in most of the languages of the world. Braina also allows you to accurately convert speech to text in over 100 different languages of the world.  .... " 
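The full article walks through a more complete assistant; below is a minimal sketch of the listen, recognize, and respond loop it describes, using the widely available speech_recognition and pyttsx3 packages. These package choices and the toy 'time' command are my assumptions, not necessarily the article's exact stack.

# Minimal listen -> recognize -> respond loop for a toy voice assistant.
# Uses the speech_recognition and pyttsx3 packages; the article's own stack may differ.
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
engine = pyttsx3.init()

def speak(text: str) -> None:
    engine.say(text)
    engine.runAndWait()

def listen() -> str:
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        return ""

if __name__ == "__main__":
    speak("How can I help?")
    command = listen()
    if "time" in command:
        import datetime
        speak(datetime.datetime.now().strftime("It is %H:%M"))
    elif command:
        speak(f"You said: {command}")
    else:
        speak("Sorry, I did not catch that.")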

Studies of Management by Algorithm

This is part of what we will deal with as we start to work more closely with robots and algorithms.

What People Hate About Being Managed by Algorithms, According to a Study of Uber Drivers
by Mareike Möhlmann, Ola Henfridsson  in HBR

Companies are increasingly using algorithms to manage their remote workforces. Called “algorithmic management,” this approach has been most widely adopted in gig economy companies. For example, ride-hailing company Uber substantially increases its efficiency by managing some three million workers with an app that instructs drivers which passengers to pick up and which route to take.

Being managed in this way offers some benefit to self-employed workers as well: for example, Uber drivers are free to decide when and for how long they would like to work and which area they would like to serve. However, our research reveals that algorithmic management is also frustrating to workers, and their resentment can lead them to behave subversively with the potential to cause real harm to their companies. Our research also suggests some ways that companies can mitigate these concerns while still taking advantage of the benefits of management by algorithm. .... "

Thursday, September 05, 2019

AI Explainability 360 Toolkit

From today's talk:

http://cognitive-science.info/wp-content/uploads/2019/09/AIX360-CSIG-V1-2019-09-05.pdf (Slides)

http://cognitive-science.info/community/weekly-update/  Update: Recording: https://www.youtube.com/watch?v=Yn4yduyoQh4

http://aix360.mybluemix.net/   (Technical link)

What does it take to trust AI Decisions ? 

AI is now used in many high-stakes decision making applications.

Addressing:
Is it fair?  Is it easy to understand?  Did anyone tamper with it?  Is it accountable?  

Very good talk, lots of great progress shown here,  but still lots more to do.   Everyone doing serious work with AI systems should examine this work and see how their system could link to this capability.  And extend it.   More to follow.

IBM Research AI Explainability 360 Toolkit

By Vijay Arya, Rachel Bellamy, Pin-Yu Chen,Payel Das, Amit Dhurandhar, MaryJo Fitzgerald,Michael Hind, Samuel Hoffman,Stephanie Houde, Vera Liao, Ronny Luss,Sameep Mehta, Saska Mojsilovic, Sami Mourad,Pablo Pedemonte, John Richards,Prasanna Sattigeri, Moninder Singh,Karthikeyan Shanmugam, Kush Varshney,Dennis Wei, Yunfeng Zhang, Ramya Raghavendra

Shipping Containers as IoT

Used to deal with lots of shipping containers holding valuable perishable items, so saw the need for this early on.   Tests were already underway then, and it makes sense that they have progressed.  See also the Maersk work recently mentioned here.

IoT-enabled shipping containers sail the high seas improving global supply chains. in 7wData

Global trade flows through shipping containers. Manufacturers depend on them to get raw materials in time and to ship finished products to market. IoT is being applied to monitor containers and make sure that their contents aren’t damaged or stolen.

Containers have standardized dimensions, which lets transporters easily ship, stack and store them. There are over twenty million containers in motion right now. Containers are pre-filled which reduces the time that trucks need to get loaded. Their standard size allows them to be easily transferred between trucks, planes, ships and trains.

Global supply chains based on containers enable manufacturers to minimize their costs with ‘just-in-time’ inventory. This makes it important to track containers’ location and the condition of their contents.

Containers are made of steel and stacked several deep making communications a challenge. LoRa and WiFi are used for shipboard communication with sensors. Container sensors monitor several parameters:    ..... "
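The article does not give a message format, so the sketch below is purely hypothetical: a Python dataclass for the kind of telemetry record a container sensor might report over LoRa or shipboard Wi-Fi, with field names chosen only for illustration.

# Hypothetical container telemetry record; field names are illustrative only,
# not any carrier's or vendor's actual schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ContainerReading:
    container_id: str
    timestamp: str
    latitude: float
    longitude: float
    temperature_c: float      # cold-chain monitoring for perishables
    humidity_pct: float
    door_open: bool           # tamper/theft indication
    shock_g: float            # rough-handling indication

reading = ContainerReading(
    container_id="MSKU1234567",
    timestamp=datetime.now(timezone.utc).isoformat(),
    latitude=51.95, longitude=4.05,
    temperature_c=4.2, humidity_pct=82.0,
    door_open=False, shock_g=0.3,
)

# Serialize for transmission over whatever uplink (LoRa gateway, shipboard Wi-Fi) is available.
print(json.dumps(asdict(reading)))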

Blockchain Testing to Secure Power Grid

Interesting example of how to use blockchains for securing data.  Here the examples appear to be primarily IoT.

US Energy Department Funds Trial of Factom Blockchain to Secure Power Grid in CoinDesk

Factom, one of the earliest companies to pitch blockchains to enterprises, is participating in a U.S. government-funded trial of the technology to protect the national power grid.

Announced Thursday, TFA Labs, an internet-of-things (IoT) security startup, is experimenting with Factom’s protocol to validate that devices on the grid aren’t infected with malware.

Backed by a nearly $200,000 grant from the U.S. Department of Energy (DOE), the project aims to improve the security of millions of such devices.

In some cases, TFA Labs is looking at storing raw data, such as the health and status of devices, on the Factom blockchain. In other cases, the company wants to assign a digital identity to the firmware, or permanent software installed on devices.  If the files are manipulated, they’ll produce a cryptographic hash that does not match the digital identity, indicating something is amiss.

“We can store raw data or data hashes of the data,” Dennis Bunfield, CEO of TFA Labs, told CoinDesk. “It’s ideal for IoT device use.”  .... ' 
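Setting the blockchain layer aside, the underlying integrity check described here amounts to hashing the firmware and comparing the result against a digest recorded earlier. A minimal Python sketch of that idea follows; it is not Factom's protocol or TFA Labs' implementation.

# Minimal firmware integrity check: hash the file and compare to a recorded digest.
# Illustrates the basic idea only; not Factom's protocol or TFA Labs' system.
import hashlib

def sha256_of_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def firmware_is_intact(path: str, recorded_digest: str) -> bool:
    """True if the firmware's current hash matches the digest recorded earlier
    (e.g., anchored on a blockchain at install time)."""
    return sha256_of_file(path) == recorded_digest

# Example usage with a hypothetical path and recorded digest:
# print(firmware_is_intact("/opt/device/firmware.bin", "ab3f..."))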

McCormick Uses AI to Test Spices

Been impressed with what McCormick has done in emerging tech spaces.

McCormick using AI to test recipes

McCormick has been using artificial intelligence to develop recipes that tap into its 40 years of data, said CEO Lawrence Kurzius. Algorithms choose ingredients based on a flavor profile, sort through 14,000 raw materials and eventually determine a profit margin, he explained. ... 

Hot Stuff: Lawrence Kurzius Spices Up McCormick's Business
 By Chloe Sorvino Forbes Staff

This story appears in the September 30, 2019 issue of Forbes Magazine.  

A hive of food scientists in white lab coats and protective goggles buzz quietly around McCormick & Co. CEO Lawrence Kurzius, filling test tubes and testing the contents with their noses. A garden of herbs grows on the wall behind them, accenting the room with fresh sprouts of mustard seed, amber peas, Brazilian parsley and other spices. The 6-foot-3 Alabama native is in his element, his slow southern drawl slipping through as assistants rattle off the lab’s features: a rotary evaporator that extracts flavor without heat; a centrifuge powerful enough to turn thick, pulpy condiments into totally clear and totally tasty liquids; a bank of eight induction burners.  .... "