
Sunday, June 30, 2019

State of AI Report

Looks to be a good resource,  subscribe.

State of AI Report 2019  By Nathan Benaich and Ian Hogarth
Artificial intelligence (AI) is a multidisciplinary field of science whose goal is to create intelligent machines.

We believe that AI will be a force multiplier on technological progress in our increasingly digital, data-driven world. This is because everything around us today, ranging from culture to consumer products, is a product of intelligence.

In this report, we set out to capture a snapshot of the exponential progress in AI with a focus on developments in the past 12 months. Consider this report as a compilation of the most interesting things we’ve seen that seeks to trigger an informed conversation about the state of AI and its implication for the future. This edition builds on the inaugural State of AI Report 2018, which can be found here.

We consider the following key dimensions in our report:

Research: Technology breakthroughs and their capabilities.
Talent: Supply, demand and concentration of talent working in the field.
Industry: Large platforms, financings and areas of application for AI-driven innovation today and tomorrow.
China: Large platforms, financings and areas of application for AI-driven innovation in China.
Politics: Public opinion of AI, economic implications and the emerging geopolitics of AI.

Read and download the State of AI Report 2019 and 2018 on SlideShare.
Collaboratively produced in East London, UK by:

Nathan Benaich (@nathanbenaich)    Ian Hogarth (@soundboy)

Unifying Logical and Statistical AI With Markov Logic

As AI practitioners in the enterprise we understood this early on. You need the results of statistical analysis AND the ability to link them usefully to logical decision making. Sometimes that is easy, sometimes not. Thus approaches like decision trees built on statistical data became popular for our team. We understood too that Markov methods could provide a framework for this, so we experimented with them. In both cases the results were also relatively transparent. This unification can also outline the way humans will interact with the AI. Research on the idea was going on then and is still going on now. The piece below gives a good update; it starts basic and gets technical.

Unifying Logical and Statistical AI with Markov Logic
By Pedro Domingos, Daniel Lowd 
Communications of the ACM, July 2019, Vol. 62 No. 7, Pages 74-83, DOI: 10.1145/3241978

For many years, the two dominant paradigms in artificial intelligence (AI) have been logical AI and statistical AI. Logical AI uses first-order logic and related representations to capture complex relationships and knowledge about the world. However, logic-based approaches are often too brittle to handle the uncertainty and noise present in many applications. Statistical AI uses probabilistic representations such as probabilistic graphical models to capture uncertainty. However, graphical models only represent distributions over propositional universes and must be customized to handle relational domains. As a result, expressing complex concepts and relationships in graphical models is often difficult and labor-intensive.  .... "   (  Full Technical paper)

Video intro to the concept (technical) is embedded in the original post.

The Alchemy system, mentioned in the above talk:

https://alchemy.cs.washington.edu/
Alchemy: Open Source AI
Welcome to the Alchemy system! Alchemy is a software package providing a series of algorithms for statistical relational learning and probabilistic logic inference, based on the Markov logic representation. Alchemy allows you to easily develop a wide range of AI applications, including: .... " 
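
To make the Markov logic idea concrete, here is a minimal sketch in Python (my own illustration, not code from the paper or from Alchemy): each weighted first-order formula adds its weight for every grounding it satisfies, and the probability of a possible world is proportional to the exponential of that total. The smoking/cancer formulas and the weights are the standard textbook example, chosen here as assumptions.

# Toy Markov logic network over two people, with brute-force inference.
# Hypothetical weighted formulas, in the spirit of the classic example:
#   1.5  Smokes(x) => Cancer(x)
#   1.1  Friends(x, y) => (Smokes(x) <=> Smokes(y))
import itertools
import math

PEOPLE = ["Anna", "Bob"]

def world_weight(world):
    """Sum the weights of all satisfied ground formulas in a world."""
    total = 0.0
    for x in PEOPLE:
        if (not world[("Smokes", x)]) or world[("Cancer", x)]:
            total += 1.5
        for y in PEOPLE:
            if (not world[("Friends", x, y)]) or (
                world[("Smokes", x)] == world[("Smokes", y)]
            ):
                total += 1.1
    return total

atoms = ([("Smokes", p) for p in PEOPLE] + [("Cancer", p) for p in PEOPLE] +
         [("Friends", a, b) for a in PEOPLE for b in PEOPLE])
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]

def prob(query, evidence):
    """P(query | evidence) as a ratio of sums of exponentiated world weights."""
    den = sum(math.exp(world_weight(w)) for w in worlds
              if all(w[a] == v for a, v in evidence.items()))
    num = sum(math.exp(world_weight(w)) for w in worlds
              if all(w[a] == v for a, v in evidence.items()) and w[query])
    return num / den

evidence = {("Smokes", "Anna"): True, ("Friends", "Anna", "Bob"): True}
print(prob(("Cancer", "Anna"), evidence))

Real systems such as Alchemy avoid this exponential enumeration with lifted and approximate inference; the sketch only shows what the weights mean.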

Is an Utterance Relevant to a Conversation?

Everything has a context (and relevant metadata). And in any conversation we need to test for relevancy. You say something and we are quick to say, or think: that is irrelevant. A skill or a machine needs to do the same, at least for efficiency. This piece discusses how it's done in Amazon Alexa skill development. It starts with simple points, and then gets technical.

Learning to Recognize the Irrelevant By Young-Bum Kim  

A central task of natural-language-understanding systems, like the ones that power Alexa, is domain classification, or determining the general subject of a user’s utterances. Voice services must make finer-grained determinations, too, such as the particular actions that a customer wants executed. But domain classification makes those determinations much more efficient, by narrowing the range of possible interpretations.

Sometimes, though, an Alexa customer might say something that doesn’t fit into any domain. It may be an honest request for a service that doesn’t exist yet, or it might be a case of the customer’s thinking out loud: “Oh wait, that’s not what I wanted.”

If a natural-language-understanding (NLU) system tries to assign a domain to an out-of-domain utterance, the result is likely to be a nonsensical response. Worse, if the NLU system is tracking the conversation, so that it can use contextual information to improve performance, the interpolation of an irrelevant domain can disrupt its sequence of inferences. Getting back on track can be both time consuming and, for the user, annoying.

One possible solution is to train a second classifier that sits on top of the domain classifier and just tries to recognize out-of-domain utterances. But this looks like an intrinsically inefficient arrangement. Data features that help a domain classifier recognize utterances that fall within a particular domain are also likely to help an out-of-domain classifier recognize utterances that fall outside it.

In a paper we’re presenting at this year’s Interspeech, my colleague Joo-Kyung Kim and I describe a neural network that we trained simultaneously to recognize in-domain and out-of-domain utterances. By using a training mechanism that iteratively attempts to optimize the trade-off between those two goals, we significantly improve on the performance of a system that features a separately trained domain classifier and out-of-domain classifier.

For purposes of comparison, we set a series of performance targets for out-of-domain (OOD) classification, which both our system and the baseline system had to meet. For each OOD target, we then measured the accuracy of domain classification. On average, our system improved domain classification accuracy by about 6% for a given OOD target.

As inputs to our system, we use both word-level and character-level information. At the word level, we use a standard set of “embeddings,” which represent words as points in a 100-dimensional space, such that words with similar meanings are grouped together. We also feed the words’ constituent characters to a network that, during training, learns its own character-level embeddings, which identify substrings of characters useful for predictive purposes.

The character embeddings for each word in the input pass to a bidirectional long short-term memory (bi-LSTM) network. LSTM networks are common in natural-language processing because they factor in the order in which data are received, which is useful in analyzing both strings of characters and strings of words. Bi-LSTM models consider data sequences both forward and backward.  .... " 
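
To make the architecture easier to picture, here is a rough sketch in PyTorch (my own illustration, not the authors' code; the layer sizes, the loss weighting, and everything except the 100-dimensional word embeddings mentioned above are assumptions). A character bi-LSTM builds a vector for each word, a word-level bi-LSTM encodes the utterance, and the shared representation feeds two heads trained jointly: a domain classifier and an in-domain/out-of-domain classifier.

# Minimal joint domain / out-of-domain classifier sketch (illustrative sizes only).
import torch
import torch.nn as nn

class JointDomainOODModel(nn.Module):
    def __init__(self, vocab_size, char_vocab_size, num_domains,
                 word_dim=100, char_dim=25, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim, padding_idx=0)
        self.char_emb = nn.Embedding(char_vocab_size, char_dim, padding_idx=0)
        # Character-level bi-LSTM: builds one vector per word from its characters.
        self.char_lstm = nn.LSTM(char_dim, char_dim, batch_first=True,
                                 bidirectional=True)
        # Word-level bi-LSTM over [word embedding ; character-derived vector].
        self.word_lstm = nn.LSTM(word_dim + 2 * char_dim, hidden,
                                 batch_first=True, bidirectional=True)
        self.domain_head = nn.Linear(2 * hidden, num_domains)  # which domain
        self.ood_head = nn.Linear(2 * hidden, 2)                # in- vs out-of-domain

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq_len); char_ids: (batch, seq_len, max_word_len)
        b, t, w = char_ids.shape
        chars = self.char_emb(char_ids).view(b * t, w, -1)
        _, (h_c, _) = self.char_lstm(chars)              # final states, both directions
        char_feats = torch.cat([h_c[0], h_c[1]], dim=-1).view(b, t, -1)
        words = torch.cat([self.word_emb(word_ids), char_feats], dim=-1)
        _, (h_w, _) = self.word_lstm(words)
        utterance = torch.cat([h_w[0], h_w[1]], dim=-1)  # shared representation
        return self.domain_head(utterance), self.ood_head(utterance)

# One joint training step on fake data.
model = JointDomainOODModel(vocab_size=5000, char_vocab_size=100, num_domains=20)
word_ids = torch.randint(1, 5000, (4, 12))
char_ids = torch.randint(1, 100, (4, 12, 8))
domain_logits, ood_logits = model(word_ids, char_ids)
loss = (nn.CrossEntropyLoss()(domain_logits, torch.randint(0, 20, (4,)))
        + 0.5 * nn.CrossEntropyLoss()(ood_logits, torch.randint(0, 2, (4,))))
loss.backward()

The trade-off the paper describes would be tuned through the relative weighting of the two loss terms; the 0.5 here is purely a placeholder.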

Challenges for Machine Driven Marketing

Considerable challenges involved.  Like the concept of 'guardrails', though the measurements involved need to be well established.   Return, risk, behavior?  Think of guardrails as being a human management domain.   With creativity in their adjustment.

Discussions in digital: Making machine-driven marketing work

To work effectively with machines, marketers need to set up guardrails, then let the machines crunch the data and the humans focus on creativity and personalization.

Sent from McKinsey Insights, available in the App Store and Play Store.  ... " 

Building a Computer Vision Model

A simplified, straightforward tutorial on building a computer vision model. This is the place where you can get something impressive out of neural nets, and an intro to the general AI method along the way. Of most use, too, are the pointers to existing datasets to get started with. We used ImageNet and WordNet tags, for example.

From KDNuggets:

How can we build a computer vision model using CNNs? What are existing datasets? And what are approaches to train the model? This article provides an answer to these essential questions when trying to understand the most important concepts of computer vision.  

By Javier Couto, Tryolabs.

Computer vision is one of the hottest subfields of machine learning, given its wide variety of applications and tremendous potential. Its goal: to replicate the powerful capacities of human vision. But how is this achieved with algorithms?

Let's have a look at the most important datasets and approaches.

Existing datasets
Computer vision algorithms are no magic. They need data to work, and they can only be as good as the data you feed in. These are different sources to collect the right data, depending on the task:

One of the most voluminous and well-known datasets is ImageNet, a readily-available dataset of 14 million images manually annotated using WordNet concepts. Within the global dataset, 1 million images contain bounding box annotations.  .... "
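
To give a feel for the shape of such a model, here is a minimal convolutional classifier sketch in PyTorch (my own illustration, not from the article; the layer sizes and the ten output classes are assumptions). In practice one would usually fine-tune a network pretrained on ImageNet rather than train something like this from scratch.

# Minimal convolutional image classifier sketch (illustrative sizes only).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),   # RGB in, 32 feature maps out
            nn.ReLU(),
            nn.MaxPool2d(2),                              # halve spatial resolution
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),                      # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
images = torch.randn(8, 3, 224, 224)        # a fake batch of 224x224 RGB images
logits = model(images)                      # shape (8, num_classes)
print(logits.shape)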

Saturday, June 29, 2019

EU Regulates Linguistic Burger Purity

Am always a bemused observer of my homeland. And ultimately it's about language, and how things are expressed. The apparent explosion of 'meatless' burgers is a driver of this, and in the EU, expect specific regulation. Marketers, consider yourselves warned.

Linguistic purity in the EU in the U of Pa Language Log Blog.  I was part of the Language Lab there.   Filed by Mark Liberman under Historical linguistics, Humor

"Europe heroically defends itself against veggie burgers", The Economist 6/29/2019:

The European Union gets a lot of flak. All right, it isn't literally blasted with anti-aircraft fire, but you know what we mean. One ongoing battle (OK, nobody died) involves the use of words. Earlier this year, the European Parliament's agriculture committee voted to prohibit the terms "burger", "sausage", "escalope" and "steak" to describe products that do not contain any meat. It was inspired by the European Court of Justice's decision in 2017 to ban the use of "milk", "butter" and "cream" for non-dairy products. Exceptions were made for "ice cream" and "almond milk", but "soya milk" went down the drain, lest consumers assume it had been extracted from the soya udder of a soya cow. The court has yet to rule on the milk of human kindness. ....  "

Heartbeat Identification

Another example of pattern recognition,  here with lasers at a distance.

The Pentagon can now identify people by measuring their heartbeats in DigitalTrends
pentagon heartbeat identification

As if facial recognition and digital fingerprinting weren’t scary enough, the Pentagon has reportedly developed a method for identifying and tracking people through their heartbeat.

Heartbeats are as unique and distinctive as fingerprints, but are distinct in that they can be read from a distance. And it’s this that the Pentagon is taking advantage of, according to a report in the MIT Technology Review.

Developed for identifying combatants in war zones, the idea is to identify individuals by listening in to their cardiac signatures using an infrared laser. Unlike other identification methods like facial recognition, it’s impossible to disguise a heartbeat in any way. The laser method also works through clothing at a distance of up to 200 meters (219 yards). In the future, this range could be extended to be even longer.

“I don’t want to say you could do it from space,” Steward Remaly of the Pentagon’s Combating Terrorism Technical Support Office told the MIT Technology Review, “but longer ranges should be possible.”  ... " 

Google Lens Redesigned: Plant Identification

I see that Google Lens for iOS has been redesigned and the app updated. Before the update I did considerable experimentation with it as a plant identifier, especially at multiple stages of plant development, and found it was not very useful for that. Looking forward to seeing what improvements have been made. Anyone else out there looking at this function, please connect. Will report back. (Note I posted a number of speculative blogs on Google Lens usage; see the tag below.)

Nearest Neighbor

A method we worked on for useful purposes from very early on. In our applications we never needed optimal, just good, because there was too much else in the context that made the measures inaccurate.

Good Algorithms Make Good Neighbors   By Erica Klarreich 
Communications of the ACM, July 2019, Vol. 62 No. 7, Pages 11-13, DOI: 10.1145/3329712

A host of different tasks—such as identifying the song in a database most similar to your favorite song, or the drug most likely to interact with a given molecule—have the same basic problem at their core: finding the point in a dataset that is closest to a given point. This "nearest neighbor" problem shows up all over the place in machine learning, pattern recognition, and data analysis, as well as many other fields.

Yet the nearest neighbor problem is not really a single problem. Instead, it has as many different manifestations as there are different notions of what it means for data points to be similar. In recent decades, computer scientists have devised efficient nearest neighbor algorithms for a handful of different definitions of similarity: the ordinary Euclidean distance between points, and a few other distance measures.

However, "every time you needed to work with a new space or distance measure, you would kind of have to start from scratch" in designing a nearest neighbor algorithm, said Rasmus Pagh, a computer scientist at the IT University of Copenhagen. "Each space required some kind of craftsmanship."

Because distance measures are so varied, many computer scientists doubted these ad hoc methods would ever give way to a more general approach that could cover many different distance measures at once. Now, however, a team of five computer scientists has proven the doubters—who originally included themselves—were wrong.

In a pair of papers published last year (in the Proceedings of the ACM Symposium on Theory of Computing and the IEEE Annual Symposium on Foundations of Computer Science, respectively), the researchers set forth an efficient approximation algorithm for nearest neighbor search that covers a wide class of distance functions. Their algorithm finds, if not the very closest neighbor, then one that's almost as close, which is good enough for many applications.

The distance functions covered by the new algorithm, called norms, "encompass the majority of interesting distance functions," said Piotr Indyk, a computer scientist at the Massachusetts Institute of Technology.

The new algorithm is a big leap forward, said Pagh, who added, "I wouldn't have guessed such a general result was possible."   ..... " 

(Links to technical issues below)
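
As a toy illustration of the problem itself (my own sketch, not the researchers' algorithm, which is an approximate method far more efficient than brute force), the snippet below finds the nearest neighbor under several different norms and shows that the answer can change with the distance measure.

# Brute-force nearest neighbor under different norms (illustration only).
import numpy as np

def nearest_neighbor(data, query, p=2):
    """Index of the data point closest to `query` under the L_p norm."""
    dists = np.linalg.norm(data - query, ord=p, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 16))      # 1,000 points in 16 dimensions
query = rng.normal(size=16)

for p in (1, 2, np.inf):
    print(f"L_{p} nearest neighbor: index {nearest_neighbor(data, query, p)}")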

Voices in AI Podcast: A Conversation with Norman Sadeh

We had early AI connections with Carnegie Mellon.

Voices in AI – Episode 90: A Conversation with Norman Sadeh
By Byron Reese

Episode 90 of Voices in AI features Byron speaking with Norman Sadeh from Carnegie Mellon University about the nature of intelligence and how AI affects our privacy.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt:

Byron Reese: This is Voices in AI brought to you by GigaOm I’m Byron Reese, today my guest is Norman Sadeh. He is a professor at Carnegie Mellon School of Computer Science. He’s affiliated with Cylab which is well known for their seminal work in AI planning and scheduling, and he is an authority on computer privacy. Welcome to the show.

Carnegie Mellon has this amazing reputation in the AI world. It’s arguably second to none. There are a few university campuses that seem to really… there’s Toronto and MIT, and in Carnegie Mellon’s case, how did AI become such a central focus?

Norman Sadeh: Well, this is one of the birthplaces of AI, and so the people who founded our computer science department included Herbert Simon and Allen Newell who are viewed as two of the four founders of AI. And so they contributed to the early research in that space. They helped frame many of the problems that people are still working on today, and they helped recruit also many more faculty over the years that have contributed to making Carnegie Mellon as the place that many people refer to as being the number one place in AI here in the US.

Not to say that there are not other many good places out there, but CMU is clearly a place where a lot of the leading research has been conducted over the years, whether you are looking at autonomous vehicles – for instance, I remember when I came here to do my PhD back in 1997, there was research going on autonomous vehicles. Obviously the vehicles were a lot clumsier than they are today, not moving quite as fast, but there’s a very, very long history of AI research, here at Carnegie Mellon. The same is true for language technology, the same is true for robotics, you name it. There are lots and lots of people here who are doing truly amazing things. ..... "

Friday, June 28, 2019

Cortana Moving to a Standalone App

It's been losing share for some time, and from my interactions has only been seen as a nuisance within Windows. It was never integrated as an assistant with MS Office functions in Windows, and there is relatively little outside device integration. Will it survive here? More detail at the link.

Microsoft opts to save Cortana by freeing it from Windows
It seems Microsoft is trying a new approach to save Cortana, its not-so-popular voice assistant.

Despite the fact that Microsoft’s flagship voice assistant is built-in to Windows 10, Cortana hasn’t been able to share the same level of success as other voice assistants like Siri, Alexa, and Google Assistant. But that may change with Microsoft’s latest decision. According to The Verge, Microsoft has released a beta version of a standalone Cortana app for Windows 10 PCs via the Microsoft Store ... " 

At Work: Specialists vs Generalists?

Specialists vs Generalists?

I think there is a need for a mix of specialists and generalists. But at least initially more of the specialist activity will be taken over by devices. Alerts, notifications, assistants, smart contracts, sensors ... these will add to the skills of, and the number of jobs taken up by, generalists.  ....

A case from the Navy:

At Work, Expertise Is Falling Out of Favor   In the Atlantic

These days, it seems, just about all organizations are asking their employees to do more with less. Is that actually a good idea? .... 

 " .... Minimal manning—and with it, the replacement of specialized workers with problem-solving generalists—isn’t a particularly nautical concept. Indeed, it will sound familiar to anyone in an organization who’s been asked to “do more with less”—which, these days, seems to be just about everyone. Ten years from now, the Deloitte consultant Erica Volini projects, 70 to 90 percent of workers will be in so-called hybrid jobs or superjobs—that is, positions combining tasks once performed by people in two or more traditional roles. Visit SkyWest Airlines’ careers site, and you’ll see that the company is looking for “cross utilized agents” capable of ticketing, marshaling and servicing aircraft, and handling luggage. At the online shoe company Zappos, which famously did away with job titles a few years back, employees are encouraged to take on multiple roles by joining “circles” that tackle different responsibilities. If you ask Laszlo Bock, Google’s former culture chief and now the head of the HR start-up Humu, what he looks for in a new hire, he’ll tell you “mental agility.” “What companies are looking for,” says Mary Jo King, the president of the National Résumé Writers’ Association, “is someone who can be all, do all, and pivot on a dime to solve any problem.” .... " 

Retail robotics: Platt Retail Tech Bulletin

Several articles on Retail Robotics at the link:

The quarterly Retail Tech Bulletin, published by Platt Retail Institute and the Retail Analytics Council, includes news and case studies regarding retail robotics, artificial intelligence, and related retail technologies. The Bulletin also includes updates on Retail Analytics Council activities, the Retail Robotics Initiative, and the Retail AI Lab at Northwestern University. 

Robot Supply Will Not Keep Pace With Robot Demand “It is the best of times, it is the worst of times” for robotics and retail, to paraphrase Charles Dickens. Right now, robotics is becoming the buzzword in many industries. Only a few years ago, retailers had the pick of the crop of emerging robotics companies. Very few in the retail industry were ready to seize the advantage. We are all familiar with some of those case studies, via early adopters Amazon, Walmart, Kroger, Target, and Lowes, who have all showcased trial deployments of various robots in retail. .... "  

Eye Contact in Video Chat

Interesting; I never found this to be a problem in our own exploration of advanced chat enhancement, but a solution appears to be here. How much can it measurably enhance chat?

Intel researchers develop an eye contact correction system for video chats  by Ingrid Fadelli , Tech Xplore

When participating in a video call or conference, it is often hard to maintain direct eye contact with other participants, as this requires looking into the camera rather than at the screen. Although most people use video calling services on a regular basis, so far, there has been no widespread solution to this problem.

A team of researchers at Intel has recently developed an eye contact correction model that could help to overcome this nuisance by restoring eye contact in live video chats irrespective of where a device's camera and display are situated. Unlike previously proposed approaches, this model automatically centers a person's gaze without the need for inputs specifying the redirection angle or the camera/display/user geometry.

"The main objective of our project is to improve the quality of video conferencing experiences by making it easier to maintain eye contact," Leo Isikdogan, one of the researchers who carried out the study, told TechXplore. "It is hard to maintain eye contact during a video call because it is not natural to look into the camera during a call. People look at the other person's image on their display, or sometimes they even look at their own preview image, but not into the camera. With this new eye contact correction feature, users will be able to have a natural face-to-face conversation."

IBM and Trifacta Collaborate on Data

Yes, it's all about getting the data right.

IBM and Trifacta collaborate on new data prep tool for AI models
in SiliconANGLE by Mike Wheatley 

IBM Corp. is trying to address the cumbersome and time-consuming process of preparing data for use in artificial intelligence and machine learning model training with a new data preparation tool it developed in tandem with Trifacta Inc.

The companies point out that data preparation is an essential step in building machine learning and predictive models. That’s because the data needs to be extremely accurate or else the models will be ineffective, but the problem is that data scientists can spend up to 80% of their time on this task. ... " 

Global Cryptocurrency Regulation Slated to Begin

Had a conversation with a US Inspector General recently, and he expressed how his colleagues were concerned about the use of cryptocurrencies to launder money and for related purposes. I mentioned that there was work underway to regulate them; now, finally, a first step in this direction. What are the implications for connected technologies? See also comments in the current MIT Chain Letter.

All Global Crypto Exchanges Must Now Share Customer Data, FATF Rules   By Anna Baydakova and Nikhilesh De, in CoinDesk

A powerful intergovernmental organization devoted to combating money laundering and terrorism financing has finalized its recommendations on regulating cryptocurrencies for its 37 member countries.

As expected, the Financial Action Task Force (FATF) standards released Friday include a controversial requirement that “virtual asset service providers” (VASPs), including crypto exchanges, pass information about their customers to one another when transferring funds between firms.

The final recommendation makes official the contentious part of FATF’s February proposal, saying countries should make sure that when crypto businesses send money, they:

“… obtain and hold required and accurate originator [sender] information and required beneficiary [recipient] information and submit the information to beneficiary institutions … if any. Further, countries should ensure that beneficiary institutions … obtain and hold required (not necessarily accurate) originator information and required and accurate beneficiary information …”

Under the new guidance, the required information for each transfer includes:

(i) originator’s name (i.e., the sending customer);
(ii) originator’s account number where such an account is used to process the transaction (e.g., the VA wallet);
(iii) originator’s physical (geographical) address, or national identity number, or customer identification number (i.e., not a transaction number) that uniquely identifies the originator to the ordering institution, or date and place of birth;
(iv) beneficiary’s name; and
(v) beneficiary account number where such an account is used to process the transaction (e.g., the VA wallet).

Calling the “threat of criminal and terrorist misuse of virtual assets” a “serious and urgent” issue, FATF said in a public statement that it will give countries 12 months to adopt the guidelines, with a review set for June 2020.  .... "  (more at link)
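
As a purely hypothetical sketch of how an exchange might structure the originator and beneficiary fields listed above (the classes and field names are my own illustration, not part of any FATF specification or existing library):

# Hypothetical data structure for the required transfer information listed above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Originator:
    name: str                               # (i) the sending customer's name
    account_number: Optional[str] = None    # (ii) e.g., the VA wallet used
    physical_address: Optional[str] = None  # (iii) one of: address, national ID,
    national_id: Optional[str] = None       #       customer ID, or date and
    customer_id: Optional[str] = None       #       place of birth
    date_place_of_birth: Optional[str] = None

@dataclass
class Beneficiary:
    name: str                               # (iv) beneficiary's name
    account_number: Optional[str] = None    # (v) e.g., the VA wallet used

@dataclass
class TransferRecord:
    originator: Originator
    beneficiary: Beneficiary

record = TransferRecord(
    originator=Originator(name="Alice Example", account_number="wallet-123",
                          physical_address="1 Main St, Exampleville"),
    beneficiary=Beneficiary(name="Bob Example", account_number="wallet-456"),
)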

Thursday, June 27, 2019

Useful View of the One-Year-Old GDPR

Useful piece about what the authors believe the GDPR regulations have resulted in.

What the Evidence Shows About the Impact of the GDPR After One Year
by Eline Chivot and Daniel Castro in DataInnovation

The General Data Protection Regulation (GDPR), the new privacy law for the European Union (EU), went into effect on May 25, 2018. One year later, there is mounting evidence that the law has not produced its intended outcomes; moreover, the unintended consequences are severe and widespread. This article documents the challenges associated with the GDPR, including the various ways in which the law has impacted businesses, digital innovation, the labor market, and consumers.

Specifically, the evidence shows that the GDPR:

Negatively affects the EU economy and businesses
Drains company resources
Hurts European tech startups
Reduces competition in digital advertising
Is too complicated for businesses to implement
Fails to increase trust among users
Negatively impacts users’ online access
Is too complicated for consumers to understand
Is not consistently implemented across member states
Strains resources of regulators .... ' 

(more detail)

Drumbeat of Digital Development

Considerable paper by McKinsey:

The Drumbeat of Digital: How Winning Teams Play
By Jacques Bughin, Tanguy Catlin, and Laura LaBerge in McKinsey
  
Most executives we know have a powerful, intuitive feel for the rhythm of their businesses. They know how hard and fast to pull strategic levers, move their organization, and drive execution to achieve their objectives. Or at least they did. Digitization has intensified the rhythm of competition in many industries, leaving executives adrift, with information-gathering systems that are too slow or disconnected, direction-setting approaches that are too timid, and talent-management norms that are misaligned and incremental.

These leaders know their companies must adjust and accelerate. Digital is putting pressure on profit pools as it transfers an increasing share of value to consumers. Furthermore, those profit pools are bleeding across traditional industry lines as advanced technologies enable companies to forge into adjacencies, changing who in the value chain is making money, what share of the pie they capture, and how. The slow and inefficient are left behind, competing for scraps.

Faster and harder: Behind the numbers

What is unclear to these executives, however, is how much and how fast to adapt their business rhythms. The exhortation to “change at the speed of digital” generates more anxiety than answers. We have recently completed some research that provides clear guidance: digital leaders appear to keep up a drumbeat in their businesses that can be four times faster, and twice as powerful, as those of their peers.  .... " 

Microsoft Applies AI to Developer Lifecycle

Technical view with useful examples of how this fits together.   A step towards autonomous coding and integration?  Linking to the overall process of development and delivery.

Microsoft wants to apply AI ‘to the entire application developer lifecycle’
By Emil Protalinski, @EPro, in VentureBeat

At its Build 2018 developer conference a year ago, Microsoft previewed Visual Studio IntelliCode, which uses AI to offer intelligent suggestions that improve code quality and productivity. In April, Microsoft launched Visual Studio 2019 for Windows and Mac. At that point, IntelliCode was still an optional extension that Microsoft was openly offering as a preview. But at Build 2019 earlier this month, Microsoft shared that IntelliCode’s capabilities are now generally available for C# and XAML in Visual Studio 2019 and for Java, JavaScript, TypeScript, and Python in Visual Studio Code. Microsoft also now includes IntelliCode by default in Visual Studio 2019. ....

AI Storytelling EBook

DataRobot provides an eBook on AI storytelling. Such an approach is useful not only to the clients, but also to the creators of any AI or analytics project, to make sure the results make sense to all. As the article makes the case, it's also about trust. Will be reading.

Introduction to AI Storytelling: Build Trust Throughout the AI Project
In today’s era of AI and machine-assisted analytics, accurately interpreting and effectively communicating findings is becoming a crucial skill to bridge the growing data literacy gap. To get the most value from AI projects to drive better outcomes, you need to help decision stakeholders understand the process and make sense of results.

Machine learning use cases, metrics, and charts can be difficult to comprehend and explain. Describing the AI problem to solve, machine learning models, and the relationships among variables are often subtle, surprising and complex. Successful analytical communicators don’t wait until the end of an AI project. Instead, they use the entire process to educate stakeholders.

This eBook will introduce you to the art of AI storytelling.

"DataRobot's platform makes my work exciting, my job fun, and the results more accurate and timely -- it's almost like magic!"
Omair Tariq    Data Analyst, Symphony Post Acute Network ..... " 

Priming Learning Networks

I wrote a paper on just this and related concepts, which primed learning networks during the first wave of neural nets and which we tested internally; now reviewing.

Randomly wired neural networks and state-of-the-art accuracy? Yes it works.

How do you design the best Convolutional Neural Network (CNN)?    By George Seif

Although Deep Learning has been around for several years now, that’s still an unanswered question.

Much of the difficulty in designing a good neural net stems from the fact that they’re still black boxes. We have some high-level idea of how they work, but we don’t really know how they achieve the results that they do.   .... "
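
The "randomly wired" idea the headline refers to can be illustrated with a small sketch (my own, not the article's code, and with assumed parameters): sample a random graph, orient its edges into a directed acyclic graph, and treat that wiring as the architecture, with each node standing in for a small convolutional block.

# Sketch: turn a random graph generator into a network wiring diagram.
import networkx as nx

g = nx.erdos_renyi_graph(n=8, p=0.4, seed=42)            # random undirected graph
edges = [(min(u, v), max(u, v)) for u, v in g.edges()]   # orient low -> high: acyclic
inputs = sorted(v for v in g.nodes() if not any(e[1] == v for e in edges))
outputs = sorted(v for v in g.nodes() if not any(e[0] == v for e in edges))

print("wiring:", sorted(edges))      # each edge feeds one node's output into another
print("input nodes:", inputs, "-> output nodes:", outputs)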

Amazon Clones a Neighborhood

Look at how Amazon tests difficult real-life scenarios for delivery. Have a real space you can manipulate, and easily include behaviorally difficult elements, like humans, to understand design issues.

How Amazon Cloned a Neighborhood to Test Its Delivery Robots  By Tom Simonite in Wired

  In March Matt Bratlien saw something odd in the spacious suburb of Silver Firs, north of Seattle. A six-wheeled robot with the Amazon Prime logo on its sky-blue carapace was driving up and down the sidewalks and curbs, watched by a company representative. “I was surprised, excited, and very curious,” says Bratlien, a partner at Net-Tech, an IT services company in nearby Bellevue.

Bratlien had encountered Scout, a delivery robot Amazon is testing in the area, including by ferrying real orders to customers. Here’s what he didn’t see: Countless digital clones crawling through a virtual copy of the neighborhood that Amazon created with scans of the area collected by lasers, cameras, and aircraft.

Amazon knows a lot about the world thanks to data from its vast retail business and cloud computing platform. It knows a 2-square-kilometer zone of Snohomish County in unusual detail—down to the position of weeds sprouting through the drainage grates. The company’s digital copy mirrors the position of curbstones and driveways within centimeters, and textures like the grain of asphalt within millimeters. .... "

Plants as Sensors and Cyborgs

Have been a long-time amateur botanist, so I much like the direction suggested here: designing alerting and circuits inside plants. Could a field of corn alert the farmer when it is ready to harvest, or when it was being attacked by weeds? Trees in my backyard are being attacked by pests; could they alert me to the need for action? Bring on the idea, ready to test.

Plants Are the Oldest Sensors in the World. Could They Be the Future of Computers?
Fast Company
By Katharine Schwab

The Massachusetts Institute of Technology's Harpreet Sareen suggests plants as a new building material for computer circuits. Based on previous work using leaves as motion detectors and signal transmitters, Sareen's Phytoactuators project enables computers to send electronic signals back to plants, transforming them into alert systems. "Our vision is to have this layer of digital interaction within the plants themselves so we can not only sense signals through them, but also connect our digital responses with the plant's responses," said Sareen, who envisions such systems as tools for "soft notifications," like the arrival of package deliveries triggering signals, to instruct houseplants to retract their leaves. The more advanced Planta Digitalis project has embedded a conductive wire into a plant stem, linked to a computer and functioning as a sensor, as a first step toward designing circuits inside plants. ... "

The MIT Media Lab work this is referencing:
Cyborg Botany: Augmented plants as sensors, displays, and actuators   By Harpreet Sareen 

HR Management via AI

This is already happening. We used classic optimization systems for human resource use. What will be interesting is the degree of autonomy such systems will have to assign work, measure results, and plan resource needs and use, and how they will work with human resource management. The IBM example mentioned is interesting in its claim of being able to predict future performance.

A Machine May Not Take Your Job, but Could Become Your Boss 
The New York Times    By Kevin Roose 

Call centers and other workplaces are starting to use artificial intelligence (AI) programs to make workers more effective, by giving them real-time feedback. In modern workplaces, AI programs often see human workers themselves as requiring optimization. For example, Amazon uses algorithms to track worker productivity at its fulfillment centers, and automatically generate paperwork to fire workers who do not meet their targets; meanwhile, IBM has used its Watson AI platform during employee reviews to predict future performance, claiming a 96% accuracy rate. However, critics have accused companies of using algorithms for managerial tasks, arguing automated systems can dehumanize and unfairly punish employees. Workplace AI supporters insist these systems are not meant to be overbearing, but rather to make workers better, by reminding them to thank customers, empathize with frustrated callers, or avoid idling on the job. .... " 

Wednesday, June 26, 2019

Spatial Thought

Reasoning with the body, in context space. Fascinating. How do we use this among machines?

The Geometry of Thought in The Edge    A Conversation with Barbara Tversky

Slowly, the significance of spatial thinking is being recognized, of reasoning with the body acting in space, of reasoning with the world as given, but even more with the things that we create in the world. Babies and other animals have amazing feats of thought, without explicit language. So do we chatterers. Still, spatial thinking is often marginalized, a special interest, like music or smell, not a central one. Yet change seems to be in the zeitgeist, not just in cognitive science, but in philosophy and neuroscience and biology and computer science and mathematics and history and more, boosted by the 2014 Nobel prize awarded to John O’Keefe and Edvard and May-Britt Moser for the remarkable discoveries of place cells, single cells in the hippocampus that code places in the world, and grid cells next door one synapse away in the entorhinal cortex that map the place cells topographically on a neural grid. If it’s in the brain, it must be real. Even more remarkably, it turns out that place cells code events and ideas and that temporal and social and conceptual relations are mapped onto grid cells. Voila: spatial thinking is the foundation of thought. Not the entire edifice, but the foundation.

The mind simplifies and abstracts. We move from place to place along paths just as our thoughts move from idea to idea along relations. We talk about actions on thoughts the way we talk about actions on objects: we place them on the table, turn them upside down, tear them apart, and pull them together. Our gestures convey those actions on thought directly. We build structures to organize ideas in our minds and things in the world, the categories and hierarchies and one-to-one correspondences and symmetries and recursions.  .... ' 

BARBARA TVERSKY is Professor Emerita of Psychology, Stanford University, and Professor of Psychology and Education, Columbia Teachers College. She is the author of Mind in Motion: How Action Shapes Thought. Barbara Tversky's Edge Bio Page ... "

Fiat Cryptography in Chrome

Concept new to me, but interesting for making security more common.

Automated Cryptocode Generator Helps Secure the Web
MIT News
By Rob Matheson

Researchers at the Massachusetts Institute of Technology (MIT) have developed a system that automatically generates cryptography code that is normally written by hand. First deployed early last year, "Fiat Cryptography" is used by Google and other technology companies to automatically generate—and simultaneously verify—optimized cryptographic algorithms for all hardware platforms. During testing, the researchers found the system can generate algorithms that match the performance of the best handwritten code, but much faster. Said MIT researcher Adam Chlipala, "You can automatically explore the space of possible representations of the large numbers, compile each representation to measure the performance, and take whichever one runs fastest for a given scenario." ... '

Tuesday, June 25, 2019

Robotic Process Automation (RPA) Expands Rapidly

Often mentioned here. I am a supporter because it more transparently moves towards real-life process improvement. It also gives you a better understanding of current and potential business processes by modeling them, and leads to places you can hang deeper AI onto. We did many models of this type before generalized software evolved.

Gartner Says Worldwide Robotic Process Automation Software Market Grew 63% in 2018    RPA Software Revenue Forecast to Reach $1.3 Billion in 2019

Robotic process automation (RPA) software revenue grew 63.1% in 2018 to $846 million, making it the fastest-growing segment of the global enterprise software market, according to Gartner, Inc. Gartner expects RPA software revenue to reach $1.3 billion in 2019.

“The RPA market has grown since our last forecast, driven by digital business demands as organizations look for ‘straight-through’ processing,” said Fabrizio Biscotti, research vice president at Gartner. “Competition is intense, with nine of the top 10 vendors changing market share position in 2018.” .... "

See also in SiliconAngle:
Gartner: Red-hot robotic process automation market leads enterprise software growth   By Mike Wheatley .... "

Hangouts and Chats in Google

Quite interesting. Could a company implement a whole set of standards for interaction, recording, goal management, process detail, and after-action reviews?

Google’s Hangouts Chat gets chatbot boost with Dialogflow
Dialogflow should make it easier for developers to create natural language bots for Google’s team collaboration platform.
         
By Matthew Finnegan, Senior Reporter, Computerworld

Google is looking to make it easier to build chatbots for Hangouts Chat, thanks to an integration with its Dialogflow conversational AI platform.

Google launched Hangouts Chat early last year, a chat-based collaboration tool that replaces the Hangouts app for G Suite customers, of which there are now more than 5 million, according to Google’s latest stats.

As announced last week, developers can now build chatbots for Hangouts Chat using Dialogflow – Google’s machine learning-based development suite that enables the creation of natural language processing (NLP) and natural language understanding (NLU) apps that mimic human interactions. .... " 

Great Decision Making

Good piece which gets to the point of decision and process, and also ultimately in AI. The solution of committing to a default decision up front is interesting; I have never seen it implemented. It would seem to depend on the risk of alternate decisions, and on understanding them in alternate contexts. Seeking data in alternate contexts is commendable, but often hard.

The First Thing Great Decision Makers Do
By Cassie Kozyrkov in HBR

As a statistician, I appreciate the quote by applied statistics pioneer W. Edwards Deming, “In God we trust. All others bring data.” But as a social scientist, I’m compelled to warn you that many decision-makers chase data with too much zeal, running from ignorance but never improving their decisions. Is there a way to land in the sweet spot? There is, and it starts with one simple decision-making habit: Commit to your default decision up front.

The key to decision-making is framing the decision context before you seek data — a skill that unfortunately is not usually covered in data science courses. To learn it, you’ll need to look to the social and managerial sciences. It’s unfortunate that we don’t teach it enough where it is most needed: as a skill for leading and managing data science projects. Even in statistics, which is the discipline of making decisions under uncertainty, most of the exercises that students encounter already have the context pre-framed. Your professor usually creates the hypotheses for you and/or frames the question so there’s only one right answer. Wherever there’s a right answer, the decision-maker has already blazed that trail.

Many decision-makers think they’re being data-driven when they look at a number, form an opinion, and execute their decision. Unfortunately, such a decision will be “data-inspired” at best. Data-inspired decision-making is where we swim around in some numbers, eventually reach an emotional tipping point, and then decide. There were numbers near that decision somewhere, but those numbers didn’t drive the decision. The decision came from somewhere else entirely. It was there all along in the unconscious biases of the decision-maker.   .... " 

Debate on Blockchain for the Supply Chain

Interesting view with useful examples.  Critical.  Relates to some of my current work.

Is Blockchain Primed to Transform the Supply Chain? A Nuanced Debate
By John Thielens, SCB Contributor in SupplyChainBrain 

Chances are you’ve heard about blockchain’s potential to upend every industry it touches. A quick Google search will tell you that blockchain is already being heralded as an industry-transforming concept. But the truth is that many organizations are still trying to understand exactly what blockchain can enable. That includes companies that are heavily involved in the supply chain. They’re just beginning to dabble in blockchain in niche areas.

A blockchain is a highly distributed ledger that cannot be altered without all participants in a given network agreeing on the change. While blockchain is powering the rapidly growing set of cryptocurrency technologies and related offerings, its tracking and tracing capabilities are what’s making it increasingly appealing to industries beyond that market.

The state of blockchain in the supply chain today can best be described as experimental. Classic supply-chain B2B processes are designed to synchronize the buyer’s and seller’s understanding of a purchase, including SKUs and quantities, pricing and payment, inventory, transportation and logistics. From that perspective, blockchain is on a sound evolutionary trajectory.  ... "

Store Level Data Confidentiality

Laws and regulations regarding data. Since data is key to analytics, these regulations are as well.

FMI wins Supreme Court ruling on SNAP data disclosure
Decision reinforces confidentiality of store-level information, institute says    By Russell Redman in SupermarketNews

The Food Marketing Institute (FMI) prevailed in a U.S. Supreme Court decision today that involved the confidentiality of store-level sales data for the Supplemental Nutrition Assistance Program (SNAP), or food stamps.

In the case, Food Marketing Institute v. Argus Leader Media, the Supreme Court ruled 6-3 that the U.S. Department of Agriculture wasn’t required to release stores’ SNAP redemption data under the Freedom of Information Act (FOIA). The court determined that the SNAP data was provided to the government “under an assurance of privacy” and that the release of that information also would cause competitive harm to retailers.  

The case arose in 2011, when a South Dakota newspaper, the Argus Leader, filed a FOIA request with the USDA for the names and addresses of all retail stores participating in SNAP and each store’s annual redemption data for the fiscal years 2005 to 2010.  .... "

Futures in Focus Podcast

Was linked to this podcast through its mention of correspondent Michael Schrage of the Media Lab. Just starting to examine; will pass along things of interest.

Futures In Focus Podcast - A Look Inside Our World 10 Years From Now That Only 19% Of Us Think About
Thought Leaders
Michael Gale   WSJ bestseller, A.I. influencer, podcast host about the world in 2030.

Welcome to Forbes Insight’s Futures in Focus podcast, where we explore the world of AI, travel, entertainment, 3D printing, sports experiences into 2030, healthcare developments, the environment, digital transformation, the nature of good, animal cloning, and many other advancements that have the potential to change what life, business, and society could be like in ten years.  We even tap into the mindset of growth hacking to see if major corporations can grab one of the secrets of startups with the CEO of Growth Hacking and best-selling author Sean Ellis and the world of precision medicine with Professor Shai Shen-Orr and David Harel of CytoReason.

These and our many other guests give us their best ideas, years of thinking, and a sense of the possible futures in store for us all of which is extremely exciting.  I ask some tough questions about potential choices into the future that guests like Bruno Sarda, the US CEO of CDP who is charged with measuring carbon footprints around the world, and he gave us some incredibly enlightened answers.  Even the sustainability officer for Campbell's soup talks about sustainable food design. In just about thirty minutes, whether you are on your commute, eating lunch, or some other down moment, Forbes Futures in Focus is designed to give you a peek through the window into the futures we could be part of.  ... "

AI to Drive Re-Skilling Efforts

Makes sense, but the primary challenge will be how these skills are interlinked with each other, and among people and devices, and how much autonomy and leadership is afforded to the AI aspects. Still not that common in practice. Clearly automation has been here for a long time, starting with calculators and computers, then moving on to smartphones and tablets. Creativity will be harder than repetitive tasks; it always has been.

AI will drive reskilling in problem solving, creativity and collaboration

A study from the Economist Intelligence Unit has found that executives do not believe that artificial intelligence will lead to job losses, but staff will need retraining.  By Cliff Saran, Managing Editor, in Computer Weekly
  
Uncertainty over security and data privacy represent workers’ main concerns over automation, a study from the Economist Intelligence Unit (EIU) has reported.

EIU’s Advance of automation report, based on a survey of 502 executives in Canada, France, Germany, India, Japan, Singapore, the UK and the US, found that just 9% of respondents said they were not using any automation.

More than half of the people surveyed (51%) said they made extensive use of automation, while 40% were moderate users of it, mainly for automating highly repetitive back-office functions.

However, the EIU found that more than a quarter of respondents (27%) expect automation to create opportunities for professional growth, and a similar number (26%) said they believe it could free up time for more human interaction. Another 37% said they believe automation would serve to increase employee engagement. ..... " 

Monday, June 24, 2019

Game Builder to Build Your Serious Games?

Took a quick look, and the approach is set up for non-serious games.  But could such a method be used to construct special purpose games that would interact with their context to achieve serious goals?  Always thought you needed much better control of the context and meta-game and goals to make that happen.  Is that here?

Google's Game Builder Turns Building Multiplayer Games into a Game 
in TechCrunch
By Frederic Lardinois

Google's Area 120 team has developed Game Builder, a free tool for PC and macOS users who want to build their own 3D games without having to learn to code. The overall design aesthetic is at least partially inspired by Minecraft, but users are free to create whatever kind of game they want. Game Builder can create first-person shooter games, a platformer, and a demo of the tool's card system for programming more complex interactions. Building a 3D level is like playing a game itself, and users can build multiplayer games and even create games in real time with other users. In addition, players can use JavaScript to go beyond some of the pre-programmed features. Google is also relying on Poly, its library of 3D objects, to give users lots of options for creating and designing different levels. ... '

NASA Aluminum Fraud Causes Sat Failures

Looking at how fraud and scams exist in different parts of contract processes.  Here is one we are examining as a case study.   Any input?

An Oregon aluminum manufacturer has admitted to falsifying critical tests on aluminum sold to NASA over a 19 year period, agreeing to pay a $46 million fine to the Department of Justice.

NASA says the scam was at the heart of two failed missions—2009’s Orbiting Carbon Observatory, which carried equipment designed to take the most precise measurements of atmospheric carbon dioxide to date, and 2011’s Glory, which was also meant to aid in climate research—where the Taurus XL rockets’ protective nose cones failed to separate on command. Both rockets plummeted back to earth. ... " 

EBay Personalizes

We had a short corporate connection and interaction with eBay, and I have been a user since the beginning, so I continue to watch their efforts. I noticed recently a change in their messages and marketing orientation.

Ebay is pushing ahead on making its marketplace more personalized as customers shop not just by selection, but convenience.   By Hilary Milnes In ModernRetail

On Thursday, the company announced it was releasing 10 new features that use artificial intelligence and machine learning to learn and then adapt to customer preferences in search, product suggestions and ads, as well as on the homepage and through customer service.

Ebay’s simultaneously trying to improve the experience for guests — users that haven’t signed up for Ebay accounts, in an effort to attract more customers — and longtime eBay shoppers through the new tools. To target new users, it’s building personalized recommendations into their searches based on past search history and shopping behavior through Facebook and Google login, and tailoring search results for unaccustomed Ebay shoppers by prioritizing items available to purchase now (rather than bid). For frequent shoppers, the platform has rolled out options to get alerts and updates on an item’s availability they’re likely to want to bid on, as well as a “buy again” option ... " 

Alexa Expands the Ability to 'Announce'

At first this seemed like a trivial thing: you can send information to a group of people, or to other internal devices, or to external things. So this is a specific form of communications; I can reply, or wait for more information. A means to effect crowdsourcing. Crowdsourcing about specific needs? We do it in conversation all the time. Something here could be expanded.

Alexa's intercom-like broadcasts come to more non-Echo devices
You could send announcements through your thermostat.

By Jon Fingas, @jonfingas in Engadget

Amazon has slowly been expanding the circle of devices that can use Alexa Announcements, but now it's throwing the gates wide open. The company has made the intercom-like feature available to any device with Alexa support built-in -- you could use your thermostat or fridge to tell the kids that dinner is ready. In theory, you won't have to visit a specific room like you might today.

The feature requires device makers to implement it, and not every product will necessarily qualify. They'll have to support converting MP3 files 45 seconds or longer to an Alexa-ready format. There may still be gaps in Announcements support even if your home is full of Alexa devices. Still, this could make the broadcasting tool far more flexible in the long run. ... " 

Disruptive Flywheels as Business Model

The idea of disruptive flywheels is a new term to me, but worth a look. Reading further.

How to build disruptive strategic flywheels
Gaming, artificial intelligence, and deep learning are paving the way for dynamic and resilient 21st-century business models.  by Sundar Subramanian and Anand Rao  In Strategy-Business

A large auto manufacturer asked a consulting firm to evaluate its competitive position in relation to ride-sharing startups building autonomous vehicles. Instead of viewing this as a classic strategy project, with a business case, PowerPoint decks, and five-year projections, the firm created a “game” that the automaker could “play” against its competitors. An artificial intelligence (AI) system modeled the voluminous individual choices available to customers, companies, and other entities as digital twins (a digital twin is a computerized replica of a physical asset, process, consumer, actor, or other decision-making entity). The hundreds of thousands of simulations suggested many strategic bets, option-value bets, and “no-regret strategies,” or moves that made strategic and financial sense in a multitude of situations. The selection of those strategies, in turn, made the AI system smarter through learning mechanisms called reinforcement learning, which then further empowered humans to make better decisions. As time progressed, the company was able to choose precise market approaches, pricing, advertising, and customer strategies for multiple cities and communities.

Taken together, these actions created a flywheel, a concept borrowed from the power industry to describe a source of stabilization, energy storage, and momentum, and that was popularized in the strategy context by the author Jim Collins. Executives, instead of trusting instincts and prior assumptions, were able to harness the power of this strategic flywheel to verify hypotheses in simulation and in the real world. Doing so exponentially expanded the array of strategic choices and reduced the cost of experimentation. Rather than paralyzing decision makers with the abundance of options they created, the simulations produced clarifying insights. The result for this auto manufacturer has been a multibillion-dollar valuation of its new services, achieved in less than two years.

Games. AI. Continuous execution and adjustment. Thousands of scenarios to consider. This is not how strategy at blue-chip companies has been done in the past. But it is how business leaders are starting to do strategy now, and how we will need to do strategy in the future — that is, if we are to develop strategies that can both withstand and adapt to the increasing pace of change and disruption that is evident in all industries. .... "
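
The loop described above, simulate many strategy "games", score the outcomes, and let a learning agent improve its choices, can be made concrete with a toy sketch. Below is a minimal Python illustration in the spirit of the reinforcement learning the article mentions; the market states, actions, and payoffs are invented for illustration and are not from the article or any real digital-twin system.

# Toy illustration of the "strategic flywheel" loop: simulate, score, learn, repeat.
# All states, actions and payoffs below are invented; this is a contextual-bandit
# style learner, a simplified cousin of the reinforcement learning the article cites.
import random

states = ["low_demand", "high_demand"]               # hypothetical market conditions
actions = ["cut_price", "hold_price", "advertise"]   # hypothetical strategic moves

# Hypothetical expected payoff of each (state, action) pair; the simulation adds noise.
payoff = {
    ("low_demand", "cut_price"): 1.0, ("low_demand", "hold_price"): 0.2,
    ("low_demand", "advertise"): 0.6, ("high_demand", "cut_price"): 0.5,
    ("high_demand", "hold_price"): 1.2, ("high_demand", "advertise"): 0.9,
}

q = {(s, a): 0.0 for s in states for a in actions}   # learned value of each move
alpha, epsilon, episodes = 0.1, 0.2, 5000

for _ in range(episodes):
    s = random.choice(states)                        # a simulated market scenario
    if random.random() < epsilon:                    # explore occasionally
        a = random.choice(actions)
    else:                                            # otherwise exploit the best known move
        a = max(actions, key=lambda act: q[(s, act)])
    reward = payoff[(s, a)] + random.gauss(0, 0.1)   # noisy simulated outcome
    q[(s, a)] += alpha * (reward - q[(s, a)])        # update the estimate

for s in states:
    best = max(actions, key=lambda act: q[(s, act)])
    print(f"In {s}, the learned policy prefers: {best}")

A real strategic digital twin replaces the payoff table with a rich simulation and the tabular learner with far more capable agents, but the flywheel structure, simulate, observe, update, act, is the same.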

Sunday, June 23, 2019

AI Studying Human Learning

I have seen the basic idea proposed before, and it is not dissimilar to neuromarketing, but fMRI is still messy to implement in general.

AI could study your brain to help teachers improve their courses
Machine learning can determine if you understand a concept.

By Jon Fingas, @jonfingas in Engadget

Teachers don't always know how well their methods work. They can ask questions and hand out tests, of course, but it's not always clear who's at fault if the message doesn't get through. AI might do the trick before long, though. Dartmouth College researchers have produced a machine learning algorithm that measures activity across your brain to determine how well you understand a given concept.

The team started out by having rookie and intermediate engineering students both take standard tests as well as answer questions about pictures while sitting in an fMRI scanner. From there, they had the algorithm generate "neural scores" that could predict a student's performance. The more certain parts of the brain lit up, the easier it was to tell whether or not a student grasped the concepts at play.    .... " 
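
Mechanically, this is supervised learning on brain-activity features. The sketch below is a rough, purely illustrative stand-in (synthetic data, invented feature counts, a plain logistic regression), not the Dartmouth team's actual model:

# Rough, illustrative sketch: predict whether a student "grasped the concept" from
# fMRI-derived features. The data here is synthetic and the labels are random, so the
# cross-validated accuracy will hover around chance; the real study's features,
# labels, and model may differ substantially.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_students, n_voxels = 60, 200                 # invented sizes
X = rng.normal(size=(n_students, n_voxels))    # stand-in for voxel activation features
y = rng.integers(0, 2, size=n_students)        # 1 = understood the concept, 0 = did not

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)    # cross-validated accuracy
print("Mean cross-validated accuracy:", scores.mean())

# A continuous "neural score" like the one described could be the model's predicted
# probability of understanding for each student:
model.fit(X, y)
neural_scores = model.predict_proba(X)[:, 1]
print("First few neural scores:", neural_scores[:5])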

Quantum Random Numbers

A long-time interest of mine, especially as it relates to creating realistic process simulations, but now important to do well in many areas. A mostly non-technical article:

How to Turn a Quantum Computer Into the Ultimate Randomness Generator in Quanta Mag  By Anil Ananthaswamy

Pure, verifiable randomness is hard to come by. Two proposals show how to make quantum computers into randomness factories. .... " 
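
The core idea is that measuring a qubit in an equal superposition yields an outcome that is irreducibly random. The snippet below is only a classical simulation of those statistics (sampling under the Born rule), not the certified-randomness protocols the article describes, which need real quantum hardware plus verification:

# Classical simulation of the basic idea: a qubit prepared in an equal superposition
# (via a Hadamard gate) yields 0 or 1 with probability 1/2 when measured.
# This only reproduces the statistics; certified randomness needs real hardware
# plus the verification protocols discussed in the article.
import numpy as np

rng = np.random.default_rng()

def measure_plus_state(n_bits: int) -> np.ndarray:
    """Simulate measuring |+> = (|0> + |1>)/sqrt(2) a total of n_bits times."""
    amplitudes = np.array([1, 1]) / np.sqrt(2)
    probs = np.abs(amplitudes) ** 2            # Born rule: probability = |amplitude|^2
    return rng.choice([0, 1], size=n_bits, p=probs)

bits = measure_plus_state(32)
print("random bits:", "".join(map(str, bits)))
print("as integer :", int("".join(map(str, bits)), 2))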

Build a Skill with Cake Walk

Amazon continues to add ways to ease the creation of Alexa Skills. Here is the latest:

Alexa Blogs

New Alexa Skills Training Course: Build an Engaging Alexa Skill with Cake Walk

Best Practices to Create a Delightful Voice Commerce Experience for Your Customers

Alexa Auto: Finalist for the TU-Automotive Best Auto Mobility Product/Service Award

How to Write Engaging Dialogs for Alexa Skills .... " 
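
For readers who have not built a skill before, the backend of a simple skill is small. The sketch below uses the ASK SDK for Python and handles only the launch request; the wording of the responses is invented, and the Cake Walk course itself walks through a fuller, situational-design version:

# Minimal Alexa skill backend using the ASK SDK for Python (ask-sdk-core).
# Only the launch request is handled here; the Cake Walk course builds this out
# with intents, slots, and memory. The responses below are invented examples.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type

class LaunchRequestHandler(AbstractRequestHandler):
    """Runs when the user says 'Alexa, open <skill name>'."""
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        speech = "Welcome. You can ask me to start the demo."
        return (handler_input.response_builder
                .speak(speech)
                .ask("What would you like to do?")   # keep the session open
                .response)

sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())
lambda_handler = sb.lambda_handler()   # entry point when hosted on AWS Lambda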

Saturday, June 22, 2019

New Book: Artificial Intelligence in Practice

Just starting to read Bernard Marr's newly released Artificial Intelligence in Practice: How 50 Successful Companies Used AI and Machine Learning to Solve Problems, a company-by-company view of how they are using AI today. I will follow with more comments as I see more. Most of the companies mentioned here are of interest to me.

So far nicely done, but with relatively little detail about the use of technology. Useful in its sense of what has been done, and certainly worth examining for why these methods are being used. Not too dissimilar to what we did in the 80s, but here directly using new, very focused techniques developed in the last few decades. No mention of Robotic Process Automation, process analysis, or knowledge management? Nor data in the index? Nor chatbots or conversation management in the index? There is a good breadth of links in the 'notes' sections that point to detailed papers; I will be following up on some of these, especially in some industries. Worth a careful scan.

Artificial Intelligence in Practice: How 50 Successful Companies Used AI and Machine Learning to Solve Problems 1st Edition   by Bernard Marr  (Author), Matt Ward (Contributor)

They write: 

Cyber-solutions to real-world business problems

Artificial Intelligence in Practice is a fascinating look into how companies use AI and machine learning to solve problems. Presenting 50 case studies of actual situations, this book demonstrates practical applications to issues faced by businesses around the globe. The rapidly evolving field of artificial intelligence has expanded beyond research labs and computer science departments and made its way into the mainstream business environment. Artificial intelligence and machine learning are cited as the most important modern business trends to drive success. It is used in areas ranging from banking and finance to social media and marketing. This technology continues to provide innovative solutions to businesses of all sizes, sectors and industries. This engaging and topical book explores a wide range of cases illustrating how businesses use AI to boost performance, drive efficiency, analyse market preferences and many others.

Best-selling author and renowned AI expert Bernard Marr reveals how machine learning technology is transforming the way companies conduct business. This detailed examination provides an overview of each company, describes the specific problem and explains how AI facilitates resolution. Each case study provides a comprehensive overview, including some technical details as well as key learning summaries:

Understand how specific business problems are addressed by innovative machine learning methods.

Explore how current artificial intelligence applications improve performance and increase efficiency in various situations

Expand your knowledge of recent AI advancements in technology
Gain insight on the future of AI and its increasing role in business and industry

Artificial Intelligence in Practice: How 50 Successful Companies Used Artificial Intelligence to Solve Problems is an insightful and informative exploration of the transformative power of technology in 21st century commerce.    ... "  

Serious Games Grad Program

Key areas mentioned are of interest.  Good to follow for new developments.

UC Santa Cruz Launches First Graduate Program in Serious Games
University of California, Santa Cruz
Tim Stephens; James McGirk

The University of California, Santa Cruz (UC Santa Cruz) Baskin School of Engineering is launching the first professional master's degree program in serious games offered in the U.S. Serious games are designed to accomplish a purpose other than pure entertainment, aiming to impact measurable social goals. The serious games program, which will begin accepting students this fall, builds on existing expertise at UC Santa Cruz in assistive technologies, games and playable media, digital art and new media, psychology, and other related disciplines. The new program will train students over five academic quarters in six key areas: game design, game technology, eliciting and integrating subject matter knowledge, designing and conducting efficacy measures, effective teamwork, and career planning.  ... " 

Playing the Odds in Quantum Games?

These scenarios make me worry about quantum computing and other uses of entanglement.   What's a poor dice thrower to do?  How do we understand entanglements?  Are they just another kind of context we need to detect, measure, plan for?   The following is an overview, but technical.

In Quantum Games, There’s No Way to Play the Odds   in Quanta Magazine

These games combine quantum entanglement, infinity and impossible-to-calculate winning probabilities. But if researchers can crack them, they’ll reveal deep mathematical secrets.
 By Kevin Hartnett


Quanta Magazine's Abstractions blog:  https://www.quantamagazine.org/abstractions/

Negatives on Assistants

It has been a fast uptake. Assistants seem to be used by most everyone, in the home and moving to the store, the office, and the car. Yet there are still some real problems. Trust and security issues still linger. Voice interpretation is still not near perfect. Control of the details of an interactive conversation in context is still not done well. Integration of multiple skills to create intelligence is primitive. Integrating the abilities of machines and humans to get things done needs to be improved. When should a machine lead or follow? Can humans take nudging from a machine?

The downfall of the virtual assistant (so far) in ComputerWorld
Services like Google Assistant and Alexa are growing more capable by the minute — but there's one big, fat lingering problem.... " 

RPA Scaling Operations

Thoughts on 'Robotic Process Automation', which reminds me of the considerable effort we put into 'knowledge systems' to improve processes and results in the enterprise. The trouble was, that approach became unwieldy to create and hard to maintain. RPA is a better place to start, especially if you choose the domain, goals, and processes involved carefully.

How is RPA Assisting Businesses in Scaling Operations?   By Mitul Makadia   

It is estimated by McKinsey & Co. that automation systems could well and truly, undertake the work of up to 140 million jobs by 2025.

Everest Group reports that Robotic Process Automation is likely to lead to a cost reduction of close to 65% with its potential to register data at the transactional level, thereby enabling decision-making which is swift, precise and predictive. Organizations that stick to a watertight RPA implementation strategy will soon outpace those who still depend on human capital for all their processes.

RPA in Business

RPA is gaining tons of traction for its promise of improving business efficiency, making employees more productive, and leading to an overall increase in profit. In spite of the benefits of RPA in enterprises for those who were the pioneers in implementing it, there are still some decision makers on the fence whether or not RPA is worth their time and effort.

Robotic process automation is a step by step undertaking that enables companies to automate routine, repetitive tasks and free their employees to focus on more fundamental ones. Besides this, there are numerous benefits to implementing it.

To separate the wheat and the chaff, we have compiled a comprehensive list of advantages businesses may enjoy as a result of using RPA.

How does RPA help enterprises? .... '
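
To make the idea concrete, many early RPA wins are scripts of exactly this shape: read structured records out of one system, apply simple business rules, and push the results into another, escalating exceptions to a person. The sketch below is hypothetical, with an invented file, fields, and endpoint, and is not drawn from the article or any RPA vendor's tooling:

# Hypothetical sketch of a simple "bot": read invoices from a CSV export,
# escalate the ones that break a business rule, and post the rest to a
# downstream system. File name, fields, and endpoint are invented.
import csv
import json
from urllib import request

APPROVAL_LIMIT = 10_000  # invoices above this need a human in the loop

def process_invoices(path, endpoint):
    escalated = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            amount = float(row["amount"])
            if amount > APPROVAL_LIMIT:
                escalated.append(row)            # route to a person, not the bot
                continue
            payload = json.dumps({"invoice_id": row["id"], "amount": amount}).encode()
            req = request.Request(endpoint, data=payload,
                                  headers={"Content-Type": "application/json"})
            request.urlopen(req)                 # hand off to the downstream system
    return escalated

# Example call (hypothetical file and endpoint):
# needs_review = process_invoices("invoices_export.csv", "https://erp.example/api/invoices")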

NVidia Announces AI Edge Platform

Just starting to take a closer look at this:

Nvidia announces its first AI platform for edge devices    By Mike Wheatley

Nvidia Corp. is bringing artificial intelligence to the edge of the network with the launch early Monday of its new Nvidia EGX platform that can perceive, understand and act on data in real time without sending it to the cloud or a data center first.

Delivering AI to edge devices such as smartphones, sensors and factory machines is the next step in the technology’s evolutionary progress. The earliest AI algorithms were so complex that they could be processed only on powerful machines running in cloud data centers, and that means sending lots of information across the network. But this is undesirable because it requires lots of bandwidth and results in higher latencies, which makes “real-time” AI something less than that.  ... "

Friday, June 21, 2019

MIT Tech Review Chain Letter on Facebook and Libra

MIT's blockchain newsletter takes a good look at Facebook and its activity around Libra, its new blockchain-based currency. Also a pointer to Facebook's 29-page paper on the topic. Reading it now. More is sure to follow. Again, I recommend signing up for this newsletter to follow this rapidly evolving area.

MIT Technology Review
Chain Letter 

Blockchains, cryptocurrencies, and why they matter
06.20: Move fast and break things

Welcome to Chain Letter! Great to have you. Today we'll be taking a good long look at Facebook's much-hyped foray into the digital currency realm. 

We have lots of questions about Facebook’s new digital currency. What did Facebook just do? Officially, it launched a test network for its own digital currency, called Libra coin. But nobody—not even Facebook—seems to be sure what that fully means.

In a new white paper (PDF) describing the project, Facebook tells us that the goal is to build a “financial infrastructure that can foster innovation, lower barriers to entry, and improve access to financial services.” Beyond that, however, the situation is quite unsettled.  .... "

Google Streetview AI to Inventory Infrastructure

This came to mind once as a means of checking our business locations for infrastructure regulation and compliance, though it was noted that we would be reliant on Google for updates to the images involved. Still, it is a good approach to build on top of existing data-gathering means.

New AI system manages road infrastructure via Google Street View   by RMIT University

Geospatial scientists have developed a new program to monitor street signs needing replacement or repair by tapping into Google Street View images.

The fully-automated system is trained using AI-powered object detection to identify street signs in the freely available images.

Municipal authorities currently spend large amounts of time and money monitoring and recording the geolocation of traffic infrastructure manually, a task which also exposes workers to unnecessary traffic risks.  .... " 
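
Under the hood this is standard object detection applied to Street View imagery. As a rough sketch, and not the RMIT pipeline, a pre-trained detector from torchvision can be run over a street-level photo; the COCO-trained model happens to include a "stop sign" category, though a production system would be trained on the specific sign classes it must inventory and would geolocate every detection:

# Rough sketch: run a pre-trained object detector over a street-level image.
# This is generic torchvision usage, not the RMIT system; a real sign-inventory
# pipeline would use a model trained on the sign classes of interest and would
# also record the geolocation of each detection.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("street_view_sample.jpg").convert("RGB")  # hypothetical file
tensor = transforms.ToTensor()(image)

with torch.no_grad():
    prediction = model([tensor])[0]   # boxes, labels, scores for one image

for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.8:                   # keep confident detections only
        print(f"label id {int(label)} at {box.tolist()} (score {float(score):.2f})")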

Fraud Detection with AI

And it can even predict the risk of exposure to financial crimes.

How AI Can Help with the Detection of Financial Crimes
Paige Dickie develops artificial intelligence (AI) and digital strategy for Canada’s banking sector at the Vector Institute for Artificial Intelligence in Toronto. She began her career in management consulting — much to the disappointment of her father, an engineer — because she had earned advanced engineering degrees in biomedical and mechanical engineering. Dickie initially worked at McKinsey, the global consulting firm, helping multinational financial institutions across a range of fields from data strategy and digital transformation to setting up innovation centers. She recently joined Vector to lead what she describes as “an exciting project with Canada’s banking industry. It’s an industry-wide, sector-wide, country-wide initiative where we have three different work streams — a consortium work stream, a regulatory work stream, and a research-based work stream.”

Knowledge@Wharton interviewed Dickie at a recent conference on artificial intelligence and machine learning in the financial industry, organized in New York City by the SWIFT Institute in collaboration with Cornell’s SC Johnson College of Business.

According to Dickie, AI can have a significant impact in data-rich domains where prediction and pattern recognition play an important role. For instance, in areas such as risk assessment and fraud detection in the banking sector, AI can identify aberrations by analyzing past behaviors. But, of course, there are also concerns around issues such as fairness, interpretability, security and privacy.

An edited transcript of the conversation follows.  ... " 
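
The "identify aberrations by analyzing past behaviors" idea maps naturally onto unsupervised anomaly detection. A minimal sketch, using synthetic transaction amounts rather than real banking data, might use an isolation forest:

# Minimal anomaly-detection sketch: flag transactions that look unlike past behavior.
# Data is synthetic; real systems combine many features (merchant, geography,
# device, timing) and pair models like this with rules and human review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=50, scale=15, size=(1000, 1))      # typical purchase amounts
suspicious = np.array([[900.0], [1500.0], [5.0]])          # a few unusual amounts
transactions = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

flags = model.predict(suspicious)       # -1 = anomaly, 1 = looks normal
print(dict(zip(suspicious.ravel().tolist(), flags.tolist())))

In practice a model like this is only one signal among many, combined with rules, supervised models trained on confirmed fraud, and human review.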

Robots in a Store Near You

Includes an interesting statement from Wal-Mart, which has now emerged as the leader in brick-and-mortar grocery robotics. Talk and link to transcription:

Groceries And Gadgets: The Robots Coming To A Supermarket Near You    in WBUR via O'Reilly

With Meghna Chakrabarti

Much more at the link, including a positioning statement by Wal-Mart.

There are robots roaming the aisles of Walmart and other grocery stores. Monitoring inventory, cleaning up spills and potentially replacing workers. Automation is coming to a supermarket near you.

Want more from the show? You can get messages from our hosts (and more opportunities to engage with the show) sent directly to your inbox with the On Point newsletter. Subscribe here.

Guests:
Andrew McAfee, co-director of the MIT Initiative on the Digital Economy. Associate director of the Center for Digital Business at the MIT Sloan School of Management. Co-author of "Machine, Platform, Crowd: Harnessing Our Digital Future" and "The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies." (@amcafee)

David Pinn, vice president of strategy for Brain Corp, which creates the software for Walmart’s autonomous floor scrubbers. Walmart is adding the floor scrubbers to 1,500 stores. (@braincor)

Erikka Knuti, communications director for The United Food and Commercial Workers Union, which represents 1.3 million workers in grocery stores, retail and other industries. (@erikkaknuti)
... "

Planning Cities of the Future with AI

I would like to see much more about this. Planning is tough, and rarely done completely. Analytic and AI methods require considerable attention to detail and process, so doing planning well with these methods is probably difficult. Visualization does help.

AI, Robots, Data Software Helping Create New Approach for Planning Cities of the Future 
Purdue University News in ACM
By Chris Adam

Purdue University researchers have developed a unique strategy to plan future cities, by streamlining building information modeling software through new approaches to data. Purdue's Jiansong Zhang said the methodology facilitates full software development based on data from industry foundation classes, "for any task in the life cycle of an [architecture, engineering, and construction] project." Zhang added that the researchers created a visualization program deployed via the new technique. Said Zhang, "The new method can help eliminate missing or inconsistent information during software development." The data encompasses all sectors, functions, and life-cycle stages of software development for construction projects. .... "

Thursday, June 20, 2019

Real time Drone detection of Forest Fires

More extension of sensors, with direct connection to humans. 

Drones for Early Detection of Forest Fires 
Universidad Carlos III de Madrid
June 17, 2019

Researchers at Universidad de Carlos III de Madrid (UC3M) in Spain are collaborating with researchers at Telefonica, Divisek, and Dronitec, on a sustainable innovation pilot project for early detection and prevention of forest fires using drone technology. The researchers developed a complete automatic flight system for a drone, as well as an interface that allows users to access what the drone is seeing in real time. The drone is equipped with a thermal camera, an optical camera, and four sensors that allow users to identify the temperature of the device in the environment. The drone's controllers tell users the internal state of the equipment, and communication towers can detect the origin of a fire within a perimeter of 15 kilometers (about nine miles). Said UC3M professor Fernando Garcia, "It's a totally novel solution, based on robotics and automation, which won't remove anyone's job, but will instead offer a new tool for emergency services, providing added value and allowing them to operate more safely and to control the situation." .... "

Wal-Mart Tests Autonomous Vans for Middle-Mile

Not fully autonomous, and with fixed routes. Will this simplify the approach and make it safe enough to decrease on-road issues in the early days of on-road autonomy?

Will autonomous vans help Walmart win the middle mile logistics race?
by George Anderson in Retailwire  with further expert comment.

It’s pretty common to hear retailers talking about the need to own the last mile, with many taking a variety of approaches to effectively and efficiently handle the transfer of purchased goods to the customer. You can safely count Walmart among that group, but in an interesting twist, the retailer is taking part in a test of autonomous vans to transfer goods from one warehouse to another or to a store or other pickup point. The goal is to reduce costs in the so-called middle miles while moving packages to their ultimate destination.

The robo-vans being used by Walmart, according to Bloomberg’s reporting, follow fixed routes to reduce the risk of accidents and to keep operating in continual service. Human drivers are currently still behind the wheel on many of the test routes, so new processes for loading vehicles or navigation are not yet needed.

Walmart is working with Gatik, a two-year-old startup focused on short-haul logistics for business-to-business operations. Earlier this month, the company announced it had secured $4.5 million in funding and brought Walmart on as a customer.... " 

Modernizing Marketing

And many companies only poorly understand the current processes they use to work with markets and data.

Six governing considerations to modernize marketing  in McKinsey

Legacy structures and operations are keeping companies from taking full advantage of technology. ...  

Most chief marketing officers (CMOs) understand that the utilization of data, analyses, and algorithms to personalize marketing drives value. Concept tests are becoming more efficient, customer approaches are being accelerated, and revenues are quadrupling in certain channels (Exhibit 1). All the evidence suggests that marketing functions should invest in, collect, and analyze available data to support their decision making.  ... " 

Driverless Services in Japan and France

Continuing to gather information on the use and broader implications of driverless vehicles. My experience is mostly in the supply chain space, but the changes this will produce will be very broad. Consider too the required data generation and transmission involved: what else will ride on these new channels?

Waymo is developing driverless services with Renault and Nissan
The services will be designed to transport people and goods in France and Japan.

By Mariella Moon, @mariella_moon in Engadget
.... "

IKEA as a Delivering Food Chain

I never thought of the nearby IKEA as a food chain, although I have always known about the meatballs.

Ikea is now the world’s 6th largest food chain, and it’s testing delivery to your door

Call it GrübHüb: The Swedish giant is reportedly testing delivery of its menu in Paris.  By Mark Wilson in  Fastcompany

The piles of Ikea’s meatballs, cinnamon rolls, and herring that hungry shoppers grab in Ikeas around the globe really add up—so much so that Ikea claims to be the world’s sixth largest food chain. After the Spanish publication El Confidencial reported that Ikea is thinking about expanding its food footprint even further into home deliveries, the company confirmed to Co.Design that it is current testing delivery in Paris.

The trial includes delivery of its Swedish foods—which include salads, salmon, beets, and cabbage—which are distributed out of its two-story, 58,000-square-foot urban store located centrally in the city. If the pilot is successful, Ikea may bring the idea to Spain and other European markets in the future. ... " 

AI and Wireless Spectrum

A broad and technical look at controlling spectrum: the history and future of using the electromagnetic spectrum to communicate, and now to deliver capabilities directly anywhere.

If DARPA Has Its Way, AI Will Rule the Wireless Spectrum
DARPA’s Spectrum Collaboration Challenge demonstrates that autonomous radios can manage spectrum better than humans can  ... "    By Paul Tilghman  in IEEE 

Wednesday, June 19, 2019

Zero Knowledge Proofs

A good explanation of zero-knowledge proofs, starting with simple examples. Examples of usage are at the link, and at the tag below there is also a Python implementation example.

Zero-knowledge proof
From Wikipedia, the free encyclopedia

In cryptography, a zero-knowledge proof or zero-knowledge protocol is a method by which one party (the prover) can prove to another party (the verifier) that they know a value x, without conveying any information apart from the fact that they know the value x. The essence of zero-knowledge proofs is that it is trivial to prove that one possesses knowledge of certain information by simply revealing it; the challenge is to prove such possession without revealing the information itself or any additional information.[1]  .... "
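
For the more technically inclined, the Schnorr identification protocol is a compact interactive example: the prover convinces the verifier that she knows the discrete logarithm x of y = g^x mod p without revealing x. The toy parameters below are deliberately tiny and insecure; they only illustrate the commit-challenge-response structure:

# Toy Schnorr-style zero-knowledge proof of knowledge of a discrete logarithm.
# Parameters are tiny and insecure on purpose; real deployments use large groups.
import random

p, q, g = 23, 11, 4        # g generates a subgroup of prime order q in Z_p*
x = 7                      # prover's secret
y = pow(g, x, p)           # public key: y = g^x mod p

# 1. Commitment: prover picks a random r and sends t = g^r mod p.
r = random.randrange(q)
t = pow(g, r, p)

# 2. Challenge: verifier sends a random challenge c.
c = random.randrange(q)

# 3. Response: prover sends s = r + c*x mod q.
s = (r + c * x) % q

# Verification: g^s must equal t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("Proof accepted: the prover knows x, but x itself was never sent.")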

Towards Engaging Packaging

I recall being pitched a similar idea for product packaging; this seems to take it further. Could deeper information now be communicated this way?

Creating 3-D images, with regular ink
MIT startup Lumii helps manufacturers replicate the visual effects of holograms on their printed materials.

By Zach Winn | MIT News Office 

This month, 5,000 distinctive cans of Fuzzy Logic beer will appear on local shelves as part of Massachusetts-based Portico Brewing’s attempt to stand out in the aesthetically competitive world of craft beer.

The cans feature eye-catching arrays of holographic triangles that appear three dimensional at certain angles. Curious drinkers might twist the cans and guess how Portico achieved the varying, almost shining appearance. Were special lenses or foils used? Are the optical effects the result of an expensive, holographic film?

It turns out it takes two MIT PhDs to fully explain the technology behind the can’s appearance. The design is the result of Portico’s collaboration with Lumii, a startup founded by Tom Baran SM ’07 PhD ’12 and Matt Hirsch SM ’09, PhD ’14.

Lumii uses complex algorithms to precisely place tens of millions of dots of ink on two sides of clear film to create light fields that achieve the same visual effects as special films and lenses. The designs add depth, motion, and chromatic effect to packages, labels, IDs, and more.  ... " 

Smart Contracts Explained

An almost completely non-technical description of blockchains, and in particular smart contracts, their value, and their limitations, using the Ethereum blockchain as an example:
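
As a purely conceptual illustration, in Python rather than Solidity and not tied to any real chain, a smart contract is essentially a small state machine whose rules neither party can override once it is deployed. A toy escrow captures the shape of it:

# Conceptual toy only: a smart contract behaves like a small state machine whose
# rules neither party can override once deployed. Real contracts are written in
# languages like Solidity and executed by every node on the chain.
class EscrowContract:
    def __init__(self, buyer, seller, price):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.balance = 0
        self.state = "AWAITING_PAYMENT"

    def deposit(self, sender, amount):
        assert self.state == "AWAITING_PAYMENT" and sender == self.buyer
        assert amount == self.price, "must deposit the exact price"
        self.balance = amount
        self.state = "AWAITING_DELIVERY"

    def confirm_delivery(self, sender):
        assert self.state == "AWAITING_DELIVERY" and sender == self.buyer
        self.balance, self.state = 0, "COMPLETE"
        return f"released {self.price} to {self.seller}"

contract = EscrowContract(buyer="alice", seller="bob", price=100)
contract.deposit("alice", 100)
print(contract.confirm_delivery("alice"))

The limitations follow from the same picture: the code only sees what is on the chain, so disputes about off-chain facts (did the goods actually arrive?) still need oracles or people.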

Waking up to New Risks

In some ways we saw this coming: risk was increasing, and that risk was coming from within, in things we had specifically built. Our Internet of Things.

Deep Insecurities: The Internet of Things Shifts Technology Risk
By Samuel Greengard
Communications of the ACM, May 2019, Vol. 62 No. 5, Pages 20-22
10.1145/3317675

It is human nature to view technology as a path to a better world. When engineers and designers create devices, machines, and systems, the underlying premise is to deliver benefits. The Internet of Things (IoT) is certainly no exception. Smartphones, connected cars, automated thermostats, smart lighting, connected health trackers, and remote medical devices have made it possible to accomplish things that once seemed impossible. Everything from toothbrushes to tape measures are getting "smart."

However, at the center of the tens of billions of connected devices streaming and sharing data lies a vexing problem: cybersecurity. It is no secret that hackers and attackers have broken into baby monitors, Web cameras, automobiles, lighting systems, and medical devices. In the future, it is not unreasonable to assume that cybercriminals could take control of a private citizen's refrigerator or lighting system and demand a $1,000 ransom in bitcoin in order to restore functionality. It is also not difficult to fathom the threat of a vehicle that won't brake, or a pacemaker that stops working due to a hack. Hackers might also weaponize devices and take down financial systems and power grids.

The thought is chilling, and the repercussions potentially far-reaching. "All these devices, which now have computing functionality, affect the world in a direct physical manner—and that just changes everything," observes Bruce Schneier, an independent computer security analyst and author of Click Here to Kill Everybody: Security and Survival in a Hyper-connected World (W. W. Norton & Company, 2018). "Today, computers can actually kill you."

Adds Stuart Madnick, John Norris Maguire Professor of Information Technologies at the Massachusetts Institute of Technology (MIT) Sloan School of Management, "We are entering a dangerous period. We have to wake up to the risks." .....