
Monday, September 30, 2019

Quantum Computing

A fairly good, non-technical description and use overview of quantum computing.  Likely suitable for answering executive questions.

What is quantum computing? The next era of computational evolution, explained    By Jonathan Terrasi  in Digitaltrends  .....

Robotic Reliability vs Reasoning with Transparency

We came up with some similar conclusions: that observed reliability was typically much more important than people thought, and that it was best to solve that problem before aiming at advanced reasoning in a robot or robotic process.  This further explores the value of transparency in an embedded reasoning process, especially in human-robot teams.  All essential in a future of such cooperation.

When It Comes to Robots, Reliability May Matter More Than Reasoning
U.S. Army Research Laboratory    September 25, 2019

A study by U.S. Army Research Laboratory (ARL) and University of Central Florida found that human confidence in robots decreases after a robot makes a mistake, even when it is transparent with its reasoning process. The researchers explored human-agent teaming to define how the transparency of the agents, such as robots, unmanned vehicles, or software agents, impacts human trust, task performance, workload, and agent perception. Subjects observing a robot making a mistake downgraded its reliability, even when it did not make any subsequent mistakes. Boosting agent transparency improved participants' trust in the robot, but only when the robot was collecting or filtering data. ARL's Julia Wright said, "Understanding how the robot's behavior influences their human teammates is crucial to the development of effective human-robot teams, as well as the design of interfaces and communication methods between team members."

Humor that Works: and a new Book!

My former P&G colleague Andrew Tarvin has been named to the prestigious group:

P&G Alumni Visionaries under 40
Announcing our “Visionaries Under 40” to be recognized at the 2019 Global Conference in Madrid this October.  .... 

Andrew Tarvin
Humor Engineer
Humor That Works    https://www.humorthatworks.com/ 

See his new book: Humor That Works: The Missing Skill for Success and Happiness at Work    

Andrew Tarvin is a speaker and trainer, and has delivered more than 600 programs to over 40,000 people at 250-plus organizations. He was the recipient of the 2019 President’s Award for Distinguished Service from the NSA NYC, and was one of the first thirty improv practitioners to be certified by the Applied Improvisation Network. His TEDx talk has more than 5 million views and his recent book, “The United States of Laughter”, won the 2019 Book Award from the Association for Applied and Therapeutic Humor.  ..." 

Building More General, Trustable AI: Deeper Understanding?

I have just been thinking about the idea of what is called 'deep understanding' here, that is, more generally applicable AI.  I agree that deep learning is impressive, but still very narrow.  I don't agree, though, that deep understanding and more general AI would necessarily make AI safer; it could make it less transparent, more prone to tricks and misuse, and more dangerous.

Book:  Rebooting AI: Building Artificial Intelligence We Can Trust   By Gary Marcus and Ernest Davis  Reading ...

We can’t trust AI systems built on deep learning alone 
Gary Marcus, a leader in the field, discusses how we could achieve general intelligence—and why that might make machines safer.   by Karen Hao  in Technology Review 

Gary Marcus is not impressed by the hype around deep learning. While the NYU professor believes that the technique has played an important role in advancing AI, he also thinks the field’s current overemphasis on it may well lead to its demise.

Marcus, a neuroscientist by training who has spent his career at the forefront of AI research, cites both technical and ethical concerns. From a technical perspective, deep learning may be good at mimicking the perceptual tasks of the human brain, like image or speech recognition. But it falls short on other tasks, like understanding conversations or causal relationships. To create more capable and broadly intelligent machines, often referred to colloquially as artificial general intelligence, deep learning must be combined with other methods. ... "

Teams Fighting Burnout

I have rarely seen this attempted, or done effectively.  Lip service.  It happens in one-on-one cases, but not as a group.  Rewards?  And usually too late. 

Teams Fight Burnout Together  in the HBR
By Tony Schwartz, Rene Polizzi, Kelly Gruber, Emily Pines

Here’s a vexing paradox. On the one hand, companies are offering more wellness and well-being options than ever before, including mindfulness and yoga classes, nap rooms, and fitness facilities. On the other hand, employee burnout has risen to such a level that the World Health Organization now considers it a workplace hazard.

Most corporate well-being offerings are well-intended and potentially valuable. The problem is that without challenging the deeply embedded mindset that more, bigger, faster is always better, these offerings don’t get fully supported, nor are they widely and freely utilized.

Earlier this year, Ernst & Young (EY) and The Energy Project set out to test a hypothesis: If all members of a client-serving team rallied together to build more rest and renewal into their lives, they would feel better and they’d get more work accomplished in less time.   ... "

What do the Next 20 Years Hold for AI?

And the President of AAAI gives a short, nontechnical interview on the future of AI.  Points to a new road map on this topic, reading now.

What do the next 20 years hold for artificial intelligence?  by Caitlin Dawson, University of Southern California

Yolanda Gil, president of the Association for the Advancement of Artificial Intelligence (AAAI), discusses what it will take to move AI forward without moving safety backward.

The year is 2031. An outbreak of a highly contagious mosquito-borne virus in the U.S. has spread quickly to major cities around the world. It's all hands on deck to stop the disease from spreading–and that includes the deployment of artificial intelligence (AI) systems, which scour online news and social media for relevant data and patterns.

Working with these results, and data gathered from numerous hospitals around the world, scientists discover an interesting link to a rare neurological condition and a treatment is developed. Within days, the disease is under control. It's not hard to imagine this scenario—but whether future AI systems will be competent enough to do the job depends in large part on how we tackle AI development today.

That's according to a new 20-year Artificial Intelligence Roadmap co-authored by Yolanda Gil, a USC computer science research professor and research director at the USC Viterbi Information Sciences Institute (ISI), with computer science experts from universities across the U.S.  ... "

AI Improving Biomedical Imaging

It is notable how modern AI is doing best in 'vision' spaces, as opposed to what I would call conversational interaction and process logic.  Not what we would have expected in the earlier applications of AI.  Is this because the training data is more available and concise, or because the underlying deep learning models are closer to the underlying human intelligence?  Or both?

Artificial intelligence improves biomedical imaging in TechXplore
by Fabio Bergamin, ETH Zurich

ETH researchers use artificial intelligence to improve quality of images recorded by a relatively new biomedical imaging method. This paves the way towards more accurate diagnosis and cost-effective devices.

Scientists at ETH Zurich and the University of Zurich have used machine learning methods to improve optoacoustic imaging. This relatively young medical imaging technique can be used for applications such as visualizing blood vessels, studying brain activity, characterizing skin lesions and diagnosing breast cancer. However, quality of the rendered images is very dependent on the number and distribution of sensors used by the device: the more of them, the better the image quality. The new approach developed by the ETH researchers allows for substantial reduction of the number of sensors without giving up on the resulting image quality. This makes it possible to reduce the device cost, increase imaging speed or improve diagnosis. .... "

Rethinking Procurement in Retail

A long-time space of ours; McKinsey sums it up:

Rethinking Procurement in Retail

For retailers, procurement is no longer solely a matter of negotiating “A” brands. Private labels and verticalization are trending. Advanced approaches and tools help get procurement in shape for the future. ... "

Sunday, September 29, 2019

Alexa Still Competing Best in the Home

Agree, I have just been taking it into the car with Echo Auto.  Of course it's still competing against the smartphone screen.  In fact all connections with Echo devices are enabled and configured from a smartphone app.  True, you get higher-quality audio and sometimes video from Echo devices, but not necessarily intelligence.  Siri and Android solutions can still be 'closer' to you because they are embedded in the hardware itself.  And hardware AI is starting to show its value for some kinds of AI problems.  The idea of very good 'ambient computing', listening and engaging and conversing, may also be best closer to the hardware.  Good related thoughts below:

Alexa’s real competition is still your phone screen
Amazon is still better inside the home than outside
By Dieter Bohn in TheVerge

One of the big themes we’ve been tracking for a few years now is Amazon’s various attempts to make Alexa useful outside your home. Amazon has a very good value proposition for customers inside their houses: Echo speakers are great for music, timers and such in the kitchen, and smarthome controls.

Amazon clearly has ambitions to make Alexa the leading platform for ambient computing. But to do that, it needs more ubiquity than it can achieve right now. That’s one reason that Amazon was so excited to announce a partnership with GM to make Alexa available on those cars. It’s also one reason I was surprised to see the company didn’t announce any updates to the Echo Auto.

But the most obvious way to do that is to be the default assistant on phones. Amazon is probably never going to get there, because Apple won’t allow it, for one thing. On Android you can switch out your default assistant from Google to Alexa, but the number of customers who realize that’s possible is small and the number who are likely to do it is even smaller.  ... " 

Detecting Frustration to Enhance Conversation

A better means of giving feedback?  We do this in human conversation: in a two-way or multi-way conversation we stop for questions, notice frowns, gestures, complaints.  A perfect conversation would have each component perfectly understood, absorbed, and then adjusted to, but it only rarely happens that way.  Right now assistants ask you if their answer helped, but rarely then adapt a new answer if not.  That's where some real human intelligence would live.  And taking that further, the questions plus adapted answers could be saved for later use.  The piece below suggests that deep learning can move toward this.

Amazon is Testing a Way to Make the Thing You Hate Most About Alexa Go Away  in Inc.com

Ever found yourself screaming at your smart speaker? That just might work next time.
What if Alexa could tell if you were frustrated and course correct? That's exactly the feature Amazon will start testing, the company just announced at its September Devices Event. Alexa will soon have "frustration detection." It detects when Alexa gets your requests wrong, then tries to get it right. Amazon will start testing the feature with music requests in 2020, then will roll it out to other tasks gradually. 

Just say, "No, Alexa."

The feature will only be turned on for music requests to start. If Alexa plays the wrong song (e.g., what is definitely not beach sounds), you can say, "No, Alexa." She'll apologize and ask you to clarify.

Here's how Amazon described the feature on their blog:

As customers continue to use Alexa more often, they want her to be more conversational and can get frustrated when Alexa gets something wrong. To help with this, we developed a deep learning model to detect when customers are frustrated, not with the world around them, but with Alexa. And when she recognizes you're frustrated with her, Alexa can now try to adjust, just like you or I would do. ... " 
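The repair loop described in the excerpt (detect an explicit rejection, apologize, re-prompt) can be sketched in a few lines of dialog-management logic. This is purely an illustration: Amazon's frustration detector is a learned deep model, not the keyword match used here, and the function and message text are my own inventions.

```python
def handle_followup(utterance, last_result):
    """Hypothetical frustration-repair step: if the user explicitly
    rejects the last result, apologize and ask for clarification.
    A real system would use a trained classifier, not string matching."""
    if utterance.strip().lower().startswith("no, alexa"):
        reply = ("Sorry about that. What would you like instead of "
                 f"{last_result}?")
        return "apologize_and_clarify", reply
    return "continue", None
```

The interesting part, as the post notes, is what happens after the apology: storing the rejected result alongside the clarified request so the next answer improves.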

Cobol Still in Use

I started coding well after COBOL had declined, but key parts of the company were still being run by COBOL code.  It still did many of the logical things required, and because it was compiled, it was very fast for the computers of the day.  If you knew coding you could learn it in a few days.  After that I was involved with the Y2K event, and we had to wade through huge amounts of COBOL and other code to determine if there was code that would fail based on date encoding. 

We found only a few examples of suspect date coding, but found quite a few more examples where there were possible problems not related to Y2K.  So it helped after all, and the remaining code was fixed, recompiled, and allowed to carry on.  I do wonder how different it would have been if all the COBOL code had been done in Python.  Though lots of shared code design might have helped.  The article below also mentions my colleague Grace Hopper at the Pentagon, whom I have talked about here before.
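A typical Y2K repair was "windowing": interpreting a stored two-digit year relative to a pivot value. A minimal sketch of the idea, in Python rather than COBOL; the pivot of 70 is an assumption for illustration, and real fixes varied by application:

```python
def expand_year(yy, pivot=70):
    """Y2K 'windowing' sketch: two-digit years below the pivot are
    taken as 20xx, the rest as 19xx. Illustrative only; production
    fixes were made in the COBOL itself and pivots varied by system."""
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    return 2000 + yy if yy < pivot else 1900 + yy
```

The fragility is obvious from the sketch: the window only buys time, and the choice of pivot encodes an assumption about which century the data belongs to.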

COBOL Turns 60: Why It Will Outlive Us All   By ZDNet

I cut my programming teeth on IBM 360 Assembler. This shouldn't be anyone's first language. In computing's early years, the only languages were machine and assembler. In those days, computing science really was "science." Clearly, there needed to be an easier language for programming those hulking early mainframes. That language, named in September 1959, became Common Business-Oriented Language (COBOL).

The credit for coming up with the basic idea goes not to Grace Hopper, although she contributed to the language and promoted it, but to Mary Hawes. She was a Burroughs Corporation programmer who saw a need for a computer language. In March 1959, Hawes proposed that a new computer language be created. It would have an English-like vocabulary that could be used across different computers to perform basic business tasks.

Hawes talked Hopper and others into creating a vendor-neutral interoperable computer language. Hopper suggested they approach the Department of Defense (DoD) for funding and as a potential customer for the unnamed language. 

Business IT experts agreed, and in May 1959, 41 computer users and manufacturers met at the Pentagon. There, they formed the Short Range Committee of the Conference on Data Systems Languages (CODASYL).

Drawing on earlier business computer languages such as Remington Rand UNIVAC's FLOW-MATIC, which was largely the work of Grace Hopper, and IBM's Commercial Translator, the committee established that COBOL-written programs should resemble ordinary English. ...

Personas in Customer Service

What we did in the '80s ... but we used an existing advertising persona that was already well known, then used an AI-driven chatbot system to drive the interaction.  We built up a customer persona starting from an implied advertising persona.  It can work, if it's done well.  I have mentioned that here a number of times; see details at the tag.

Using personas to drive better customer service    By Paul Selby in CustomerThink

It’s done. Your customer service team has completed a six-month project to develop, test, and deploy state-of-the-art technology that was billed as the answer to drastically reducing call volume while delivering higher-quality service to customers faster. The go-live day is promoted to customers via email and a countdown timer on the main customer service landing page adds to the build-up. Everyone is excited by the profound effect this will have on customer service.

Day one … and there’s no noticeable change. That’s okay, this is going to take some time. A week goes by, then a month. Six months in and there’s some change, but nowhere near expectations. What happened?

It’s easy to jump on a new technology trend in customer service. Competitors could be delivering more advanced forms of customer service. A research company might be offering bold predictions for the future of customer service. And the ever-present pressure of rising customer expectations might play a part. Any one of these influences is enough to embark on building new customer service engagement and solution channels. But if you go to all the work to offer these new options and they aren’t being used despite ample evidence it should succeed, there might be a simple reason for its failure: customer personas weren’t considered.

Defining personas
Product design created the concept of personas in the 1990’s. They act as one or more fictional stand-ins for prospects or customers. Well-crafted personas have a story or “biography” that represents their goals and desires as well as challenges and limitations. These traits help guide the development of products and services that would be interesting and useful to them, ensuring greater success of the product or service.

Personas are constructed by examining the existing customer base for its traits (supplementing that information with research through additional conversations and surveys) and by interviewing prospects when a new product or service for a new target market is being considered. Qualities include behavior patterns, skills, attitudes, and environmental details–profession, hobbies, lifestyle, etc.–that influence their behavior. Fun fictional personal details are often added to add depth and to make them more relatable (such as giving them names and a picture). ... 
Using personas in customer service
Persona usage in customer service doesn’t vary much from use in product design. Capturing some additional details unique to how particular personas might seek customer service are the key consideration.  .... " 

Saturday, September 28, 2019

AI, Creativity and the Slime Mold

Quite an interesting piece out of Engadget, a pointer to a new book I just received but have not read.  We got AI to show us possibilities, and even automatically evaluate them.  But it never came up with a result out of the blue, what we could call creativity.

Hitting the Books: Teaching AI to sing slime mold serenades
Get ready for Mozart on a microchip.
By Andrew Tarantola, @terrortola in Engadget

Book:   The Artist in the Machine: The World of AI-Powered Creativity   by Arthur I. Miller  MIT Press 

Most of the time when we hear about AI, they're taking our jobs or putting us in jail or inflicting some other autonomic horror upon humans. But there's a second side to that AI coin. One in which machine learning algorithms show us the beauty of the natural form, even if it has been procedurally generated.

In this excerpt from The Artist in the Machine, Arthur I. Miller highlights the work of artist Eduardo Miranda. He's melded the minds of a slime mold and a CPU to create, well, music.

Eduardo Miranda and His Improvising Slime Mold   .... " 

Towards an Analytics Academy

We kind of did this during an earlier AI era.  It worked early on, but was not connected well enough to the needs of the company, and expectations of it being democratized did not work because there was not enough support.  And the technologies had not matured enough to make them work for us, so many of the capabilities went back to more tried and true analytics.  Which were still very successful.  While the magic existed in the hype, it did not suffice in practice for continued application.

The analytics academy: Bridging the gap between human and artificial intelligence  By McKinsey

The rise of artificial intelligence (AI) is one of the defining business opportunities for leaders today. Closely associated with it: the challenge of creating an organization that can rise to that opportunity and exploit the potential of AI at scale.

Meeting this challenge requires organizations to prepare their leaders, business staff, analytics teams, and end users to work and think in new ways—not only by helping these cohorts understand how to tap into AI effectively, but also by teaching them to embrace data exploration, agile development, and interdisciplinary teamwork.

Often, companies use an ad hoc approach to their talent-building efforts. They hire new workers equipped with these skills in spurts and rely on online-learning platforms, universities, and executive-level programs to train existing employees.

But these quick-fix tactics aren’t enough to transform an organization into one that’s fully AI-driven and capable of keeping up with the blazing pace of change in both technology and the nature of business competition that we’re experiencing today. While hiring new talent can address immediate resource needs, such as those required to rapidly build out an organization’s AI practice at the start, it sidesteps a critical need for most organizations: broad capability building across all levels. This is best accomplished by training current employees. Educational offerings from external parties have limitations, too: they aren’t designed to deliver the holistic, company-specific training or the cohesive, repeatable protocols essential for driving deep and lasting cultural changes, agile and cross-functional collaboration, and rapid scaling.   .... "

Enhanced Packaging via Google Lens

Enhancing packaging with scans?  I thought this had been done already with QR codes; I still see them on packages.  Google Lens can be found on iOS and Android.  You point it at a package, I assume anywhere on the package, and you will be linked to the brand's information.  We tested related watermarking ideas.  This also assumes you know about the linkage, probably from an ad.  But you could have done the same with a QR code.  The difference appears to be that you don't have to find a QR code to scan.

Uncle Ben's Adds Google Lens to Packaging      By Jacqueline Barba - 09/18/2019

Mars Inc.’s Uncle Ben’s has teamed with connected food platform Innit to launch an artificial intelligence-driven initiative that leverages the Google Lens visual search technology to connect digital information to physical products and unlock meal solutions.

The partnership makes Uncle Ben’s the first food brand to adopt Google Lens, an image recognition technology designed to let users access relevant information related to objects the device identifies using visual analysis and "search what they see," according to a media release from Innit.

In this case, Uncle Ben’s is using the technology to help consumers “cut through the clutter” of meal planning by instantly delivering recommendations, information and inspiration about physical products in stores and homes.

To activate, users open either the Google mobile app on iOS devices or the Google Lens app on Android devices, and point their device at Uncle Ben’s Ready Rice packages and retail displays. Innit will then instantly suggest meals that can be built around the product, along with ingredient lists, nutrition advice and step-by-step cooking videos.  .... "

Friday, September 27, 2019

Measures for AI

Essential to get these straight,  sometimes quite simple, often not.   How do they link to goals?

The problem with metrics is a big problem for AI in Fast.AI

Written: 24 Sep 2019 by Rachel Thomas

Goodhart’s Law states that “When a measure becomes a target, it ceases to be a good measure.” At their heart, what most current AI approaches do is to optimize metrics. The practice of optimizing metrics is not new nor unique to AI, yet AI can be particularly efficient (even too efficient!) at doing so.

This is important to understand, because any risks of optimizing metrics are heightened by AI. While metrics can be useful in their proper place, there are harms when they are unthinkingly applied. Some of the scariest instances of algorithms run amok (such as Google’s algorithm contributing to radicalizing people into white supremacy, teachers being fired by an algorithm, or essay grading software that rewards sophisticated garbage) all result from over-emphasizing metrics. We have to understand this dynamic in order to understand the urgent risks we are facing due to misuse of AI. ... "
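Goodhart's dynamic is easy to reproduce in miniature: rank candidates by a proxy metric and the optimizer happily selects content the true objective would reject. A toy sketch with made-up numbers (the items and scores are hypothetical, not from the article):

```python
# Hypothetical recommender candidates: (name, true_value, proxy_clicks)
items = [
    ("in-depth report", 0.9, 10),
    ("clickbait", 0.1, 100),
    ("useful how-to", 0.7, 30),
]

def best_by(column):
    """Return the item that maximizes the chosen score column."""
    return max(items, key=lambda item: item[column])

# Optimizing the click proxy (column 2) selects the lowest-value item;
# the true objective (column 1) would pick the report instead.
```

The gap between the two rankings is the whole point: the more efficiently a system optimizes the proxy, the further it can drift from the goal the proxy was meant to stand in for.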

Quantum Supremacy Examined

In a recent conversation the term 'quantum supremacy' came up and I mentioned it here, and the concept was also implied in some writing today.  Google has made some claims.  Note this deals with a rough measure of the value of quantum computing, although still hypothetical: it means they can see a way that this method could always be better than classical computing, not that they have done it.  It occurred to me that I had not researched the precise definition of the term, and here is what I found:


Quantum supremacy

John Preskill has introduced the term quantum supremacy to refer to the hypothetical speedup advantage that a quantum computer would have over a classical computer in a certain field.[28] Google announced in 2017 that it expected to achieve quantum supremacy by the end of the year though that did not happen. IBM said in 2018 that the best classical computers will be beaten on some practical task within about five years and views the quantum supremacy test only as a potential future benchmark.[29] Although skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved, [30][31] Google has been reported to have done so, with calculations more than 3,000,000 times as fast as those of Summit, generally considered the world's fastest computer. [32] Bill Unruh doubted the practicality of quantum computers in a paper published back in 1994.[33] Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle.[34]  .... "

Amazon Wants to Take Facial Recognition Regulation Lead

A number of experts from the retail sector have comments in the full article:

Amazon wants to take the lead on regulating facial recognition tech   by Tom Ryan in Retailwire

In unveiling its new hardware devices in Seattle on Wednesday, Amazon.com CEO Jeff Bezos made a surprise appearance and told reporters the company is working on facial recognition legislation that it plans to propose to lawmakers.

“Our public policy team is actually working on facial recognition regulations, and it makes a lot of sense to regulate that,” Mr. Bezos said. “It’s a perfect example of something that has really positive uses, so you don’t want to put the brakes on it. But at the same time, there’s also potential for abuses of that kind of technology, so you do want regulations.”

Amazon’s legislative ideas haven’t been revealed. In February, Amazon in a blog entry offered some suggestions including ensuring facial recognition tools comply with laws, maintaining human reviews to double-check findings and requiring signage where the technology is used. .... " 

Kroger Unveils Cinci Food Hall

I look forward to seeing this in person, and am particularly interested in seeing their multiple food court idea.  I have in the past seen some of their test restaurants in town, and found them good but not good enough.  My advice is to get this right up front: excite people enough to come in and make repeat visits.  Food first, and then translate it to good, diverse food at home, on the run, in the office ...

Kroger Unveils Cincinnati Food Hall in WinSightGrocery
Two-level downtown store features local restaurant outpost and expands 1883  ... "

In a move Kroger officials describe as a physical expression of its “food-first” culture, the retailer this morning will unveil a new store in downtown Cincinnati that features the company’s first food hall, among other new attractions.

The 52,000-square-foot unit in Cincinnati’s central business district is a block away from Kroger’s headquarters and in reach of the Over-the-Rhine neighborhood. It anchors an 18-story residential tower.

The food hall, to be known as the On the Rhine Eatery, features Cincinnati restaurant brands, including Django Western Taco, Dope Asian Street Fare, Eli’s BBQ and Queen City Whip, as well as Kitchen 1883 Cafe and Bar, Kroger’s American food restaurant concept. The opening comes as Kroger attempts to align itself more authentically with food culture and improve customer experience in stores as part of its Restock initiative. It follows the opening of the 1883 concept, the establishment of a Culinary Innovation Center and an expansion of elevated own-brand items under its Simple Trust, Private Selection and HemisFares labels.  .... "

Supporting Multiple Device Assistance

I have devices from at least 12 suppliers in my smart home lab, and the need is growing.  That these could easily work together and share information would be very useful.

Multi-Device Digital Assistance
By Ryen W. White, Adam Fourney, Allen Herring, Paul N. Bennett, Nirupama Chandrasekaran, Robert Sim, Elnaz Nouri, Mark J. EncarnaciĆ³n   ( all authors are from Microsoft )

Communications of the ACM, October 2019, Vol. 62 No. 10, Pages 28-31   10.1145/3357159

The use of multiple digital devices to support people's daily activities has long been discussed [11]. The majority of U.S. residents own multiple electronic devices, such as smartphones, smart wearable devices, tablets, and desktop or laptop computers. Multi-device experiences (MDXs) spanning multiple devices simultaneously are viable for many individuals. Each device has unique strengths in aspects such as display, compute, portability, sensing, communications, and input. Despite the potential to utilize the portfolio of devices at their disposal, people typically use just one device per task; meaning they may need to make compromises in the tasks they attempt or may underperform at the task at hand. It also means the support that digital assistants such as Amazon Alexa, Google Assistant, or Microsoft Cortana can offer is limited to what is possible on the current device. The rise of cloud services, coupled with increased ownership of multiple devices, creates opportunities for digital assistants to provide improved task completion guidance.  ... "

Alexa Presentation Language Released

It has been around now for a year, and is worth looking at for how capabilities that link voice, text, and animation can be constructed.  We still need more ways to sweeten the actual intelligence provided.  The approach Amazon is using merits examination.

Alexa Presentation Language Now Generally Available: Build Multimodal Experiences that Come Alive with Animation
Arunjeet Singh

Today we are excited to announce that Alexa Presentation Language (APL) is generally available. APL enables you to easily create visually rich Alexa skills for devices with screens and to adapt them for different device types such as the Echo Show, Echo Spot, Fire TV, LG TVs, and the Lenovo Smart Tab. Adding visuals and touch can enhance voice experiences and make skills more engaging and interactive for customers. With the general availability of APL, we addressed key known issues you raised to enable specific use cases and addressed feedback from the public beta.

We believe that the emergence of voice user interfaces isn’t an incremental improvement to existing technology; it marks a significant shift in human-computer interaction. That’s why APL is designed from the ground up for creating voice-first, multimodal Alexa skills.

Over the past year, we have launched many different tools and resources to help you build skills that include text, graphics, slideshows, animation, and video content. There are thousands of multimodal skills across different categories. For example, the winner of the Alexa Multimodal Challenge, Stuart Pocklington, created Loop It, a skill that lets you choose from a variety of audio loops to create your own track. The skill makes use of TouchWrappers to allow users to navigate by touching the screen. Customers can see images for the sun, clouds, and other conditions when interacting with Big Sky, a skill that uses APL’s ability to adapt content to adjust layout and information density based on the device display size.

As APL approaches its 1st birthday, we want to highlight some important benefits, tools and features you should be aware of so that you can build rich, interactive skills. .... "

Tales of Ring Surveillance

The implications of cheap, widespread, continuous surveillance in the suburbs have had me thinking of late. And then this Wired piece on just this topic came up, describing some of the more unusual things that have come out of it:   Ring Camera Surveillance Is Transforming Suburban Life.   Now everything at my front door is recorded, video and sound, and I can clip, edit, and share it. Some of my neighbors are doing the same thing.

Thursday, September 26, 2019

A Chatbot connected to Cryptocurrency Blockchain?

A very interesting idea: linking a chatbot with various cryptocurrency use possibilities, a kind of assistant play. The article suggests some ways this might be used. Recall, though, that Facebook rumors often don't come true.

Facebook has acquired Servicefriend, which builds ‘hybrid’ chatbots, for Calibra customer service

By Ingrid Lunden (@ingridlunden) in TechCrunch

As Facebook  prepares to launch its new cryptocurrency Libra in 2020, it’s putting the pieces in place to help it run. In one of the latest developments, it has acquired Servicefriend, a startup that built bots — chat clients for messaging apps based on artificial intelligence — to help customer service teams, TechCrunch has confirmed.

The news was first reported in Israel, where Servicefriend is based, after one of its investors, Roberto Singler, alerted local publication The Marker about the deal. We reached out to Ido Arad, one of the co-founders of the company, who referred our questions to a team at Facebook. Facebook then confirmed the acquisition with an Apple-like non-specific statement:

“We acquire smaller tech companies from time to time. We don’t always discuss our plans,” a Facebook spokesperson said.

Several people, including Arad, his co-founder Shahar Ben Ami, and at least one other indicate that they now work at Facebook within the Calibra digital wallet group on their LinkedIn profiles. Their jobs at the social network started this month, meaning this acquisition closed in recent weeks. (Several others indicate that they are still at Servicefriend, meaning they too may have likely made the move as well.)  .... "

Regulating AI, Digital Identity, Blockchain

A pretty broad swath of technology is covered here. Will this stunt emerging technology? Note the inclusion of 'other innovative technologies'. Do we imagine China will do the same?

Via Technology Review:

 .... The US House of Representatives has passed a bill calling for the Financial Crimes Enforcement Network (FinCEN), the division of the Department of Treasury that polices illicit finance, to study “whether AI, digital identity technologies, blockchain technologies, and other innovative technologies can be further leveraged to make FinCEN’s data analysis more efficient and effective. ...

in Coindesk  ...


I have worked with TIBCO/Spotfire in several enterprises; here are indications of their further move toward AI-driven solutions. Worth a look.

TIBCO strengthens commitment to providing cloud-native, open, and AI-driven solutions

Product Enhancements Deliver End-to-End, Data-Led Innovation Capabilities to Customers
LONDON, Sept. 25, 2019 /PRNewswire/ -- TIBCO Software Inc., a global leader in integration, API management, and analytics, today announced a series of new products, features, and connectivity that help make agile innovation easier than ever before by providing technology freedom of choice, cloud-native deployment, and AI everywhere. Enhancements to the TIBCO® Connected Intelligence platform further accelerate TIBCO customers' time-to-action, enabling them to innovate faster and more sustainably, ultimately turning ideas and investments into business value.

"On the back of digital interconnectedness becoming a business norm, customers are dealing with an unwieldy explosion of data that is largely unstructured and complex. The nirvana of using data to mold and inform critical decisions still escapes many businesses, but it shouldn't when we have fundamental technologies like the cloud, flexible open platforms, and AI at our disposal that can turn data into innovation and impact," said Matt Quinn, chief operating officer, TIBCO. "Our vision for our customers is that when they use TIBCO as their data foundation, they can engage and interact with their data seamlessly."  .... '

Meet Bold Bridge Advisors

Now associated with Bold Bridge Advisors   https://bold-bridge.webflow.io

A group with wide and deep experience in delivering AI and analytical systems, drawing on global experience with IBM and other methodologies, and working with enterprises, governments, universities, and startups.

 ... We Solve AI Problems, Fast!
We ignite and accelerate your Artificial Intelligence Journey so all team members have clarity, alignment, and confidence to achieve scalable results. .... 

Ask for more information.

AI Powered Wireless Emotion Sensing

Another ability to sense neuromarketing signals?

EmoSense: an AI-powered and wireless emotion sensing system    by Ingrid Fadelli, Tech Xplore

Researchers at Hefei University of Technology in China and various universities in Japan have recently developed a unique emotion sensing system that can recognize people's emotions based on their body gestures. They presented this new AI-powered system, called EmoSense, in a paper pre-published on arXiv.

"In our daily life, we can clearly realize that body gestures contain rich mood expressions for emotion recognition," Yantong Wang, one of the researchers who carried out the study, told TechXplore. "Meanwhile, we can also find out that human body gestures affect wireless signals via shadowing and multi-path effects when we use antennas to detect behavior. Such signal effects usually form unique patterns or fingerprints in the temporal-frequency domain for different gestures."

Wang and his colleagues observed that human body gestures can affect wireless signals, producing characteristic patterns that could be used for emotion recognition. This inspired them to develop a system that can identify these patterns, recognizing people's emotions based on their physical movements.  .... "
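The fingerprint idea above can be sketched very simply: each known gesture is associated with a characteristic feature vector extracted from the wireless signal, and a new measurement is matched to the nearest stored fingerprint. The sketch below is purely illustrative; the feature values, gesture names, and nearest-neighbor matching are my assumptions, not the paper's actual temporal-frequency features or classifier.

```python
# Toy sketch of the "fingerprint" idea in EmoSense: different body
# gestures perturb a wireless channel in characteristic ways, so a
# feature vector extracted from the signal can be matched against
# stored gesture fingerprints. All data and names here are invented
# for illustration; the real system extracts temporal-frequency
# features from antenna measurements.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Stored fingerprints: average signal features per labeled gesture.
fingerprints = {
    "arms_crossed": [0.9, 0.1, 0.4],
    "slumped":      [0.2, 0.8, 0.3],
    "jumping":      [0.5, 0.5, 0.9],
}

def classify(features):
    """Return the gesture whose stored fingerprint is nearest."""
    return min(fingerprints, key=lambda g: distance(features, fingerprints[g]))

observed = [0.85, 0.15, 0.45]   # a new, unlabeled measurement
gesture = classify(observed)
```

A real system would of course learn these fingerprints from many labeled measurements and use a stronger classifier, but the matching step is the same in spirit.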

AI is Computer Science

Short excerpt that is interesting.

AI isn’t magic. It’s computer science.
Rob Thomas (IBM) and Tim O’Reilly discuss the hard work and mass experimentation that will lead to AI breakthroughs.

This is a keynote highlight from the Strata Data Conference in New York 2019.  Full talk is available at the link. 

By Robert Thomas and Tim O’Reilly .... 

Preview Alexa Smart Glasses

Smart glasses, available by invitation only. It still seems an uncertain effort; like most such efforts to date, this one appears quite narrowly targeted. It would need skills for visual analysis and interaction, though for now it does not have a camera. Perhaps such systems can be considered proposed 'platforms' for the delivery of assemblages of skills in given contexts.

Amazon Echo Frames preview: trying on the Alexa smart glasses
Wear Alexa on your head everywhere you go
By Chris Welch (@chriswelch)

We’ve just gotten some brief time with Amazon’s first smart eyeglasses, the Echo Frames, at the company’s Seattle headquarters. Amazon is positioning the glasses as the ultimate take-Alexa-everywhere product, but it’s also trying to balance that pitch with privacy: there’s a button on the glasses to disable its microphones, and the Echo Frames lack any kind of camera. As the company considers these a “day one” product for tech enthusiasts, they’ll eventually be available on an invitational basis for $179.99.  ......  "  

Wednesday, September 25, 2019

Boston Dynamics Spot Robot Dog Goes on Sale

You can now sign up for one of the Boston Dynamics Spot Robot Dogs,  still in the early adopter program.  Not really publicly available,  but Boston Dynamics will work with you to design specific applications.

A kind of semi-autonomous, dog-like robot that works in many kinds of terrain, with lots of videos of it online. The approximate cost of one is said to be about that of a luxury car. Very impressive operations, but also creepy and borderline scary. Probably not consumer facing, but for security and disaster recovery? The video below outlines many specs:

Here in IEEE Spectrum:

Boston Dynamics' Spot Robot Dog Goes on Sale
Here's everything we know about Boston Dynamics' first commercial robot    By Erico Guizzo

Boston Dynamics is announcing this morning that Spot, its versatile quadruped robot, is now for sale. The machine’s animal-like behavior regularly electrifies crowds at tech conferences, and like other Boston Dynamics’ robots, Spot is a YouTube sensation whose videos amass millions of views.

Now anyone interested in buying a Spot—or a pack of them—can go to the company’s website and submit an order form. But don’t pull out your credit card just yet. Spot may cost as much as a luxury car, and it is not really available to consumers. The initial sale, described as an “early adopter program,” is targeting businesses. Boston Dynamics wants to find customers in select industries and help them deploy Spots in real-world scenarios.  ...   " 

Considerable detail, about potential uses and competitors in this article ...

Protecting Industrial Control

Probably one of the most important things we can do well. When I was involved in this space in the 80s, things were far less connected, so today both the threats and the possible remedies are far greater. Here an overview of SCADA. And add to that the ability to find patterns of much more subtle changes in a system.

Protecting Industrial Control Systems   By Keith Kirkpatrick
Communications of the ACM, October 2019, Vol. 62 No. 10, Pages 14-16    10.1145/3355377

While most commercial and government organizations have a corporate network to handle administrative, sales, and other back- or front-office data, a growing number of organizations also have implemented one or more supervisory control and data acquisition (SCADA) systems. These systems incorporate software and hardware elements that allow industrial organizations, utility companies, and power generators to monitor and control industrial processes and devices, including sensors, valves, pumps, and motors. Today's SCADA systems also allow organizations to harvest data from these devices, and then to analyze and make adjustments to their operational infrastructure to improve efficiency, make smarter decisions, and quickly address system issues to help mitigate downtime.  .... " 

Alibaba AI Chip

As expected, new chips to do learning and delivery of e-commerce and related AI tasks.   Note mention of 5G ... 

Alibaba unveils new AI chip aimed at speeding up e-commerce and cloud computing tasks in scmp.com

The chip’s launch comes amid a national drive by China to integrate emerging technologies such as AI and next generation 5G wireless networks into its economy  .... "

ACM Talk on Recommender Systems

Of interest, the topic is broadly addressed:

Register now for the next free ACM Webinar, "Recommender Systems: Beyond Machine Learning," presented on Tuesday, October 8, at 4pm ET by Joseph A. Konstan, Distinguished McKnight University Professor and Distinguished University Teaching Professor at the University of Minnesota. Bart Knijenburg, Assistant Professor at the Clemson University School of Computing, will moderate the question-and-answer session. Continue the discussion on ACM's Discourse Page. You can view the entire archive of past ACM TechTalks on demand at https://learning.acm.org/techtalks-archive.

Collaboration with People, Smart Machines, Expanding

Especially, I think, as we get comfortable with assistants being part of teams that solve problems and get things done. Purely social uses will evolve as well, but their negative implications are also starting to be understood; I see people backing off from goal-less attention.

Gartner Says Worldwide Social Software and Collaboration Revenue to Nearly Double by 2023
The worldwide market for social software and collaboration in the workplace is expected to grow from an estimated $2.7 billion in 2018 to $4.8 billion by 2023, nearly doubling in size, according to Gartner, Inc.

“The collaboration market is the most fragmented and contextually focused it has ever been, making the barrier to entry extremely low,” said Craig Roth, research vice president at Gartner. “By 2023, we expect nearly 60% of enterprise application software providers will have included some form of social software and collaboration functionalities in their software product portfolios.”

Evolution of the Collaboration Market

The collaboration market has fragmented into many submarkets – for instance, employee communications applications or meeting solutions – that often do not compete with each other.

“The market is not yet a winner-take-all space, creating opportunities for innovation that will expand the size of each submarket,” said Mr. Roth. “The future of social software and collaboration will leverage new capabilities like social analytics, virtual personal assistants (VPAs) and smart machines.” .... " 

Quantum Sensing on a Chip

Though I have a physics background, and a bit of interaction with a company doing 'quantum' work, I am still having trouble with the long-range capabilities here. And a sensor, entangled with what? More at the link.

Quantum sensing on a chip
by Rob Matheson, Massachusetts Institute of Technology

MIT researchers have fabricated a diamond-based quantum sensor on a silicon chip using traditional fabrication techniques (pictured), which could enable low-cost quantum hardware. Credit: Massachusetts Institute of Technology

MIT researchers have, for the first time, fabricated a diamond-based quantum sensor on a silicon chip. The advance could pave the way toward low-cost, scalable hardware for quantum computing, sensing, and communication.  .... "

Next in the Digital Living Room

A general, non technical view, with some useful points.

What’s Next for the Digital Living Room?  in Knowledge@Wharton

Fifteen years ago, Microsoft, Sony, Dell and HP were some of the leading companies jostling for supremacy in the digital living room — where computers, TVs and content came together to deliver home entertainment. Microsoft’s new Xbox 360 console not only played video games, but also DVDs and CDs; it streamed music from MP3 players and connected to the company’s Windows Media Center on PCs. Sony’s TVs, sound systems and computers formed an integrated entertainment hub, while Dell and HP had “media-ready” computers that also acted as content servers in the home.

Today, these four players have been overshadowed by Amazon, Google, Apple and Facebook. Fueled by a leap in broadband adoption in households and the advent of smartphones, tablets and other devices, the digital living room is no longer a TV-centric area in the home. Amazon’s Alexa digital assistant and connected devices are changing the way people search for and consume content; Google is doing the same thing with Google TV, Google Home and YouTube, as is Apple with its HomePod, Apple TV and Siri. Meanwhile, Facebook has become a source of content for people as they interact within the social media site’s platform.

But while the digital living room may look vastly different now, it still isn’t the unified and open ecosystem consumers want. “We’re certainly much further along, but there still isn’t true integration of all the different devices and services,” said Kevin Werbach, Wharton professor of legal studies and business ethics. True integration, he said, is a digital environment where consumers can link “every device and every piece of content on every service and be able to experience them together.”

This quasi-utopia remains out of reach because of competing business interests, Werbach said. Companies race to become the main provider of video and music in the home but also seek to dominate in digital services more broadly to corral consumers into their ecosystem. “The economics make it difficult,” he explained. “For the foreseeable future, it’s going to be this co-opetition (cooperative competition) landscape where it’s never quite in anyone’s interest to give consumers what they want, which is one subscription and one set of devices that give them everything.” ..... "

Tuesday, September 24, 2019

Amazon Enlists Companies to Enhance Voice

Of interest. It came to mind early on in the enterprise that such AI assistants needed to work together, with the same log-in and access to the same semantic databases and knowledge graphs, sharing skills and specialized capabilities. That does not seem to be what this is about, but it is perhaps a first step in the same direction. Will be following closely.

Amazon enlists 30 companies to improve how voice assistants work together. That includes multiple assistants on the same device that support multiple wake words. Christine Fisher (@cfisher) writes in Engadget

Just because you have an Amazon device doesn't mean you should be limited to interacting with Alexa -- or so Amazon believes. Today, the company announced a new Voice Interoperability Initiative. The goal is to work with other companies so that users can access multiple voice services -- from Alexa to Cortana and Salesforce's Einstein -- on a single device.

"The initiative is built around a shared belief that voice services should work seamlessly alongside one another on a single device, and that voice-enabled products should be designed to support multiple simultaneous wake words," Amazon wrote in a press release.

More than 30 companies have signed on, including brands like BMW, Bose, ecobee, Microsoft, Salesforce, Sonos, Spotify and Samsung-owned Harman. Missing from the lineup of partners are companies like Google, Apple and Samsung -- all three of which have their own voice assistants and dedicated devices. ... " 

Google Assistant Comes to More Chromebooks

I have had a Google Assistant in my smart home suite since its inception, connected to several home sensors. Good especially for multilingual and multi-sentence continued interaction, but like all such systems, it needs more work.

What's new in Chrome OS: Google Assistant comes to more Chromebooks
By Alexander Kuscher in Google Blog
Director of Chrome OS Software

The latest version of Chrome OS brings the Google Assistant to more Chromebooks. It’s starting to roll out now to more non-managed, consumer devices.  The Assistant on Chromebook helps you stay productive, control your smart devices, and have a little fun along the way. To get started, enable the Assistant in your Chromebook’s settings and then try asking or typing some of these queries:  .... " 

Healing a Supply Chain with Machine Learning

Some instructive thoughts about direction.  Finding patterns in anything can be a first step to its improvement.

How machine learning can heal a supply chain   By Polly Mitchell-Guthrie

" ..... Machine learning opportunities in supply chain are abundant – improving forecast accuracy, inspection of physical assets, improved modeling for new product introductions, predictive asset maintenance, and great visibility across the collaborative supply chain network are a few. In fact, Deloitte has proclaimed that the days of “cognitive planning” are upon us, where computing advances, the maturation of machine learning, and the data available in connecting systems enable this step. They have christened it “synchronized planning,” a world in which data can constantly flow throughout the supply chain and allow organizations to far more accurately match production of supply to demand than ever before.

At my own company, Kinaxis, we use a related term, concurrent planning, to illustrate the importance of being able to plan, monitor and respond to changes across the supply chain in a single, harmonious environment. Based on the foundation of data in our in-memory database, we launched our own machine learning journey.

Our focus is to increase the efficiency of the supply chain for our customers, and when analysis of data from a major customer revealed that 53 percent of their lead times were wrong as designed, we started there.  ....  " 
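A finding like "53 percent of lead times were wrong as designed" can be surfaced with a very simple check: compare each part's planned lead time against the lead times actually observed in historical receipts, and flag parts where the planned value is off. The sketch below is a minimal illustration of that idea; the part names, data, and 25% tolerance are my assumptions, not Kinaxis's actual (machine-learning-based) method.

```python
# Minimal sketch: flag parts whose planned lead time disagrees with
# the median lead time actually observed in receipt history.
# (Illustrative data and tolerance; not the method from the article.)

from statistics import median

planned_lead_times = {"PART-A": 10, "PART-B": 21, "PART-C": 5}   # days
observed_receipts = {                                            # days, from history
    "PART-A": [9, 11, 10, 12, 10],
    "PART-B": [35, 33, 38, 36, 34],   # consistently much longer than planned
    "PART-C": [5, 6, 4, 5],
}

def flag_bad_lead_times(planned, observed, tolerance=0.25):
    """Return {part: suggested_lead_time} for parts whose planned value
    differs from the observed median by more than the tolerance."""
    flagged = {}
    for part, plan in planned.items():
        actual = median(observed.get(part, [plan]))
        if abs(actual - plan) > tolerance * plan:
            flagged[part] = actual
    return flagged

suggestions = flag_bad_lead_times(planned_lead_times, observed_receipts)
```

A machine learning approach would go further, predicting lead times from context (supplier, season, order size) rather than using a single median, but the discrepancy check above is the natural first step.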

US Funds Three States for Car Research

I see Ohio is included; I will look into what it might take to be involved, or at least where the results will be published for review.

U.S. Gives 3 States Grants for Self-Driving Car Research
in CNet   By Sean Szymkowski

The U.S. Department of Transportation has issued Michigan, Ohio, and Virginia its first three Automated Driving Systems Demonstration Grants to research and develop autonomous vehicles. Michigan and Ohio will receive $7.5 million each, and Virginia will receive $15 million. Michigan's grant will be used for general testing and research of self-driving technology, and development of processes to assess autonomous vehicle safety. Smart-vehicle project accelerator DriveOhio said Ohio's grant will be used to fund automated driving projects for rural roads and highways, with deployment handled by the state's Transportation Research Center. Virginia's grant will be allocated to a study of autonomous vehicle communication/interaction in an ideal environment, and a study of self-driving systems for trucks.  .... " 

Monday, September 23, 2019

Robot Record Sales

Expect to see robotics in many new places.

World Record Sales for Robots as Sector Reaches $16.5 Billion in Investment   in ZDNet  By Greg Nichols

The International Federation of Robotics' (IFR) World Robotics Report found that 422,000 robotic units were shipped globally in 2018, an increase of 6% compared to 2017. Among the takeaways from the report was an increase of 23% in annual installations of collaborative robots from 2017 to 2018. China continues to be the world's largest industrial robot market, accounting for 36% of total units installed. Robot installations in the U.S. reached about 40,300 units in 2018, 22% more than the year prior. The report highlighted the growing use of robotics in sectors like construction, mining, and healthcare, as technology developers respond to labor crunches following strong global economies. Said Junji Tsuda, president of IFR, "We saw a dynamic performance in 2018 with a new sales record, even as the main customers for robots—the automotive and electrical-electronics industry—had a difficult year." ... '

The AI Work of the Future Report

I have been seeing the AI hype of late, and getting questions from colleagues and clients as to what it really means. This piece from MIT, pointed to by O'Reilly, is refreshing in that it addresses what still cannot be done and needs to be addressed. Their "Work of the Future Report: Shaping Technologies and Futures".

I am a big proponent of and optimist on the topic, but I still think we need to know what the unsolved challenges are, and make plans for what we need to do to solve them, both as scientists and as business decision makers and process inventors. Do read this report.

Tracking Drugs with Blockchain

Another example of blockchain use for secure tracking and thus tracing.    The regulation implied is not to specifically use blockchain, but to ensure secure tracking.

How pharma will soon use blockchain to track your drugs in Computerworld

Under regulatory pressure, a large number of pharmaceutical manufacturers, shippers and wholesalers are adopting blockchain to track and trace prescription drugs.
By Lucas Mearian  .... "

Snorkel for Building Data for ML

This was new to me. But handling and selecting the data is the most important aspect of machine learning projects; in a recent project it accounted for over 75% of the resource effort, and is likely to be much more of the ongoing maintenance effort. Worth a good look.

Introducing Snorkel
How this Tiny Project Solves One of the Major Problems in Real World Machine Learning Solutions

By Jesus Rodriguez in Towards Data Science.

Building high quality training datasets is one of the most difficult challenges of machine learning solutions in the real world. Disciplines like deep learning have helped us to build more accurate models but, to do so, they require vastly larger volumes of training data. Now, saying that effective machine learning requires a lot of training data is like saying that “you need a lot of money to be rich”. It’s true, but it doesn’t make it less painful to get there. In many of the machine learning projects we work on at Invector Labs, our customers spend significant more time collecting and labeling training dataset than building machine learning models. Last year, we came across a small project created by artificial intelligence(AI) researchers from Stanford University that provides a programming model for the creation of training datasets. Ever since, Snorkel has become a regular component of our machine learning implementations.  .... " 
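The core idea behind Snorkel is that instead of hand-labeling every example, you write small heuristic "labeling functions" whose noisy votes are combined into training labels. The sketch below illustrates that idea in plain Python with a naive majority vote; it is not Snorkel's actual API (Snorkel learns the accuracy of each labeling function with a generative model rather than voting), and the example functions and texts are invented.

```python
# Toy illustration of weak supervision a la Snorkel: heuristic
# labeling functions vote on each unlabeled example, abstaining when
# unsure; votes are combined (here by simple majority) into labels.
# Not Snorkel's real API -- Snorkel models labeling-function accuracies.

ABSTAIN, SPAM, HAM = -1, 1, 0

def lf_contains_link(text):
    return SPAM if "http://" in text or "https://" in text else ABSTAIN

def lf_contains_offer(text):
    return SPAM if "free" in text.lower() or "winner" in text.lower() else ABSTAIN

def lf_short_greeting(text):
    return HAM if len(text.split()) < 6 and "hi" in text.lower() else ABSTAIN

LABELING_FUNCTIONS = [lf_contains_link, lf_contains_offer, lf_short_greeting]

def weak_label(text):
    """Majority vote over the non-abstaining labeling functions
    (ties broken arbitrarily); ABSTAIN if none of them fire."""
    votes = [lf(text) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

unlabeled = [
    "Hi there, lunch later?",
    "You are a WINNER, claim your free prize at http://spam.example",
    "Meeting moved to 3pm",
]
labels = [weak_label(t) for t in unlabeled]
```

The payoff is that labeling functions are cheap to write and revise, so the expensive part of the project, building the training set, becomes code you can iterate on rather than manual annotation you must redo.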

Company Building Brands to Sell only on Amazon

A direction we may see more of.

How one company is building brands to sell only on Amazon    By Cale Guthrie Weissman in ModernRetail

Many digital brands are allergic to Amazon. Some, however, welcome it with open arms.

A growing number of companies pushing products are being built specifically for Amazon — forgoing Google and Facebook for customer acquisitions and instead going all in on the e-commerce platform. Driving this is Innovation Department, an agency that through its media assets and email data has been able to build a suite of brands that are Amazon only. By leveraging millions of email addresses, the company has been able to funnel traffic to its products on Amazon, which has shuttled them to the top spots of product search.

Innovation Department is based in New York, and has a few businesses under its umbrella. One is a piece of software for brands to collaborate and manage consumer email data called DojoMojo. Another is a media company called Valyrian Media, which produces a series of weekly newsletters on a range of topics — from food to fashion to technology. Alex Song, the company’s founder and CEO, said the emails are similar to The Skimm; he uses a roster of 25 freelance writers to keep it going. Through those two engines, Innovation Department claims it has built up an audience of over 1.5 million subscribers.  .... " 

Optimal Neural Architecture

Thoughtful and useful piece, though I don't see how this is necessarily universally optimal, which is usually a broad claim. Link to the full, technical paper below.

How to Construct the Optimal Neural Architecture for Your Machine Learning Task

By Adrian de Wynter
Alexa research

The first step in training a neural network to solve a problem is usually the selection of an architecture: a specification of the number of computational nodes in the network and the connections between them. Architectural decisions are generally based on historical precedent, intuition, and plenty of trial and error.

In a theoretical paper I presented last week at the 28th International Conference on Artificial Neural Networks in Munich, I show that the arbitrary selection of a neural architecture is unlikely to provide the best solution to a given machine learning problem, regardless of the learning algorithm used, the architecture selected, or the tuning of training parameters such as batch size or learning rate.

Rather, my paper suggests, we should use computational methods to generate neural architectures tailored to specific problems. Only by considering a vast space of possibilities can we identify an architecture that comes with theoretical guarantees on the accuracy of its computations.

In fact, the paper is more general than that. Its results don’t just apply to neural networks. They apply to any computational model, provided that it’s Turing equivalent, meaning that it can compute any function that the standard computational model — the Turing machine — can.

To be more specific, we must introduce the function approximation problem. This is a common mathematical formulation of what machine learning actually does: given a function (i.e., your model) and a set of samples, you search through the parameters of the function so that it approximates the outputs of a target function (i.e., the distribution of your data). ..... " 
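The function approximation framing above can be made concrete with a toy example: treat a family of model "architectures" as the search space, fit each candidate's parameters on training samples, and select the architecture that best approximates the target function on held-out data. In the sketch below the "architectures" are simply polynomial degrees and training is an exact least-squares fit; this is purely illustrative of the search-then-select idea, not the paper's general Turing-equivalent setting.

```python
# Toy "architecture search" for function approximation: try several
# model families (polynomial degrees), fit each one's parameters by
# least squares on training data, and pick the best on validation data.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [M[r][k] - f * M[c][k] for k in range(n + 1)]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    return solve(A, b)

def poly_val(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

def mse(coeffs, xs, ys):
    return sum((poly_val(coeffs, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def target(x):                       # the unknown function to approximate
    return x ** 3 - x

train_x = [i / 10 - 1 for i in range(21)]   # grid on [-1, 1]
val_x = [i / 7 - 1 for i in range(15)]
train_y = [target(x) for x in train_x]
val_y = [target(x) for x in val_x]

# The "search": evaluate degrees 0..5, keep the best on validation data.
scores = {}
for degree in range(6):
    coeffs = fit_poly(train_x, train_y, degree)
    scores[degree] = mse(coeffs, val_x, val_y)
best = min(scores, key=scores.get)
```

The search correctly discovers that a cubic family suffices, while an arbitrarily fixed low-degree choice underperforms, which is the intuition behind generating architectures computationally rather than picking them by precedent.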

Material Holes Create Amazing Properties

An interesting discovery of the use of  'holes'.  Consider all the advantages we have gotten from material science.

Researchers catalog defects that give 2-D materials amazing properties
Theoretical analysis distinguishes observed “holes” from the huge list of hypothetically possible ones.  David L. Chandler | MIT News Office

Amid the frenzy of worldwide research on atomically thin materials like graphene, there is one area that has eluded any systematic analysis — even though this information could be crucial to a host of potential applications, including desalination, DNA sequencing, and devices for quantum communications and computation systems.

That missing information has to do with the kinds of minuscule defects, or “holes,” that form in these 2-D sheets when some atoms are missing from the material’s crystal lattice.

Now that problem has been solved by researchers at MIT, who have produced a catalog of the exact sizes and shapes of holes that would most likely be observed (as opposed to the many more that are theoretically possible) when a given number of atoms is removed from the atomic lattice. The results are described in the journal Nature Materials in a paper by graduate student Ananth Govind Rajan, professors of chemical engineering Daniel Blankschtein and Michael Strano, and four others at MIT, Lockheed Martin Space, and Oxford University.  ... "

Self-Flying Cargo Drones

Emergence of such capabilities will change transport.

Bell's New, Self-Flying Cargo Drone Hauls a Heavy Load in Wired
The all-electric APT 70 can tote up to 70 pounds, cruise at 75 mph, and cover 35 miles with a fully charged battery.  ... "

Podcast Interview with Hilary Mason on GigaOM

Another AI practitioner talks about the advances and future of AI:

Voices in AI – Bonus: A Conversation with Hilary Mason   By Byron Reese

This episode of Voices in AI features Byron speaking with Hilary Mason, an acclaimed data and research scientist, about the mechanics and philosophy behind designing and building AI.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Byron Reese: This is Voices in AI, brought to you by Gigaom and I am Byron Reese. Today, our guest is Hilary Mason. She is the GM of Machine Learning at Cloudera, and the founder and CEO of Fast Forward Labs, and the Data Scientist in residence at Accel Partners, and a member of the Board of Directors at the Anita Borg Institute for Women in Technology, and the co-founder of hackNY.org. That’s as far down as it would let me read in her LinkedIn profile, but I’ve a feeling if I’d clicked that ‘More’ button, there would be a lot more.

Welcome to the show, amazing Hilary Mason!

Hilary Mason: Thank you very much. Thank you for having me.

I always like to start with the question I ask everybody because I’ve never had the same answer twice and – I’m going to change it up: why is it so hard to define what intelligence is? And are we going to build computers that actually are intelligent, or they can only emulate intelligence, or are those two things the exact same thing?

This a fun way to get started! I think it’s difficult to define intelligence because it’s not always clear what we want out of the definition. Are we looking for something that distinguishes human intelligence from other forms of intelligence? There’s that joke that’s kind of a little bit too true that goes around in the community that AI, or artificial intelligence, is whatever computers can’t do today. Where we keep moving the bar, just so that we can feel like there’s something that is still uniquely within the bounds of human thought.

Let’s move to the second part of your discussion which is really asking, ‘Can computers ever be indistinguishable from human thought?’ I think it’s really useful to put a timeframe on that thought experiment and to say that in the short term, ‘no.’ I do love science fiction, though, and I do believe that it is worth dreaming about and working towards a world in which we could create intelligences that are indistinguishable from human intelligences. Though I actually, personally, think that it is more likely we will build computational systems to augment and extend human intelligence. For example, I don’t know about you but my memory is horrible. I’m routinely absentminded. I do use technology to augment my capabilities there, and I would love to have it more integrated into my own self and my intelligence. ..... " 

Sunday, September 22, 2019

Advances in AI Earthquake Prediction

We attended some early neural net application meetings where this was proposed, and added some of our own thoughts. Nice to see this is evolving. Are there shaking patterns in the earth that reliably predict earthquakes? Likely yes. But are they enough to predict the likely magnitude and location of major events? I think yes too.

AI Helps Seismologists Predict Earthquakes  in Wired
Machine learning is bringing seismologists closer to an elusive goal: forecasting quakes well before they strike. .... 

Artificial Intelligence Takes On Earthquake Prediction in QuantaMag

After successfully predicting laboratory earthquakes, a team of geophysicists has applied a machine learning algorithm to quakes in the Pacific Northwest.

In May of last year, after a 13-month slumber, the ground beneath Washington’s Puget Sound rumbled to life. The quake began more than 20 miles below the Olympic mountains and, over the course of a few weeks, drifted northwest, reaching Canada’s Vancouver Island. It then briefly reversed course, migrating back across the U.S. border before going silent again. All told, the monthlong earthquake likely released enough energy to register as a magnitude 6. By the time it was done, the southern tip of Vancouver Island had been thrust a centimeter or so closer to the Pacific Ocean.

Because the quake was so spread out in time and space, however, it’s likely that no one felt it. These kinds of phantom earthquakes, which occur deeper underground than conventional, fast earthquakes, are known as “slow slips.” They occur roughly once a year in the Pacific Northwest, along a stretch of fault where the Juan de Fuca plate is slowly wedging itself beneath the North American plate. More than a dozen slow slips have been detected by the region’s sprawling network of seismic stations since 2003.  And for the past year and a half, these events have been the focus of a new effort at earthquake prediction by the geophysicist Paul Johnson.    ..... " 
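The laboratory work referenced above trained regression models on statistical features of the continuous acoustic signal. Purely as a hypothetical sketch of that style of feature extraction (the function name and feature choices are mine, not the researchers' code):

```python
import numpy as np

def window_features(signal, window):
    """Compute simple statistical moments (mean, std, kurtosis) per
    non-overlapping window of a continuous signal. Features like these
    could feed a regressor that predicts time-to-failure."""
    feats = []
    for start in range(0, len(signal) - window + 1, window):
        w = signal[start:start + window]
        mu, sd = w.mean(), w.std()
        # normalized fourth moment; guard against a flat window
        kurt = ((w - mu) ** 4).mean() / sd ** 4 if sd > 0 else 0.0
        feats.append((mu, sd, kurt))
    return np.array(feats)
```

A tree-based regressor trained on windows like these against known time-to-failure labels is the general shape of the laboratory approach described in the article.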

Alexa Skills for Productivity

Still, I think these are not good enough to make me keep a standard device on my desk at work. What can be done to make one truly essential?

Review: 18 Alexa skills for productivity, collaboration and more in Computerworld

You can use Amazon’s voice-activated Alexa assistant to send Slack messages, texts, and emails; add items to to-do lists; and more. But do Alexa skills for business users really save you time and effort?
 By James A. Martin

Earlier this year, Amazon announced it had sold more than 100 million Alexa devices. Along with the consumer market, Amazon is also pushing Alexa into offices via Alexa for Business, which enables developers to create skills exclusively for internal users at their companies via APIs and other tools.

But can Alexa’s off-the-shelf skills truly make enterprise users more productive? Will they make collaboration easier? To find out, I tested 18 Alexa productivity and collaboration skills that are available to everyone but potentially useful for business professionals. All of these skills are free, although some are associated with paid or freemium services, as noted.  .... "

Learning and Revealing Private Data

Been looking at past articles from the Berkeley AI Group, and found an interesting aspect of data privacy examined. Can a neural network, while being trained, inadvertently memorize and thus reveal pieces of data that happen to be in the training set? Say a credit card number was in the data: could the trained model later reveal it if examined? And what could you do about it? Nicely done, largely non-technical piece.

Evaluating and Testing Unintended Memorization in Neural Networks
By Nicholas Carlini    Aug 13, 2019

It is important whenever designing new technologies to ask “how will this affect people’s privacy?” This topic is especially important with regard to machine learning, where machine learning models are often trained on sensitive user data and then released to the public. For example, in the last few years we have seen models trained on users’ private emails, text messages, and medical records.

This article covers two aspects of our upcoming USENIX Security paper that investigates to what extent neural networks memorize rare and unique aspects of their training data.  (The paper's abstract provides a further descriptive overview)

Specifically, we quantitatively study to what extent the following problem actually occurs in practice:

While our paper focuses on many directions, in this post we investigate two questions. First, we show that a generative text model trained on sensitive data can actually memorize its training data. For example, we show that given access to a language model trained on the Penn Treebank with one credit card number inserted, it is possible to completely extract this credit card number from the model.

Second, we develop an approach to quantify this memorization. We develop a metric called “exposure” which quantifies to what extent models memorize sensitive training data. This allows us to generate plots, like the following. We train many models, and compute their perplexity (i.e., how useful the model is) and exposure (i.e., how much it memorized training data). Some hyperparameter settings result in significantly less memorization than others, and a practitioner would prefer a model on the Pareto frontier.    .... "
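The exposure metric described above is defined by the canary's rank among candidate sequences under the model. A minimal sketch of that rank computation, assuming per-sequence negative log-likelihoods are already available from some model (the function name is mine):

```python
import math

def exposure(canary_nll, candidate_nlls):
    """Rank-based exposure: log2(|candidates|) - log2(rank of canary),
    where rank 1 means the model finds the canary the most likely
    (lowest negative log-likelihood) of all candidates."""
    rank = 1 + sum(1 for nll in candidate_nlls if nll < canary_nll)
    return math.log2(len(candidate_nlls)) - math.log2(rank)
```

High exposure means the canary is unusually likely under the model relative to the candidate space, which is a sign it was memorized.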

5G Coverage for IOT

Was recently asked to give an opinion on 5G use in the Cincinnati area for potential IOT applications with mobility implications, and was pointed to this map, which can be used US-wide. This particular map gives you only AT&T 3G to 5G coverage, and it implies it is frequently updated. You can click on the map for many locations in the US, zoom in, etc. Useful for early analyses of applications. Please pass along pointers to any other resources of this type.

Saturday, September 21, 2019

Information Latency Study for DOD

I suggest that there are important latency conditions in many parts of large networked systems. For example, in supply chains latency can greatly change costs, effective responses, contract and goal compliance, risk analysis, decision design integration, etc. Information latency is always considered in such systems, but often not carefully enough. Latency is a key kind of metadata, and should be included in a 'knowledge graph' representing a problem, both in its statement and in any automated approaches being designed.   - Franz

Research Team to Study Information Latency With $7.5M DOD Grant
By Virginia Polytechnic Institute and State University
 Virginia Tech researchers Walid Saad, Jeffrey Reed, and Thomas Hou

Information latency is a measure of how quickly or slowly networked devices transmit information. When the information being transmitted is for the military, understanding latency may be the deciding factor in the outcome of warfare.

That's one of the reasons the U.S. Department of Defense has now tapped the expertise of an interdisciplinary research team led by Virginia Tech to study latency and information freshness in military Internet of Things systems with a $7.5 million, five-year Multidisciplinary University Research Initiative (MURI) grant.

The goal is to develop a novel foundational framework for guaranteeing low latency and information freshness in military networked systems, such as the Internet of Things, using a cutting-edge concept known as multimode age of information, which tightly ties in information latency with the dynamic networked military system.

The project will fundamentally define this new concept of information latency and provide a suite of tools to optimize multimode age of information in massive-scale military networked systems.

"Despite much progress being made in the study of military communications, the basic science for tracking, control, and optimization of information latency is yet to be developed," says principal investigator Jeffrey Reed, Willis G. Worcester Professor of Electrical and Computer Engineering in the College of Engineering at Virginia Tech. "In fact, a fundamental knowledge of information latency is crucial for our military to maintain information superiority on the battlefield."  ..... " 

(see more at link above)
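Age of information itself has a simple shape: at any time t, the age is t minus the generation time of the freshest update received so far. A toy sketch of time-averaged age computed from a log of updates (the function and the assumption of a fresh update at t = 0 are mine, not anything from the MURI project):

```python
def average_aoi(updates, horizon):
    """Time-averaged age of information over [0, horizon].
    updates: (generation_time, delivery_time) pairs sorted by delivery.
    Assumes a fresh update was received at t = 0, so age starts at 0.
    Age grows linearly, dropping to (delivery - generation) at each delivery."""
    area, t, age = 0.0, 0.0, 0.0
    for gen, dly in updates:
        if dly > horizon:
            break
        # trapezoid from t to dly: age grows from `age` to `age + (dly - t)`
        area += (2 * age + (dly - t)) * (dly - t) / 2
        age, t = dly - gen, dly
    # tail segment from the last delivery up to the horizon
    area += (2 * age + (horizon - t)) * (horizon - t) / 2
    return area / horizon
```

Minimizing this kind of time-averaged age, rather than raw throughput or delay, is the flavor of objective the age-of-information literature works with.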

Do we Know How the Brain Works?

Have had conversations of late with people who have said: look at neural nets, they are modeled after brains. But the answer is still no, we don't know how the brain works. And we are still not close. Artificial neural models are very different even from the way we think biological neurons work. Not to say the artificial models are not useful, but it's not what your brain actually does. How much closer are we getting? See the Neuralink approach, mentioned below, to understand the challenges. Will we ever know? I am always optimistic.

Will It Ever Be Possible to Understand the Human Brain?
Despite technical breakthroughs like Elon Musk’s Neuralink, scientists still have no reliable model of how the brain actually works

By Brian Bergstein in Medium ... 

Structured Signals for Model Training

Technical but interesting point about how to add structured knowledge into otherwise non-transparent networks. Examining further.

Posted by Da-Cheng Juan (Senior Software Engineer) and Sujith Ravi (Senior Staff Research Scientist)

We are excited to introduce  Neural Structured Learning in TensorFlow, an easy-to-use framework that both novice and advanced developers can use for training neural networks with structured signals. Neural Structured Learning (NSL) can be applied to construct accurate and robust models for vision, language understanding, and prediction in general.

Neural structured learning framework

Many machine learning tasks benefit from using structured data which contains rich relational information among the samples. For example, modeling citation networks, Knowledge Graph inference and reasoning on linguistic structure of sentences, and learning molecular fingerprints all require a model to learn from structured inputs, as opposed to just individual samples. These structures can be explicitly given (e.g., as a graph), or implicitly inferred (e.g., as an adversarial example). Leveraging structured signals during training allows developers to achieve higher model accuracy, particularly when the amount of labeled data is relatively small. Training with structured signals also leads to more robust models. These techniques have been widely used in Google for improving model performance, such as learning image semantic embedding.

Neural Structured Learning (NSL) is an open source framework for training deep neural networks with structured signals. It implements Neural Graph Learning, which enables developers to train neural networks using graphs. The graphs can come from multiple sources such as Knowledge graphs, medical records, genomic data or multimodal relations (e.g., image-text pairs). NSL also generalizes to Adversarial Learning where the structure between input examples is dynamically constructed using adversarial perturbation.  ... " 

See also:  https://www.datanami.com/2019/09/04/google-adds-structured-signals-to-model-training/

See also:  https://venturebeat.com/2019/09/03/google-launches-tensorflow-machine-learning-framework-for-graphical-data/ 
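As a toy numpy illustration of the core idea behind training with structured signals, a supervised loss plus a penalty that pulls embeddings of graph neighbors together (this is my sketch, not the actual NSL API):

```python
import numpy as np

def graph_regularized_loss(embeddings, labels, logits, edges, alpha=0.1):
    """Cross-entropy on labeled samples plus a neighbor penalty:
    squared embedding distance between samples joined by a graph edge."""
    # softmax cross-entropy on the labeled samples
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    supervised = -np.mean(np.log(probs[np.arange(len(labels)), labels]))
    # structured signal: connected samples should embed near each other
    neighbor = np.mean([np.sum((embeddings[i] - embeddings[j]) ** 2)
                        for i, j in edges])
    return supervised + alpha * neighbor
```

The real framework wires this kind of regularizer into Keras training; in the adversarial variant, the "neighbors" are perturbed copies of each input.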

Sensing and AR/VR

Good to see AR/VR linked strongly to sensing capabilities. As is suggested, this is the way we construct models of the world, whether virtual or real. It also allows us to link data to those worlds and drive toward better solutions via analytics or AI.

3 Questions: Why sensing, why now, what next?  in MIT News
Brian Anthony, co-leader of SENSE.nano, discusses sensing for augmented and virtual reality and for advanced manufacturing.


Sensors are everywhere today, from our homes and vehicles to medical devices, smart phones, and other useful tech. More and more, sensors help detect our interactions with the environment around us — and shape our understanding of the world.

SENSE.nano is an MIT.nano Center of Excellence, with a focus on sensors, sensing systems, and sensing technologies. The 2019 SENSE.nano Symposium, taking place on Sept. 30 at MIT, will dive deep into the impact of sensors on two topics: sensing for augmented and virtual reality (AR/VR) and sensing for advanced manufacturing. 

MIT Principal Research Scientist Brian W. Anthony is the associate director of MIT.nano and faculty director of the Industry Immersion Program in Mechanical Engineering. He weighs in on why sensing is ubiquitous and how advancements in sensing technologies are linked to the challenges and opportunities of big data.

Q: What do you see as the next frontier for sensing as it relates to augmented and virtual reality?

A: Sensors are an enabling technology for AR/VR. When you slip on a VR headset and enter an immersive environment, sensors map your movements and gestures to create a convincing virtual experience.

But sensors have a role beyond the headset. When we're interacting with the real world we're constrained by our own senses — seeing, hearing, touching, and feeling. But imagine sensors providing data within AR/VR to enhance your understanding of the physical environment, such as allowing you to see air currents, thermal gradients, or the electricity flowing through wires superimposed on top of the real physical structure. That's not something you could do any place else other than a virtual environment.    .... " 

Apple Shows Interest in Blockchain Tech

The fact that Apple is following this is significant.  Apple Pay at least could have future implementations to consider.    Comments below.

Cryptocurrency Has ‘Long-Term Potential,’ Says Apple Exec
Apple is “watching cryptocurrency,” according to an executive at the tech giant.

Apple Pay vice president Jennifer Bailey, talking to CNN at a private event in San Francisco, said “We think it’s interesting. We think it has interesting long-term potential.”

Bailey did not elaborate on the possible uses of the technology Apple might pursue. She had been talking about the future of payments at the CNN event.  With Facebook planning to launch its Libra stablecoin next year, it would be surprising indeed if Apple were not watching crypto. But Bailey’s comments may come as confirmation that more might be going on behind the scenes at Apple’s Cupertino HQ.

In February, Apple submitted a filing with the Securities and Exchange Commission (SEC) that contained rare details about the computing giant’s interest in blockchain tech.  .... "


Friday, September 20, 2019

Google Quantum Supremacy?

Quite a tease here. Have they really reached this goal? And what was the nature and form of the problem? See much more below, and at the link.

Google researchers have reportedly achieved “quantum supremacy”
In MIT Tech Review

The news: According to a report in the Financial Times, (oops, the site is walled) a team of researchers from Google led by John Martinis have demonstrated quantum supremacy for the first time. This is the point at which a quantum computer is shown to be capable of performing a task that’s beyond the reach of even the most powerful conventional supercomputer. The claim appeared in a paper that was posted on a NASA website, but the publication was then taken down. Google did not respond to a request for comment from MIT Technology Review.   ... " 

Apple Overton Leading to Code Automation?

Increasingly we are moving towards automating many aspects of coding. In fact, robot assistants that 'observe' the coding process could readily ensure that secure, robust, and repeatable methods were used when building AI systems. They could also make sure that the most important methods were shared, maintained, and updated as new research dictated.

On the data side, they could ensure that data was properly selected, prepared, and delivered with the metadata needed to support explainable results. That's why I am not a believer in just training everyone in low-level coding. People are not good at these skills. Train them in problem solving supported by prefabricated AI systems and results-visualization methods, because ultimately the classic methods will be built, solved, updated, and delivered by automation.

Apple ‘Overton’: Automating Low-Code Machine Learning     By Nick Kolakowski

Apple has struggled in recent years to establish a robust artificial intelligence (A.I.) practice. This partially stems from the company’s ironclad privacy policies—it’s more difficult to analyze datasets for insights when internal rules prevent the company from using every piece of user data it can vacuum up. Nonetheless, Apple’s newest projects show that it’s powering ahead anyway—including one platform that, if it’s ever released, could change how you use A.I. and machine learning (ML).

(It’s worth remembering how, in a 2015 speech, Apple CEO Tim Cook accused tech giants such as Facebook and Google of “gobbling up everything they can learn about you and trying to monetize it,” which he framed as “wrong.” It seems unlikely that Apple’s stance on data and privacy will change during Cook’s tenure.)

According to a just-released paper with the dry-but-mysteriously-compelling title “Overton: A Data System for Monitoring and Improving Machine Learned Products,” a group of Apple researchers describe their work on a machine-learning platform (named—you guessed it—“Overton”) designed to “support engineers in building, monitoring, and improving production machine learning systems.”  .... "

Abstract of paper mentioned above:    https://arxiv.org/pdf/1909.05372.pdf   (technical)

 ... We describe a system called Overton, whose main design goal is to support engineers in building, monitoring, and improving production machine learning systems. Key challenges engineers face are monitoring fine-grained quality, diagnosing errors in sophisticated applications, and handling contradictory or incomplete supervision data. Overton automates the life cycle of model construction, deployment, and monitoring by providing a set of novel high-level, declarative abstractions. Overton’s vision is to shift developers to these higher-level tasks instead of lower-level machine learning tasks. In fact, using Overton, engineers can build deep-learning-based applications without writing any code in frameworks like TensorFlow. For over a year, Overton has been used in production to support multiple applications in both near-real-time applications and back-of-house processing. In that time, Overton-based applications have answered billions of queries in multiple languages and processed trillions of records, reducing errors 1.7−2.9× versus production systems. .... "
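The "declarative abstractions" in the abstract mean engineers describe tasks and supervision sources rather than writing model code. Purely as a hypothetical illustration of what such a spec might look like (the field names here are invented, not Overton's actual schema):

```python
# Hypothetical declarative task spec in the spirit the Overton paper
# describes: schema plus supervision sources instead of model code.
task_spec = {
    "payloads": ["query_text"],
    "tasks": {
        "intent": {"type": "multiclass",
                   "labels": ["weather", "music", "other"]},
    },
    "supervision": ["logs_weak_labels", "curated_dev_set"],
}

def validate_spec(spec):
    """Check that a spec carries the three fields the sketch assumes."""
    required = {"payloads", "tasks", "supervision"}
    missing = required - spec.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return True
```

The point is the division of labor: the engineer edits a spec like this, and the system handles model construction, deployment, and monitoring.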