
Sunday, October 25, 2020

Fooling Self Driving Autopilots

Had seen this mentioned before; a good overview in Schneier's blog.  As usual, the discussion in the comments there is the most interesting, with experts in the field chiming in and discussing the implications for self-driving vehicles and the regulation of their use.

Split-Second Phantom Images Fool Autopilots in Schneier Blog

Researchers are tricking autopilots by inserting split-second images into roadside billboards.

Researchers at Israel’s Ben Gurion University of the Negev … previously revealed that they could use split-second light projections on roads to successfully trick Tesla’s driver-assistance systems into automatically stopping without warning when its camera sees spoofed images of road signs or pedestrians. In new research, they’ve found they can pull off the same trick with just a few frames of a road sign injected on a billboard’s video. And they warn that if hackers hijacked an internet-connected billboard to carry out the trick, it could be used to cause traffic jams or even road accidents while leaving little evidence behind.  ...  "

Learning Common Sense from Animals

The common sense problem.   Intriguing view.  Not enough details, but has links to related academic papers.

Researchers suggest AI can learn common sense from animals  By Khari Johnson   @kharijohnson   October 25, 2020   in VentureBeat

AI researchers developing reinforcement learning agents could learn a lot from animals. That’s according to recent analysis by Google’s DeepMind, Imperial College London, and University of Cambridge researchers assessing AI and non-human animals.

In a decades-long venture to advance machine intelligence, the AI research community has often looked to neuroscience and behavioral science for inspiration and to better understand how intelligence is formed. But this effort has focused primarily on human intelligence, specifically that of babies and children.

“This is especially true in a reinforcement learning context, where, thanks to progress in deep learning, it is now possible to bring the methods of comparative cognition directly to bear,” the researchers’ paper reads. “Animal cognition supplies a compendium of well-understood, nonlinguistic, intelligent behavior; it suggests experimental methods for evaluation and benchmarking; and it can guide environment and task design.”

DeepMind introduced some of the first forms of AI that combine deep learning and reinforcement learning, like the deep Q-network (DQN) algorithm, a system that played numerous Atari games at superhuman levels. AlphaGo and AlphaZero also used deep learning and reinforcement learning to train AI to beat a human Go champion and achieve other feats. More recently, DeepMind produced AI that automatically generates reinforcement learning algorithms.  ... "
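
The deep Q-network line of work mentioned above rests on a simple value-update idea. As a rough illustration only, here is a tabular stand-in for that update in Python; it is not DeepMind's DQN, which replaces the table with a neural network and adds replay buffers and target networks, but the target being chased is the same.

```python
# Minimal sketch of the temporal-difference update at the heart of DQN-style
# agents. Tabular and toy-sized; purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))   # tabular stand-in for the deep Q-network
alpha, gamma = 0.1, 0.99              # learning rate and discount factor

def q_update(state, action, reward, next_state):
    """One Q-learning step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])

# Toy loop with a random "environment"; a real agent would act and observe instead.
for _ in range(1000):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)
    r = float(a == s % n_actions)     # arbitrary toy reward
    q_update(s, a, r, rng.integers(n_states))

print(Q.round(2))
```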

Saturday, October 24, 2020

Benchmarking Voice Understanding

Good points made.  I have been using the Google voice assistant versus Amazon Alexa for a few years now, and only now and then using Siri.  I see more 'balking' by Alexa (that is, she does not answer coherently at all) than by the Google assistant, but also more 'understanding'.  Alexa is in general more 'human' in conversation.  Beyond that, I don't see adequate contextual understanding from either in general.  It all depends on how important and risky the dependent decisions are.  Google does a good job of multilingual understanding when properly set up.  Here voicebot.ai has taken a broader look that is worth reading.  Neither, in my opinion, can understand and answer what I would call 'complex questions'.

Understanding Is Crucial for Voice and AI: Testing and Training are Key To Monitoring and Improving It      By John Kelvie in Voicebot.ai

BENCHMARKING VOICE ASSISTANTS

How well does your voice assistant understand and answer complex questions? It is often said, making complex things simple is the hardest task in programming, as well as the highest aim for any software creator. The same holds true for building for voice. And the key to ensuring an effortlessly simple experience for voice is the accuracy of understanding, achieved through testing and training.

To dig deeper into the process of testing and training for accuracy, Bespoken undertook a benchmark to test the Amazon Echo Show 5, Apple iPad Mini, and Google Nest Home Hub. This article explores what we learned through this research and the implications for the larger voice industry based on other products and services.

For the benchmark, we took a set of nearly 1,000 questions from the ComQA dataset and ran them against the three most popular voice assistants: Amazon Alexa, Apple Siri, and Google Assistant. The results were impressive – these questions were not easy, and the assistants handled them often with aplomb:  ... "

GM Can Manage an EV's Batteries Wirelessly and Remotely

Seems quite a considerable improvement in automotive battery use and management for electric vehicles.

Exclusive: GM Can Manage an EV's Batteries Wirelessly—and Remotely

The new system eliminates the rat's nest of wiring and collects information that can be used to design better batteries.   By Lawrence Ulrich

When the battery dies in your smartphone, what do you do? You complain bitterly about its too-short lifespan, even as you shell out big bucks for a new device. 

Electric vehicles can’t work that way: Cars need batteries that last as long as the vehicles do. One way of getting to that goal is by keeping close tabs on every battery in every EV, both to extend a battery’s life and to learn how to design longer-lived successors.

IEEE Spectrum got an exclusive look at General Motors’ wireless battery management system. It’s a first in any EV anywhere (not even Tesla has one). The wireless technology, created with Analog Devices, Inc., will be standard on a full range of GM EVs, with the company aiming for at least 1 million global sales by mid-decade. 

Those vehicles will be powered by GM’s proprietary Ultium batteries, produced at a new US $2.3 billion plant in Ohio, in partnership with South Korea’s LG Chem.    ... " 

European Quantum Computing Facility Goes Online

With some useful statistics about usage and capabilities.  Note the integration with a simulator.  Good description of the use and testing processes in place.


First European Quantum Computing Facility Goes Online,  By Arnout Jaspers

Quantum Inspire, hosted by QuTech, a collaboration of Delft University of Technology and TNO, the Netherlands Organization for Applied research, consists of two independent quantum processors, Spin-2 and Starmon-5, and a quantum simulator. Anyone can create an account, use the Web interface to write a quantum algorithm, and have it executed by one of the processors in milliseconds (if there is no queue), with the result returned within a minute. The process is fully automated.  

Seen from the outside, Spin-2 and Starmon-5 are two large, cylindrical cryostats hanging from the ceiling in a university building. One floor up, a man-size stack of electronics for each takes care of the cooling, feeding the quantum processor input from users and reading out the results. Usually, there is no one in these rooms.     

The facility officially went online on April 20, and over 1,000 accounts have been created since then. Though many curious visitors never returned, active users now upload about 6,000 jobs.... "
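
For a feel of what users actually submit to such a facility, here is a minimal two-qubit circuit of the kind one might run on a cloud quantum processor. This sketch uses Qiskit purely for illustration; Quantum Inspire has its own web editor and SDK, which are not shown here.

```python
# A minimal Bell-state circuit: the sort of small quantum algorithm a user
# might write and queue for execution. Qiskit is used only as an example
# framework, not as the Quantum Inspire interface.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])   # ideal results: about half '00' and half '11'

print(qc.draw())             # text drawing of the circuit
```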

Radical Technique Lets AI Learn with Practically No Data

Learning more accurately and efficiently is always useful.

A Radical Technique Lets AI Learn with Practically No Data

MIT Technology Review,  By Karen Hao, October 16, 2020

Scientists at Canada's University of Waterloo suggest artificial intelligence (AI) models should be capable of “less than one”-shot (LO-shot) learning, in which the system accurately recognizes more objects than those on which it was trained. They demonstrated this concept with the 60,000-image MNIST computer-vision training dataset, based on previous work by Massachusetts Institute of Technology researchers that distilled it into 10 images, engineered and optimized to contain an equivalent amount of data to the full set. The Waterloo team further compressed the dataset by generating images that combine multiple digits and feeding them into an AI model with hybrid, or soft, labels. Said Waterloo’s Ilia Sucholutsky, “The conclusion is depending on what kind of datasets you have, you can probably get massive efficiency gains.”.. 

https://arxiv.org/abs/2009.08449
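
The core trick is 'soft' labels: a prototype example can carry probability mass over several classes, so a classifier can distinguish more classes than it has training points. Below is a toy sketch of that idea with made-up numbers; it is not the paper's actual method.

```python
# Two "distilled" prototypes carrying soft labels over THREE classes, so a
# simple distance-weighted rule separates more classes than it has examples.
import numpy as np

prototypes = np.array([0.0, 1.0])            # two 1-D prototype examples
soft_labels = np.array([[0.6, 0.4, 0.0],     # class probabilities per prototype
                        [0.0, 0.4, 0.6]])

def predict(x):
    d = np.abs(prototypes - x)               # distance to each prototype
    w = 1.0 / (d + 1e-9)                     # inverse-distance weights
    w /= w.sum()
    return int(np.argmax(w @ soft_labels))   # blend soft labels, pick the largest

for x in [-0.2, 0.5, 1.2]:
    print(f"x = {x:4.1f} -> class {predict(x)}")   # classes 0, 1, 2 respectively
```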

U.S. Government Agencies to Use AI to Cut Outdated Regulations

Makes sense; we examined related methods for regulations.

U.S. Government Agencies to Use AI to Cull, Cut Outdated Regulations

Reuters,  David Shepardson

October 16, 2020.    The White House Office of Management and Budget (OMB) said federal agencies will use artificial intelligence (AI) to remove outdated, obsolete, and inconsistent requirements across government regulations. A 2019 pilot employing machine learning algorithms and natural-language processing at the U.S. Department of Health and Human Services turned up hundreds of technical errors and outdated mandates in agency rulebooks. The White House said agencies will utilize AI and other software "to comb through thousands and thousands of regulatory code pages to look for places where code can be updated, reconciled, and generally scrubbed of technical mistakes." According to OMB director Russell Vought, the initiative would help agencies "update a regulatory code marked by decades of neglect and lack of reform." Participating agencies include the departments of Transportation, Agriculture, Labor, and the Interior.

Friday, October 23, 2020

Leveraging NVidia GPUs to Power Analytics and AI

Below is an ad from Nvidia; the book is worth looking at.  I like the fact that they say 'AI and Analytics', a rare clarification.  It's not all AI.

  .... Free BOOK

ACCELERATING APACHE SPARK 3

Leveraging NVIDIA GPUs to Power the Next Era of Analytics and AI

Apache Spark is a powerful execution engine for large-scale parallel data processing across a cluster of machines, which enables rapid application development and high performance.

In this ebook, learn how Spark 3 innovations make it possible to use the massively parallel architecture of GPUs to further accelerate Spark data processing.

Fill out the form below to download the ebook and learn about the following:

The data processing evolution, from Hadoop to GPUs and the NVIDIA RAPIDS™ library

Spark, what it is, what it does, and why it matters

GPU-acceleration in Spark

DataFrames and Spark SQL

A Spark regression example with a random forest classifier

An example of an end-to-end machine learning workflow GPU-accelerated with XGBoost  ... "
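
For readers who want the flavor of what GPU-accelerated Spark looks like in practice, here is a hedged sketch of enabling the RAPIDS accelerator from PySpark. The plugin class and configuration keys follow NVIDIA's published examples, but treat them as assumptions to check against the ebook and your Spark 3 release.

```python
# Hedged sketch: turning on the RAPIDS Accelerator for Apache Spark 3 from
# PySpark. Exact package coordinates and config keys vary by release.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("gpu-accelerated-etl")
    # Assumed settings from NVIDIA's examples; verify against your versions.
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    .config("spark.executor.resource.gpu.amount", "1")
    .getOrCreate()
)

# Ordinary DataFrame/SQL code is unchanged; supported operators run on the GPU.
df = spark.range(0, 10_000_000).selectExpr("id", "id % 100 AS bucket")
df.groupBy("bucket").count().show(5)
```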

Technology Tailoring in Education

Always an interesting question.  The nature of instruction.  In theory it could be precisely tailored to every student, and by testing it could be altered in real time to get better results.  Personalized to get the best possible results for each student.  How practical is this, and how will it alter the business of education?  How much is the human touch needed?

Using Technology to Tailor Lessons to Each Student,  The New York Times,  Janet Morrisey

Computer algorithms and machine learning are helping to personalize instruction to individual students, a trend experts say is long overdue. Some think the Covid-19 pandemic is accelerating U.S. schools' migration to personalized learning programs; American Federation of Teachers president Randi Weingarten said, "Innovations like this can help educators meet students where they are and address their individual needs." Companies like New Classrooms are striving to advance personalized learning; the nonprofit's Teach to One 360 algorithm gives each student access to multigrade curriculums and skills, in order to better address learning gaps in those who are several grades behind. Other companies working aggressively on personalized learning solutions include Eureka Math, iReady, and Illustrative Mathematics.

Towards Artificial Common Sense

The key part of AI we don't know how to do yet. Good overview of the current state and directions.  What most of us consider the important starting point for useful intelligence.  It often also includes the ability to explain why and how it came to a conclusion.

Seeking Artificial Common Sense   By Don Monroe  in CACM

Communications of the ACM, November 2020, Vol. 63 No. 11, Pages 14-16 10.1145/3422588

Although artificial intelligence (AI) has made great strides in recent years, it still struggles to provide useful guidance about unstructured events in the physical or social world. In short, computer programs lack common sense.

"Think of it as the tens of millions of rules of thumb about how the world works that are almost never explicitly communicated," said Doug Lenat of Cycorp, in Austin, TX. Beyond these implicit rules, though, commonsense systems need to make proper deductions from them and from other, explicit statements, he said. "If you are unable to do logical reasoning, then you don't have common sense."

This combination is still largely unrealized; in spite of impressive recent successes of machine learning in extracting patterns from massive data sets of speech and images, they often fail in ways that reveal their shallow "understanding." Nonetheless, many researchers suspect hybrid systems that combine statistical techniques with more formal methods could approach common sense.

Importantly, such systems could also genuinely describe how they came to a conclusion, creating true "explainable AI" (see "AI, Explain Yourself," Communications 61, 11, Nov. 2018).   ... " 

Thursday, October 22, 2020

Quantum Safe Hybrid Digital Certificates

 Looking at quantum safe.

What Is a Quantum-Safe Hybrid Digital Certificate?

Sectigo’s Tim Callan, Jason Soroko and Alan Grau break down what quantum safe hybrid TLS certificates are and how they can help to prepare businesses for quantum-safe cryptography in Sectigo’s Root Causes podcast.

Quantum computing is poised to disrupt the technological world as we know it. And although quantum computing — and all of the advantages it offers — is still realistically years away, businesses and organizations need to prepare themselves for its inevitable downside: broken cryptosystems.

Quantum computers will break our existing asymmetric cryptosystem — something that cybercriminals will be ready and eager to take advantage of. This is why it’ll be necessary to migrate your existing IT and cryptosystems to their quantum-resistant or quantum-safe equivalents.

But, of course, upgrading to post quantum cryptographic (PQC) systems and infrastructure takes time and resources. So, one of the ways to help futureproof your cyber security through this process is through the use of hybrid digital certificates such as a hybrid TLS certificate. ... " 

RPA for Fintech, a Use Example

Here is a good intro to RPA (Robotic Process Automation) for finance applications.  Nice too because most of us can understand basic financial statements, arithmetic, and goals.  Below is just the intro; the full piece is at the link.

RPA Guide For Fintech Industry   Posted by Amit Dua  from DSC

Technology is changing the way we live and breathe. We’d even go a step ahead, and quote: Everything we do as humans, including every feat we’ve achieved as a modern civilization, is marked by dynamic leaps in technology. 

What is a dynamic leap, you ask? Let’s understand technological advancements through an example of linear and dynamic steps.  When talking in the Linear terms, if you go from 1 to 30, you cover 30 steps.  Common sense, right? But wait. When talking in Dynamic terms, if you go from 1 to 30, you cover a Billion.  That’s what a dynamic leap is; and technology is evolving at a dynamic pace. Marshall McLuhan puts it best: ‘First, we build the tools; then they build us back.’

The same is true with the BFSI (banking, financial services, and Insurance) sector. 

Since the advancements in automation and digital technologies, it has become preemptive for financial institutions to change the dynamics and inculcate automation in their regulatory requirements.  If we follow the automation trend, it suggests that intelligent automation technologies like Robotic Process Automation (RPA) and AI can reduce costs in Fintech by up to 25%. 

Alt: RPA Implementation in Fintech Industry

What’s Fintech? According to Investopedia, ‘Financial technology (Fintech) is used to describe new tech that seeks to improve and automate the delivery and use of financial services.’.... "
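
One quibble with the excerpt above: the "dynamic leap" arithmetic is never spelled out. Presumably it means exponential doubling, since thirty doublings reach roughly a billion.

```python
# Assumed reading of the quoted "1 to 30" contrast: linear steps versus doublings.
linear = 30
dynamic = 2 ** 30
print(linear, dynamic)   # 30 vs 1,073,741,824
```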

Amazon will Pay You to Know what you Bought Somewhere Else

Amazon's paid shopper panel.   

Amazon will pay you to know what you bought somewhere else  by George Anderson in Retailwire

Amazon.com wants greater insights into what its customers are purchasing and it is willing to pay for the information. The e-tailing and technology giant has launched Amazon Shopper Panel, an invitation-only program that allows participants to earn monthly rewards by sharing receipts of purchases made outside of its website and retail stores.

Participants in the program are asked to upload photos of 10 eligible receipts per month taken with the Shopper Panel app. Alternatively, they can forward email receipts to Amazon. Additional rewards are available when participants fill out short surveys. Amazon customers can earn up to $10 a month that can be applied to their balance on the site or donated to charity.

Participation in the panel is voluntary and those involved can choose to stop participating at any time. Amazon collects only the information shared by panelists. The company said it “deletes any sensitive information, such as prescription information from drug store receipts.” Amazon said all personal information of panelists is secured and handled in accordance with its privacy policy.

Amazon’s Shopper Panel site says that the data gleaned from receipts will help brands offer better products and make ads more relevant on Amazon.....  "

MIT and Related Quantum Resources

Was just pointed to this (much more at the link):

MIT partners with national labs on two new National Quantum Information Science Research Centers

Co-design Center for Quantum Advantage and Quantum Systems Accelerator are funded by the U.S. Department of Energy to accelerate the development of quantum computers.

Kylie Foy | Sampson Wilcox | MIT Lincoln Laboratory | Research Laboratory of Electronics.  Publication Date: August 31, 2020  .... " 

What Does a Space Force Do?

Well, one thing: guarding against cybersecurity threats.

US Space Force guards against cybersecurity threats miles above Earth

If space is indeed the “final frontier,” as narrated in the famous opening voiceover in “Star Trek,” it is also becoming the final line of defense against threats to technologies that have become essential for daily life.

From the use of GPS to navigate traffic congestion to fighting forest fires with heat sensors, orbiting satellites play a critical role in providing convenience and safety for countries around the globe. And as governments and private enterprises launch more satellites, the attack surface has expanded as well.

“Space is becoming congested and contested,” said Lt. Gen. John F. Thompson (pictured), commander of the Space and Missile Systems Center at the Los Angeles Air Force Base in California. “The cyber aspects of the space business are truly, truly daunting and important to all of us. Integrating cybersecurity into our space systems, both commercial and government, is a mandate.”

Thompson spoke with John Furrier, host of theCUBE, SiliconANGLE Media’s livestreaming studio, during the Space & Cybersecurity Symposium. They discussed the vital role of GPS infrastructure, threats from nation states, the military’s adoption of a DevSecOps mindset, hiring goals and funding for startups with innovative ideas designed to protect the final frontier.

Trillions in GPS value

As a division of the U.S. Space Force, the Space and Missile Systems Center is responsible for acquiring and developing military space systems. This includes both orbiting satellites and ground communications systems for the U.S. Space Force, critical partners in the Department of Defense and the intelligence community, according to Thompson. 

Wednesday, October 21, 2020

Extending Insight from Neural Networks

 Some thoughts about how trained networks can be used to further analyse chemical structure.

Opening the Black Box of Neural Networks

Pacific Northwest National Laboratory, Allan Brettman

Pacific Northwest National Laboratory (PNNL) researchers used deep learning neural networks to model water molecule interactions, unearthing data about hydrogen bonds and structural patterns. The PNNL team employed 500,000 water clusters from a database of more than 5 million water cluster minima to train a neural network, relying on graph theory to extract structural patterns of the molecules' aggregation. The method provides additional analysis after the network has been trained, allowing comparison between measurements of the water cluster networks' structural traits and the predicted neural network, enhancing the network's understanding in subsequent analyses. PNNL's Jenna Pope said, "If you were able to train a neural network, that neural network would be able to do computational chemistry on larger systems. And then you could make similar insights in computational chemistry about chemical structure or hydrogen bonding or the molecules’ response to temperature changes.”

Driverless in San Francisco

And yet more cars without drivers.     A tipping point?

GM to Run Driverless Cars in San Francisco Without Human Backups    Associated Press, Tom Krisher

General Motors' Cruise autonomous vehicle unit said it will remove human backup drivers from the driverless vehicles it is testing on San Francisco streets by year's end, as California's Department of Motor Vehicles has granted the company a permit to do so. This follows Google subsidiary Waymo's announcement last week that it would open its autonomous ride-hailing service in Phoenix, AZ, without human drivers. Said the University of California, Berkeley's Steven Shladover, "I don't see them as revolutionary steps, but they're part of this step-by-step progress toward getting the technology to be able to work under a wider range of conditions."

SpaceX and Microsoft

Here Comes the Space Cloud.   Faster anywhere, even in space.

 Microsoft, SpaceX Team Up to Bring Cloud Computing to Space

Nextgov. Frank Konkel

Microsoft has partnered with SpaceX and others to make its Azure cloud technology available and accessible to people anywhere on Earth, and potentially those in space. Microsoft will use SpaceX's forthcoming Starlink satellite constellation to bring customers in remote regions high-speed, low-latency broadband; the satellites will function as a channel for data between Microsoft's conventional datacenters and matched ground stations, and the company's modular datacenters. Microsoft also announced an expansion of its Azure Orbital partnership with satellite telecommunications company SES to broaden connectivity between its cloud data centers and edge devices. ....'

Hewlett Foundation on Cyber Security Design

It has been a long while since I looked at anything by the Hewlett Foundation.  I attended their meetings; now I am reconnected.  Visuals are a good thing for communication, especially for obscure security concepts.

Hewlett Foundation Reveals Top Ideas in Cyber Design Competition

The William and Flora Hewlett Foundation today announced five top ideas in the international “Cybersecurity Visuals Challenge,” which is focused on producing easily-understandable visuals to better illustrate the complexity and importance of today’s cybersecurity challenges to broad audiences. 

Five winning designers produced a portfolio of openly-licensed designs aimed at explaining the stakes involved in cybersecurity topics like encryption or phishing in more human, relatable terms. 

The winners of the Cybersecurity Visuals Challenge are:  ....  (Details at the link) 

“The challenges we face today online keeping networks and devices secure are far too complex to be illustrated by a shadowy figure in a hoodie hunched over a laptop,” said Eli Sugarman, program officer at the Hewlett Foundation in charge of the Cyber Initiative, a ten-year grantmaking effort devoted to improving cyber policy. “Sophisticated organizations are attacking the security of the internet and we believe the images produced by the participating artists will help increase understanding of these issues for policymakers and the broader public alike.”  ... 

Expanding AI's Impact with Organizational Learning

I participated in the study below; they make the point that it will be available only for a short time.  Here is the start of the document:

MIT Sloan: EXPANDING AI’S IMPACT WITH ORGANIZATIONAL LEARNING

Most companies developing AI capabilities have yet to gain significant financial benefits from their efforts. Only when organizations add the ability to learn with AI do significant benefits become likely.

EXECUTIVE SUMMARY


Only 10% of companies obtain significant financial benefits from artificial intelligence technologies. Why so few?

Our research shows that these companies intentionally change processes, broadly and deeply, to facilitate organizational learning with AI. Better organizational learning enables them to act precisely when sensing opportunity and to adapt quickly when conditions change. Their strategic focus is organizational learning, not just machine learning.

Organizational learning with AI is demanding. It requires humans and machines to not only work together but also learn from each other — over time, in the right way, and in the appropriate contexts. This cycle of mutual learning makes humans and machines smarter, more relevant, and more effective. Mutual learning between human and machine is essential to success with AI. But it’s difficult to achieve at scale.

Our research — based on a global survey of more than 3,000 managers, as well as interviews with executives and scholars — confirms that a majority of companies are developing AI capabilities but have yet to gain significant financial benefits from their efforts. More than half of all respondents affirm that their companies are piloting or deploying AI (57%), have an AI strategy (59%), and understand how AI can generate business value (70%). These numbers reflect statistically significant increases in adoption, strategy development, and understanding from four years ago. What’s more, a growing number of companies recognize a business imperative to improve their AI competencies. Despite these trends, just 1 in 10 companies generates significant financial benefits with AI.

We analyzed responses to over 100 survey questions to better understand what really enables companies to generate significant financial benefits with AI. We found that getting the basics right — like having the right data, technology, and talent, organized around a corporate strategy — is far from sufficient. Only 20% of companies achieve significant financial benefits with these fundamentals alone. Getting the basics right and building AI solutions that the business wants and can use improve the odds of obtaining significant financial benefits, but to just 39%.

Our key finding: Only when organizations add the ability to learn with AI do significant benefits become likely. With organizational learning, the odds of an organization reporting significant financial benefits increase to 73%.

Organizations that learn with AI have three essential characteristics:

1. They facilitate systematic and continuous learning between   ..... " 

Tuesday, October 20, 2020

Sonos Makes a Small Smart Home Move

Not expected, intriguing. Hoping to compete with other players in the space?

Sonos speakers can now communicate with GE Appliances.  They can notify you when the oven is preheated or a dishwasher load is done.

Igor Bonifacic, @igorbonifacic in Engadget .... 

API Security

Pointed out to me recently.  I have not been involved in API security, but it seems there are useful tips here.

Tips To Strengthen API Security  By Bill Doerrfeld in DevOps

If you haven’t noticed, digital organizations are building more and more APIs. ProgrammableWeb tracks more than 23,000 public web APIs at the time of writing, and the API market is estimated to be worth $5.1 billion by 2023. Building with APIs increases internal interoperability, reduces development time and can extend product functionality tremendously. In short, the value of APIs is rising. However, opening up with APIs brings security caveats that, if not addressed, could result in serious breaches that negate these benefits.  .... 

 ... APIs have been called “the next frontier in cybercrime.” Rightly so, as API breaches continue to pop up nearly every day. Take the recent API vulnerabilities at Cisco Systems, Shopify, Facebook, U.S. presidential campaign apps, and GCP as evidence. The most infamous was likely the Equifax breach—not enforcing formats on incoming API calls resulted in a massive data breach, which cost the company a $700 million lawsuit.  ... " 
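
One of the simplest controls implied by the Equifax example above is enforcing a strict format on incoming API payloads before any business logic runs. Below is a minimal sketch using the Python jsonschema package; the field names are made up for illustration.

```python
# Reject malformed API input up front by validating it against a schema.
from jsonschema import validate, ValidationError

ACCOUNT_SCHEMA = {
    "type": "object",
    "properties": {
        "account_id": {"type": "string", "pattern": "^[0-9]{6,12}$"},
        "amount": {"type": "number", "minimum": 0},
    },
    "required": ["account_id", "amount"],
    "additionalProperties": False,
}

def handle_request(payload: dict) -> str:
    try:
        validate(instance=payload, schema=ACCOUNT_SCHEMA)
    except ValidationError as err:
        return f"rejected: {err.message}"      # never process malformed input
    return "accepted"

print(handle_request({"account_id": "123456", "amount": 10.0}))
print(handle_request({"account_id": "x", "amount": -5, "extra": True}))
```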

Bletchley Park Contribution Over-Rated?

 Am a student of this effort, so this suggestion was surprising.

Bletchley Park’s contribution to WW2 'over-rated'

By Gordon Corera, Security correspondent

Code-breaking hub Bletchley Park's contribution to World War Two is often over-rated by the public, an official history of UK spy agency GCHQ says.  The new book - Behind the Enigma - is released on Tuesday and is based on access to top secret GCHQ files.   "Bletchley is not the war winner that a lot of Brits think it is," the author, Professor John Ferris of the University of Calgary, told the BBC.

But he said Bletchley still played an important role.  ... '

A GPU can Brute Force Your Passwords

Faster GPUs are eroding security.  Using a password manager, which gives you a larger number of password characters, and/or multifactor authentication makes much sense.

The Nvidia RTX 3090 GPU Can Probably Crack Your Passwords   By Ryan Whitwam

The new Nvidia GeForce RTX 3090 is a gaming powerhouse, but that’s not all it can do. According to the makers of a popular password recovery application, the RTX 3090 is also good at brute-forcing passwords. That’s great if you forget an important password, but that’s probably not why people are using such tools. The latest Nvidia cards could make cracking someone else’s files almost trivially easy. 

The RTX 3090 is Nvidia’s latest top-of-the-line GPU with a GA102 graphics processor sporting 10,496 cores and 24GB of GDDR6X memory. It is monstrously, obscenely powerful by today’s gaming standards, and comes with a correspondingly high price of $1,500, give or take a few hundred depending on supply. With a focus on high core counts, GPUs are also great for parallel computing. That’s why you couldn’t even buy a GPU for several months when Bitcoin was at its peak. In the same vein, GPUs are very good at cracking passwords.   .... "
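
The underlying arithmetic is easy to sketch: the time to exhaust a password space is just the keyspace divided by the guess rate. The rate below is an assumed figure for illustration, not a benchmarked RTX 3090 number, but it shows why password length matters so much.

```python
# Back-of-the-envelope brute-force estimate.
guesses_per_second = 5e9          # assumed GPU rate for a fast hash, illustrative only
alphabet = 95                     # printable ASCII characters

for length in (8, 10, 12, 16):
    keyspace = alphabet ** length
    seconds = keyspace / guesses_per_second
    years = seconds / (3600 * 24 * 365)
    print(f"{length:2d} chars: about {years:.3g} years to exhaust")
```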

Focus Music

Amazon Alexa Music has been pushing what they call 'Focus Time Music', with claims of 'perfect sound when you are studying, working, reading or writing'.  They just suggested the idea to me.  Something I have tried myself for years, with playlists, particular artists, etc.  For me it's jazz.  Does it really work?  More than other methods?  Seems you could do some controlled experiments; anyone know of any?  How about taking that to creativity?  Here is what they write about it.

On Synthetic Data

Was asked about this; it seems it has not come up for some time.  We used it to set up software and analyses ahead of incoming real data.  Some MIT thoughts.

The real promise of synthetic data   by Massachusetts Institute of Technology

After years of work, MIT's Kalyan Veeramachaneni and his collaborators recently unveiled a set of open-source data generation tools — a one-stop shop where users can get as much data as they need for their projects, in formats from tables to time series. They call it the Synthetic Data Vault. Credit: Arash Akhgari

Each year, the world generates more data than the previous year. In 2020 alone, an estimated 59 zettabytes of data will be "created, captured, copied, and consumed," according to the International Data Corporation—enough to fill about a trillion 64-gigabyte hard drives.

But just because data are proliferating doesn't mean everyone can actually use them. Companies and institutions, rightfully concerned with their users' privacy, often restrict access to datasets—sometimes within their own teams. And now that the COVID-19 pandemic has shut down labs and offices, preventing people from visiting centralized data stores, sharing information safely is even more difficult.

Without access to data, it's hard to make tools that actually work. Enter synthetic data: artificial information developers and engineers can use as a stand-in for real data.

Synthetic data is a bit like diet soda. To be effective, it has to resemble the "real thing" in certain ways. Diet soda should look, taste, and fizz like regular soda. Similarly, a synthetic dataset must have the same mathematical and statistical properties as the real-world dataset it's standing in for. "It looks like it, and has formatting like it," says Kalyan Veeramachaneni, principal investigator of the Data to AI (DAI) Lab and a principal research scientist in MIT's Laboratory for Information and Decision Systems. If it's run through a model, or used to build or test an application, it performs like that real-world data would.

But—just as diet soda should have fewer calories than the regular variety—a synthetic dataset must also differ from a real one in crucial aspects. If it's based on a real dataset, for example, it shouldn't contain or even hint at any of the information from that dataset.

Threading this needle is tricky. After years of work, Veeramachaneni and his collaborators recently unveiled a set of open-source data generation tools—a one-stop shop where users can get as much data as they need for their projects, in formats from tables to time series. They call it the Synthetic Data Vault.  .... "
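
To make the 'diet soda' analogy concrete, here is a toy sketch of generating a synthetic table that matches the marginal statistics of a 'real' one without copying any rows. It is deliberately simplistic (independent Gaussians per column) and is not the Synthetic Data Vault's actual modeling approach.

```python
# Toy synthetic-data generator: match per-column mean and spread, ignore
# correlations, and never reuse original rows. Illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
real = pd.DataFrame({
    "age": rng.normal(40, 12, 500).clip(18, 90),
    "income": rng.lognormal(10.5, 0.4, 500),
})

def synthesize(df: pd.DataFrame, n: int) -> pd.DataFrame:
    # Independent Gaussians per column: captures marginals only.
    return pd.DataFrame({
        col: rng.normal(df[col].mean(), df[col].std(), n) for col in df.columns
    })

synthetic = synthesize(real, 500)
print(real.describe().loc[["mean", "std"]])
print(synthetic.describe().loc[["mean", "std"]])
```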

Toshiba Targets Quantum Cryptography

Considerable effort under way here:

Toshiba targets $3 billion revenue in quantum cryptography by 2030

By Makiko Yamazaki in Reuters

TOKYO (Reuters) - Toshiba Corp 6502.T said on Monday it aims to generate $3 billion in revenue from its advanced cryptographic technology for data protection by 2030, as the sprawling Japanese conglomerate scrambles to find future growth drivers.

The cyber security technology, called quantum key distribution (QKD), leverages the nature of quantum physics to provide two remote parties with cryptographic keys that are immune to cyberattacks driven by quantum computers.  ... 

Monday, October 19, 2020

Bringing Power Tool From Math Into Quantum Computing

The implication that this idea can be used for problems already well solved by FFT methods is considerable.  These kinds of pattern recognition techniques are already well known in engineering applications.  Would like to try this against some problems like machine maintenance.

Bringing Power Tool From Math Into Quantum Computing  Tokyo University of Science (Japan),   October 14, 2020

Scientists at Japan's Tokyo University of Science (TUS) have designed a novel quantum circuit that calculates the fast Fourier transform (FFT) in a faster, versatile, and more efficient manner than previously possible. The quantum fast Fourier transform (QFFT) circuit does not waste any quantum bits, and it exploits the superposition of states to boost computational speed by processing a large volume of information at the same time. Its versatility is another benefit. TUS' Ryoko Yahagi said, "One of the main advantages of the QFFT is that it is applicable to any problem that can be solved by the conventional FFT, such as the filtering of digital images in the medical field or analyzing sounds for engineering applications."
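
As a reminder of the classical workload the QFFT targets, here is a conventional FFT-based low-pass filter in Python, of the kind used for the signal and image filtering applications Yahagi mentions.

```python
# Classical FFT filtering: remove high-frequency noise from a sampled signal.
import numpy as np

fs = 1000                                    # sampling rate in Hz
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
spectrum[freqs > 20] = 0                     # crude low-pass filter
filtered = np.fft.irfft(spectrum, n=t.size)

# Mean absolute error against the clean 5 Hz tone.
print(np.abs(filtered - np.sin(2 * np.pi * 5 * t)).mean())
```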

Deep Learning Takes on Synthetic Biology

As previously mentioned, we experimented with the idea before the current state of machine learning, using a kind of simulation more akin to 'digital twins'.  The ML method would have been useful to add.

Deep Learning Takes on Synthetic Biology

The Harvard Gazette    By Lindsay Brownell   October 7, 2020

Two teams of scientists from Harvard University and the Massachusetts Institute of Technology have developed machine learning algorithms that can analyze RNA-based "toehold switch" molecular sequences and predict which will reliably sense and respond to a desired target sequence. The researchers first designed and synthesized a massive toehold switch dataset, which Harvard's Alex Garruss said "enables the use of advanced machine learning techniques for identifying and understanding useful switches for immediate downstream applications and future design." One team trained an algorithm to analyze switches as two-dimensional images of base-pair possibilities, and then to identify patterns signaling whether a given image would be a good or a bad toehold via an interpretation process called Visualizing Secondary Structure Saliency Maps. The second team tackled the challenge with orthogonal techniques using two distinct deep learning architectures. Their Sequence-based Toehold Optimization and Redesign Model and Nucleic Acid Speech platforms enable the rapid design and optimizing of synthetic biology components.  ... " 

Quantum Engines?

Interesting proposal.  Speculative?    Relates to laws of thermodynamics brought up recently here. Can entanglement be a fuel?  Consider the implications.   

Perfect Energy Efficiency: Quantum Engines With Entanglement as Fuel? By UNIVERSITY OF ROCHESTER in SciTechDaily

University of Rochester researcher receives $1 million grant to study quantum thermodynamics.

It’s still more science fiction than science fact, but perfect energy efficiency may be one step closer due to new research at the University of Rochester.

In order to make a car run, a car’s engine burns gasoline and converts the energy from the heat of the combusting gasoline into mechanical work. In the process, however, energy is wasted; a typical car only converts around 25 percent of the energy in gasoline into useful energy to make it run.

Engines that run with 100 percent efficiency are still more science fiction than science fact, but new research from the University of Rochester may bring scientists one step closer to demonstrating an ideal transfer of energy within a system.

Andrew Jordan, a professor of physics at Rochester, was recently awarded a three-year, $1 million grant from the Templeton Foundation to research quantum measurement engines—engines that use the principles of quantum mechanics to run with 100 percent efficiency. The research, to be carried out with co-principal investigators in France and at Washington University St. Louis, could answer important questions about the laws of thermodynamics in quantum systems and contribute to technologies such as more efficient engines and quantum computers.

“The grant deals with several Big Questions about our natural world,” Jordan says.  .... 

Learning Microwave Ovens

My microwave oven learns, say to cook a baked potato, but this takes it to a new dimension.  For possible industry applications.    See also my previous note on using microwave ovens for health data detection.  Use the tag below 'microwave'.  

Researchers develop 'learning' microwave ovens    by University of Amsterdam

In a publication in the Journal of Cleaner Production, Prof. Bob van der Zwaan of the Van 't Hoff Institute of Molecular Sciences presents the first example of a learning curve for microwave ovens, which follows a learning rate of around 20%. The paper discusses opportunities for possible microwave heating applications in households and industry that can contribute to sustainable development. Rapidly reducing prices could lead to a meaningful role of microwave technology in the energy transition.
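
The "learning rate of around 20%" is most naturally read in the Wright's-law sense: unit cost falls about 20% with each doubling of cumulative production. A quick sketch with illustrative numbers, not taken from the paper:

```python
# Assumed Wright's-law interpretation of a 20% learning rate.
initial_cost = 100.0      # assumed cost of the first unit, arbitrary currency
learning_rate = 0.20

for doublings in range(6):
    cost = initial_cost * (1 - learning_rate) ** doublings
    print(f"after {2**doublings:3d}x cumulative production: cost {cost:6.1f}")
```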

Sunday, October 18, 2020

Small Sensors and Robotics: Insects in Tow

Back to our interest here in small robotics, now taking it beyond biomimicry to directly using the insects themselves.

Researchers Use Flying Insects to Drop Sensors Safely    By UW News

 A Manduca sexta moth carrying a sensor on its back. ... 

University of Washington researchers have created a sensor small and light enough to ride on the back of an insect for deployment.

Researchers at the University of Washington (UW) have created a 98-milligram sensor that can access difficult- or dangerous-to-reach areas by riding on a small drone or an insect and being dropped when it reaches its destination.

The sensor is released when it receives a Bluetooth command and can fall up to 72 feet at a maximum speed of 11 mph without breaking, then collect data like temperature or humidity levels for nearly three years.

Said UW's Shyam Gollakota, "This is the first time anyone has shown that sensors can be released from tiny drones or insects such as moths, which can traverse through narrow spaces better than any drone and sustain much longer flights."  The system could be used to create a sensor network within a study area researchers wish to monitor.

From University of Washington

Modernizing Legacy Code with AI

I have done a lot of this; it can have surprisingly big value, even finding latent bugs along the way.

IBM Watson's Next Challenge: Modernize Legacy Code  in IEEE Spectrum

IBM Research's Chief Scientist Ruchir Puri says Watson AIOps can take on the tedious tasks of software maintenance so human coders can innovate    By Dexter Johnson

IBM’s initiatives into artificial intelligence have served as bellwethers for how AI helps us re-imagine computing, but also how it can transform the industries to which it is applied. There was no more clear a demonstration of AI’s capability than when IBM had its Watson supercomputer defeat all the human champions in the game show Jeopardy.

The years that followed that success back in 2011, however, were years of struggle for IBM to find avenues for Watson to turn its game-show success into commercially viable problem solving. For example, attempts to translate that problem-solving capability to medical diagnosis have been fraught with challenges. .... " 

Conversations on Architecture for Digital Twins

Based in part on a conversation with Swim.ai.  I have yet to look at that service, but plan to.  It seems such an architecture would have to be very adaptable, linkable to real-time data and to samples of stored operational data.  Useful thoughts here.

What’s the right computing architecture for digital twins?    by 7wdata     October 17, 2020    

Last week, I found myself having a conversation that covered edge computing, digital twins, and the concept of absolute truth. It started out as a discussion with Simon Crosby, the CTO of Swim.ai, about that company’s latest product, which is designed to bring Swim’s edge analytics software to the enterprise and industrial world. But it quickly broadened to a conversation about the way we think about data storage and compute when we want to act on real-time information and insights.

Basically, with IoT we’re trying to get a continuous and current view of machines, traffic, environmental conditions, or whatever else so we can use that information to take some sort of action. That action might be predicting when a machine will fail, or routing traffic more efficiently, but for many use cases, the time between gathering the data, offering an insight, and then taking action will be short.

And by short, I mean the data might need to be analyzed before a traffic light changes or a person walks more than a few feet away from a shelf in a grocery store. Figuring out how to analyze incoming data and then create a model based on it, such as of an intersection or shoppers, so that a computer can act on it is what led to our discussion of truth. Crosby’s point was that truth changes every second, so if we’re trying to build a digital twin that represents the truth of a machine or a model, it needs to constantly change. And that has a lot of implications for how we think about computing architectures for digital twins.

For example, Swim.ai is working with a U.S. telecommunications company to create a digital twin of the carrier’s network in real time and then optimize that network based on the ongoing movements of people and any applications they’re running. The carrier is tracking 150 million cellular devices, which together generate 4 petabytes of data each day. With 5G on the horizon and an increasing number of elements to track between devices and base stations, the carrier expects that the amount of data it will need to analyze will reach 20 petabytes.

Prior to Swim, the carrier would take that data and move it to a 400-node Hadoop cluster to analyze it in batches. It took roughly 6 hours and required a lot of servers. After switching to Swim’s software, the carrier can track those 150 million devices and base stations and start taking actions on its network in just 100 milliseconds.  ... " 
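
The architectural shift being described is from periodic batch jobs to small, stateful models ('twins') updated on every event. Below is a conceptual sketch of that pattern; it is purely illustrative and does not show Swim's actual implementation.

```python
# Per-device stateful twins updated event by event, so decisions can be made
# in milliseconds rather than after a multi-hour batch job.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class DeviceTwin:
    readings: list = field(default_factory=list)

    def update(self, value: float) -> float:
        self.readings = (self.readings + [value])[-100:]   # rolling window
        return sum(self.readings) / len(self.readings)     # current "truth"

twins = defaultdict(DeviceTwin)

events = [("cell-001", 0.72), ("cell-002", 0.40), ("cell-001", 0.80)]
for device_id, load in events:
    avg = twins[device_id].update(load)
    if avg > 0.75:
        print(f"{device_id}: act now, rolling load {avg:.2f}")
```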

Generating Photons for Communications in Quantum Computing Systems

Precisely generating photons can be seen as a form of communication, and thus this takes the technique beyond just computing.

Generating Photons for Communication in Quantum Computing System

MIT News,  Michaela Jarvis

Massachusetts Institute of Technology (MIT) researchers have developed a technique for inducing quantum bits (qubits) to generate photons to enable quantum processor communication, a key step in achieving interconnections for a modular quantum computing platform. The architecture features superconducting qubits connected to a microwave transmission line or waveguide, with quantum interconnects needed to link qubits at distant locations. Communication occurs in the waveguide as excitations stored within the qubits produce photon pairs, which are emitted into the waveguide and travel to two distant processing nodes, distributing their entanglement throughout a quantum network. Said MIT’s Bharath Kannan, the entanglement “can then be transferred into the processors for use in quantum communication or interconnection protocols."  ... '  

Traveling Salesperson Algorithm Improved

Classic problem often used as a benchmark for algorithmic solutions.

Computer Scientists Break the 'Traveling Salesperson' Record,  from Quanta Magazine

Finally, there’s a better way to find approximate solutions to the notorious optimization problem, often used to test the limits of efficient computation.

When Nathan Klein started graduate school two years ago, his advisers proposed a modest plan: to work together on one of the most famous, long-standing problems in theoretical computer science.

Even if they didn’t manage to solve it, they figured, Klein would learn a lot in the process. He went along with the idea. “I didn’t know to be intimidated,” he said. “I was just a first-year grad student—I don’t know what’s going on.”

Now, in a paper posted online in July, Klein and his advisers at the University of Washington, Anna Karlin and Shayan Oveis Gharan, have finally achieved a goal computer scientists have pursued for nearly half a century: a better way to find approximate solutions to the traveling salesperson problem.

This optimization problem, which seeks the shortest (or least expensive) round trip through a collection of cities, has applications ranging from DNA sequencing to ride-sharing logistics. Over the decades, it has inspired many of the most fundamental advances in computer science, helping to illuminate the power of techniques such as linear programming. But researchers have yet to fully explore its possibilities—and not for want of trying.

The traveling salesperson problem “isn’t a problem, it’s an addiction,” as Christos Papadimitriou, a leading expert in computational complexity, is fond of saying.

Most computer scientists believe that there is no algorithm that can efficiently find the best solutions for all possible combinations of cities. But in 1976, Nicos Christofides came up with an algorithm that efficiently finds approximate solutions—round trips that are at most 50 percent longer than the best round trip. At the time, computer scientists expected that someone would soon improve on Christofides’ simple algorithm and come closer to the true solution. But the anticipated progress did not arrive.

“A lot of people spent countless hours trying to improve this result,” said Amin Saberi of Stanford University.

Now Karlin, Klein and Oveis Gharan have proved that an algorithm devised a decade ago beats Christofides’ 50 percent factor, though they were only able to subtract 0.2 billionth of a trillionth of a trillionth of a percent. Yet this minuscule improvement breaks through both a theoretical logjam and a psychological one. Researchers hope that it will open the floodgates to further improvements.  .... "
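
For intuition about where Christofides' 50 percent guarantee comes from, here is the simpler MST "double-tree" heuristic, which already guarantees a tour at most twice the optimum on metric instances. This is toy code for illustration, not the Karlin-Klein-Oveis Gharan algorithm.

```python
# MST double-tree 2-approximation for metric TSP, compared to brute force.
import itertools
import math
import random

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(8)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mst_edges(points):
    """Prim's algorithm over the complete graph on the given points."""
    in_tree, edges = {0}, []
    while len(in_tree) < len(points):
        u, v = min(((i, j) for i in in_tree
                    for j in range(len(points)) if j not in in_tree),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
        edges.append((u, v))
        in_tree.add(v)
    return edges

def tour_from_mst(points):
    """Preorder walk of the MST, shortcutting repeats: at most 2x optimal (metric case)."""
    adj = {i: [] for i in range(len(points))}
    for u, v in mst_edges(points):
        adj[u].append(v)
        adj[v].append(u)
    order, seen, stack = [], set(), [0]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        stack.extend(reversed(adj[node]))
    return order

def tour_length(order, points):
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

approx = tour_length(tour_from_mst(cities), cities)
best = min(tour_length([0] + list(p), cities)
           for p in itertools.permutations(range(1, len(cities))))
print(f"double-tree tour: {approx:.3f}   optimal tour: {best:.3f}")
```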

Saturday, October 17, 2020

Automating Declarations of Conflicts of Interest

Had not heard this specifically stated this way before.  Contracts.  Considerable discussion below.

We Need to Automate the Declaration of Conflicts of Interest

By Richard T. Snodgrass, Marianne Winslett,   Communications of the ACM, October 2020, Vol. 63 No. 10, Pages 30-32   10.1145/3414556

Over the last 70 years of computer science research, our handling of conflicts of interest has changed very little. Each paper's corresponding author must still manually declare all their co-authors' conflicts of interest, even though they probably know little about their most senior coauthors' recent activities. As top-tier conference program committees increase past 500 members, many with common, easily confusable names, PC chairs with thousands of reviews to assign cannot possibly double-check corresponding authors' manual declarations against their paper's assigned reviewers. Nor can reviewers reliably catch unreported conflicts. Audits at recent top-tier venues across several areas of computer science each uncovered more than 100 instances where, at the first venue, a pair of recent coauthors failed to declare their conflict of interest; at the second venue, someone was assigned to review a recent co-author's submission; and at the third venue, someone reviewed a submission written by a prior co-author from any year. Even the concept of a conflict deserves closer scrutiny: an audit at yet another recent top-tier venue edition found more than 100 cases in which prior co-authors from any year reviewed each other's submissions. ... " 

Text Mining with R: Free Book

One of those sponsored books, to get your email address, via KDNuggets.  What we called Text Analytics.  Looks to be a useful intro.

Text Mining with R: Free Book    By Matthew Mayo, KDnuggets.

.... R is designed specifically for statistical computing, in juxtaposition to general purpose languages, the trade-off being that the relative lack of generality means better optimization for specialized scenarios. R's optimization for statistical computing is a big reason why it enjoys such high levels of adoption in data science and analytics.

Text analytics — like all applications and sub-genres of natural language processing — is continually reaching increasing heights of importance for data science, data scientists, and a variety of industries. As R (and its opinionated collection of packages designed for data science, the tidyverse) is an established environment for statistical computing utilized by data scientists, fully capable of performing text analytics, today we will look at Text Mining for R: A Tidy Approach.  ... "

What Is a Quantum-Safe Hybrid Digital Certificate?

More on linking quantum methods to data and communications security.

What Is a Quantum-Safe Hybrid Digital Certificate?     Sectigo  in CYBER SECURITY

Sectigo’s Tim Callan, Jason Soroko and Alan Grau break down what quantum safe hybrid TLS certificates are and how they can help to prepare businesses for quantum-safe cryptography in Sectigo’s Root Causes podcast

Quantum computing is poised to disrupt the technological world as we know it. And although quantum computing — and all of the advantages it offers — is still realistically years away, businesses and organizations need to prepare themselves for its inevitable downside: broken cryptosystems.

Quantum computers will break our existing asymmetric cryptosystem — something that cybercriminals will be ready and eager to take advantage of. This is why it’ll be necessary to migrate your existing IT and cryptosystems to their quantum-resistant or quantum-safe equivalents.

But, of course, upgrading to post quantum cryptographic (PQC) systems and infrastructure takes time and resources. So, one of the ways to help futureproof your cyber security through this process is through the use of hybrid digital certificates such as a hybrid TLS certificate.  ... " 
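
The hybrid idea itself is easy to sketch: a credential carries both a classical signature and a post-quantum one, and verification succeeds only if both check out. In the sketch below the classical part uses real Ed25519 from the Python cryptography package, while the "post-quantum" verifier is an obviously fake placeholder, since naming a specific PQC API here would be an invention.

```python
# Conceptual hybrid-signature check: classical AND post-quantum must both pass.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def pqc_verify(public_key: bytes, signature: bytes, data: bytes) -> bool:
    """Placeholder for a quantum-safe scheme; obviously NOT real cryptography."""
    return signature == data[::-1]

# Issue a toy "hybrid certificate" over some to-be-signed data.
tbs_data = b"subject=example.com, validity=2030"
classical_key = Ed25519PrivateKey.generate()
classical_sig = classical_key.sign(tbs_data)
pqc_sig = tbs_data[::-1]                      # produced by the placeholder signer

def verify_hybrid(data: bytes) -> bool:
    try:
        classical_key.public_key().verify(classical_sig, data)
    except InvalidSignature:
        return False
    return pqc_verify(b"pqc-public-key", pqc_sig, data)

print(verify_hybrid(tbs_data))                # True only if BOTH signatures verify
print(verify_hybrid(b"tampered data"))        # False
```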

Singapore Using Facial Recognition Broadly

Note the focused use, and the automatic disposal of the data after 30 days.  Is this sufficient to suppress misuse?  Once again, it is inevitable that such methods will be used broadly.

 In Singapore, Facial Recognition Getting Woven Into Everyday Life  from ACM

Singaporeans will be able to access government and other services through a facial recognition feature in its SingPass national identity program.

SingPass Face Verification lets users securely log in to their government services accounts at public kiosks and on home computers, tablets, and mobile phones just using their faces. Singapore's Government Technology Agency said the data collected via facial recognition is "purpose-driven," solely for a specific transaction, and deleted after 30 days.

The technology allegedly prevents login attempts using photos, masks, and deepfakes, as well as repelling replay attacks, which use a recording of a person's face to attempt authentication.  ... " 

Friday, October 16, 2020

Towards New Kinds of Search

Like the idea of new kinds of search. How about search with business, data or pattern driven context?  Lots of possibilities.  

Google Search gets hum-to-search and AI query upgrades   By Maria Deutscher in SiliconAngle

Google LLC has announced a set of artificial intelligence upgrades to its search engine that will enable users to find more types of information, as well as increase the accuracy of returned results.

Company executives detailed the enhancements at the company’s Search On virtual event on Thursday. 

In cases where users want to find a particular song but can’t remember its name, they can now hum, whistle or sing parts of it and Google will try to identify the tune. That’s made possible by machine learning models that convert the audio into an abstract representation consisting of a series of numbers. This series is then compared to a database of number sequences generated from popular songs.

Looking for videos will become easier as well. Google’s algorithms can now identify key moments in clips indexed by its search engine, tag them and incorporate them into search results. For instance, if a user is looking for information about a step in a recipe, Google could not only surface a relevant culinary video but also flag the specific part of the clip during which the given step is discussed.  ... "
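
The hum-to-search mechanics reduce to a nearest-neighbor lookup over learned audio "fingerprints." Here is a toy sketch of that matching step, with a fake embedding function standing in for Google's actual model.

```python
# Nearest-fingerprint matching: embed the query audio, then find the closest
# stored song embedding by cosine similarity. Embeddings here are random
# placeholders, not a real learned model.
import numpy as np

rng = np.random.default_rng(7)
song_db = {name: rng.normal(size=32) for name in ["song_a", "song_b", "song_c"]}

def embed(audio_samples: np.ndarray) -> np.ndarray:
    """Placeholder embedding; in production this is a learned neural model."""
    return rng.normal(size=32)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(audio_samples: np.ndarray) -> str:
    query = embed(audio_samples)
    return max(song_db, key=lambda name: cosine(query, song_db[name]))

print(identify(np.zeros(16000)))   # best match among the stored fingerprints
```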

Exceptions to the Law of Entropy

More of a science idea, but we actually found it useful to include in creativity work, and in work related to the application of AI to process.  I post it here to remind myself of the connection and pass it along to others out there.  Is it creeping into other 'engineering' efforts?  For example, is Quantum an aspect of the below?

Stochastic thermodynamics finds exceptions to the law of entropy  By Tim Andersen, Ph.D.  in Medium

 Thermodynamics is one of the most venerable physical theories. 18th and 19th century physicists like Carnot, Clausius, Gibbs, Helmholtz, and Boltzmann established it as a cornerstone of how macroscopic systems made of many, many constituent particles behave, how heat is transported from one system to another, and how engines do work and with what efficiency.

It is from thermodynamics that we get laws like: there are no perpetual motion machines (machines with 100% or more efficiency or that have non-zero efficiency in thermal equilibrium).

Unlike the laws of mechanics or many field theories, however, thermodynamics has always been recognized as containing both strict and de facto laws. The primary laws are:

   - Energy can be neither created nor destroyed. (Strict)

   - Entropy can never decrease. (De Facto)

There are qualifiers to this. In the 1st law, the system must be closed. For the second, we are talking about bringing two or more isolated systems together.

The conservation of energy is a consequence of time translation symmetry of our universe. All times in physical laws can be shifted by a constant amount. We get that from Emmy Noether’s theorem. No physical law breaks the conservation of energy.

The second law is a “de facto” law of physics. I use this term where others might use the word “statistical”. But to say it is statistical is an understatement compared to other fields like sociology. Until about 25 years ago, a violation of the 2nd law was almost thought to be impossible.   ... "
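
For context on why the second law is "de facto" rather than strict, stochastic thermodynamics usually expresses it through fluctuation theorems. A commonly cited form (stated here from the general literature, not taken from the linked article) says that entropy-decreasing fluctuations are exponentially rare rather than impossible:

\[ \frac{P(+\Delta S)}{P(-\Delta S)} = e^{\Delta S / k_B} \]

So a small system can briefly run "backwards," but the odds fall off exponentially with the size of the entropy decrease, which is why we never notice it at macroscopic scales.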

AI Scanning Construction Sites, Spotting When Things Are Slipping

I like the integration of relatively cheap cameras looking for key patterns in work progress (and potentially process) to determine missed schedules.  A classic use of AI in pattern recognition.  As suggested, also a key aspect of construction management.  It could also be linked to contractual timing and quality-of-work agreements, both in meeting those agreements and in spotting trends toward missing them.  As the article suggests, lots here.

AI that scans a construction site can spot when things are falling behind in TechnologyReview

Construction sites are vast jigsaws of people and parts that must be pieced together just so at just the right times. As projects get larger, mistakes and delays get more expensive. The consultancy McKinsey estimates that on-site mismanagement costs the construction industry $1.6 trillion. But typically you might only have five managers overseeing construction of a building with 1500 rooms, says Roy Danon, founder and CEO of British-Israeli start-up Buildots: “There’s no way a human can control that amount of detail.”

Danon thinks that AI can help. Buildots is developing an image recognition system that monitors every detail of an ongoing construction project and flags up delays or errors automatically. It is already being used by two of the biggest building firms in Europe, including UK construction giant Wates in a handful of large residential builds. Construction is essentially a kind of manufacturing, says Danon. If high-tech factories now use AI to manage their processes, why not construction sites?  ....

AI is starting to change various aspects of construction, from design to self-driving diggers. But Buildots is the first to use AI as a kind of overall site inspector. 

The system uses a GoPro camera mounted on top of a hardhat. When managers tour a site once or twice a week, the camera on their head captures video footage of the whole project and uploads it to image recognition software, which compares the status of many thousands of objects on site—such as electrical sockets and bathroom fittings—to a digital replica of the building.  ...."
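
What that comparison step might look like in code, as a hedged sketch (my illustration, not Buildots' actual system; the object names, states, and dates below are invented):

from datetime import date

# Hypothetical plan: object id -> (expected_state, due_date) from the digital replica.
plan = {
    "socket_3F_201": ("installed", date(2020, 10, 1)),
    "bath_3F_202":   ("fitted",    date(2020, 10, 20)),
}

# Hypothetical output of the image-recognition pass over this week's walkthrough footage.
observed = {
    "socket_3F_201": "missing",
    "bath_3F_202":   "fitted",
}

def flag_delays(plan, observed, today):
    """Return objects whose observed state does not match an already-due planned state."""
    return [
        obj for obj, (state, due) in plan.items()
        if today >= due and observed.get(obj) != state
    ]

print(flag_delays(plan, observed, date(2020, 10, 14)))   # prints: ['socket_3F_201']

The real system presumably works against a full building model and far richer state labels; the point is only that "flag what is behind schedule" reduces to a join between observed and planned states.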

California Says GM Can Test Fully Driverless Cars

This trend seems to be moving forward faster than expected.  Would expect these developments to boost confidence in other autonomy efforts.

CA Says General Motors Can Test Fully Driverless Cars on Its Roads  in ACM By Futurism.com

The California Department of Motor Vehicles (DMV) just granted permission to Cruise, General Motors' self-driving car venture, to test out its autonomous cars without anyone in the driver's seat.

California has given out permits that allow driverless tests with someone behind the wheel to 60 companies, The Verge reports. But Cruise is just the fifth to be permitted to test out cars that are completely empty. With the permit in hand, Cruise hopes to be the first to test a fully-driverless car on the streets of San Francisco.

Cruise's fleet, which has never been tested in public, The Verge reports, is made up of 200 electric Chevy Bolts. So essentially Cruise is trying to nail electric and autonomous cars all in one fell swoop. ....  "

Thursday, October 15, 2020

Shopping with YouTube?

Though have used YouTube since its inception, only in the last month have been using all of its capabilities.  Have been thinking about how it is a powerful engagement model, and how ads are interspersed.  Also thinking about the paths within different classes of content.  Here more:

Is YouTube a shopping powerhouse waiting to happen?   by Tom Ryan in Retailwire

YouTube is asking creators to tag and track products featured in their videos as part of an “experiment” in what is potentially a major step toward fulfilling the platform’s e-commerce ambitions.

Creators have largely monetized their YouTube content from advertisements served on their videos and from YouTube Premium subscribers watching their content. Some videos include links in their descriptions to Amazon or other retailers designed to drive affiliate sales.

The video tags that YouTube is now testing are linked to analytics and sales through Google, YouTube’s parent. A Shopify integration is also being explored, according to Bloomberg. The report stated, “The goal is to convert YouTube’s bounty of videos into a vast catalog of items that viewers can peruse, click on and buy directly.”   ... "

Getting Experts to Transfer their Knowledge

Some of this is obvious, but nicely arranged.  Spent much time on the premise.  Again, consider how the knowledge can be re-tested and maintained.

How KM Gets Experts to Transfer Their Knowledge  in APQC Blog

No matter what business you’re in, subject matter experts are likely in high demand. Your organization needs people with deep know-how and extensive experience to lead, innovate, and solve tough problems. But experts can’t just be islands unto themselves. You also need them to replicate and spread their knowledge by imparting it to others.

It’s impossible to build an effective knowledge transfer program without engaging formal and informal subject matter experts. After all, these folks have the most knowledge to convey, especially when it comes to deep contextual understanding of the organization’s products and processes. Experts can play many different roles in knowledge transfer, but APQC groups these into four broad categories: ... 

AI for the Cloud

Teaching AI to speak code.

IBM Research’s Chief Scientist Talks AI for Cloud Migration in Informationweek

Part of the future is already here as AI increases its influence on hybrid cloud and the digital transformation equation, according to Ruchir Puri.

As enterprises consider how they might take advantage of adopting a hybrid cloud approach, AI may be poised to accelerate and shake up the possibilities, says Ruchir Puri, chief scientist with IBM Research. Known for his work with IBM Watson, Puri discussed with InformationWeek how IBM is approaching the digital era, hybrid cloud, and AI. This includes evolving AI technology born out of Watson and teaching AI to essentially speak code.

Puri’s team works on AI technology to assist with data migration from legacy systems and languages such as COBOL to the cloud, and he says AI augmentation could boost productivity through automation of IT.  .... "

AI and Seismology

 An area we consulted on early and continue to follow.   Some comment here on why current, otherwise excellent pattern recognition techniques do not predict earthquakes.  Good update by CACM

AI Shakes up Seismology World

Commissioned by CACM Staff    By Samuel Greengard  

Artificial intelligence is helping researchers to better understand seismic events and to develop early warning systems that can save lives and protect property.

Few events on our planet are as complex as earthquakes. How, when and why they occur remains mostly a mystery, even with today's sophisticated instruments, sensors, and machines continuously monitoring and measuring seismic activity. "The vast number of variables and data points produce an extraordinarily complex picture," says Men-Andrin Meier, associate staff seismologist in the Seismological Laboratory of the California Institute of Technology (CalTech).

For decades, scientists have attempted to understand earthquakes using everything from satellite imagery to computer simulations, which have yielded modest and mixed results. Now scientists are turning to a new ally: artificial intelligence (AI), which is helping researchers better understand seismic events and develop early warning systems that can save lives and protect property.

"Machine learning and other forms of AI have emerged as valuable tools. They are advancing the science in a significant way," says Zachary Ross, an assistant professor of geophysics in the Division of Geological and Planetary Sciences at CalTech.

Finding Faults

The science surrounding earthquakes is extraordinarily complex. Unlike weather forecasting, which uses real-time data from satellites, sensors, and earth stations to track conditions as they occur, seismologists must rely on signals after an event. This data streams in from digital seismometers and broadband sensors on the ground. Measuring stress beneath the Earth's surface is next to impossible, because researchers don't have access to sensors buried deeply enough to measure underground forces.

Seismologists had largely given up on the idea of predicting earthquakes; at least, for the foreseeable future. However, the field is enjoying a renaissance thanks to machine learning and deep learning. Using connected sensors and algorithms, researchers are gaining insights into earthquake behavior, including how smaller swarms of temblors may or may not lead to a larger event. The researchers also are developing early alert systems that can protect property and lives. Says Meier, "We have gotten to the point where we don't have to choose between quantity and quality of the data."  .... " 

Robots as Essential Workers

Still have not seen many robots about; will that change?

How Robots Became Essential Workers in the COVID-19 Response

Autonomous machines proved their worth in hospitals, offices, and on city streets

By Erico Guizzo and Randi Klett

A robot, developed by Asimov Robotics to spread awareness about the coronavirus, holds a tray with face masks and sanitizer.

As the coronavirus emergency exploded into a full-blown pandemic in early 2020, forcing countless businesses to shutter, robot-making companies found themselves in an unusual situation: Many saw a surge in orders. Robots don’t need masks, can be easily disinfected, and, of course, they don’t get sick.

An army of automatons has since been deployed all over the world to help with the crisis: They are monitoring patients, sanitizing hospitals, making deliveries, and helping frontline medical workers reduce their exposure to the virus. Not all robots operate autonomously—many, in fact, require direct human supervision, and most are limited to simple, repetitive tasks. But robot makers say the experience they’ve gained during this trial-by-fire deployment will make their future machines smarter and more capable. These photos illustrate how robots are helping us fight this pandemic—and how they might be able to assist with the next one.  .... ' 

Wednesday, October 14, 2020

WalMart Competing with Geek Squad

Done well and priced well, this could attract and engage many consumers.

Has Walmart come up with an answer to Best Buy’s Geek Squad?  In Retailwire by George Anderson

Walmart is taking part in a pilot program to test the viability of offering consumer electronics and technology services to its customers at a fraction of what similar services such as Best Buy’s Geek Squad cost.

The retailer is setting up kiosks — four Dallas-area locations and another in Springdale, AR — that will enable customers to sign up for in-home installation of computing devices, smart home products, televisions and WiFi. It will also offer repair services for damaged smartphones and other electronic devices.

Walmart is looking to roll out the service in 50 locations by the middle of next year, and shoppers in areas with the service will be able to access it online. The kiosks will be staffed by True Network Solutions, which is partnering with the retailer to provide the service.  ... "

Microsoft says its AI can describe images 'as well as people do'

Recall some portion of this claim being made, but had not heard much new from Microsoft.  Know sight-impaired people who could use the capability.  We also worked on the general idea of 'captioning', which turns out to be tough to do generally well.

Microsoft says its AI can Describe Images 'as Well as People Do'  By Devindra Hardawar, @devindra  in Engadget

It’s a new milestone for AI that could genuinely help the visually impaired. 

Describing an image accurately, and not just like a clueless robot, has long been the goal of AI. In 2016, Google said its artificial intelligence could caption images almost as well as humans, with 94 percent accuracy. Now Microsoft says it’s gone even further: Its researchers have built an AI system that’s even more accurate than humans — so much so that it now sits at the top of the leaderboard for the nocaps image captioning benchmark. Microsoft claims it’s two times better than the image captioning model it’s been using since 2015.

And while that’s a notable milestone on its own, Microsoft isn’t just keeping this tech to itself. It’s now offering the new captioning model as part of Azure's Cognitive Services, so any developer can bring it into their apps. It’s also available today in Seeing AI, Microsoft's app for blind and visually impaired users that can narrate the world around them. And later this year, the captioning model will also improve your presentations in PowerPoint for the web, Windows and Mac. It’ll also pop up in Word and Outlook on desktop platforms.
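
For developers, wiring such a captioning service into an app is straightforward in outline. The sketch below is my own illustration of calling an Azure image-description endpoint from Python; the exact API version, path, and response fields are assumptions to verify against Microsoft's documentation.

import requests

AZURE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"   # placeholder
AZURE_KEY = "<your-key>"                                                  # placeholder

def describe_image(image_url):
    """Ask the (assumed) image-description endpoint for a one-line caption."""
    resp = requests.post(
        f"{AZURE_ENDPOINT}/vision/v3.1/describe",          # assumed path/version
        headers={"Ocp-Apim-Subscription-Key": AZURE_KEY},
        json={"url": image_url},
        timeout=10,
    )
    resp.raise_for_status()
    captions = resp.json()["description"]["captions"]      # assumed response shape
    return captions[0]["text"] if captions else "(no caption returned)"

# Example call (commented out; needs real credentials):
# print(describe_image("https://example.com/photo.jpg"))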

Google Duplex Books Haircuts.

Fascinated by 'booking' as a simplistic form of contextual conversation.  And it is interesting that Google Duplex took so long to get to this variant.  But is that significant?  I can see 'booking' being a script for many kinds of useful interactions, certainly as the start of such an interaction.  For example, for expert interaction we required the filling out of a problem description form to determine if an expert was necessary.  A simple example.

Google Duplex Can Book Haircuts, 2 Years After Stage Demo  By Eric Hal Schwartz in Voicebot.ai

The Google Duplex voice AI service can now book haircuts for clients, a service Google demonstrated when it announced Duplex in 2018. Duplex uses the Google Assistant AI to call barbers and salons, setting up appointments for their client. Until now, Duplex had been limited to making restaurant reservations or getting store information like opening times on behalf of users.  ... " 

Improving Amateur CGI with iPhone AR

Quite an interesting thought ... positioning you in  your real world.

iPhone AR tech can improve amateur CGI
CamTrackAR uses iOS' AR know-how to make adding effects easier.

Daniel Cooper, @danielwcooper
Match Moving is the process of anchoring a CGI object inside a real-world space, so that the camera treats it as if it was really there. If you’ve seen anything where a camera flies past a movie’s title like it was a road sign, then you’re familiar with how it looks. The process is commonly used in film and TV, but even as it’s gotten a lot cheaper, it’s still difficult to achieve without rotoscoping. HitFilm developer FXHome thinks that it’s found a way of harnessing the iPhone camera to make Match Moving accessible even for rank amateurs.

CamTrackAR is a new app from the British developer, which uses the iPhone’s superior augmented reality skills to simplify the process.  When you start the app, you need to look around a space until it identifies the floor, at which point you can start placing nulls. Nulls are essentially the anchor points within a space that will help VFX artists later add in the CGI elements to the live action.   ... "


Tuesday, October 13, 2020

Reinforcement Training is Supervised Learning on Optimized data

My long-time background is in systems optimization.  Quite intriguing claims are made here that could be very useful.  Ultimately very technical.

Reinforcement learning is supervised learning on optimized data  Ben Eysenbach and Aviral Kumar and Abhishek Gupta    Oct 13, 2020

The two most common perspectives on Reinforcement learning (RL) are optimization and dynamic programming. Methods that compute the gradients of the non-differentiable expected reward objective, such as the REINFORCE trick are commonly grouped into the optimization perspective, whereas methods that employ TD-learning or Q-learning are dynamic programming methods. While these methods have shown considerable success in recent years, these methods are still quite challenging to apply to new problems. In contrast deep supervised learning has been extremely successful and we may hence ask: Can we use supervised learning to perform RL?... "
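
To make the "supervised learning on optimized data" idea concrete in the simplest terms, here is a toy sketch of my own, closest in spirit to the cross-entropy method rather than the authors' exact algorithms: collect experience, keep only the high-reward portion, and fit the policy to it with ordinary supervised learning.

import numpy as np

rng = np.random.default_rng(0)

def collect_episode(policy_w):
    """Toy one-step 'episode': action ~ N(policy_w, 1), reward peaks at action = 3."""
    action = rng.normal(policy_w, 1.0)
    reward = -(action - 3.0) ** 2
    return action, reward

w = 0.0                                    # policy parameter (mean action)
for iteration in range(50):
    batch = [collect_episode(w) for _ in range(64)]
    actions = np.array([a for a, _ in batch])
    rewards = np.array([r for _, r in batch])
    # "Optimize the data": keep roughly the top 20% of actions by reward ...
    elite = actions[rewards >= np.quantile(rewards, 0.8)]
    # ... then do plain supervised fitting (least squares on actions = the mean).
    w = elite.mean()

print(round(float(w), 2))   # ends up near 3.0, the reward-maximizing action

This only illustrates the basic loop behind the question the authors pose; it is not their method.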

Ceiling Mounted Home Robotics

An area we looked at closely to augment smart home studies.  Placing robot arms in key positions was a common solution for ambulatory examples.  Note the unpredictability of homes, especially for eldercare applications.  Have not seen this mentioned in a long time.

Toyota Research Demonstrates Ceiling-Mounted Home Robot   Toyota seems to be engaged.

This prototype manipulator is just one of the cool robots that TRI has been working on  By Evan Ackerman

One of the robots Toyota demonstrated was a gantry robot that would hang from the ceiling to perform tasks like wiping surfaces and clearing clutter.

Over the last several years, Toyota has been putting more muscle into forward-looking robotics research than just about anyone. In addition to the Toyota Research Institute (TRI), there’s that massive 175-acre robot-powered city of the future that Toyota still plans to build next to Mount Fuji. Even Toyota itself acknowledges that it might be crazy, but that’s just how they roll—as TRI CEO Gill Pratt told me a while back, when Toyota decides to do something, they really do go all-in on it.

TRI has been focusing heavily on home robots, which is reflective of the long-term nature of what TRI is trying to do, because home robots are both the place where we’ll need robots the most at the same time as they’re the place where it’s going to be hardest to deploy them. The unpredictable nature of homes, and the fact that homes tend to have squishy fragile people in them, are robot-unfriendly characteristics, but as the population continues to age (an increasingly acute problem in Japan), homes offer an enormous amount of potential for helping us maintain our independence.... "

Monday, October 12, 2020

McKinsey Articles on the Next Normal

 Pointers at the link to a number of articles and surveys on the topic:

The Next Normal 

How companies and leaders can reset for growth beyond coronavirus: 

COVID-19 and supply-chain recovery: Planning for the future

October 9, 2020 – Rarely have supply-chain leaders faced more complex, changing conditions than they have during the COVID-19 pandemic. Here’s how companies can manage through the crisis and build resilience against future shocks. .... 

The emerging resilients: Achieving ‘escape velocity’

October 6, 2020 – The experience of the fast movers out of the last recession teaches leaders emerging from this one to take thoughtful actions to balance growth, margins, and optionality. .... 

Survey

How COVID-19 has pushed companies over the technology tipping point—and transformed business forever

October 5, 2020 – A new survey finds that responses to COVID-19 have speeded the adoption of digital technologies by several years—and that many   ....  (Much more at link above) 

Army Testing AR Goggles for Commanding Dogs

Had not seen the idea before.    For remotely sending commands to dogs.

The Army is Testing AR Goggles for Dogs

 A military dog wearing augmented reality goggles. 

Seattle-based company Command Sight has developed augmented reality devices that can be worn by dogs, through which human handlers can provide visual clues to direct the animal to a specific spot. An often-heard prediction is that augmented reality (AR) could one day become a central tool in our everyday work and play – but it turns out that the technology might not be only suited for humans. 

The U.S. military has unveiled a new project in partnership with Seattle-based company Command Sight, to fit working dogs with AR goggles that would enable soldiers to give orders to the animal at a distance.

Military dogs intervene in tactical operations, patrol, detection and specialized searches. Oftentimes, they can find themselves scouting dangerous areas, looking for explosive devices or materials, for example.

From ZDNet

IBM Enables AI-Enabled Debate

This could lead to AI being able to make 'arguments' as part of business processes.  Could be used broadly, for example in courts and as part of smart contract testing.  Or to test advertising pitches in context.  Truly a newly emergent part of AI.  An improved model for crowdsourcing?

IBM showcases latest A.I. advancements on Bloomberg's "That's Debatable" TV show

Software, which IBM hopes to sell to businesses, distills 'key points' from thousands of individual comments  ... "

More from IBM on Project Debater  ....    Related Blog on Project Debater

Towards a Digital Euro

More experiments in the space.  

Why is the ECB eyeing a 'digital euro'?  by Florian Cazeres in Techxplore

A digital euro would complement, not replace cash

The European Central Bank will on Monday launch a public consultation and start experiments to help it decide whether to create a "digital euro" for the 19-nation currency club. The move comes as the pandemic accelerates a shift away from cash, and as policymakers nervously eye the rise of private cryptocurrencies like Bitcoin.

Here's an explainer of what a "digital euro" would mean for the region's citizens.

What is a digital euro? A digital, or virtual, euro would be an electronic version of euro notes and coins, it would be legal tender and guaranteed by the European Central Bank. .... ' 

Classifying Quantum Sources

An interesting kind of classification, perhaps for determining a signal's origins.  And, as suggested, a faster way to do the classification.  This is another example of finding patterns.  Patterns are a form of information that can be used to infer the type or origins of signals.

ML-Assisted Method Rapidly Classifies Quantum Sources
Purdue University School of Electrical and Computer Engineering
September 10, 2020

Purdue University engineers have invented a machine learning-assisted technique for rapid selection of solid-state quantum emitters, which could enhance the efficiency of quantum photonic circuit development. Quantum emitters generate light with non-classical characteristics, but interfacing most solid-state emitters with scalable photonic platforms requires complex integration. The Purdue researchers trained a computer to recognize promising patterns in single-photon emission within a split second, to accelerate single-photon purity-based screening. Purdue's Zhaxylyk Kudyshev said the new technique could “speed up super-resolution microscopy methods built on higher-order correlation measurements that are currently limited by long image acquisition times.”... ' 
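
A hedged illustration of the general pattern, not the Purdue team's actual pipeline: train a small classifier on photon-correlation curves so that single-photon-like emitters, which show a deep antibunching dip near zero delay, are recognized quickly. The data and features below are synthetic and invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def synth_g2_curve(is_single_photon, n_bins=41):
    """Synthetic g2(tau) histogram: single-photon emitters dip toward 0 at tau = 0."""
    tau = np.linspace(-20, 20, n_bins)
    dip_depth = 0.9 if is_single_photon else 0.2
    g2 = 1.0 - dip_depth * np.exp(-np.abs(tau) / 3.0)
    return g2 + rng.normal(0, 0.05, n_bins)       # add shot-noise-like jitter

# Build a labeled dataset: 1 = single-photon-like emitter, 0 = background-like.
X = np.array([synth_g2_curve(i % 2 == 0) for i in range(400)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(400)])

clf = LogisticRegression(max_iter=1000).fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))

The appeal described in the article is speed: once trained, a classifier like this can screen candidate emitters from short measurements instead of waiting for long correlation acquisitions.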

Sunday, October 11, 2020

Code Can Now be added in arXiv Manuscripts

Code is often very important in current research, so it makes sense to include it.  But code of any nontrivial amount is difficult to prove correct, so it adds another level of complexity.  And if you add data to drive the code, a kind of context, correctness can be further obscured.  Still, I like the idea of including the code to show an application of the research.

arXiv Now Allows Researchers to Submit Code with Their Manuscripts VentureBeat,  By Khari Johnson

Machine learning resource Papers with Code said the preprint paper archive arXiv is now permitting researchers to submit code with their papers. A recent artificial intelligence (AI) industry report by London-based venture capital firm Air Street Capital found only 15% of submitted research manuscripts currently include code. Preprint repositories offer AI scientists the means to share their work immediately, before an often-protracted peer review process. Code shared on arXiv will be entered via Papers with Code, and the code submitted for each paper will appear on a new Code tab. Papers with Code co-creator Robert Stojnic said, "Having code on arXiv makes it much easier for researchers and practitioners to build on the latest machine learning research. We also hope this change has ripple effects on broader computational science beyond machine learning."

Sometimes Deep Learning Needs Help

Have had a long-time interest in language, and in how and why AI/deep learning can serve in practice as a means of (mostly) recognizing, via a learned taxonomy, complex structures like birds, trees, or mushrooms.  In Penn's Language Laboratory blog there is a post about this use of taxonomy and parallel distributed processing to provide recognition 'intelligence'.  But yet more telling and humorous is the first comment, which makes the point that recognition intelligence can, depending on context, need some serious help.  It serves as a good caution to us today.

Saturday, October 10, 2020

An Internet of Plastic, Wireless Things?

Now here is something quite different.  Thinking of packaging applications.  Others?

Here Comes the Internet of Plastic Things, No Batteries or Electronics Required

IEEE Spectrum, Dexter Johnson,  October 8, 2020

Researchers at the University of Washington (UW) have developed a technique for three-dimensionally (3D) printing plastic objects that communicate with Wi-Fi devices without batteries or electronics. The method applies Wi-Fi backscatter technology to 3D geometry to create easy-to-print wireless devices using commodity 3D printers. The researchers built non-electronic analogues for each electronic component using plastic filaments, then integrated them into a single computational design. Explained UW’s Shyam Gollakota, “We are using mechanism actuation to transmit information wirelessly from these plastic objects.” The team has released its computer-aided design (CAD) models to 3D-printing hobbyists so that they can create their own Internet of Things objects.   ... " 

Fully Driverless Ride Hailing Begins in AZ with Waymo

In Arizona.  Could be a lead-in to broad use of fully driverless vehicles.  Consider the data being gathered and leveraged for use.

Waymo Begins Fully Driverless Rides for All Arizona Customers   In Bloomberg  By Ira Boudway, October 8, 2020

Self-driving car company Waymo announced on Thursday that it has opened its fully driverless ride-hailing service in suburban Phoenix, AZ, to the public. Existing Waymo One customers will be able to hail a driverless Chrysler Pacifica minivan from a fleet of over 300 such vehicles; service will be limited to an approximately 50-square-mile area. Waymo intends to extend the service to new customers in a few weeks, and CEO John Krafcik said by then, "we'll have general access to anyone who chooses to download the [Waymo One] app." The company also plans to reinstate safety drivers for some trips, but will not allow passengers in vehicles with safety drivers until it has installed barriers between the front and back seats.  ... " 

AI in Cybersecurity Attacks

We don't often think of it this way, but tech like AI can be turned against us as well.  Here is a broad set of examples to consider as threats.

3 ways criminals use artificial intelligence in cybersecurity attacks  In Techrepublic

Three cybersecurity experts explained how artificial intelligence and machine learning can be used to evade cybersecurity defenses and make breaches faster and more efficient during a NCSA and Nasdaq cybersecurity summit.  Kevin Coleman, the executive director of the National Cyber Security Alliance, hosted the conversation as part of Usable Security: Effecting and Measuring Change in Human Behavior on Tuesday, Oct. 6.

Elham Tabassi, chief of staff, Information Technology Laboratory, National Institute of Standards and Technology, was one of the panelists in the "Artificial Intelligence and Machine Learning for Cybersecurity: The Good, the Bad, and the Ugly" session.   ... '

"Attackers can use AI to evade detections, to hide where they can't be found, and automatically adapt to counter measures," Tabassi said.     .... '

Voice vs Touchscreen at McDonalds

I note below the role of touch in spreading COVID driving touchless voice technology, though the actual risk is now thought to be low and this remains largely a matter of public perception.

COVID-19 may push retailers to use voice assistants instead of touch screens By Matthew Stern in Retailwire

While it’s no longer thought to be a primary source of transmission of the novel coronavirus, people are still thinking twice before interacting with public touch screens. Throughout retail, some are seeing voice-driven technology as a perfect solution that lets customers interact with automated kiosks while keeping their hands to themselves.

Circle K, Delaware North, Dunkin’ and White Castle are a few of the retailers who have entered into an agreement with MasterCard to pilot a voice ordering artificial intelligence (AI) for their restaurant drive-thrus. The solution allows customers to speak in natural language and is capable of processing complex orders and substitutions as if the customer were speaking with a human being.

Before the pandemic, chains like McDonald’s had already been looking for ways to blend voice-based automated ordering into their drive-thru experience. In 2019, McDonald’s acquired a speech-based AI startup to gain technology for the effort, according to Mashable SE Asia.

Such voice solutions are appearing in other places where touch screen kiosks have grown familiar. Multiple startups have begun to roll out speech-recognition technology that can be implemented on existing touch screen kiosks in restaurants and retail stores, allowing screen-based kiosks used for ordering, product search and other in-store tasks to reliably take voice commands   ..... '