
Saturday, October 31, 2020

Identifying Asymptomatic Virus from Cough Prints

 Most interesting, depending upon how well it works in practice.

MIT Open Voice Model Used to Identify Asymptomatic COVID-19 Patients From Cough Recordings  By BRET KINSELLA  in Voicebot.ai

MIT researchers collected over 70,000 cough recordings from 5,320 people through a website in April and May of 2020 to see if an AI-based speech model could indicate whether someone had contracted COVID-19 even if they were asymptomatic. The hypothesis outlined in an IEEE Open Journal of Engineering in Medicine and Biology article in late September was, “COVID-19 subjects, especially including asymptomatics, could be accurately discriminated only from a forced-cough cell phone recording using Artificial Intelligence.”

The MIT Open Voice Model was trained using the recordings of 4,256 subjects to identify acoustic biomarkers that could indicate the presence of COVID-19 infection. The model was then tested on the remaining 1,064 subjects to determine efficacy. Models based on a Convolutional Neural Network (CNN) were enhanced with transfer learning from previous data sets designed to identify Alzheimer’s disease.
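The paper's exact architecture isn't reproduced in the excerpt, but the transfer-learning pattern it describes, a frozen pretrained feature extractor with only a new classification head trained on the target task, can be sketched in a few lines of numpy. Everything below (the fake extractor, the synthetic data) is a hypothetical stand-in, not MIT's actual model:

```python
import numpy as np

# Toy sketch of transfer learning: a "pretrained" feature extractor is frozen,
# and only a new logistic-regression head is trained on the target task
# (here, a stand-in for COVID vs. non-COVID cough classification).

rng = np.random.default_rng(0)

def pretrained_features(x):
    """Stand-in for a frozen CNN feature extractor (weights never updated)."""
    W = np.array([[1.0, -1.0], [0.5, 2.0]])  # fixed, "pretrained" weights
    return np.tanh(x @ W)

# Synthetic two-class data standing in for cough features
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

F = pretrained_features(X)          # frozen features
w, b = np.zeros(2), 0.0             # trainable head only

for _ in range(500):                # plain gradient descent on logistic loss
    p = 1 / (1 + np.exp(-(F @ w + b)))
    grad_w = F.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

acc = np.mean(((1 / (1 + np.exp(-(F @ w + b)))) > 0.5) == y)
print(f"head-only training accuracy: {acc:.2f}")
```

The design point mirrors the article: the expensive representation (here faked by `pretrained_features`) comes from a prior task, such as the Alzheimer's data sets mentioned above, and only the cheap final layer is fit to the new one.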

When the test cough recordings were run through the system, “it accurately identified 98.5% of coughs from people that were confirmed to have COVID-19, including 100 percent of asymptomatic — who reported they did not have symptoms but had tested positive for the virus,” according to reporting by MIT News.   ... "

Virus Tracing Apps have Problems

Privacy, but then what value do they deliver?

Virus-Tracing Apps Are Rife with Problems. Governments Are Rushing to Fix Them
The New York Times
Natasha Singer; Aaron Krolik
July 8, 2020

Governments are scrambling to fix coronavirus contact-tracing applications riddled with privacy and security flaws, which human rights groups and technologists warned could place hundreds of millions of people at risk for stalking, scams, identity theft, or government surveillance. For example, in June Britain ditched a virus-tracing app it was developing in favor of software from Apple and Google promoted as more "privacy preserving." Analysis by the Guardsquare mobile app security company determined that "the vast majority" of government-used virus-tracing apps are inadequately secure, and can easily be exploited by hackers. Location-tracking apps, which some countries are using to alert people of possible virus exposure or to enforce quarantines, are drawing heightened scrutiny because some continuously collect data on users' health, exact whereabouts, and social interactions. Some digital rights groups said these app launches are designed mainly to assure the public that the government is taking action. ... 

Honeywell Provides Quantum as a Service

More quantum computing services becoming available.

Honeywell Introduces Quantum Computing as a Service with Subscription Offering  By ZDNet

The tech world may have to make room for a new acronym, perhaps qubits-as-a-service, QaaS, or some such, as Honeywell has introduced what appears to be the first subscription-based plan for quantum computing usage.

With the introduction Thursday of the company's Model H1 quantum computer, with 10 qubits and a logical quantum volume of 128, the company detailed a plan to charge in a subscription fashion based on monthly access to the machines. 


The subscriber license gives a company access over the course of a month to blocks of "dedicated time," in two different flavors, standard and premium, with eight hours per month of dedicated time or sixteen hours, respectively.     ..."

Data Commons now Available via Google Search

Brought back to my attention. From Google: 

Data Commons is an open knowledge repository that combines data from public datasets using mapped common entities. It includes tools to easily explore and analyze data across different datasets without data cleaning or joining.

What's new:    October 15, 2020:  Data Commons is now accessible on Google Search! Read more here

Explore the data

We cleaned and processed the data so you don't have to. Data about particular entities are aggregated from different sources for a unified view.

Explore Places: Mountain View, CA, New York City Health, Washington, DC Demographics, more ...

Create Timeline Charts: US University Towns by Income, Richest vs. Poorest California Counties, Employment Differences Across Neighboring Cities, more ...

Browse entities in the Data Commons Graph: Austin, TX, New York City Dept. of Education, Encyclopedia of DNA Elements Biosamples, more ...

The Dilemma of Ransomware

Been more educated of late regarding the danger of ransomware. In my early days in the government I remember all the mag tapes we loaded for analysis and backup, for all sorts of reasons. Never imagined this one. But that was before everyone was using computing for everything. 

See below AND the conversations in the comments. The comments outline 'solutions', but they can be very weak.

In Schneier on Security:

Negotiating with Ransomware Gangs

Really interesting conversation with someone who negotiates with ransomware groups:

For now, it seems that paying ransomware, while obviously risky and empowering/encouraging ransomware attackers, can perhaps be comported so as not to break any laws (like anti-terrorist laws, FCPA, conspiracy and others) ­ and even if payment is arguably unlawful, seems unlikely to be prosecuted. Thus, the decision whether to pay or ignore a ransomware demand, seems less of a legal, and more of a practical, determination ­ almost like a cost-benefit analysis.   ...

(See the comments!)

Illusory Perceptions

In our early work in this area we thought we found such 'illusions'. Based on the text, they were not the same thing as described here, but we named them 'illusions', inspired by biomimicry. Is this a hint we are getting closer to brain models?

AI Also Has Illusory Perceptions

RUVID/Network of Valencian Universities for the Promotion of Research, Development, and Innovation

October 16, 2020

Researchers at Spain’s Universitat de València (UV) and Pompeu Fabra University have found that convolutional neural networks (CNN) are affected by visual illusions, much like the human brain. The researchers trained CNNs for simple tasks and found they were susceptible to visual illusions of brightness, although the illusions may not coincide with biological illusory perceptions. Said UV's Jesús Malo, "This is one of the factors that leads us to think that it is not possible to establish analogies between the simple concatenation of artificial neural networks and the much more complex human brain." The researchers warned in a separate study about the use of CNNs to study human vision. Said Malo, "In addition to the intrinsic limitations of these artificial networks to model vision, the non-linear behavior of flexible architectures can be very different from that of the biological visual system."

Crop Inspection

Recall my interest in crop and forestry inspection and improvement. See the related tags. Here is another example in play to improve production.

Alphabet Trialing Solar-Paneled, Robotic Buggies to Inspect Crops

CNBC, Anmar Frangoul

Google parent Alphabet is piloting a project to revolutionize agriculture and food production. The company’s Mineral initiative aims to use solar-powered electric buggies to travel across fields and locate plants using global positioning system software; cameras and other "machine perception tools" then collect crop data. The system combines the robot-acquired data with information on weather and soil health, in order to "help breeders understand and predict how different varieties of plants respond to their environments." Said Alphabet’s Elliot Grant, “Just as the microscope led to a transformation in how diseases are detected and managed, we hope that better tools will enable the agriculture industry to transform how food is grown.”   ...

Friday, October 30, 2020

Waymo Self-Driving Stats Revealed

Quite impressive. 6.1 million miles, 21 months, 47 collisions and near misses, by far most caused by other cars. No one seriously injured. Better than human. This included some intriguing results. 


Over 21 months in Arizona, Waymo’s vehicles were involved in 47 collisions and near-misses, none of which resulted in injuries

In its first report on its autonomous vehicle operations in Phoenix, Arizona, Waymo said that it was involved in 18 crashes and 29 near-miss collisions during 2019 and the first nine months of 2020.

These crashes included rear-enders, vehicle swipes, and even one incident when a Waymo vehicle was T-boned at an intersection by another car at nearly 40 mph. The company said that no one was seriously injured and “nearly all” of the collisions were the fault of the other driver...."

Garbage in - Garbage Forever?

Of course the well-known phrase is applicable to many kinds of 'intelligence', analytics, AI, forecasting, and inference. Here we note that it extends to methods like blockchain as well. Authentication before, during, and after is important.

With Blockchain, it’s Garbage In – Garbage Forever  By Avivah Litan   in  Gartner

Many companies across the globe are using or are considering blockchain for tracking assets, proving provenance and eliminating counterfeit products.  But how does anyone know what is being tracked on the blockchain is real to begin with? Just because blockchain data is cryptographically secure doesn’t mean the data is legitimate.

My Gartner colleagues, Scott Smith, Svetlana Sicular, and I delivered a webinar a couple of days ago on Using AI and Blockchain to Detect Fakes in a Zero-Trust World . We discussed the role of blockchain and AI in proving authenticity of any entity, whether it is news, content, photographs, videos, food, pharmaceuticals, shoes, luxury goods, or any ‘thing’ else you can think of. ... '
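Litan's point can be made concrete with a toy hash-chained ledger (a minimal sketch, not any production blockchain): the cryptography proves a record was not altered after entry, but it happily verifies a record that was false to begin with.

```python
import hashlib
import json

# Minimal hash-chained ledger. Cryptographic integrity proves a record
# wasn't changed AFTER entry -- not that it was true when entered.

def add_block(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    chain.append({"record": record, "prev": prev,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any post-entry tampering breaks the chain."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        payload = json.dumps({"record": block["record"], "prev": prev},
                             sort_keys=True)
        if (block["prev"] != prev or
                block["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
    return True

ledger = []
add_block(ledger, "genuine: pallet 17 left the certified factory")
add_block(ledger, "counterfeit: pallet 99 left the certified factory")  # garbage in...

print(verify(ledger))  # the chain verifies -- the false record is now permanent
```

This is exactly "garbage in, garbage forever": `verify` returns True for both records, which is why authenticity has to be established before data reaches the chain.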

Robotic Ships

Looked at robotic ships for a supply chain project some time ago; there were movements then. See the tag re 'autonomous ships'. IBM was involved. Then what seemed like a lapse. Now we are seeing movement again. Interesting here is the claim that the pandemic is pushing this; not sure I agree.

The Robot Ships Are Coming … Eventually  in Wired.

As the pandemic fuels demand for less contact and fewer sailors, shipping companies turn to AI-assisted navigation.

SOMETIME NEXT APRIL, a 50-foot-long autonomous ship will shake loose the digital bonds of its human controllers, scan the horizon with radar, and set a course westward across the Atlantic. The Mayflower Autonomous Ship won’t be taking commands from a human captain like the first Mayflower did during its crossing back in 1620. Instead it will get orders from an “AI captain” built by programmers at IBM.   .... " 

Thursday, October 29, 2020

On Auction Theory

A favorite topic we used for supply chain decision modeling. Nicely overviewed here by Kellogg. Below is a good reminder of its outline, followed by a good overview. Well worth understanding. Reviewing now:  

What Is “Auction Theory,” and What Kinds of Questions Can It Answer?

The recent Nobel put the field in the spotlight. An economist explains how it works, using his own research as a guide.

Auction theory—which studies different auction formats and attempts to predict how people will behave in them—is having its moment in the spotlight.

The attention stems from the recent awarding of the Nobel in economics to two pioneers of auction theory research, Paul Milgrom and Robert Wilson, both at Stanford University. (Milgrom is a former Kellogg School professor.) Their work is both theoretical and practical: in the 1990s the Federal Communications Commission used their research to create a new way of auctioning off radio frequencies, resulting in billions of dollars in sales.

Kellogg’s Joshua Mollner started collaborating with Milgrom while Mollner was a Ph.D. student at Stanford. The two have a new paper out on auction theory. Kellogg Insight talked with Mollner, an associate professor of managerial economics and decision sciences, about the research and the Nobel announcement.  ...  '
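For a flavor of the kinds of questions auction theory answers, here is a toy simulation of a sealed-bid second-price (Vickrey) auction, one of the field's staple formats, where bidding your true value is famously a weakly dominant strategy. The values and bid levels below are arbitrary illustration, not from the Kellogg research:

```python
import random

# Sealed-bid second-price auction: the winner pays the SECOND-highest bid.
# We check by simulation that one bidder (value 0.8) does no better by
# shading the bid down or inflating it up than by bidding truthfully.

random.seed(42)

def utility(my_bid, my_value, rival_bids):
    top_rival = max(rival_bids)
    if my_bid > top_rival:          # win, pay the second-highest bid
        return my_value - top_rival
    return 0.0                      # lose, pay nothing

value = 0.8
trials = [[random.random() for _ in range(3)] for _ in range(10_000)]

def avg_utility(bid):
    return sum(utility(bid, value, rivals) for rivals in trials) / len(trials)

truthful = avg_utility(0.8)   # bid = value
shaded = avg_utility(0.6)     # underbid: loses auctions worth winning
inflated = avg_utility(1.0)   # overbid: wins auctions at a price above value

print(f"truthful {truthful:.3f}  shaded {shaded:.3f}  inflated {inflated:.3f}")
```

The simulated averages line up with the theory: shading forfeits profitable wins, inflating adds losing wins, and truthful bidding comes out on top, which is the property that made second-price designs attractive for mechanisms like the FCC spectrum auctions mentioned above.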

Restock Kroger Proves its Worth in Pandemic

Quite some good details here.

It took three years, but Restock Kroger is finally proving its worth.

The grocery chain reported first-quarter revenue of $41.55 billion, exceeding analyst expectations of $40.92 billion. What’s more, operating profit hit $1.3 billion, up from $901 million a year earlier.

This profit growth is likely due to the investments Kroger has made to build out a back-end digital platform. The thesis behind Restock Kroger, which was first announced in 2017, was to build an Amazon-style flywheel that incorporated both online and digital businesses. “We’ve moved aggressively because the future is now. It has to be simple and seamless no matter what we’re talking about: in-store, pickup, delivery,” said Kroger’s vp of digital Matt Thompson to Digiday in 2019. Meanwhile, the company has also been growing its partnership with the UK-based automated warehouse company Ocado..... "  ... '

Here Comes the No-Code Generation?

Been a long time follower of automating coding. What will it mean for the future? Hardly superpowers. In fact, when it finally fully arrives, you will hardly know it's there at all. It will then be folded into an algorithm. Until then the coders of the future will still have to make sure the complex logic works in context, and consider its implications for the 'right' answer. Sure, it should remove lots of the bother of getting the code correct and as secure as possible. But then, click, it should be gone. No more generation needed. 

The No-Code Generation is Arriving  By TechCrunch

October 28, 2020

The success and notoriety of no-code platforms comes from the feeling that they grant "superpowers" to their users.

In the distant past, there was a proverbial "digital divide" that bifurcated workers into those who knew how to use computers and those who didn't. Young Gen Xers and their later millennial companions grew up with Power Macs and Wintel boxes, and that experience made them native users on how to make these technologies do productive work. Older generations were going to be wiped out by younger workers who were more adaptable to the needs of the modern digital economy, upending our routine notion that professional experience equals value.

Of course, that was just a narrative. Facility with using computers was determined by the ability to turn it on and log in, a bar so low that it can be shocking to the modern reader to think that a "divide" existed at all. Software engineering, computer science and statistics remained quite unpopular compared to other academic programs, even in universities, let alone in primary through secondary schools. Most Gen Xers and millennials never learned to code, or frankly, even to make a pivot table or calculate basic statistical averages.

There's a sociological change underway though, and it's going to make the first divide look quaint in hindsight.   ... " 

Brazil AI Center

Visited Brazil to outline their retail technology advances, especially regarding food supply chains. Was impressed with their efforts. Now they have established an AI center.

Brazil Launches AI Center   By ZDNet

Brazil's Artificial Intelligence Center (C4AI) officially launched earlier this month, thanks to investments from IBM and Brazil’s São Paulo Research Foundation and University of Sao Paulo.

The center will work to address challenges related to health, the environment, the food production chain, the future of work, and the development of Natural Language Processing technologies in Portuguese (the language spoken by nearly all Brazilians).

C4AI also will work on human well-being improvement projects and diversity and inclusion initiatives. The center's opening comes nearly a year after the Brazilian government announced plans to create a national AI strategy. ... " 

Custom Live Data Drama in Excel

Though it's now quite outdated for me, we used to do lots of things in spreadsheets; I remember when this could have been very useful. Drama? Well, maybe. 

Microsoft Excel spreadsheets now take custom live data

Who knew spreadsheets could be exciting?  By Jon Fingas, @jonfingas in Engadget

Microsoft is still finding ways to inject drama into spreadsheets. The Verge reports that Microsoft is giving Excel support for custom live data types, expanding the content you can include well past text, numbers and the occasional stock quote. You could slip a country’s data into a cell and create a formula that extracts the most recent population for your sheet, for example.

The approach works by using logic to structure the data you insert into a given cell, using Power BI to connect data types with Excel for business users. Existing cells can even be turned into linked data types, and you can use a Power Query feature to turn imported data into its own type. ... " 

Wednesday, October 28, 2020

Estimation Theory at Work

We rarely hear about this; we used it for retail analysis, and it is useful to understand. Link to it below, plus a Wikipedia article definition. 

What makes a good estimator?

Blog post by Jasmine Nettiksimmons, Molly Davies    September 24, 2020 - San Francisco, CA

What makes a good estimator? What is an estimator? Why should I care? There is an entire branch of statistics called Estimation Theory that concerns itself with these questions and we have no intention of doing it justice in a single blog post. However, modern effect estimation has come a long way in recent years and we’re excited to share some of the methods we’ve been using in an upcoming post. This will serve as a gentle introduction to the topic and a foundation for understanding what makes some of these modern estimators so exciting.  .... '

Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements. ... '
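A quick worked example of the core estimation theory concern, bias: the "divide by n" sample variance systematically understates the true variance, while dividing by n-1 (Bessel's correction) removes the bias. Averaging both estimators over many small samples makes this visible:

```python
import numpy as np

# Demonstrating estimator bias: np.var divides by n (biased); ddof=1
# divides by n-1 (unbiased). With small samples the gap is clear.

rng = np.random.default_rng(1)
true_var = 4.0                                # data drawn from N(0, sd=2)

biased, unbiased = [], []
for _ in range(20_000):
    x = rng.normal(0.0, 2.0, size=5)          # tiny sample: bias is visible
    biased.append(np.var(x))                  # divides by n
    unbiased.append(np.var(x, ddof=1))        # divides by n-1

print(f"true {true_var:.2f}  "
      f"biased mean {np.mean(biased):.2f}  "
      f"unbiased mean {np.mean(unbiased):.2f}")
```

With samples of size 5, the biased estimator averages around n-1/n = 4/5 of the truth, while the corrected one centers on the true value, the kind of property (unbiasedness, and beyond it consistency and efficiency) that Estimation Theory uses to judge "good" estimators.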

Replaying Memory to Retain it

 Fascinating play.   Remember this similarity being brought up in a talk about short vs long term memories.   Now being brought into play?

How a Memory Quirk of the Human Brain Can Galvanize AI  By Shelly Fan in SingularityHub

Even as toddlers we’re good at inferences. Take a two-year-old that first learns to recognize a dog and a cat at home, then a horse and a sheep in a petting zoo. The kid will then also be able to tell apart a dog and a sheep, even if he can’t yet articulate their differences.

This ability comes so naturally to us it belies the complexity of the brain’s data-crunching processes under the hood. To make the logical leap, the child first needs to remember distinctions between his family pets. When confronted with new categories—farm animals—his neural circuits call upon those past remembrances, and seamlessly incorporate those memories with new learnings to update his mental model of the world.

Not so simple, eh?

It’s perhaps not surprising that even state-of-the-art machine learning algorithms struggle with this type of continuous learning. Part of the reason is how these algorithms are set up and trained. An artificial neural network learns by adjusting synaptic weights—how strongly one artificial neuron connects to another—which in turn leads to a sort of “memory” of its learnings that’s embedded into the weights. Because retraining the neural network on another task disrupts those weights, the AI is essentially forced to “forget” its previous knowledge as a prerequisite to learn something new. Imagine gluing together a bridge made out of toothpicks, only having to rip apart the glue to build a skyscraper with the same material. The hardware is the same, but the memory of the bridge is now lost.

This Achilles’ heel is so detrimental it’s dubbed “catastrophic forgetting.” An algorithm that isn’t capable of retaining its previous memories is severely kneecapped in its ability to infer or generalize. It’s hardly what we consider intelligent.

But here’s the thing: if the human brain can do it, nature has already figured out a solution. Why not try it on AI?

A recent study by researchers at the University of Massachusetts Amherst   and the Baylor College of Medicine did just that. Drawing inspiration from the mechanics of human memory, the team turbo-charged their algorithm with a powerful capability called “memory replay”—a sort of “rehearsal” of experiences in the brain that cements new learnings into long-lived memories.     ..." 
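The replay idea can be sketched very simply (this is a deliberate simplification; the study's actual mechanism, and the reservoir-sampling buffer below, are my illustrative assumptions, not the paper's code): keep a small buffer of past-task examples and rehearse them inside every new-task training batch, so the old weights are continually refreshed rather than overwritten.

```python
import random

# Toy "memory replay" buffer against catastrophic forgetting: a bounded,
# uniformly-sampled memory of old experiences is mixed into new batches.

class ReplayBuffer:
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, example):
        """Reservoir sampling: keeps a uniform sample of everything seen."""
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = example

    def mixed_batch(self, new_examples, replay_fraction=0.5):
        k = int(len(new_examples) * replay_fraction)
        replay = random.sample(self.items, min(k, len(self.items)))
        return new_examples + replay   # train on new data plus old "rehearsal"

random.seed(0)
buf = ReplayBuffer(capacity=10)
for i in range(1000):                  # experiences from task A
    buf.add(("task_A", i))

batch = buf.mixed_batch([("task_B", i) for i in range(10)])
print(len(batch), sum(1 for tag, _ in batch if tag == "task_A"))
```

Training on `batch` instead of the task-B examples alone is what keeps the toothpick bridge standing while the skyscraper goes up: the gradient updates for the new task are interleaved with reminders of the old one.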

IBM Study on the Efficacy of Chatbots

But AI is still in action, especially in speeding simpler customer interactions:

IBM study highlights rapid uptake and satisfaction with AI chatbots  in AINews

A study by IBM released this week highlights the rapid uptake of AI chatbots in addition to increasing customer satisfaction.

Most of us are hardwired to hate not speaking directly to a human when we have a problem—following years of irritating voicemail systems. However, perhaps the only thing worse is being on hold for an uncertain amount of time due to overwhelmed call centres.

Chatbots have come a long way and can now quickly handle most queries within minutes ....

Link to the IBM study 'The value of virtual agent technology', which is the term they now use for chatbot customer assistants. Is this AI?

Is AI Failing to Deliver Expectations .... Again?

The always interesting former IBMer Irving Wladawsky-Berger talks AI hype and cycles, with lots of supporting links and observations. Been through it all; this second spin has delivered more, but still not close to the full expectations. And as we get closer to the goal, more fear too.

" ... A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.

After Years of Promise and Hype, Is AI Once More Failing to Deliver?

The June 13 issue of The Economist included an in-depth look at the limits of AI, with seven articles on the subject.  “There is no question that AI - or, to be precise, machine learning, one of its sub-fields - has made much progress,” notes The Economist in the issue’s overview article.  “Computers have become dramatically better at many things they previously struggled with…  Yet lately doubts have been creeping in about whether today’s AI technology is really as world-changing as it seems.  It is running up against limits of one kind or another, and has failed to deliver on some of its proponents’ more grandiose promises.”

Transformative technologies, - remember the dot-com bubble, - are prone to hype cycles, when all the excitement and publicity accompanying their early achievements often lead to inflated expectations, followed by disillusionment if the technology fails to deliver.  But AI is in a class by itself, as the notion of machines achieving or surpassing human levels of intelligence has led to feelings of both wonder and fear over the past several decades.

The article reminds us that AI has gone through two such major hype cycles since the field began in the mid-1950s.  Early achievements, - like beating humans at checkers and proving logic theorems, - led researchers to conclude that machines would achieve human-level intelligence within a couple of decades.  This early optimism collapsed leading to the first so-called AI winter from 1974-1980.  The field was revived in the 1980s with the advent of commercial expert systems and Japan’s Fifth Generation project, but it didn’t last long, leading to the second AI winter from 1987-1993....."

Five Nonobvious Remote Work Techniques

Very interesting piece in Queue and the Communications of the ACM, Nov. 2020, pp. 108-110. It takes the problem beyond the technical to the collaboratively social, from experiences at Stack Overflow. Well worth the read.

Five Nonobvious Remote Work Techniques   DOI: 10.1145/3410627

Emulating the efficiency of in-person conversations   By Thomas A. Limoncelli   This article reveals five nonobvious techniques that make remote work successful at Stack Overflow.

Remote work has been part of the engineering culture at Stack Overflow since the company began. Eighty percent of the engineering department works remotely. This enables the company to hire top engineers from around the world, not just from the New York City area. (Forty percent of the company worked remotely prior to the COVID-19 lockdown; 100 percent during the lockdown.) Even employees who do not work remotely must work in ways that are remote-friendly.

For some companies, working remotely was a new thing when the COVID-19 pandemic lockdowns began. At first the problems were technical: IT departments had to ramp up VPN (virtual private network) capacity, human resources and infosec departments had to adjust policies, and everyone struggled with microphones, cameras, and videoconferencing software.

Once those technical issues are resolved, the social issues become more apparent. How do you strike up a conversation as you used to do in the office? How do you know when it is appropriate to reach out to someone? How do you prevent loneliness and isolation?

Here are my top five favorite techniques Stack Overflow uses to make remote work successful on a social level.

Tip #1: If Anyone is Remote, We're All Remote

Meetings should be either 100 percent in-person, or 100 percent remote; no mixed meetings.

Ever been in a conference room with a bunch of people plus one person participating by phone or videoconference? It never works. The one remote participant can't hear the conversation, can't see what everyone else is seeing, and so on. He or she can't authentically participate.

At Stack Overflow we recognized this years ago and adopted a rule: If one person is remote, we're all remote. This means everyone who is physically present leaves the conference room, goes back to their desks, and we conduct the meeting using desktop videoconferencing.

During the COVID-19 lockdown your entire company may be remote, but this is a good policy to adopt when you return to the office.

This may not be an option for companies with open floor plans, however, where participants videoconferencing from their desks may disturb their neighbors. How can you make mixed meetings work? Where I've observed them working well required two ingredients: First, the conference-room design was meticulously refined and adjusted over time (this is rarer—and more expensive—than you would think); second, and the biggest determinant, was the degree to which all those in the meeting were aware of the remote participants. It requires a learned skill of being vigilant for clues that someone is having difficulty in participating and then taking corrective action. Everyone, not just the facilitator, needs to be mindful of this. ... "

Tuesday, October 27, 2020

Deception and Concealment Technology

Worked with MITRE way back.  Now it is also about active defense.  Not much revealed here, but its interesting. 

MITRE Shield Matrix Highlights Deception & Concealment Technology

The role that these technologies play in the MITRE Shield matrix is a clear indicator that they are an essential part of today's security landscape.

It's an age-old question: How do you know if you need more security? MITRE has been diligently working to document tactics and techniques to assess security readiness and answer this very challenging question. In late August, MITRE, a nonprofit organization, released a new knowledge matrix, called MITRE Shield, to complement the ATT&CK matrix.

The organization called it "an active defense knowledge base MITRE is developing to capture and organize what we are learning about active defense and adversary engagement." With its focus on active defense measures, MITRE designed Shield to help defenders understand their cybersecurity options and take proactive steps to defend their assets. Among the most common active defense techniques are cyber-deception and concealment technologies, which are featured heavily in the new Shield matrix.  ... "

AI, the Next Generations

Fairly good, short, non-technical piece. I agree with the first item mentioned, 'unsupervised learning': it is the future, and it is hard to do generally. But I do not agree that 'supervised learning' data has to be prepared by human 'supervisors', as stated below. It can be gathered by a sensor, for example, and even adapted without human aid.  

The Next Generation Of Artificial Intelligence

Author: Rob Toews in Forbes,   He writes about the big picture of artificial intelligence.

Discussing Yann LeCun

The field of artificial intelligence moves fast. It has only been 8 years since the modern era of deep learning began at the 2012 ImageNet competition. Progress in the field since then has been breathtaking and relentless.

If anything, this breakneck pace is only accelerating. Five years from now, the field of AI will look very different than it does today. Methods that are currently considered cutting-edge will have become outdated; methods that today are nascent or on the fringes will be mainstream.

What will the next generation of artificial intelligence look like? Which novel AI approaches will unlock currently unimaginable possibilities in technology and business? This article highlights three emerging areas within AI that are poised to redefine the field—and society—in the years ahead. Study up now.

1. Unsupervised Learning

The dominant paradigm in the world of AI today is supervised learning. In supervised learning, AI models learn from datasets that humans have curated and labeled according to predefined categories. (The term “supervised learning” comes from the fact that human “supervisors” prepare the data in advance.)  ... " 
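To contrast with that supervised setup, here is a toy unsupervised example: k-means clustering recovers hidden group structure with no human-provided labels at all (a small numpy sketch on synthetic data).

```python
import numpy as np

# Unsupervised learning in miniature: k-means discovers two hidden groups
# without ever seeing a label. Only the raw points are given.

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 0.2, (40, 2)),      # hidden group A, near (0, 0)
               rng.normal(3, 0.2, (40, 2))])     # hidden group B, near (3, 3)

centers = np.array([X[0], X[-1]])                # crude init: two data points
for _ in range(10):
    # assign each point to its nearest center, then move centers to the mean
    dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
    labels = np.argmin(dists, axis=1)
    centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])

print(np.round(centers, 1))   # recovers the two group means, no labels used
```

The algorithm ends up with one center near each true group mean, the kind of label-free structure discovery that the article argues will define the next generation of AI, scaled up far beyond this toy.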

MonoEye - Human Motion Capture

Most interesting approach. The implications? Note the deep neural approach for selection of poses. Accuracy? Gathering information for healthcare, sports, training? Thinking possibilities.

MonoEye: A Human Motion-Capture System Using Single Wearable Camera

Tokyo Institute of Technology (Japan)

October 21, 2020

Researchers at Japan's Tokyo Institute of Technology and Carnegie Mellon University have developed a human motion-capture system comprised of an ultra-wide fisheye camera worn on the user's chest. The MonoEye system can capture the user's body motion and their perspective, or "viewport," with a 280-degree field of view. MonoEye incorporates three deep neural networks for real-time calculation of three-dimensional body pose, head pose, and camera pose. The researchers trained the networks on a synthetic dataset of 680,000 renderings of people with a range of body shapes, apparel, actions, background, and lighting conditions, along with 16,000 frames of photorealistic images. .... '

Singularity Hub Looks at GPT-3

Out of SingularityHub.   A look at the writing AI called GPT-3.  Scopes some of the possibilities and limitations.

OpenAI’s GPT-3 Wrote This Short Film—Even the Twist at the End  By Vanessa Bates Ramirez

OpenAI’s text generating AI has gotten a lot of buzz since its release in June. It’s been used to post comments on Reddit, write a poem roasting Elon Musk, and even write an entire article in The Guardian (which editors admitted they worked on and tweaked just as they would a human-written op ed).

When the system learned to autocomplete images without having been specifically trained to do so (as well as write code, translate between languages, and do math) it even got people speculating whether GPT-3 might be the gateway to artificial general intelligence (it’s probably not).

Now there’s another feat to add to GPT-3’s list: it wrote a screenplay.

It’s short, and weird, and honestly not that good. But… it’s also not all that bad, especially given that it was written by a machine.

The three-and-a-half-minute short film shows a man knocking on a woman’s door and sharing a story about an accident he was in. It’s hard to tell where the storyline is going, but it surprises viewers with what could be considered a twist ending.  ... " 

Behold the Openbot Smart Phone

 A curious and interesting thing. Strap a smartphone to some wheels and sensors, and you have an OpenBot. Is it a toy, a test platform, an indication of where Intel is going?

How Intel's OpenBot Wants to Make Robots Out of Smartphones  in SpectrumIEEE

Intel talks to us about why OpenBot has a future we should believe in:

You could make a pretty persuasive argument that the smartphone represents the single fastest area of technological progress we’re going to experience for the foreseeable future. Every six months or so, there’s something with better sensors, more computing power, and faster connectivity. Many different areas of robotics are benefiting from this on a component level, but over at Intel Labs, they’re taking a more direct approach with a project called OpenBot that turns US $50 worth of hardware and your phone into a mobile robot that can support “advanced robotics workloads such as person following and real-time autonomous navigation in unstructured environments.”    ... " 

Amazon Made a Mistake with Whole Foods?

In Bob Herbold's blog: a strategic error by Amazon in acquiring Whole Foods? I used to go to these stores, not much anymore. Yes, I can see the 'cultlike' tag for its customers. Does that not work in a pandemic?

Amazon: A Rare Strategic Mistake?

Whole Foods’ first store opened in 1978 in Austin, Texas.   National expansion began in earnest in 1984 and it has grown to 500 stores in North America.    In October of 2017 it was acquired by Amazon. Whole Foods’ amazing growth was due to three points of differentiation that it used to attract its unique customer set:

Natural Organic Foods – Whole Foods was the first and only USDA Certified Organic food chain, and customers were taught that what is good for them is foods free of chemicals, artificial colors or flavors, preservatives, etc.  It basically put organic on the map!

Regional Distinctive Items – Local Whole Foods managers were encouraged to stock in their stores small local/regional brands that were very distinctive, matched the natural/organic/fresh positioning, and had very high profit margins. 

Discriminating Customers – They developed a cultlike reputation of being the store for the educated, upscale, health-conscious customer.

A lot has changed in the marketplace since the Amazon acquisition, and the Whole Foods business is currently suffering.  Here are the key reasons:   ..." 

Monday, October 26, 2020

Zoom Adding End-to-end Encryption

Been following this for some time. I like Zoom, my favorite to date of the half dozen commonly used methods I know of. Now even free users can turn on advanced end-to-end encryption. Very powerful encryption, though it must be specifically turned on. Other vendors are adding more text-analytic 'business' functions that are interesting, but not aimed at the average user.

Zoom starts rolling out end-to-end encryption

E2EE is available as a technical preview for free and paid users.    By Kris Holt, @krisholt   in Engadget

Zoom is now rolling out end-to-end encryption (E2EE) for both free and paid users, so your video chats and meetings should be much more secure. You can activate E2EE on Zoom’s latest desktop client, Android app and Zoom Rooms starting today. It’ll be available on iOS soon after Apple approves an app update.

You’ll need to manually switch on E2EE in your settings, and all meeting participants will need to do so for their calls to use that level of encryption. You’ll know E2EE is active if you spot a green shield icon in the top left of your call.  .... ' 
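To make the end-to-end idea concrete, here is a toy sketch: the two endpoints agree on a shared key, and anything relaying the traffic in the middle only ever sees ciphertext. This is an illustration only — the tiny Diffie-Hellman parameters and XOR stream are not real cryptography, and none of it is Zoom's actual protocol.

```python
# Toy sketch of end-to-end encryption: endpoints agree on a shared
# secret (Diffie-Hellman here); the relay in the middle sees only
# ciphertext. Tiny parameters and an XOR stream are NOT real crypto.
import hashlib
import secrets

P = 2**127 - 1   # toy prime modulus (illustrative size only)
G = 5            # toy generator

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    # Both sides derive the same secret: (g^a)^b == (g^b)^a (mod p)
    return hashlib.sha256(str(pow(their_pub, my_priv, P)).encode()).digest()

def xor_stream(key, data):
    # Toy stream cipher: XOR with a SHA-256-derived keystream.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Only public values cross the server; each side derives the same key.
k_alice = shared_key(alice_priv, bob_pub)
k_bob = shared_key(bob_priv, alice_pub)

ciphertext = xor_stream(k_alice, b"meeting audio frame")
plaintext = xor_stream(k_bob, ciphertext)
```

The point of the sketch is the trust boundary: the server can relay `ciphertext` but never holds either private value, which is what "end-to-end" buys over server-side encryption.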

Predictive and Prescriptive Analytics Impacting the Bottom Line

Yes, well done, it always has. And further methods like AI, a kind of analytics, do the same. Been doing it for a lifetime. Interesting numbers below.

How predictive and prescriptive analytics impact the bottom line  by 7wData

In a digitally transformed world, the combination of data and analytics is critical to maintaining a competitive advantage and business relevance. To achieve this goal, enterprises collect vast volumes of data and derive valuable insights from them. This knowledge can be anything from ascertaining customer satisfaction to identifying operational discrepancies.

The capability of business intelligence and analytics is continually evolving. In a highly competitive business world, analytics plays a key role in identifying trends and patterns to make quick and informed business decisions. Predictive and prescriptive analytics are two important methods in business-analytics solutions. Mordor Intelligence research suggests that the predictive and prescriptive analytics market (valued at $8.14 billion in 2019) is expected to grow at a compound annual rate of 22.53% to reach $27.57 billion by 2025.

As artificial intelligence (AI) and machine learning evolve and play a more significant role in data and analytics, smart algorithms can now pull both prescriptive and predictive insights from the data. Both approaches give insight and foresight to enable smart decision-making; they incorporate data mining, machine learning and statistical modeling to deliver deep insight into customers and overall operations ...."
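The quoted market figures are easy to sanity-check: $8.14 billion in 2019 growing to $27.57 billion by 2025 does indeed imply a compound annual growth rate of roughly 22.5%, matching the 22.53% CAGR cited.

```python
# Consistency check on the quoted Mordor Intelligence figures:
# $8.14B (2019) -> $27.57B (2025) implies a CAGR of roughly 22.5%.
start, end, years = 8.14, 27.57, 2025 - 2019

implied_cagr = (end / start) ** (1 / years) - 1
projected = start * (1 + 0.2253) ** years

print(f"implied CAGR: {implied_cagr:.2%}")          # ~22.5%
print(f"projection at 22.53%: ${projected:.2f}B")   # ~$27.55B
```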

Upcoming ACM Talk: Deep Learning and Software Engineering

Plan to attend this ACM talk:

VIP Reminder: Nov 2 Talk with fast.ai (http://fast.ai/) Co-Founder Jeremy Howard on Applying Software Engineering Practices to Deep Learning... 

If you haven't done so already, Register now  for the next free ACM TechTalk, "It's Time Deep Learning Learned from Software Engineering," presented on Monday, November 2, at 1:00 PM ET/10:00 AM PT by Jeremy Howard, Founding Researcher at fast.ai and Distinguished Research Scientist at the University of San Francisco. Hamel Husain, Staff Machine Learning Engineer at GitHub, will moderate the questions and answers session following the talk.

Leave your comments and questions with our speaker now and any time before the live event on ACM's Discourse Page. And check out the page after the webcast for extended discussion with your peers in the computing community, as well as further resources on fastai and deep learning.

(If you'd like to attend but can't make it to the virtual event, you still need to register to receive a recording of the TechTalk when it becomes available.)

Note: You can stream this and all ACM TechTalks on your mobile device, including smartphones and tablets.

The world of deep learning has traditionally been an academic world, drawing from mathematics, statistics, and operations research. This has meant great advances in the development of theory and algorithms, but software engineering best practices have sometimes been left behind. In this talk, the creator of fastai will explain how bringing in software engineering best practices, such as layered API design and decoupling, has allowed him to provide a deep learning library that is easier for beginners to use, more deeply hackable for experts, and higher performing. He will draw on research discussed in the peer-reviewed paper describing the principles of fastai.
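The "layered API" practice the talk refers to is easy to illustrate. Here is a minimal, hypothetical sketch in that spirit (not fastai's actual code): small composable low-level pieces, with a high-level convenience layer built entirely on top of them, so beginners get a one-liner while experts can swap any layer.

```python
# Hypothetical sketch of layered API design (not fastai's code):
# a low-level layer of small pieces, a mid-level pipeline built from
# them, and a high-level beginner-facing one-liner on top.

# --- low-level layer: tiny, independently usable pieces ---
def normalize(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs] if hi > lo else [0.0] * len(xs)

def batch(xs, size):
    return [xs[i:i + size] for i in range(0, len(xs), size)]

# --- mid-level layer: a pipeline assembled from the pieces ---
def make_loader(xs, size, transform=normalize):
    return batch(transform(xs), size)

# --- high-level layer: the beginner-facing one-liner ---
def default_loader(xs):
    return make_loader(xs, size=4)

batches = default_loader(list(range(10)))
# Experts can reach down a layer and replace any component:
raw_batches = make_loader(list(range(10)), size=4, transform=lambda x: x)
```

The decoupling is the point: each layer is usable and testable on its own, and the defaults at the top never lock out the layers underneath.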

Presenter:  Jeremy Howard, Founding Researcher, fast.ai; Distinguished Research Scientist, University of San Francisco

Jeremy Howard is a data scientist, researcher, developer, educator, and entrepreneur. Jeremy is a founding researcher at fast.ai, a research institute dedicated to making deep learning more accessible. He is also a Distinguished Research Scientist at the University of San Francisco, the chair of WAMRI, and Chief Scientist at platform.ai.

Previously, Jeremy was the founding CEO of Enlitic, which was the first company to apply deep learning to medicine, and was selected as one of the world’s top 50 smartest companies by MIT Tech Review two years running. He was the President and Chief Scientist of the data science platform Kaggle .... Before that, he spent eight years in management consulting, at McKinsey & Co, and AT Kearney. Jeremy has invested in, mentored, and advised many startups, and contributed to many open source projects. ....

Moderator:  Hamel Husain, Staff Machine Learning Engineer, GitHub

Hamel Husain is a Staff Machine Learning Engineer at GitHub, and is focused on creating developer tools powered by machine learning.  .... .

Hamel holds a Bachelor's degree in Management Science and Mathematics from Southern Methodist University, as well as a Master's in Computer Science from Georgia Tech. You can read more about Hamel's recent work on his page: https://hamel.dev.

Visit learning.acm.org/techtalks-archive for our full archive of past TechTalks  ... 

Adding a Brain to a Robot

Sounds obvious, but what are the form, function, and challenges of such a brain? To be more human, more creative, more engaging? Made me think.

How Giving Robots a Hybrid, Human-Like ‘Brain’ Can Make Them Smarter By Edd Gent in SingularityHub

Squeezing a lot of computing power into robots without using up too much space or energy is a constant battle for their designers. But a new approach that mimics the structure of the human brain could provide a workaround.

The capabilities of most of today’s mobile robots are fairly rudimentary, but giving them the smarts to do their jobs is still a serious challenge. Controlling a body in a dynamic environment takes a surprising amount of processing power, which requires both real estate for chips and considerable amounts of energy to power them.

As robots get more complex and capable, those demands are only going to increase. Today’s most powerful AI systems run in massive data centers across far more chips than can realistically fit inside a machine on the move. And the slow death of Moore’s Law suggests we can’t rely on conventional processors getting significantly more efficient or compact anytime soon.

That prompted a team from the University of Southern California to resurrect an idea from more than 40 years ago: mimicking the human brain’s division of labor between two complementary structures. While the cerebrum is responsible for higher cognitive functions like vision, hearing, and thinking, the cerebellum integrates sensory data and governs movement, balance, and posture.

When the idea was first proposed the technology didn’t exist to make it a reality, but in a paper recently published in Science Robotics, the researchers describe a hybrid system that combines analog circuits that control motion and digital circuits that govern perception and decision-making in an inverted pendulum robot.

“Through this cooperation of the cerebrum and the cerebellum, the robot can conduct multiple tasks simultaneously with a much shorter latency and lower power consumption,” write the researchers.

The type of robot the researchers were experimenting with looks essentially like a pole balancing on a pair of wheels. They have a broad range of applications, from hoverboards to warehouse logistics—Boston Dynamics’ recently-unveiled Handle robot operates on the same principles. Keeping them stable is notoriously tough, but the new approach significantly outperformed all-digital control approaches by radically improving the speed and efficiency of computations.  ... "
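The balance problem itself can be sketched in a few lines. This is a minimal all-digital stand-in, not the USC hybrid analog/digital controller: a linearized inverted pendulum held upright by hand-tuned PD feedback.

```python
# Minimal digital stand-in for the balance problem (NOT the USC hybrid
# controller): linearized inverted pendulum, theta'' = (g/l)*theta + u,
# stabilized with PD feedback u = -kp*theta - kd*theta_dot.
g_over_l = 9.81      # gravity / pendulum length (l = 1 m)
kp, kd = 30.0, 10.0  # hand-tuned gains; kp must exceed g/l to stabilize
dt = 0.01

theta, omega = 0.2, 0.0   # start tilted ~11.5 degrees
for _ in range(2000):     # simulate 20 seconds
    u = -kp * theta - kd * omega          # control torque
    alpha = g_over_l * theta + u          # angular acceleration
    omega += alpha * dt                   # semi-implicit Euler step
    theta += omega * dt

print(f"final angle: {theta:.6f} rad")    # driven to ~0: balanced
```

The latency point in the article is visible even here: the controller must run every `dt`, which is exactly the tight loop the analog "cerebellum" circuitry is meant to make cheap and fast.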

Voice Assistants and Health Care

A thoughtful and substantial piece with statistics. Below is an intro; much more at the link.

How AI Voice Assistants Can Revolutionize Health in TowardsAi  by Alan Jiang

Siri, Alexa, Google and the future of voice and health technology

The next health revolution is in your voice

The vision for this future is to unlock the human voice as a meaningful measurement of health. AI voice assistants can transform speech into a vital sign, enabling early detection and predictions of oncoming conditions. Similar to how temperature is an indicator of fever, vocal biomarkers can provide us with a more complete picture of our health.

Global problems in mental health to solve

One in four people globally will be affected by major or minor mental health issues at some point in their lives. Around 450 million people currently suffer from conditions such as anxiety, stress, depression, or others, placing mental health among the leading causes of ill-health worldwide. Many of these issues are preventable if detected and treated early; however, nearly two-thirds of people with ill-health do not seek or receive appropriate help.

Voice as a biomarker for health

Spoken communication encodes a wealth of information. Only recently has research and technology intersected to enable the use of our own voice to be one of the most effective biomarkers of health.

“Think about how much precision and coordination of muscles and brain regions are involved to produce voice, and various diseases can subtly or acutely affect one’s voice and use of language.”  ... '
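To make "voice as a signal" concrete, here is a toy feature extraction: estimating pitch from a synthetic 100 Hz tone via zero crossings. Real vocal-biomarker systems extract far richer features (jitter, shimmer, spectral measures), but the principle — numbers computed from the waveform — is the same.

```python
# Toy vocal-feature extraction: estimate pitch of a synthetic 100 Hz
# tone from zero crossings, plus a simple energy measure. Real
# biomarker systems use far richer features; the principle is the same.
import math

sample_rate = 8000
freq = 100.0  # stand-in fundamental frequency of a voice
signal = [math.sin(2 * math.pi * freq * n / sample_rate)
          for n in range(sample_rate)]  # one second of audio

# Count sign changes; a pure tone crosses zero twice per cycle.
crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
estimated_pitch = crossings / 2.0
energy = sum(s * s for s in signal) / len(signal)

print(f"estimated pitch: ~{estimated_pitch:.1f} Hz")  # close to 100 Hz
print(f"mean energy: {energy:.3f}")                   # ~0.5 for a sine
```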

Sunday, October 25, 2020

Fooling Self Driving Autopilots

 Had seen this mentioned before; good overview on Schneier. As usual the discussion in the comments there is the most interesting, with experts in the field chiming in, discussing implications for self-driving vehicles and their use and regulation.

Split-Second Phantom Images Fool Autopilots in Schneier Blog

Researchers are tricking autopilots by inserting split-second images into roadside billboards.

Researchers at Israel’s Ben Gurion University of the Negev … previously revealed that they could use split-second light projections on roads to successfully trick Tesla’s driver-assistance systems into automatically stopping without warning when its camera sees spoofed images of road signs or pedestrians. In new research, they’ve found they can pull off the same trick with just a few frames of a road sign injected on a billboard’s video. And they warn that if hackers hijacked an internet-connected billboard to carry out the trick, it could be used to cause traffic jams or even road accidents while leaving little evidence behind.  ...  "

Learning Common Sense from Animals

The common sense problem.   Intriguing view.  Not enough details, but has links to related academic papers.

Researchers suggest AI can learn common sense from animals  By Khari Johnson   @kharijohnson   October 25, 2020   in VentureBeat

AI researchers developing reinforcement learning agents could learn a lot from animals. That’s according to recent analysis by Google’s DeepMind, Imperial College London, and University of Cambridge researchers assessing AI and non-human animals.

In a decades-long venture to advance machine intelligence, the AI research community has often looked to neuroscience and behavioral science for inspiration and to better understand how intelligence is formed. But this effort has focused primarily on human intelligence, specifically that of babies and children.

“This is especially true in a reinforcement learning context, where, thanks to progress in deep learning, it is now possible to bring the methods of comparative cognition directly to bear,” the researchers’ paper reads. “Animal cognition supplies a compendium of well-understood, nonlinguistic, intelligent behavior; it suggests experimental methods for evaluation and benchmarking; and it can guide environment and task design.”

DeepMind introduced some of the first forms of AI that combine deep learning and reinforcement learning, like the deep Q-network (DQN) algorithm, a system that played numerous Atari games at superhuman levels. AlphaGo and AlphaZero also used deep learning and reinforcement learning to train AI to beat a human Go champion and achieve other feats. More recently, DeepMind produced AI that automatically generates reinforcement learning algorithms.  ... "
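The reinforcement learning at the core of DQN can be sketched in its tabular ancestor form. In this toy chain world the agent must learn, from reward alone, that walking right reaches the goal — the same trial-and-error loop animal cognition experiments probe.

```python
# Tabular Q-learning on a tiny chain world -- the tabular ancestor of
# the deep Q-network (DQN) idea mentioned above.
import random

random.seed(0)
N = 5                        # states 0..4; reward only on reaching state 4
Q = [[0.0, 0.0] for _ in range(N)]   # Q[state][action]; 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.1

def choose(s):
    # epsilon-greedy action selection, with random tie-breaking
    if random.random() < eps or Q[s][0] == Q[s][1]:
        return random.randrange(2)
    return 0 if Q[s][0] > Q[s][1] else 1

for _ in range(500):                       # episodes
    s = 0
    while s != N - 1:
        a = choose(s)
        s2 = min(max(s + (1 if a else -1), 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update: nudge Q(s,a) toward r + gamma * max Q(s',.)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = ["right" if Q[s][1] > Q[s][0] else "left" for s in range(N - 1)]
print(policy)  # the agent learns to always move right toward the reward
```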

Saturday, October 24, 2020

Benchmarking Voice Understanding

 Good points made. I have been using Google Assistant versus Amazon Alexa for a few years now, only now and then using Siri. I see more 'balking' by Alexa (that is, she does not answer coherently at all) than by Google Assistant, but then more 'understanding'. Alexa is in general more 'human' in conversation. Beyond that I don't see adequate contextual understanding from either in general. It all depends on how important and risky the dependent decisions are. Google does a good job of multilingual understanding when properly set up. Here Voicebot.ai has taken a broader look that is worth reading. Neither, in my opinion, can understand and answer what I would call 'complex questions'.

Understanding Is Crucial for Voice and AI: Testing and Training are Key To Monitoring and Improving It      By John Kelvie in Voicebot.ai


How well does your voice assistant understand and answer complex questions? It is often said, making complex things simple is the hardest task in programming, as well as the highest aim for any software creator. The same holds true for building for voice. And the key to ensuring an effortlessly simple experience for voice is the accuracy of understanding, achieved through testing and training.

To dig deeper into the process of testing and training for accuracy, Bespoken undertook a benchmark to test the Amazon Echo Show 5, Apple iPad Mini, and Google Nest Home Hub. This article explores what we learned through this research and the implications for the larger voice industry based on other products and services.

For the benchmark, we took a set of nearly 1,000 questions from the ComQA dataset and ran them against the three most popular voice assistants: Amazon Alexa, Apple Siri, and Google Assistant. The results were impressive – these questions were not easy, and the assistants handled them often with aplomb:  ... "

GM Can Manage an EV's Batteries Wirelessly and Remotely

Seems quite a considerable improvement in automotive battery use and management for electric vehicles.

Exclusive: GM Can Manage an EV's Batteries Wirelessly—and Remotely

The new system eliminates the rat's nest of wiring and collects information that can be used to design better batteries.   By Lawrence Ulrich

When the battery dies in your smartphone, what do you do? You complain bitterly about its too-short lifespan, even as you shell out big bucks for a new device. 

Electric vehicles can’t work that way: Cars need batteries that last as long as the vehicles do. One way of getting to that goal is by keeping close tabs on every battery in every EV, both to extend a battery’s life and to learn how to design longer-lived successors.

IEEE Spectrum got an exclusive look at General Motors’ wireless battery management system. It’s a first in any EV anywhere (not even Tesla has one). The wireless technology, created with Analog Devices, Inc., will be standard on a full range of GM EVs, with the company aiming for at least 1 million global sales by mid-decade. 

Those vehicles will be powered by GM’s proprietary Ultium batteries, produced at a new US $2.3 billion plant in Ohio, in partnership with South Korea’s LG Chem.    ... " 
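The per-cell monitoring such a system performs can be sketched. A hypothetical illustration only — the cell names and thresholds are invented, and this is not GM's or Analog Devices' actual design: each cell module reports telemetry, and the pack controller aggregates it and flags outliers for balancing or service.

```python
# Hypothetical sketch of battery-management-style cell monitoring
# (invented thresholds; not GM's or Analog Devices' actual design).
V_MIN, V_MAX = 3.0, 4.2   # assumed safe per-cell voltage window (volts)

def pack_report(cell_voltages):
    flagged = [i for i, v in enumerate(cell_voltages)
               if not V_MIN <= v <= V_MAX]
    spread = max(cell_voltages) - min(cell_voltages)
    return {
        "pack_voltage": round(sum(cell_voltages), 2),
        "imbalance": round(spread, 3),
        "flagged_cells": flagged,   # candidates for balancing/service
    }

telemetry = [3.71, 3.70, 3.72, 3.69, 2.95, 3.71]  # cell 4 is sagging
report = pack_report(telemetry)
print(report)
```

Collecting exactly this kind of per-cell history at fleet scale is what lets the data feed back into designing longer-lived batteries, as the article describes.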

European Quantum Computing Facility Goes Online

 With some useful statistics about usage and capabilities. Note the integration with a simulator. Good description of the use and testing processes in place.


First European Quantum Computing Facility Goes Online,  By Arnout Jaspers

Quantum Inspire, hosted by QuTech, a collaboration of Delft University of Technology and TNO, the Netherlands Organization for Applied research, consists of two independent quantum processors, Spin-2 and Starmon-5, and a quantum simulator. Anyone can create an account, use the Web interface to write a quantum algorithm, and have it executed by one of the processors in milliseconds (if there is no queue), with the result returned within a minute. The process is fully automated.  

Seen from the outside, Spin-2 and Starmon-5 are two large, cylindrical cryostats hanging from the ceiling in a university building. One floor up, a man-size stack of electronics for each takes care of the cooling, feeding the quantum processor input from users and reading out the results. Usually, there is no one in these rooms.     

The facility officially went online on April 20, and over 1,000 accounts have been created since then. Though many curious visitors never returned, active users now upload about 6,000 jobs.... "
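What one of those processors (or the simulator) computes for a submitted job can be sketched. Quantum Inspire itself takes cQASM circuits through its web interface; this stdlib sketch just shows the state-vector arithmetic behind a classic two-qubit example, preparing a Bell state with H then CNOT.

```python
# Tiny state-vector simulation of a two-qubit Bell-state circuit
# (H on qubit 0, then CNOT). Quantum Inspire takes cQASM; this sketch
# only illustrates what such a simulator computes.
import math

s = [1.0, 0.0, 0.0, 0.0]   # amplitudes for |00>, |01>, |10>, |11>

# Hadamard on qubit 0 (the left bit of |q0 q1>): mixes pairs (0,2), (1,3).
h = 1 / math.sqrt(2)
s = [h * (s[0] + s[2]), h * (s[1] + s[3]),
     h * (s[0] - s[2]), h * (s[1] - s[3])]

# CNOT with qubit 0 as control: flips qubit 1 when q0 = 1, i.e.
# swaps the |10> and |11> amplitudes.
s = [s[0], s[1], s[3], s[2]]

probs = [round(a * a, 3) for a in s]
print(probs)  # [0.5, 0.0, 0.0, 0.5]: measure 00 or 11, each 50%
```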

Radical Technique Lets AI Learn with Practically No Data

 Learning more accurately and efficiently is always useful.

A Radical Technique Lets AI Learn with Practically No Data

MIT Technology Review,  By Karen Hao, October 16, 2020

Scientists at Canada's University of Waterloo suggest artificial intelligence (AI) models should be capable of “less than one”-shot (LO-shot) learning, in which the system accurately recognizes more objects than those on which it was trained. They demonstrated this concept with the 60,000-image MNIST computer-vision training dataset, based on previous work by Massachusetts Institute of Technology researchers that distilled it into 10 images, engineered and optimized to contain an equivalent amount of data to the full set. The Waterloo team further compressed the dataset by generating images that combine multiple digits and feeding them into an AI model with hybrid, or soft, labels. Said Waterloo’s Ilia Sucholutsky, “The conclusion is depending on what kind of datasets you have, you can probably get massive efficiency gains.”.. 
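The soft-label trick can be illustrated in a few lines: two prototype points carrying soft class labels can separate three classes — more classes than examples. This is a simplified toy in the spirit of the paper's soft-label kNN, not the authors' code.

```python
# Toy sketch of the LO-shot soft-label idea: TWO prototypes whose
# labels are distributions over THREE classes can separate all three.
# (A simplified illustration, not the authors' code.)

# Each prototype: (position on a line, soft label over classes 0, 1, 2)
prototypes = [
    (0.0, [0.6, 0.4, 0.0]),   # mostly class 0, some class 1
    (1.0, [0.0, 0.4, 0.6]),   # mostly class 2, some class 1
]

def classify(x):
    # Inverse-distance-weighted blend of the prototypes' soft labels.
    weights = [1.0 / (abs(x - p) + 1e-9) for p, _ in prototypes]
    total = sum(weights)
    blended = [sum(w / total * lab[c]
                   for w, (_, lab) in zip(weights, prototypes))
               for c in range(3)]
    return max(range(3), key=lambda c: blended[c])

print(classify(0.1), classify(0.5), classify(0.9))  # three classes emerge
```

Class 1 "lives" between the two prototypes, carried entirely by the shared soft-label mass — which is how fewer examples than classes can still define a full decision boundary.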


U.S. Government Agencies to Use AI to Cut Outdated Regulations

 Makes sense, we examined related methods for regulations.

U.S. Government Agencies to Use AI to Cull, Cut Outdated Regulations

Reuters,  David Shepardson

October 16, 2020

The White House Office of Management and Budget (OMB) said federal agencies will use artificial intelligence (AI) to remove outdated, obsolete, and inconsistent requirements across government regulations. A 2019 pilot employing machine learning algorithms and natural-language processing at the U.S. Department of Health and Human Services turned up hundreds of technical errors and outdated mandates in agency rulebooks. The White House said agencies will utilize AI and other software "to comb through thousands and thousands of regulatory code pages to look for places where code can be updated, reconciled, and generally scrubbed of technical mistakes." According to OMB director Russell Vought, the initiative would help agencies "update a regulatory code marked by decades of neglect and lack of reform." Participating agencies include the departments of Transportation, Agriculture, Labor, and the Interior.
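The cheapest layer of such a rulebook scrubber can be sketched as pattern matching over regulatory text for likely-outdated references. The patterns and sample text below are invented for illustration; the real pilot also used machine learning and natural-language processing on top of anything this simple.

```python
# Illustrative sketch of the simplest layer of a regulation scrubber:
# pattern matching for likely-outdated references. Patterns and sample
# text are invented; the real pilot also used ML and NLP.
import re

SUSPECT_PATTERNS = {
    "repealed citation": r"\b42 C\.F\.R\. § 999\.\d+\b",   # fictional section
    "obsolete medium": r"\b(telegraph|microfiche|floppy disk)\b",
    "stale deadline": r"\bno later than (19\d{2})\b",
}

sample = ("Submissions shall be made on microfiche and filed no later than "
          "1987, pursuant to 42 C.F.R. § 999.12.")

findings = [(label, m.group(0))
            for label, pat in SUSPECT_PATTERNS.items()
            for m in re.finditer(pat, sample, flags=re.IGNORECASE)]
print(findings)
```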

Friday, October 23, 2020

Leveraging NVidia GPUs to Power Analytics and AI

Below is an ad from Nvidia; the book is worth looking at. I like the fact that they say 'AI and Analytics', a rare clarification. It's not all AI.

  .... Free BOOK


Leveraging NVIDIA GPUs to Power the Next Era of Analytics and AI

Apache Spark is a powerful execution engine for large-scale parallel data processing across a cluster of machines, which enables rapid application development and high performance.

In this ebook, learn how Spark 3 innovations make it possible to use the massively parallel architecture of GPUs to further accelerate Spark data processing.

Fill out the form below to download the ebook and learn about the following:

The data processing evolution, from Hadoop to GPUs and the NVIDIA RAPIDS™ library

Spark, what it is, what it does, and why it matters

GPU-acceleration in Spark

DataFrames and Spark SQL

A Spark regression example with a random forest classifier

An example of an end-to-end machine learning workflow GPU-accelerated with XGBoost  ... "

Technology Tailoring in Education

Always an interesting question: the nature of instruction. In theory it could be precisely tailored to every student, and through testing it could be altered in real time to get better results, personalized for the best possible outcome for each student. How practical is this, and how will it alter the business of education? How much is the human touch needed?

Using Technology to Tailor Lessons to Each Student,  The New York Times,  Janet Morrisey

Computer algorithms and machine learning are helping to personalize instruction to individual students, a trend experts say is long overdue. Some think the Covid-19 pandemic is accelerating U.S. schools' migration to personalized learning programs; American Federation of Teachers president Randi Weingarten said, "Innovations like this can help educators meet students where they are and address their individual needs." Companies like New Classrooms are striving to advance personalized learning; the nonprofit's Teach to One 360 algorithm gives each student access to multigrade curriculums and skills, in order to better address learning gaps in those who are several grades behind. Other companies working aggressively on personalized learning solutions include Eureka Math, iReady, and Illustrative Mathematics.
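The core scheduling idea behind such systems can be sketched. This is a hypothetical illustration, not New Classrooms' actual Teach to One 360 algorithm: pick each student's next lesson from their weakest prerequisite skill, even if that skill sits several grade levels back.

```python
# Hypothetical sketch of a personalized-learning scheduler (invented
# curriculum; not New Classrooms' actual algorithm): route each student
# to the earliest unmastered prerequisite of their target skill.

# skill -> (grade level, prerequisite skill or None)
CURRICULUM = {
    "counting": (1, None),
    "addition": (2, "counting"),
    "multiplication": (3, "addition"),
    "fractions": (4, "multiplication"),
}
MASTERY_BAR = 0.8   # assumed mastery threshold

def next_lesson(mastery, target_skill):
    # Walk the prerequisite chain; teach the earliest unmastered skill.
    chain = []
    skill = target_skill
    while skill is not None:
        chain.append(skill)
        skill = CURRICULUM[skill][1]
    for skill in reversed(chain):              # earliest prerequisite first
        if mastery.get(skill, 0.0) < MASTERY_BAR:
            return skill
    return target_skill                        # all mastered: advance

# A 4th-grader with an addition gap gets routed back to grade-2 material:
student = {"counting": 0.95, "addition": 0.55, "multiplication": 0.9}
print(next_lesson(student, "fractions"))
```

The multigrade access described in the article is exactly this: the scheduler is free to return a skill from any grade, wherever the gap actually is.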

Towards Artificial Common Sense

 The key part of AI we don't know how to do yet. Good overview of the current state and directions. What most of us consider the important starting point for useful intelligence. It often also brings the ability to explain why and how it came to a conclusion.

Seeking Artificial Common Sense   By Don Monroe  in CACM

Communications of the ACM, November 2020, Vol. 63 No. 11, Pages 14-16 10.1145/3422588

Although artificial intelligence (AI) has made great strides in recent years, it still struggles to provide useful guidance about unstructured events in the physical or social world. In short, computer programs lack common sense.

"Think of it as the tens of millions of rules of thumb about how the world works that are almost never explicitly communicated," said Doug Lenat of Cycorp, in Austin, TX. Beyond these implicit rules, though, commonsense systems need to make proper deductions from them and from other, explicit statements, he said. "If you are unable to do logical reasoning, then you don't have common sense."

This combination is still largely unrealized; in spite of impressive recent successes of machine learning in extracting patterns from massive data sets of speech and images, they often fail in ways that reveal their shallow "understanding." Nonetheless, many researchers suspect hybrid systems that combine statistical techniques with more formal methods could approach common sense.

Importantly, such systems could also genuinely describe how they came to a conclusion, creating true "explainable AI" (see "AI, Explain Yourself," Communications 61, 11, Nov. 2018).   ... " 

Thursday, October 22, 2020

Quantum Safe Hybrid Digital Certificates

 Looking at quantum safe.

What Is a Quantum-Safe Hybrid Digital Certificate?

Sectigo’s Tim Callan, Jason Soroko and Alan Grau break down what quantum-safe hybrid TLS certificates are and how they can help to prepare businesses for quantum-safe cryptography in Sectigo’s ... 

Quantum computing is poised to disrupt the technological world as we know it. And although quantum computing — and all of the advantages it offers — is still realistically years away, businesses and organizations need to prepare themselves for its inevitable downside: broken cryptosystems.

Quantum computers will break our existing asymmetric cryptosystem — something that cybercriminals will be ready and eager to take advantage of. This is why it’ll be necessary to migrate your existing IT and cryptosystems to their quantum-resistant or quantum-safe equivalents.

But, of course, upgrading to post quantum cryptographic (PQC) systems and infrastructure takes time and resources. So, one of the ways to help futureproof your cyber security through this process is through the use of hybrid digital certificates such as a hybrid TLS certificate. ... " 
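The hybrid idea in miniature: a message is accepted only if both a classical signature and a post-quantum signature verify, so security holds as long as either scheme survives. In the sketch below, HMACs stand in for the two signature schemes — a toy to show the "both must hold" structure, not real PKI and not a real PQC algorithm.

```python
# Toy sketch of the hybrid-certificate structure: accept only if BOTH
# the "classical" and the "post-quantum" signature verify. HMACs are
# stand-ins for the two signature schemes -- not real PKI or PQC.
import hashlib
import hmac

classical_key = b"classical-secret"   # stand-in for, e.g., an ECDSA key
pq_key = b"post-quantum-secret"       # stand-in for a PQC signing key

def hybrid_sign(msg):
    return (hmac.new(classical_key, msg, hashlib.sha256).digest(),
            hmac.new(pq_key, msg, hashlib.sha256).digest())

def hybrid_verify(msg, sig_pair):
    classical_sig, pq_sig = sig_pair
    ok_classical = hmac.compare_digest(
        classical_sig, hmac.new(classical_key, msg, hashlib.sha256).digest())
    ok_pq = hmac.compare_digest(
        pq_sig, hmac.new(pq_key, msg, hashlib.sha256).digest())
    return ok_classical and ok_pq     # both schemes must hold

msg = b"server certificate payload"
sig = hybrid_sign(msg)
print(hybrid_verify(msg, sig))           # True
print(hybrid_verify(b"tampered", sig))   # False
```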

RPA for Fintech, a Use Example

Here is a good intro to RPA (Robotic Process Automation) for finance applications. Nice too because most of us can understand basic financial statements, arithmetic, and goals. Below is just the intro; the full piece is at the link.

RPA Guide For Fintech Industry   Posted by Amit Dua  from DSC

Technology is changing the way we live and breathe. We’d even go a step ahead, and quote: Everything we do as humans, including every feat we’ve achieved as a modern civilization, is marked by dynamic leaps in technology. 

What is a dynamic leap, you ask? Let’s understand technological advancements through an example of linear and dynamic steps.  When talking in the Linear terms, if you go from 1 to 30, you cover 30 steps.  Common sense, right? But wait. When talking in Dynamic terms, if you go from 1 to 30, you cover a Billion.  That’s what a dynamic leap is; and technology is evolving at a dynamic pace. Marshall McLuhan puts it best: ‘First, we build the tools; then they build us back.’
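The "1 to 30 covers a billion" line is the classic linear-versus-exponential contrast (assuming "dynamic" means doubling at each step): 30 linear steps reach 30, while 30 doublings reach 2^30, just over a billion.

```python
# Linear vs. exponential ("dynamic") growth over 30 steps:
linear_steps = 30            # 1, 2, 3, ... 30
dynamic_steps = 2 ** 30      # doubling 30 times

print(linear_steps)          # 30
print(dynamic_steps)         # 1073741824, just over a billion
```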

The same is true with the BFSI (banking, financial services, and Insurance) sector. 

Since the advancements in automation and digital technologies, it has become preemptive for financial institutions to change the dynamics and inculcate automation in their regulatory requirements.  If we follow the automation trend, it suggests that intelligent automation technologies like Robotic Process Automation (RPA) and AI can reduce costs in Fintech by up to 25%. 

Alt: RPA Implementation in Fintech Industry

What’s Fintech? According to Investopedia, ‘Financial technology (Fintech) is used to describe new tech that seeks to improve and automate the delivery and use of financial services.’.... "

Amazon will Pay You to Know what you Bought Somewhere Else

Amazon's paid shopper panel.   

Amazon will pay you to know what you bought somewhere else  by George Anderson in Retailwire

Amazon.com wants greater insights into what its customers are purchasing and it is willing to pay for the information. The e-tailing and technology giant has launched Amazon Shopper Panel, an invitation-only program that allows participants to earn monthly rewards by sharing receipts of purchases made outside of its website and retail stores.

Participants in the program are asked to upload photos of 10 eligible receipts per month taken with the Shopper Panel app. Alternatively, they can forward email receipts to Amazon. Additional rewards are available when participants fill out short surveys. Amazon customers can earn up to $10 a month that can be applied to their balance on the site or donated to charity.

Participation in the panel is voluntary and those involved can choose to stop participating at any time. Amazon collects only the information shared by panelists. The company said it “deletes any sensitive information, such as prescription information from drug store receipts.” Amazon said all personal information of panelists is secured and handled in accordance with its privacy policy.

Amazon’s Shopper Panel site says that the data gleaned from receipts will help brands offer better products and make ads more relevant on Amazon.....  "

MIT and Related Quantum Resources

Was just pointed to this (much more at the link):

MIT partners with national labs on two new National Quantum Information Science Research Centers

Co-design Center for Quantum Advantage and Quantum Systems Accelerator are funded by the U.S. Department of Energy to accelerate the development of quantum computers.

Kylie Foy | Sampson Wilcox | MIT Lincoln Laboratory | Research Laboratory of Electronics Publication Date:August 31, 2020   .... " 

What Does a Space Force Do?

 Well, one thing: guarding against cybersecurity threats.

US Space Force guards against cybersecurity threats miles above Earth

If space is indeed the “final frontier,” as narrated in the famous opening voiceover in “Star Trek,” it is also becoming the final line of defense against threats to technologies that have become essential for daily life.

From the use of GPS to navigate traffic congestion to fighting forest fires with heat sensors, orbiting satellites play a critical role in providing convenience and safety for countries around the globe. And as governments and private enterprises launch more satellites, the attack surface has expanded as well.

“Space is becoming congested and contested,” said Lt. Gen. John F. Thompson (pictured), commander of the Space and Missile Systems Center at the Los Angeles Air Force Base in California. “The cyber aspects of the space business are truly, truly daunting and important to all of us. Integrating cybersecurity into our space systems, both commercial and government, is a mandate.”

Thompson spoke with John Furrier, host of theCUBE, SiliconANGLE Media’s livestreaming studio, during the Space & Cybersecurity Symposium. They discussed the vital role of GPS infrastructure, threats from nation states, the military’s adoption of a DevSecOps mindset, hiring goals and funding for startups with innovative ideas designed to protect the final frontier.

Trillions in GPS value

As a division of the U.S. Space Force, the Space and Missile Systems Center is responsible for acquiring and developing military space systems. This includes both orbiting satellites and ground communications systems for the U.S. Space Force, critical partners in the Department of Defense and the intelligence community, according to Thompson. 

Wednesday, October 21, 2020

Extending Insight from Neural Networks

Some thoughts about how trained networks can be used to further analyze chemical structure.

Opening the Black Box of Neural Networks

Pacific Northwest National Laboratory, Allan Brettman

Pacific Northwest National Laboratory (PNNL) researchers used deep learning neural networks to model water molecule interactions, unearthing data about hydrogen bonds and structural patterns. The PNNL team employed 500,000 water clusters from a database of more than 5 million water cluster minima to train a neural network, relying on graph theory to extract structural patterns of the molecules' aggregation. The method provides additional analysis after the network has been trained, allowing comparison between measurements of the water cluster networks' structural traits and the predicted neural network, enhancing the network's understanding in subsequent analyses. PNNL's Jenna Pope said, "If you were able to train a neural network, that neural network would be able to do computational chemistry on larger systems. And then you could make similar insights in computational chemistry about chemical structure or hydrogen bonding or the molecules’ response to temperature changes.”
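The graph-theory angle is easy to illustrate. In a toy version (the bond list and molecule count below are invented for illustration, not taken from the PNNL dataset), each molecule is a node, each hydrogen bond is an edge, and the degree histogram serves as a crude structural fingerprint of the cluster:

```python
from collections import Counter

# Toy graph-theoretic fingerprint of a water cluster: molecules are nodes,
# hydrogen bonds are edges. The bond list is hypothetical, for illustration only.
hydrogen_bonds = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4)]  # invented pentamer

degree = Counter()
for a, b in hydrogen_bonds:
    degree[a] += 1
    degree[b] += 1

# Degree histogram: how many molecules participate in k hydrogen bonds.
histogram = Counter(degree.values())
print(dict(histogram))  # {2: 3, 3: 1, 1: 1}
```

Real analyses extract much richer patterns (rings, coordination motifs), but the principle of turning geometry into graph statistics is the same.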

Driverless in San Francisco

And yet more cars without drivers.     A tipping point?

GM to Run Driverless Cars in San Francisco Without Human Backups    Associated Press, Tom Krisher

General Motors' Cruise autonomous vehicle unit said it will remove human backup drivers from the driverless vehicles it is testing on San Francisco streets by year's end, as California's Department of Motor Vehicles has granted the company a permit to do so. This follows Google subsidiary Waymo's announcement last week that it would open its autonomous ride-hailing service in Phoenix, AZ, without human drivers. Said the University of California, Berkeley's Steven Shladover, "I don't see them as revolutionary steps, but they're part of this step-by-step progress toward getting the technology to be able to work under a wider range of conditions."

SpaceX and Microsoft

Here Comes the Space Cloud.   Faster anywhere, even in space.

 Microsoft, SpaceX Team Up to Bring Cloud Computing to Space

Nextgov. Frank Konkel

Microsoft has partnered with SpaceX and others to make its Azure cloud technology available and accessible to people anywhere on Earth, and potentially those in space. Microsoft will use SpaceX's forthcoming Starlink satellite constellation to bring customers in remote regions high-speed, low-latency broadband; the satellites will function as a channel for data between Microsoft's conventional datacenters and matched ground stations, and the company's modular datacenters. Microsoft also announced an expansion of its Azure Orbital partnership with satellite telecommunications company SES to broaden connectivity between its cloud data centers and edge devices. ....'

Hewlett Foundation on Security Cyber Design

Been a long while since I looked at anything by the Hewlett Foundation.  Attended their meetings.  Now back connected.  Visuals are a good thing for communication.  Especially for obscure security concepts.  

Hewlett Foundation Reveals Top Ideas in Cyber Design Competition

The William and Flora Hewlett Foundation today announced five top ideas in the international “Cybersecurity Visuals Challenge,” which is focused on producing easily-understandable visuals to better illustrate the complexity and importance of today’s cybersecurity challenges to broad audiences. 

Five winning designers produced a portfolio of openly-licensed designs aimed at explaining the stakes involved in cybersecurity topics like encryption or phishing in more human, relatable terms. 

The winners of the Cybersecurity Visuals Challenge are:  ....  (Details at the link) 

“The challenges we face today online keeping networks and devices secure are far too complex to be illustrated by a shadowy figure in a hoodie hunched over a laptop,” said Eli Sugarman, program officer at the Hewlett Foundation in charge of the Cyber Initiative, a ten-year grantmaking effort devoted to improving cyber policy. “Sophisticated organizations are attacking the security of the internet and we believe the images produced by the participating artists will help increase understanding of these issues for policymakers and the broader public alike.”  ... 

Expanding AI's Impact with Organizational Learning

I participated in the study below; they make the point that it will be available only for a short time.  Here is the start of the document:


Most companies developing AI capabilities have yet to gain significant financial benefits from their efforts. Only when organizations add the ability to learn with AI do significant benefits become likely.


Register to download the full report   *Registration Required .... 

Only 10% of companies obtain significant financial benefits from artificial intelligence technologies. Why so few?

Our research shows that these companies intentionally change processes, broadly and deeply, to facilitate organizational learning with AI. Better organizational learning enables them to act precisely when sensing opportunity and to adapt quickly when conditions change. Their strategic focus is organizational learning, not just machine learning.

Organizational learning with AI is demanding. It requires humans and machines to not only work together but also learn from each other — over time, in the right way, and in the appropriate contexts. This cycle of mutual learning makes humans and machines smarter, more relevant, and more effective. Mutual learning between human and machine is essential to success with AI. But it’s difficult to achieve at scale.

Our research — based on a global survey of more than 3,000 managers, as well as interviews with executives and scholars — confirms that a majority of companies are developing AI capabilities but have yet to gain significant financial benefits from their efforts. More than half of all respondents affirm that their companies are piloting or deploying AI (57%), have an AI strategy (59%), and understand how AI can generate business value (70%). These numbers reflect statistically significant increases in adoption, strategy development, and understanding from four years ago. What’s more, a growing number of companies recognize a business imperative to improve their AI competencies. Despite these trends, just 1 in 10 companies generates significant financial benefits with AI.

We analyzed responses to over 100 survey questions to better understand what really enables companies to generate significant financial benefits with AI. We found that getting the basics right — like having the right data, technology, and talent, organized around a corporate strategy — is far from sufficient. Only 20% of companies achieve significant financial benefits with these fundamentals alone. Getting the basics right and building AI solutions that the business wants and can use improve the odds of obtaining significant financial benefits, but to just 39%.

Our key finding: Only when organizations add the ability to learn with AI do significant benefits become likely. With organizational learning, the odds of an organization reporting significant financial benefits increase to 73%.

Organizations that learn with AI have three essential characteristics:

1. They facilitate systematic and continuous learning between   ..... " 

Tuesday, October 20, 2020

Sonos Makes a Small Smart Home Move

Not expected, intriguing. Hoping to compete with other players in the space?

Sonos speakers can now communicate with GE Appliances.  They can notify you when the oven is preheated or a dishwasher load is done.

Igor Bonifacic, @igorbonifacic in Engadget .... 

API Security

Pointed out to me recently.   Have not been involved in API security; there seem to be useful tips here.

Tips To Strengthen API Security  By Bill Doerrfeld in DevOps

If you haven’t noticed, digital organizations are building more and more APIs. ProgrammableWeb tracks more than 23,000 public web APIs at the time of writing, and the API market is estimated to be worth $5.1 billion by 2023. Building with APIs increases internal interoperability, reduces development time and can extend product functionality tremendously. In short, the value of APIs is rising. However, opening up with APIs brings security caveats that, if not addressed, could result in serious breaches that negate these benefits.  .... 

 ... APIs have been called “the next frontier in cybercrime.” Rightly so, as API breaches continue to pop up nearly every day. Take the recent API vulnerabilities at Cisco Systems, Shopify, Facebook, U.S. presidential campaign apps, and GCP as evidence. The most infamous was likely the Equifax breach—not enforcing formats on incoming API calls resulted in a massive data breach, which cost the company a roughly $700 million settlement.  ... " 
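The Equifax lesson, enforcing formats on incoming API calls, is straightforward to apply in practice. A minimal sketch; the field names and validation rules here are hypothetical, chosen only to illustrate the idea:

```python
import re

# Minimal request validation: reject payloads whose fields do not match an
# expected format before any business logic runs. Schema is hypothetical.
SCHEMA = {
    "account_id": re.compile(r"[0-9]{1,12}"),               # digits only
    "email":      re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"),  # rough email shape
}

def validate(payload: dict) -> list:
    """Return a list of validation errors; an empty list means the payload passes."""
    errors = []
    for field, pattern in SCHEMA.items():
        value = payload.get(field)
        if not isinstance(value, str) or not pattern.fullmatch(value):
            errors.append(f"invalid or missing field: {field}")
    # Reject unexpected fields rather than silently accepting them.
    errors.extend(f"unexpected field: {f}" for f in payload if f not in SCHEMA)
    return errors

print(validate({"account_id": "12345", "email": "a@b.com"}))      # []
print(validate({"account_id": "12 OR 1=1", "email": "a@b.com"}))  # one error
```

In production this belongs at the API gateway or in a shared middleware layer, so every endpoint gets the same enforcement.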

Bletchley Park Contribution Over-Rated?

 Am a student of this effort, so this suggestion was surprising.

Bletchley Park’s contribution to WW2 'over-rated'

By Gordon Corera, Security correspondent

Code-breaking hub Bletchley Park's contribution to World War Two is often over-rated by the public, an official history of UK spy agency GCHQ says.  The new book - Behind the Enigma - is released on Tuesday and is based on access to top secret GCHQ files.   "Bletchley is not the war winner that a lot of Brits think it is," the author, Professor John Ferris of the University of Calgary, told the BBC.

But he said Bletchley still played an important role.  ... '

A GPU can Brute Force Your Passwords

Faster GPUs are eroding password security.  Using a password manager, which gives you a larger number of characters, and/or multi-factor authentication makes much sense.   

The Nvidia RTX 3090 GPU Can Probably Crack Your Passwords   By Ryan Whitwam

The new Nvidia GeForce RTX 3090 is a gaming powerhouse, but that’s not all it can do. According to the makers of a popular password recovery application, the RTX 3090 is also good at brute-forcing passwords. That’s great if you forget an important password, but that’s probably not why people are using such tools. The latest Nvidia cards could make cracking someone else’s files almost trivially easy. 

The RTX 3090 is Nvidia’s latest top-of-the-line GPU with a GA102 graphics processor sporting 10,496 cores and 24GB of GDDR6X memory. It is monstrously, obscenely powerful by today’s gaming standards, and comes with a correspondingly high price of $1,500, give or take a few hundred depending on supply. With a focus on high core counts, GPUs are also great for parallel computing. That’s why you couldn’t even buy a GPU for several months when Bitcoin was at its peak. In the same vein, GPUs are very good at cracking passwords.   .... "
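A rough sense of why password length dominates: the worst-case search space grows as alphabet_size ** length. The guess rate below is an assumed round number for scale, not a benchmark of the RTX 3090:

```python
# Worst-case brute-force time: search space is alphabet_size ** length.
# ASSUMED_RATE is a hypothetical figure for scale, not a measured GPU benchmark.
def worst_case_seconds(alphabet: int, length: int, guesses_per_sec: float) -> float:
    return alphabet ** length / guesses_per_sec

ASSUMED_RATE = 1e11  # hypothetical: 100 billion guesses/sec against a fast hash
for length in (8, 12, 20):
    years = worst_case_seconds(95, length, ASSUMED_RATE) / (86400 * 365)
    print(f"{length:2d} printable-ASCII chars: {years:.3g} years worst case")
```

Each extra character multiplies the attacker's work by the alphabet size, which is exactly why manager-generated 20-character passwords hold up where 8-character ones fail.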

Focus Music

Amazon Alexa Music has been pushing what they call 'Focus Time Music', with claims of 'perfect sound when you are studying, working, reading or writing'.   They just suggested the idea to me.  It is something I have tried myself for years, with playlists, particular artists, etc.    For me it's jazz.   Does it really work, more than other methods?   Seems you could do some controlled experiments; anyone know of any?  How about taking that to creativity?   Here is what they write about it. 

On Synthetic Data

Was asked about this; it seems it has not come up for some time.   We used it to set up software and analyses ahead of incoming real data.  Some MIT thoughts.

The real promise of synthetic data   by Massachusetts Institute of Technology

After years of work, MIT's Kalyan Veeramachaneni and his collaborators recently unveiled a set of open-source data generation tools — a one-stop shop where users can get as much data as they need for their projects, in formats from tables to time series. They call it the Synthetic Data Vault. Credit: Arash Akhgari

Each year, the world generates more data than the previous year. In 2020 alone, an estimated 59 zettabytes of data will be "created, captured, copied, and consumed," according to the International Data Corporation—enough to fill about a trillion 64-gigabyte hard drives.

But just because data are proliferating doesn't mean everyone can actually use them. Companies and institutions, rightfully concerned with their users' privacy, often restrict access to datasets—sometimes within their own teams. And now that the COVID-19 pandemic has shut down labs and offices, preventing people from visiting centralized data stores, sharing information safely is even more difficult.

Without access to data, it's hard to make tools that actually work. Enter synthetic data: artificial information developers and engineers can use as a stand-in for real data.

Synthetic data is a bit like diet soda. To be effective, it has to resemble the "real thing" in certain ways. Diet soda should look, taste, and fizz like regular soda. Similarly, a synthetic dataset must have the same mathematical and statistical properties as the real-world dataset it's standing in for. "It looks like it, and has formatting like it," says Kalyan Veeramachaneni, principal investigator of the Data to AI (DAI) Lab and a principal research scientist in MIT's Laboratory for Information and Decision Systems. If it's run through a model, or used to build or test an application, it performs like that real-world data would.

But—just as diet soda should have fewer calories than the regular variety—a synthetic dataset must also differ from a real one in crucial aspects. If it's based on a real dataset, for example, it shouldn't contain or even hint at any of the information from that dataset.

Threading this needle is tricky. After years of work, Veeramachaneni and his collaborators recently unveiled a set of open-source data generation tools—a one-stop shop where users can get as much data as they need for their projects, in formats from tables to time series. They call it the Synthetic Data Vault.  .... "
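For a feel of the idea, here is a toy synthesizer that fits only each column's mean and standard deviation and then samples fresh rows. The real Synthetic Data Vault models far richer joint structure; this sketch and its data are purely illustrative:

```python
import random
import statistics

# Toy column-wise synthesizer: fit each numeric column's marginal mean and
# standard deviation on real rows, then sample new rows from those marginals.
def fit(rows):
    columns = list(zip(*rows))
    return [(statistics.mean(c), statistics.stdev(c)) for c in columns]

def sample(params, n, seed=0):
    rng = random.Random(seed)
    return [[rng.gauss(mu, sd) for mu, sd in params] for _ in range(n)]

# Made-up "real" table: height (cm) and weight (kg) of four people.
real = [[170.0, 65.0], [160.0, 55.0], [180.0, 80.0], [175.0, 72.0]]
synthetic = sample(fit(real), 100)
print(len(synthetic), len(synthetic[0]))  # 100 2
```

Note what this toy version gets wrong by design: it preserves each column's statistics but ignores correlations between height and weight, which is precisely the gap tools like the Synthetic Data Vault address.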

Toshiba Targets Quantum Cryptography

Considerable effort under way here:

Toshiba targets $3 billion revenue in quantum cryptography by 2030

By Makiko Yamazaki in Reuters

TOKYO (Reuters) - Toshiba Corp 6502.T said on Monday it aims to generate $3 billion in revenue from its advanced cryptographic technology for data protection by 2030, as the sprawling Japanese conglomerate scrambles to find future growth drivers.

The cyber security technology, called quantum key distribution (QKD), leverages the nature of quantum physics to provide two remote parties with cryptographic keys that are immune to cyberattacks driven by quantum computers.  ... 
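QKD schemes such as BB84 can be simulated classically to show the key-sifting logic. This toy version assumes a noiseless channel and no eavesdropper, and models a mismatched measurement basis as a fair coin flip:

```python
import random

# Classical simulation of BB84 key sifting (noiseless channel, no eavesdropper).
def bb84(n, seed=42):
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]  # 0 rectilinear, 1 diagonal
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]
    # Measuring in the wrong basis yields a random bit; the right basis yields Alice's bit.
    bob_bits = [bit if ab == bb else rng.randint(0, 1)
                for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Bases are compared publicly; only matching positions enter the sifted key.
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

alice_key, bob_key = bb84(64)
print(len(alice_key), alice_key == bob_key)  # roughly n/2 bits survive; keys match
```

The security argument, which this classical sketch cannot capture, is that an eavesdropper measuring in the wrong basis disturbs the qubits and shows up as errors when the parties compare a sample of their sifted keys.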

Monday, October 19, 2020

Bringing Power Tool From Math Into Quantum Computing

The implication is considerable: this idea can be used for problems already well solved by FFT methods.  These kinds of pattern recognition techniques are already well known in engineering applications.    Would like to try this against problems like machine maintenance.

Bringing Power Tool From Math Into Quantum Computing  Tokyo University of Science (Japan),   October 14, 2020

Scientists at Japan's Tokyo University of Science (TUS) have designed a novel quantum circuit that calculates the fast Fourier transform (FFT) in a faster, versatile, and more efficient manner than previously possible. The quantum fast Fourier transform (QFFT) circuit does not waste any quantum bits, and it exploits the superposition of states to boost computational speed by processing a large volume of information at the same time. Its versatility is another benefit. TUS' Ryoko Yahagi said, "One of the main advantages of the QFFT is that it is applicable to any problem that can be solved by the conventional FFT, such as the filtering of digital images in the medical field or analyzing sounds for engineering applications."
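For reference, the conventional FFT that the QFFT builds on can be written in a few lines. This sketch is the classical radix-2 Cooley-Tukey algorithm, not the quantum circuit from the paper:

```python
import cmath

# Classical radix-2 Cooley-Tukey FFT; input length must be a power of two.
def fft(x):
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])

# A pure tone at bin 3 of an 8-sample frame shows up as a single spectral spike.
signal = [cmath.exp(2j * cmath.pi * 3 * t / 8) for t in range(8)]
spectrum = fft(signal)
peak_bin = max(range(8), key=lambda k: abs(spectrum[k]))
print(peak_bin)  # 3
```

This divide-and-conquer structure costs O(n log n) classically; the quantum version applies the same transform to amplitudes held in superposition, which is where the claimed speedup comes from.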

Deep Learning Takes on Synthetic Biology

Previously mentioned, we experimented with the idea before the current state of machine learning, with a kind of simulation more akin to 'digital twins'.    The ML method would have been useful to add.

Deep Learning Takes on Synthetic Biology

The Harvard Gazette    By Lindsay Brownell   October 7, 2020

Two teams of scientists from Harvard University and the Massachusetts Institute of Technology have developed machine learning algorithms that can analyze RNA-based "toehold switch" molecular sequences and predict which will reliably sense and respond to a desired target sequence. The researchers first designed and synthesized a massive toehold switch dataset, which Harvard's Alex Garruss said "enables the use of advanced machine learning techniques for identifying and understanding useful switches for immediate downstream applications and future design." One team trained an algorithm to analyze switches as two-dimensional images of base-pair possibilities, and then to identify patterns signaling whether a given image would be a good or a bad toehold via an interpretation process called Visualizing Secondary Structure Saliency Maps. The second team tackled the challenge with orthogonal techniques using two distinct deep learning architectures. Their Sequence-based Toehold Optimization and Redesign Model and Nucleic Acid Speech platforms enable the rapid design and optimizing of synthetic biology components.  ... " 

Quantum Engines?

Interesting proposal.  Speculative?    Relates to laws of thermodynamics brought up recently here. Can entanglement be a fuel?  Consider the implications.   

Perfect Energy Efficiency: Quantum Engines With Entanglement as Fuel? By UNIVERSITY OF ROCHESTER in SciTechDaily

University of Rochester researcher receives $1 million grant to study quantum thermodynamics.

It’s still more science fiction than science fact, but perfect energy efficiency may be one step closer due to new research at the University of Rochester.

In order to make a car run, a car’s engine burns gasoline and converts the energy from the heat of the combusting gasoline into mechanical work. In the process, however, energy is wasted; a typical car only converts around 25 percent of the energy in gasoline into useful energy to make it run.

Engines that run with 100 percent efficiency are still more science fiction than science fact, but new research from the University of Rochester may bring scientists one step closer to demonstrating an ideal transfer of energy within a system.

Andrew Jordan, a professor of physics at Rochester, was recently awarded a three-year, $1 million grant from the Templeton Foundation to research quantum measurement engines—engines that use the principles of quantum mechanics to run with 100 percent efficiency. The research, to be carried out with co-principal investigators in France and at Washington University St. Louis, could answer important questions about the laws of thermodynamics in quantum systems and contribute to technologies such as more efficient engines and quantum computers.

“The grant deals with several Big Questions about our natural world,” Jordan says.  .... 

Learning Microwave Ovens

My microwave oven learns, say to cook a baked potato, but this takes it to a new dimension.  For possible industry applications.    See also my previous note on using microwave ovens for health data detection.  Use the tag below 'microwave'.  

Researchers develop 'learning' microwave ovens    by University of Amsterdam

In a publication in the Journal of Cleaner Production, Prof. Bob van der Zwaan of the Van 't Hoff Institute of Molecular Sciences presents the first example of a learning curve for microwave ovens, which follows a learning rate of around 20%. The paper discusses opportunities for possible microwave heating applications in households and industry that can contribute to sustainable development. Rapidly reducing prices could lead to a meaningful role of microwave technology in the energy transition.

Sunday, October 18, 2020

Small Sensors and Robotics: Insects in Tow

Back to our interest here in small robotics, now taking it beyond biomimicry to directly using the insects themselves.

Researchers Use Flying Insects to Drop Sensors Safely    By UW News

 A Manduca sexta moth carrying a sensor on its back. ... 

University of Washington researchers have created a sensor small and light enough to ride on the back of an insect for deployment.

Researchers at the University of Washington (UW) have created a 98-milligram sensor that can access difficult- or dangerous-to-reach areas by riding on a small drone or an insect and being dropped when it reaches its destination.

The sensor is released when it receives a Bluetooth command and can fall up to 72 feet at a maximum speed of 11 mph without breaking, then collect data like temperature or humidity levels for nearly three years.

Said UW's Shyam Gollakota, "This is the first time anyone has shown that sensors can be released from tiny drones or insects such as moths, which can traverse through narrow spaces better than any drone and sustain much longer flights."  The system could be used to create a sensor network within a study area researchers wish to monitor.

From University of Washington
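A quick sanity check on the reported figures, using only the numbers quoted above: a 72-foot fall with no air drag would hit far faster than 11 mph, so the sensor's tiny mass and air resistance must be doing the braking.

```python
import math

# Drag-free fall would far exceed the quoted 11 mph cap for a 72 ft drop,
# implying the sensor's low mass and drag limit its terminal speed.
g = 9.81             # m/s^2
h = 72 * 0.3048      # 72 ft converted to meters
v_no_drag = math.sqrt(2 * g * h)
print(f"{v_no_drag * 2.23694:.0f} mph without drag vs 11 mph reported")  # 46 mph
```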