
Thursday, September 30, 2021

Manish Gupta Talk on Human Inspired AI

Looks to be a good talk: 

Reminder: October 6 Talk, "Human Inspired Artificial Intelligence" with ACM Fellow and Director of Google Research India Manish Gupta

If you haven't done so yet, register now for the next free ACM TechTalk, "Human Inspired Artificial Intelligence," presented on Wednesday, October 6 at 11:00 AM ET/8:00 AM PT/20:30 IST by Manish Gupta, Director of Google Research, India and an ACM Fellow. Eve Andersson, Senior Director of Accessibility at Google and member of the ACM Practitioner Board, will moderate the question-and-answer session following the talk.

Leave your comments and questions with our speaker now and any time before the live event on ACM's Discourse Page. And check out the page after the webcast for extended discussion with your peers in the computing community, as well as further resources on human inspired AI and more.

(If you'd like to attend but can't make it to the virtual event, you still need to register to receive a recording of the TechTalk when it becomes available.)


Manish Gupta, Director, Google Research, India; ACM Fellow

Manish Gupta is the Director of Google Research India. He holds an additional appointment as Infosys Foundation Chair Professor at IIIT Bangalore. Previously, Manish led VideoKen, a video technology startup, and the research centers for Xerox and IBM in India. As a Senior Manager at the IBM T.J. Watson Research Center in Yorktown Heights, New York, Manish led the team developing system software for the Blue Gene/L supercomputer. IBM was awarded a National Medal of Technology and Innovation for Blue Gene by US President Barack Obama in 2009. Manish holds a PhD in Computer Science from the University of Illinois at Urbana-Champaign. He has co-authored about 75 papers, with more than 7,000 citations in Google Scholar (and an h-index of 46), and has been granted 19 US patents. ... '

Hardening your VPN

Just posted; as usual, these often draw interesting comments.

Via Schneier:     Hardening Your VPN    

The NSA and CISA have released a document on how to harden your VPN .... 

Baby Face, Identity and Health Monitoring

Baby face and health monitoring. 

Baby Detector Software in Digital Camera Rivals ECG

By University of South Australia, August 27, 2021

University of South Australia (UniSA) scientists have developed computer vision-based baby detector software that uses a digital camera to automatically detect a baby's face in a hospital bed and remotely monitor its health.

UniSA's Javaan Chahl said tubes and other equipment can hinder computers from recognizing infants, and the system was trained on videos of babies in the neonatal intensive care unit to reliably identify their skin tone and faces.

High-resolution cameras recorded the infants, while advanced signal processing techniques extracted vital physiological data.
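The article doesn't detail the signal processing, but one common non-contact approach extracts pulse rate from tiny periodic brightness changes in skin pixels. A minimal sketch of that idea, on synthetic data (the function, band limits, and trace below are all illustrative assumptions, not the UniSA team's method):

```python
import numpy as np

def estimate_heart_rate(brightness, fps):
    """Estimate pulse rate (beats/min) from a skin-region brightness trace.

    The periodic blood-volume change modulates skin brightness slightly;
    the dominant frequency in a plausible pulse band is taken as the rate.
    """
    signal = brightness - np.mean(brightness)           # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)   # Hz
    band = (freqs >= 0.7) & (freqs <= 3.5)              # ~42-210 bpm
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0

# Synthetic 30 fps trace: a 2 Hz (120 bpm) pulse buried in camera noise.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
trace = (0.05 * np.sin(2 * np.pi * 2.0 * t)
         + np.random.default_rng(0).normal(0, 0.01, t.size))
print(round(estimate_heart_rate(trace, fps)))           # → 120
```

Real systems add face tracking, detrending, and bandpass filtering, but the core step is the same: find the dominant frequency of a slowly varying intensity signal.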

UniSA's Kim Gibson said using neural networks to detect babies' faces is a critical achievement for non-contact monitoring. ... 

University of South Australia researchers have designed a computer vision system that can automatically detect a tiny baby's face in a hospital bed and remotely monitor its vital signs from a digital camera... 

From University of South Australia  View Full Article  

Safety Thoughts lead to Analysis

The mention of simple linear optimization is interesting.

When Accidents Happen, Drones Weigh Their Options

University of Illinois at Urbana-Champaign, Debra Levey Larson, September 21, 2021

Gauging unmanned aerial vehicles' ability to recover from malfunctions and complete missions safely is central to research by scientists at the University of Illinois at Urbana-Champaign (UIUC). The “quantitative resilience” of a control system attempts to verify such systems' capabilities following an adverse event, according to UIUC's Melkior Ornik. He said that task requires the drone to solve four nested, possibly nonlinear, optimization challenges, and reduces the computation of quantitative resilience to a single linear optimization problem through control theory and two novel geometric results. Ornik added, “This ability to work through when equipment malfunctions has real-life implications.”
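The paper's actual formulation isn't given here, but the flavor of reducing a resilience question to a single linear program can be sketched with a toy quadrotor that loses a rotor. The geometry, bounds, and the resilience ratio below are invented for illustration, not the UIUC definition:

```python
from scipy.optimize import linprog

# Rotor x/y positions (units of arm length) for a quadrotor in "X" layout.
xs = [-1.0, 1.0, -1.0, 1.0]
ys = [-1.0, -1.0, 1.0, 1.0]

def max_balanced_lift(failed=None):
    """Max total thrust with zero net roll/pitch moment; failed rotor pinned to 0."""
    c = [-1.0] * 4                       # linprog minimizes, so negate the sum
    A_eq = [ys, xs]                      # roll and pitch moment balance = 0
    b_eq = [0.0, 0.0]
    bounds = [(0.0, 0.0) if i == failed else (0.0, 1.0) for i in range(4)]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return -res.fun

nominal = max_balanced_lift()            # all four rotors healthy
degraded = max_balanced_lift(failed=3)   # rotor 3 lost
print(round(nominal, 1), round(degraded, 1), round(degraded / nominal, 2))  # → 4.0 2.0 0.5
```

The ratio of degraded to nominal capability (here 0.5) is one crude way to put a number on "how much mission is left" after a malfunction, which is the spirit of quantitative resilience.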

Cyber Security in Oil and Gas

 Already broadly attacked, examples below. 

Oil and Gas Companies Must Act Now on Cybersecurity      Via Siemens

August 13, 2021, Oil and Gas Companies Must Act Now on Cybersecurity

The World Economic Forum's Cyber Resilience in the Oil and Gas Industry: Playbook for Boards and Corporate Officers provides a new blueprint to secure critical infrastructure. The most personally disruptive incident in recent memory came in May 2021, with the ransomware attack that shut down a major U.S. oil and gas pipeline responsible for supplying nearly half of the East Coast's petroleum. But for global energy industry leaders – and the oil, gas, and utility sectors in particular – this is just another incident in a series of cyber attacks on critical infrastructure in an increasingly harried, digitally connected energy ecosystem that requires an urgent solution.

The energy sector is no stranger to cyber attacks. Navigating big challenges, from the NotPetya cyber attack on a Ukrainian utility in 2017 that shut down much of the country's power grid, to the attack on the Colonial Pipeline in 2021, is a responsibility that now falls on the energy sector's top executives and board members. These leaders need to mitigate cyber risk in a sector that is undergoing a digital revolution and is now frequently targeted for geopolitical purposes and financial gain by cyber criminals. While governments around the world develop new policies, norms, and consequences for future cyber attacks, oil and gas executives and board members cannot wait for governments to come to a geopolitical détente, issue new regulations, or aid in efforts to secure critical energy systems.

Instead, CEOs and board members must draw from their decades of expertise in integrating energy assets with operational technology (OT) and leveraging information technology (IT) networks to reduce cyber risk across their hyperconnected operating environments. For decades, oil and gas companies have pursued productivity gains by linking physical energy assets with OT control systems and IT networks. That trend continues today with energy organizations seeking big data, artificial intelligence (AI), and automation solutions to reduce costs, improve efficiency and help reduce emissions. Throughout this process, industry executives have also pioneered key management principles and risk-based approaches to securing the technologies and processes that serve as the foundation for their hyperconnected industrial Internet of Things (IoT) business model.  ... '

Wednesday, September 29, 2021

Expanding Chinese Courier Robotics

Makes sense. We need more robots to fulfill logistics demand. Note the recognition of the need for collaboration.

Pandemic Pushes Chinese Tech Giants to Roll Out More Courier Robots  By Reuters

September 29, 2021

Chinese technology giants Alibaba, Meituan, and JD.com plan to increase their courier robot fleets fourfold to more than 2,000 by 2022, as the pandemic boosts demand for contactless services and the cost of manufacturing robots declines. JD.com's Kong Qi said, "We want people and vehicles to work better together and not for vehicles to replace people. It is just the most boring section of the delivery guy's work that we will try to replace."

Alibaba currently operates over 200 robots and plans to have 1,000 in the field by March, and as many as 10,000 over the next three years. JD.com operates about 200 robots, and plans to add as many as 800 to its fleet by year's end.

Meituan now has around 100 delivery robots in use.

From Reuters

View Full Article

Beyond SETI, Can AI Get Us There?

 A number of good questions are asked. 

AI Calling ET: How Artificial Intelligence Supports the Search for Extraterrestrial Life

By Karen Emslie,  Commissioned by CACM Staff,  September 28, 2021

Back in 1977, astronomer Jerry Ehman circled an anomaly on a printout of narrowband radio signal data recorded by Ohio State University's Big Ear telescope as it swept the skies for signs of extraterrestrial life. Alongside, Ehman wrote one now-famous word "Wow!"

We are still scouring the heavens for evidence that may answer the enduring question: are we alone in the universe? We don't yet know, but artificial intelligence (AI) is now supporting our investigations.

A number of methods have been deployed in the search for extraterrestrial life. They include in situ observations, like sending a rover to Mars to search for evidence of current or historical life, as well as remote sensing, like probing distant planetary atmospheres using technologies such as the future James Webb Space Telescope (JWST), and searching for signs of extraterrestrial technologies and communications.

AI is being used to support each of these research avenues.

Looking for planets that have similar life-supporting conditions to Earth, such as the presence of liquid water, is a vital part of the search. Exoplanets, or planets that orbit stars beyond our solar system, are a primary target. Explains Greg Olmschenk, a machine learning specialist at the U.S. National Aeronautics and Space Administration (NASA)  Goddard Institute for Space Sciences, "Our best bet right now for finding life elsewhere is to look for life as we know it, because that's the only kind of life that we know."

NASA missions, such as the Transiting Exoplanet Survey Satellite (TESS) and the now-retired Kepler space telescope, survey the skies looking for exoplanets. Tiny dips in a star's perceived brightness can reveal the presence of an exoplanet as it transits, or crosses in front of, the star. These dips can be detected in light curves, or measurements of a star's light over time. TESS data, for example, contains around 60 million such light curves, which helps to explain the incorporation of AI into the process.

"It's not practical to have an astrophysicist look at each one of those and so we trained a neural network to do that," Olmschenk said.

To accomplish that, Olmschenk collaborated with researchers from the Universities Space Research Association, the Catholic University of America, the University of Maryland, and Science Systems and Applications, Inc. in the U.S., and the Università degli Studi di Napoli Federico II in Italy. They developed a one-dimensional (1D) convolutional neural network (CNN) to identify planetary transit signals. Olmschenk explained that while 2D convolutional networks are more common, as most neural networks are looking at images, "In this case, what we're looking at is one-dimensional time data."

Olmschenk said the CNN was trained to dismiss false positives, dips caused by other types of signals, such as the eclipsing of binary stars. It has a stack of convolutional layers that process input data, with each layer looking for certain patterns, such as light curves going up or down.

"You'll start to get pieces of the neural network that are recognizing peaks and troughs in the light curve, and eventually when you get down to the lower layers, you start to recognize things like actual transit dips."

The process filtered millions of light curves down to a few thousand that looked most likely to be planets. Astrophysicists then carried out follow-up observations to produce a short list of exoplanet candidates.
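As a rough illustration of what one learned 1D kernel can do, here is a hand-built "dip" filter run over a synthetic light curve. This is not the team's network — a trained CNN stacks many such kernels and learns them from data — just the core convolution idea:

```python
import numpy as np

def dip_score(light_curve, width=5):
    """Convolve a light curve with a box-shaped 'transit dip' kernel.

    A single hand-built filter standing in for the learned kernels in a
    1D CNN: a strong response marks a candidate transit-like dip.
    """
    kernel = -np.ones(width) / width          # responds to a drop in flux
    detrended = light_curve - np.median(light_curve)
    return np.convolve(detrended, kernel, mode="same")

rng = np.random.default_rng(1)
flux = 1.0 + rng.normal(0, 0.001, 200)        # flat star with photometric noise
flux[90:95] -= 0.01                           # inject a 1% transit dip
scores = dip_score(flux)
print(int(np.argmax(scores)))                 # peak lands inside the injected dip
```

Sliding cheap filters over 60 million light curves and keeping only the strongest responses is exactly the kind of triage that is impractical for humans but routine for a neural network.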

Sophisticated techniques such as radial velocity measurement, which detects the tell-tale 'wobble' produced by the gravitational tug of a planet on a star, ultimately determine whether candidates are truly planets. At the time of writing, 183 such candidates are still being investigated.

Searching for Signs of Intelligence

At the SETI Institute in Mountain View, CA, researchers have been looking for evidence of life on other worlds since 1985. The non-profit research organization was born from earlier SETI (search for extraterrestrial intelligence) projects funded by NASA.

SETI Institute CEO Bill Diamond draws a distinction between 'life' and 'intelligent life' on other worlds. While there may be biological life elsewhere in our solar system, he says, "We're talking about intelligent and technological beings that might exist on planets outside of our own solar system."

The Institute has close links with the nearby NASA Ames Research Center, and its researchers developed AI for NASA'S Kepler and TESS missions.  ... ' 

Why do Digital Twins Matter?

I am a long-term practitioner in simulation systems, now looking to understand how digital twins fit into the mix and can be better linked to a broader context. Here is another piece by Ajit Jaokar.

#Artificial Intelligence #23 - Why digital twins matter,   By  Ajit Jaokar

Course Director: Artificial Intelligence: Cloud and Edge Implementations - University of Oxford

Welcome to #Artificial Intelligence #23

As we come into October, I am spending some time thinking about the complex areas we are teaching at the #universityofoxford.

Here is one: Why do digital twins matter?

I first became interested in the idea of the digital twin, when we designed the course Digital Twins: Enhancing Model-based Design with AR, VR and MR (Online), as a way to combine IoT and AI. We then extended the idea of digital twins to model-based design (by combining elements like VR with digital twins) and to additive manufacturing as a domain (by combining elements like 3D printing and design with digital twins).

The idea of the digital twin is still nascent but is getting a lot of traction:

Up to 89% of all IoT Platforms will contain some form of digital twinning capability by 2025. –Researchandmarkets.com

As a result of COVID-19, 31% of respondents use digital twins to improve employee or customer safety, such as the use of remote asset monitoring to reduce the frequency of in-person monitoring. –Gartner

The global digital twin market size was valued at USD 3.1 billion in 2020 and is projected to reach USD 48.2 billion by 2026.–MarketsandMarkets
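For scale, the implied compound annual growth rate of that forecast works out to roughly 58% per year:

```python
# CAGR implied by the MarketsandMarkets figures above.
start, end, years = 3.1, 48.2, 6           # USD billions, 2020 -> 2026
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.0%}")                        # prints 58%
```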

But more importantly, digital twins are significant because they tie directly to real business problems. ... '

Amazon Also Announces an Indoor Drone Camera

See also that Amazon/Ring has announced an indoor camera drone for security applications, at a much cheaper price than their rolling robot. You have to wonder how well a flying camera device will interact with other home residents. And what about noise? But it is another interesting gizmo step forward for the smart home.

Amazon’s indoor camera drone is ready to fly around your house  in ArsTechnica

Amazon's crazy indoor, flying drone camera—the ambiguously named "Ring Always Home Cam"—is actually for sale now in the US. This was announced a full year ago, but now it's available "exclusively by invitation" for $249.99. This is a "Day 1 Edition" (read: a beta product). So Amazon isn't letting just anyone buy it. You can request an invitation to give Amazon money on the product page.  


Cheap Sensors for Smarter Farmers

More details of operational information for agriculture. 

Cheap Sensors for Smarter Farmers  Two IoT sensors from this year's ARPA-E Summit can help farmers make better decisions By KAREN KWON 

Demonstrating that we are truly living in an era of "smart agriculture," many of the technologies showcased in this year's ARPA-E Summit were in the farming sector—most notably, sensors for crops and farmlands. Just like the smart devices that enable us to monitor our health every minute of the day, these agricultural sensors allow farmers to monitor plant and soil conditions in close to real time. Here are two that caught this writer's eye.

First up is a 3D-printed, biodegradable soil sensor that checks moisture and nitrogen levels. One of the benefits of using printed electronics is being able to mass-produce at a low cost, says Gregory Whiting at the University of Colorado, Boulder, one of the principal investigators of the team working on the sensors. "Agriculture is a pretty cost constrained industry," Whiting says, and 3D-printed sensors allow farmers to place many sensors throughout their large farmlands—often hundreds of acres—without spending a ton of money.

And this enables the farmers to monitor soil conditions in greater detail, Whiting says. Depending on factors such as how the sun hits the ground, the amount of water or fertilizer needed could vary patch by patch. Traditional sensors were too expensive for farmers to buy in large quantities, and, as a result, the spatial resolution wasn't high enough to reflect this variability. With the new, cheap sensors, farmers will be able to collect data on their farms without worrying about the variability. ... '

Tuesday, September 28, 2021

Amazon Does a Home Robot: Astro

A number of such 'friendly' roaming home robots have been around, none successful. I recall the Pepper, Yumi, Kuri, Jibo, and Aibo robots, all mentioned here. Eldercare is being hinted at. Astro has also been rumored to be under development for years. This is expensive. Notably, it will not climb stairs, or I would assume even a single step; what about rugs?

Amazon Does Astro,  a Rolling Home Robot

Introducing Amazon Astro, Household Robot for Home Monitoring, with Alexa, Includes 6-month Free Trial of Ring Protect Pro

Brand: Amazon  Day 1 Editions

Price: $999.99 & FREE Returns, or 5 monthly payments of $200.00

Introductory price; after the introductory period, the price will be $1,449.99. Terms and conditions apply. Available exclusively by invitation. Ships from and sold by Amazon.com Services LLC. Please note this product can only ship to addresses in the 50 US states.

Keep home closer - Meet Astro, the household robot for home monitoring, with Alexa.

Introducing Intelligent Motion - Amazon Astro uses advanced navigation technology to find its way around your home and go where you need it. When you're not using Astro, it will hang out close by at the ready.

Stay connected from anywhere - Remotely send Astro to check on specific rooms, people, or things. Plus, get alerts if Astro detects an unrecognized person or certain sounds when you're away.

Unlock even more peace of mind - Activate your 6-month free trial of Ring Protect Pro subscription and have Astro proactively patrol, investigate activity, save videos in Ring's cloud storage for up to 60 days, and more.

Alexa Together subscription (Coming soon) - Remotely care for aging loved ones, giving you peace of mind while helping them live independently. Set up reminders, manage shopping lists, receive activity alerts, and more.

Put Alexa in motion - Astro can follow you with entertainment or find you to deliver calls, messages, timers, alarms, or reminders.

Designed to protect your privacy - Turn off mics, cameras, and motion with one press of a button and use the Astro app to set out of bounds zones to let Astro know where it's not allowed to go.

Customize with compatible products – Astro comes with a detachable cup holder and can carry other items (sold separately) like a Ziploc container, the OMRON blood pressure monitor, and a Furbo Dog Camera that tosses treats to your pet.

We want you to know:  Astro cannot go up or down stairs.

Sciam Says Self Driving Cars Emergent

Scientific American takes a look at a self-driving future.

'Self-Driving' Cars Begin to Emerge from a Cloud of Hype    By Scientific American, September 27, 2021

Although some observers may perceive that the bloom is off the rose for automated driving in this post-hype environment, the current situation actually marks a sign of progress. More realistic views of the opportunities and challenges for automated driving will motivate better-focused investment of resources and alignment of public perceptions with reality.

We should expect some limited implementations of automated long-haul trucking on low-density, rural highways and automated local small-package delivery in urban and suburban settings during the current decade. Automated urban and suburban ride-hailing services could become available on a limited basis as well, but the location-specific challenges to their deployment are sufficient that this is unlikely to reach a national scale soon.

From Scientific American 

Virtual Reality Is a Game-Changing Computing and Communication Platform

Very good piece on the topic from CACM, with some useful examples of VR uses beyond games. I have been skeptical about its general use; see the key insights below for a good overview. Note the mention of 'virtual worlds' below, which we actively examined.

Six Reasons Why Virtual Reality Is a Game-Changing Computing and Communication Platform for Organizations   By Osku Torro, Henri Jalo, Henri Pirkkalainen

Key Insights:

VR can solve many critical bottlenecks of conventional remote work while also enabling completely new business opportunities.

VR enables novel knowledge-management practices for organizations via enriched data and information, immersive workflows, and integration with appropriate IS and other emerging technologies.

VR enables high-performing remote communication and collaboration by simulating or transforming organizational communication, in which altered group dynamics and Al agents can also play an interesting role. ... 

Communications of the ACM, October 2021, Vol. 64 No. 10, Pages 48-55  10.1145/3440868

The COVID-19 pandemic created unprecedented disruptions to businesses, forcing them to take their activities into the virtual sphere. At the same time, the limitations of remote working tools have become painfully obvious, especially in terms of sustaining task-related focus, creativity, innovation, and social relations. Some researchers are predicting that the lack of face-to-face communication may lead to decreased economic growth and significant productivity pitfalls in many organizations for years to come.13

As the length and lasting effects of the COVID-19 pandemic cannot be reliably estimated, organizations will likely face mounting challenges in the ways they handle remote work practices. Therefore, it is important for organizations to examine which solutions provide the most value in these exceptional times. In this article, we propose virtual reality (VR) as a critical, novel technology that can transform how organizations conduct their operations.

VR technology provides "the effect of immersion in an interactive, three-dimensional, computer-generated environment in which virtual objects have spatial presence."5 VR's unique potential to foster human cognitive functions (that is, the ability to acquire and process information, focus attention, and perform tasks) in simulated environments has been known for decades.6,8,32 VR has, thus, long held promise for transforming how we work.33

Earlier organizational experiments with desktop-based virtual worlds (VWs)—3D worlds that are used via 2D displays—have mostly failed to attract participation and engagement.34,37 Increasing sensory immersion has been identified as necessary for mitigating these problems in the future.18 Therefore, sensory immersion in VR through the use of head-mounted displays (HMDs) can be seen as a significant step forward for organizations transferring their activities to virtual environments. In this regard, VR is now starting to fulfill the expectations that were placed upon VWs in the past decades, as per Benford et al, for instance.4

However, VR has only recently matured to a stage where it can truly be said to have significant potential for wider organizational use.17 In 2015, Facebook founder and CEO Mark Zuckerberg described VR as "the next major computing and communication platform."38 Although VR has received this kind of significant commercial attention, its potential in organizational use remains largely scattered or unexplored in the extant scientific literature.

Drawing on contemporary research and practice-driven insights, this article provides six reasons why VR is a fundamentally unique and transformative computing and communication platform that extends the ways organizations use, process, and communicate information. We relate the first three reasons with VR as a computing platform and its potential to foster organizations' knowledge management processes and the last three reasons with VR as a communication platform and its potential to foster organizations' remote communication processes.  ... ' 

Monday, September 27, 2021

Vint Cerf Examines Google's Misinformation

Short intro and link forward below. 


How Internet Pioneer Vint Cerf Illuminated Google's Misinformation Mess

By Fast Company, September 27, 2021

In June 2020, the Parliament of the U.K. published a policy report with numerous recommendations aimed at helping the government fight against the "pandemic of misinformation" powered by internet technology. The report is rather forceful on the conclusions it reaches: "Platforms like Facebook and Google seek to hide behind 'black box' algorithms which choose what content users are shown. They take the position that their decisions are not responsible for harms that may result from online activity. This is plain wrong."  ... .

Google chief Internet evangelist Vint Cerf testified that Google's evaluation of Websites includes "a manual process to establish criteria and a good-quality training set, and then a machine-learning system to scale up to the size of the World Wide Web, which we index. ... 

Full article. in FastCompany

Vaccination Without a Shot

My own informal conversations with people would indicate that there is a fear factor involved, and anything that would decrease that would increase participation. 

 3D-Printed Vaccine Patch Offers Vaccination Without a Shot

University of North Carolina at Chapel Hill,  September 23, 2021

Scientists at Stanford University and the University of North Carolina at Chapel Hill (UNC) have engineered a three-dimensionally (3D)-printed vaccine patch that offers more protection than a typical shot in the arm. The microneedle-studded polymer patch is applied directly to the skin, with a resulting 10-fold greater immune response and a 50-fold greater T-cell and antigen-specific antibody response compared to injection. UNC's Shaomin Tian said the microneedles are 3D-printed, which makes it easy to design patches specifically for flu, measles, hepatitis, or COVID-19 vaccines. Entrepreneur Joseph M. DeSimone said, "In developing this technology, we hope to set the foundation for even more rapid global development of vaccines, at lower doses, in a pain- and anxiety-free manner."

Growth of Amazon Connect

Brought to my attention, from BusinessWire   ...

AWS Shares New Business Momentum Milestones and Announces Three New Capabilities for Amazon Connect

Tens of thousands of AWS customers now using Amazon Connect to support more than 10 million contact center interactions every day  ...

SEATTLE--(BUSINESS WIRE)--Today at Enterprise Connect, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), shared new business momentum milestones and announced three new capabilities for Amazon Connect that improve contact center agent productivity and provide superior service by making customer interactions more effective, personal, and natural. AWS shared for the first time that tens of thousands of AWS customers are supporting more than 10 million contact center interactions a day on Amazon Connect, an easy-to-use, highly scalable, and cost-effective omnichannel cloud contact center solution. The new features announced today are designed to give agents the right information at the right time to answer customer questions faster, provide fast and secure caller authentication, and make communicating with customers easier and more efficient. To get started with Amazon Connect, visit aws.amazon.com/connect/.  ... ' 

Intel Moves with Chip Factories

 More and faster chips needed ... 

Intel Starts Construction of Two Arizona Computer Chip Factories

Intel broke ground on two new computer chip factories in Arizona as part of a $20 billion project to help meet the high demand for semiconductors ... 

Intel  (INTC) - Get Intel Corporation (INTC) Report on Friday broke ground on two new computer chip factories in Arizona as part of a $20 billion project to help alleviate the severe shortage of semiconductors in the U.S.

The Santa Clara, Calif.-based semiconductor chip manufacturer's CEO Pat Gelsinger led the project's groundbreaking ceremony at the company's Ocotillo campus in Chandler, Ariz., marking the largest private investment in the state's history.

Kirk O'Neil  ... ' 

Hide My Email

In an email security application, Apple has included in its iPhone version of iOS 15 a feature called 'Hide My Email', which constructs unique, random email addresses. These private addresses can be used to send messages, and responses to those messages will be forwarded to your primary email address. In this way you are not revealing your primary email for anyone to misuse. You can apparently create any number of these 'fake' addresses, and keep or delete them as you desire. Nice basic idea; about to seriously try it ... '
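Apple's actual implementation is private, but the basic relay idea — a disposable random address that forwards to one real inbox and can be revoked — can be sketched as a toy class. All names, the domain, and the behavior here are invented for illustration:

```python
import secrets

class AliasInbox:
    """Toy sketch of a private email relay: each alias is a random
    address that forwards to one real inbox and can be revoked."""

    def __init__(self, real_address):
        self.real_address = real_address
        self.aliases = {}                          # alias -> active?

    def create_alias(self, domain="relay.example"):
        # A random local part makes the alias unguessable and unlinkable.
        alias = f"{secrets.token_hex(6)}@{domain}"
        self.aliases[alias] = True
        return alias

    def deliver(self, to_alias, message):
        """Forward mail sent to an active alias; drop it otherwise."""
        if self.aliases.get(to_alias):
            return (self.real_address, message)
        return None

    def revoke(self, alias):
        self.aliases[alias] = False

inbox = AliasInbox("me@example.com")
shop = inbox.create_alias()                        # give this one to a retailer
print(inbox.deliver(shop, "receipt"))              # → ('me@example.com', 'receipt')
inbox.revoke(shop)
print(inbox.deliver(shop, "spam"))                 # → None (alias revoked)
```

The key property is that the retailer only ever sees the alias, so revoking it cuts off the sender without touching the real address.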

Intuit to Buy Mailchimp

Email marketing acquisition.

Intuit's $12B Mailchimp Purchase Breathes New Life Into Email Marketing  By Jack M. Germain

Intuit on Monday announced an agreement to acquire Mailchimp, a global customer engagement and marketing platform for small and mid-market businesses, for $12 billion in cash and stock advances. The purchase could be the linchpin that thrusts the mostly financial software company into solving more fertile mid-market business challenges for its customers. .... ' 

Daniel Kahneman vs. Deep Learning

From LinkedIn, by Ajit Jaokar

(You need Linkedin membership to see this introduction) 

Artificial Intelligence #18: Of Daniel Kahneman and Deep Learning

Published on August 24, 2021

Course Director: Artificial Intelligence: Cloud and Edge Implementations - University of Oxford

 Welcome to Artificial Intelligence #18

 This week, I am still on work / holiday in Germany

In this newsletter, we cover an important topic based on the book Thinking, Fast and Slow.

The main thesis of Daniel Kahneman's landmark book Thinking, Fast and Slow is the dichotomy between two modes of thought: "System 1" (fast, instinctive, and emotional) and "System 2" (slower, more deliberative, and more logical).

The ever-insightful Kahneman also points to System 1 vs. System 2 as one of the challenges of deep learning, in an interview with Lex Fridman.

To summarise Daniel Kahneman's view on deep learning: what is happening in deep learning is more like a System 1 product. Deep learning matches patterns and anticipates what's going to happen, so it's highly predictive.

However, deep learning doesn't have the ability to reason, manage temporal causality, or represent meaning.

Deep learning has made rapid progress, but there are many problems where we need the ability to reason. These shortcomings are also pointed out by Gary Marcus and other critics of AI. These are important limitations of deep learning and AI today.

Solving these challenges could be a trigger for deep learning to evolve. ... '

Robot Swarms Chatting Less Means Better Decisions

The general statement is interesting as we give decisions over to groups of robots.

Less Chat Can Help Robots Make Better Decisions

By University of Sheffield (U.K.), July 30, 2021

Robot swarms could cooperate more effectively if communication among members of the swarm were curtailed, according to research by an international team led by engineers at the U.K.'s University of Sheffield.

The research team analyzed how a swarm moved around and came to internal agreement on the best area to concentrate in and explore.  Each robot evaluated the environment individually, made its own decision, and informed the rest of the swarm of its opinion; each unit then chose a random assessment that had been broadcast by another in the swarm to update its opinion on the best location, eventually reaching a consensus.

The team found the swarm's environmental adaptation accelerated significantly when robots communicated only to other robots within a 10-centimeter range, rather than broadcasting to the entire group.

From University of Sheffield (U.K.)
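A toy simulation of the scheme described above: each robot holds an opinion about the best site, and repeatedly adopts the opinion of a randomly chosen robot within communication range. All parameters here (robot count, arena size, assessment noise) are invented for illustration, not taken from the paper.

```python
import random

def simulate_swarm(n_robots=50, arena=1.0, comm_range=0.10, steps=200, seed=0):
    """Toy model: robots adopt a random broadcast opinion from within range."""
    rng = random.Random(seed)
    # Random positions; noisy individual assessments (site 1 is truly better,
    # and judged better 70% of the time in this invented setup).
    pos = [(rng.random() * arena, rng.random() * arena) for _ in range(n_robots)]
    opinions = [1 if rng.random() < 0.7 else 0 for _ in range(n_robots)]

    def neighbours(i):
        xi, yi = pos[i]
        return [j for j in range(n_robots)
                if j != i and (pos[j][0] - xi) ** 2 + (pos[j][1] - yi) ** 2
                <= comm_range ** 2]

    for _ in range(steps):
        i = rng.randrange(n_robots)
        nbrs = neighbours(i)
        if nbrs:  # adopt a randomly chosen opinion broadcast within range
            opinions[i] = opinions[rng.choice(nbrs)]
    return sum(opinions) / n_robots  # fraction agreeing on site 1

print(simulate_swarm(comm_range=0.10))  # 10 cm communication range
print(simulate_swarm(comm_range=2.0))   # effectively global broadcast
```

Varying `comm_range` is the experiment of interest: the finding is that restricting it changes how quickly the swarm adapts to a changing environment.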

Sunday, September 26, 2021

Podcast: Yann LeCun Talks Research Beginnings and Recent Developments

 Well worth listening to in order to understand the state of deep learning AI.  Not very technical.

Yann LeCun Talks Research Beginnings and Recent Developments

By The Gradient Podcast, September 23, 2021  in CACM

Turing Award-winner Yann LeCun is the vice president and chief AI scientist at Facebook and the Silver Professor at New York University.

Yann LeCun famously pioneered the use of convolutional neural nets for image processing in the 1980s and 1990s and is generally regarded as one of the people whose work was pivotal to the deep-learning revolution in AI. He received the 2018 ACM Turing Award (along with Geoffrey Hinton and Yoshua Bengio) for "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing."

In an interview, LeCun talks about his early days in AI research and recent developments in self-supervised learning for computer vision.

From The Gradient Podcast:   Listen to Podcast  


Brought to my attention:


Welcome to the Lidar News blog page. We currently host “In the Scan“ – one of the world’s leading 3D laser scanning and lidar blogs with over 3000 posts over 9 years.

We are interested in your comments, and if you have a post or idea for a new blog please let us know – we might be interested in featuring it ... '

McKinsey Energy Strategy

Seen many, many such strategies, and it depends highly on context.  Integrating adaptation is best.

Few companies believe that their current business models will remain viable if they don’t digitize. But making the shift to digital requires getting software into the core of your business model--or launching entirely new software businesses.  ... ' 

Sent from McKinsey Insights

Amazon Alexa Developers

Amazon Alexa Smart home developer highlights in September.  

Join the Developer Community.

Ask questions during Developer Office Hours, hear from members of the voice community on Alexa & Friends, and join the conversation in the Alexa Community Slack.  ... 

Shades of Computational Intelligence

Below is the intro; I like the broad take. But while we have gotten more used to the AI thing, it remains artificial.

Tired of AI? Let’s talk about CI.

Bryce Murray, PhD  in TowardsDataScience

Artificial Intelligence (AI) is everywhere. It has slowly crept away from its original definition and has become a buzzword for most automated algorithms. In this post, I don’t argue what AI is or isn’t — that’s a highly subjective argument at this point. However, I’d like to highlight Computational Intelligence — a well-defined topic.


What is Artificial Intelligence? Who knows. It’s an ever-moving target to define what is or isn’t AI. So, I’d like to dive into a science that’s a little more concrete — Computational Intelligence (CI). CI is a three-branched set of theories along with their design and applications. They are more mathematically rigorous and can separate you from the pack by adding to your Data Science toolbox. You may be familiar with these branches: Neural Networks, Evolutionary Computation, and Fuzzy Systems. Diving into CI, we can talk about sophisticated algorithms that solve more complex problems.

A large community exists within CI. Specifically, within the IEEE, there is a large CI community— with a yearly conference for each branch. I’ve published/volunteered at the FUZZ-IEEE conference over the last few years, and it’s always an excellent opportunity to learn about emerging mathematics and algorithms. Each community drives innovation in the CI space, which trickles from academia into industry. Many CI methods began in academia and evolved into real-world applications.

One of the most common questions I’ve received when talking about CI is, “what problems does each branch solve?” While I can appreciate this question, the branches are not segmented by which problems they solve.

The inspiration of the theories segments the branches. So, it’s impossible to segment into their applications. “But Bryce, what is a CI theory?” In a nutshell, each theory begins as a mathematical representation then implemented into an algorithm (something a computer can do). In their own right, each of the branches deserves many articles. In this post, I give a high-level overview and example of each branch working together to solve a problem. As you read this, remember, it’s impossible to do more than scratch the surface with the methods contained in each branch. I’ll be writing more in-depth posts about specific instances of each of these branches, but I want to describe each of these at a high level so you can get a taste of what’s possible.   ... 

Saturday, September 25, 2021

Greece, USC, Wharton and AgentRisk Develop COVID Risk Analysis

 Interesting approach and claims.

How Greece Let in Tourists, Kept Out COVID-19,  By University of Southern California,  September 24, 2021

The Eva algorithm caught nearly twice as many asymptomatic infected travelers as would have been caught if Greece had relied on only travel restrictions and randomized COVID testing.

Researchers at the University of Southern California (USC) Marshall School of Business, the University of Pennsylvania’s Wharton School of Business, wealth management advisory firm AgentRisk, and Greece's universities of Athens and Thessaly collaborated on the development of an algorithm that can identify asymptomatic COVID-19 infections in travelers.

The “Eva” algorithm utilizes real-time data to identify high-risk visitors for testing.  The researchers found the algorithm was able to identify nearly twice as many asymptomatic infected travelers to Greece as if the country had depended only on travel restrictions and randomized testing.  Eva weeded through data provided by tourists to develop profiles of those likely to be infected and asymptomatic.

From University of Southern California
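A hedged sketch of the core idea, spending a limited testing budget across traveler profiles where estimated risk (with uncertainty) is highest, here via Thompson sampling over Beta posteriors. The profile names, priors, and counts are all invented for illustration; the published Eva system is considerably more elaborate.

```python
import random

def allocate_tests(traveler_counts, prior, budget, seed=0):
    """Illustrative only: sample each profile's unknown prevalence from its
    Beta posterior and assign the next test to the riskiest sampled profile."""
    rng = random.Random(seed)
    post = {p: tuple(ab) for p, ab in prior.items()}  # Beta(alpha, beta) per profile
    allocation = {p: 0 for p in prior}
    for _ in range(budget):
        draws = {p: rng.betavariate(a, b) for p, (a, b) in post.items()}
        eligible = {p: v for p, v in draws.items()
                    if allocation[p] < traveler_counts[p]}
        if not eligible:
            break
        pick = max(eligible, key=eligible.get)
        allocation[pick] += 1
        # In a real system, the test result would update post[pick] here.
    return allocation

alloc = allocate_tests(
    traveler_counts={"A": 500, "B": 500, "C": 500},   # hypothetical profiles
    prior={"A": (2, 98), "B": (5, 95), "C": (1, 99)},  # invented priors
    budget=100)
print(alloc)
```

The exploration/exploitation balance is the point: profiles with uncertain prevalence still get occasional tests, which keeps the risk estimates current.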

Zscaler and Siemens Join for Zero Trust

Interesting example of zero-trust security for operational and IoT systems.

Zscaler and Siemens join to bring zero-trust security to operational technology systems

 Analysis by Zeus Kerravala

Zscaler Inc. and Siemens AG announced an interesting partnership this week wherein the two vendors are bringing zero-trust security to operational technology systems.

OT systems are most commonly found in industrial networks but are seeing increased adoption in other industries. Historically, OT systems ran on their own proprietary networks that were often isolated from the company’s data networks. Industry leaders have been predicting that information technology and OT systems would eventually come together, but that has been slow to materialize in industrial settings.

Some OT systems have been integrated with IT networks, such as building facilities like alarm systems, LED lighting and heating and air conditioning systems as part of smart building initiatives, but that has been more the exception than the norm in industrial settings.

The COVID-19 pandemic forced many organizations down the IT-OT path as workers required access to the OT systems from home and the most cost-effective way to do that was to enable VPN access through the data network. That enables workers to remotely manage and control systems and diagnose problems.

Although VPNs were successful in connecting workers to industrial systems quickly, they are not ideal because they create a back door into the industrial “internet of things” environments. That greatly expands the organization’s attack surface and exposes the business to large-scale network attacks.

Some organizations have turned to firewall-based network segmentation, and that can work, but it is very complicated to set up and is even more difficult to keep updated in dynamic environments. That’s because every time a device moves, the segmentation policies must be updated. Coarse-grained segmentation is widely used, but businesses have struggled with fine-grained segmentation, which is needed in IoT environments to minimize the impact of a breach.  ... ' 

Alternative use of SpaceX Sats

 An alternative to GPS for determining location.

SpaceX Satellite Signals Used Like GPS to Pinpoint Location on Earth

By Ohio State News,  September 24, 2021

Researchers at The Ohio State University and the University of California, Irvine have come up with a method for locating a position on Earth using signals broadcast by SpaceX's Starlink Internet service satellites. Signals from six Starlink satellites were used to identify a location on Earth within 8 meters, suggesting the method could serve as an alternative to GPS.

Ohio State's Zak Kassas said, "We eavesdropped on the signal, and then we designed sophisticated algorithms to pinpoint our location, and we showed that it works with great accuracy. And even though Starlink wasn't designed for navigation purposes, we showed that it was possible to learn parts of the system well enough to use it for navigation."  ... ' 

From Ohio State News
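For intuition on the positioning step, here is a generic least-squares multilateration sketch: given satellite positions and measured distances, refine a position estimate by Gauss-Newton. This is a simplifying assumption on my part; the paper's actual method works from Doppler shifts of eavesdropped Starlink signals, not clean range measurements.

```python
import numpy as np

def locate(sat_positions, ranges, guess, iters=20):
    """Refine a 3D position from satellite positions and range measurements."""
    x = np.asarray(guess, dtype=float)
    sats = np.asarray(sat_positions, dtype=float)
    for _ in range(iters):
        diffs = x - sats                       # (n_sats, 3)
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        residual = dists - ranges
        J = diffs / dists[:, None]             # Jacobian of range w.r.t. x
        x -= np.linalg.lstsq(J, residual, rcond=None)[0]  # Gauss-Newton step
    return x

# Synthetic check: six satellites near 550 km altitude, receiver near origin.
rng = np.random.default_rng(1)
sats = rng.normal(scale=1000e3, size=(6, 3)) + np.array([0, 0, 550e3])
truth = np.array([10.0, -20.0, 5.0])
meas = np.linalg.norm(sats - truth, axis=1)
print(locate(sats, meas, guess=[0, 0, 0]))
```

Six satellites, as in the experiment, give a comfortably overdetermined system for three position unknowns.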

Dial the Trust Down to Zero

The way to make security work

Dialing the Trust Level Down to Zero

By R. Colin Johnson, Commissioned by CACM Staff, September 23, 2021

Twenty-first century cybersecurity has been steadily moving away from the "perimeter" mentality—authenticating users with passwords, then giving them free access to a computer system's resources at their security level. Stolen passwords, especially those with high levels of access, have resulted in catastrophic releases of vast swaths of personal information (like credit card numbers), government secrets (witness WikiLeaks' releases of classified information), and related crimes (including ransomware).

Now the trust bestowed on authenticated users is being rescinded. The perimeter defense architecture is being superseded by the Zero Trust Architecture (ZTA), which authenticates each user action before it is executed. The U.S. government mandated ZTA and other measures in the May 12, 2021 Executive Order on Improving the Nation's Cybersecurity, which reads, in part: "The Federal Government must adopt security best practices; advance toward Zero Trust Architecture; accelerate movement to secure cloud services, including Software as a Service (SaaS)…and invest in both technology and personnel to match these modernization goals."

The Executive Order also charged the National Institute of Standards and Technology (NIST) with detailing these best practices in a Zero Trust Architecture report.

Said Steve Turner, an analyst at Forrester Research, "Public policy has finally acknowledged that the current model of cybersecurity is broken and outdated, mandating that the model of Zero Trust Architecture become the default method for implementing cybersecurity. With the relentless destructive attacks on computer systems, such as ransomware, there's been a collective realization that Zero Trust should be the de facto standard to secure organizations."

At the same time, the computer hardware itself must be adapted to the ZTA, starting with end-to-end encryption of all data before, after, and ideally even while it is inside the processor. Ubiquitous encryption is just the start. Today, any component—from wireless routers to individual server chips—can offer unauthorized access to intruders. Firmware—from unauthorized swapping of solid state disks (SSDs) in the datacenter, to thumb-drives plugged into user-access devices—are especially vulnerable. Even hardware components without firmware can become dispensers of malware via, for instance, hidden hardware Trojan horses that are impossible to detect by visually inspecting chips. As a result, hardware Roots-of-Trust with certifiable validation followed by chain of custody verification also are being incorporated into the ZTA—starting from the hardware for an initial computer installation, and continuing unabated through firmware and hardware updates, until its eventual retirement.  ... ' 

Friday, September 24, 2021

Low Code Tipping Point?

 Is this reasonable? Well, only maybe if it allowed non-coding decision makers to code themselves, and informed them well enough that they understood the decisions they were embedding in the code.

The low-code ‘tipping point’ is here    in Venturebeat

Half of business technologists now produce capabilities for users beyond their own department or enterprise. That’s the top finding in a new report from Gartner, which cites “a dramatic growth” in digitalization opportunities and lower barriers to entry, including low-code tools and AI-assisted development, as the core factors enabling this democratization beyond IT professionals. What’s more, Gartner reports that 77% of business technologists — defined as employees who report outside of IT departments and create technology or analytics capabilities — routinely use a combination of automation, integration, application development, or data science and AI tools in their daily work.

“This trend has been unfolding for many years, but we’re now seeing a tipping point in which technology management has become a business competency,” Raf Gelders, research vice president at Gartner, told VentureBeat. “Whether all employees will soon be technical employees remains to be seen. Do your best sales reps need to build new digital capabilities? Probably not. Do you want business technologists in sales operations? Probably yes.”

The rise of low-code 

Low-code development tools — such as code-generators and drag-and-drop editors — allow non-technical users to perform capabilities previously only possible with coding knowledge. Ninety-two percent of IT leaders say they’re comfortable with business users leveraging low-code tools, with many viewing the democratization as helpful at a time when they’re busier than ever. With the rise of digital transformation, which has only been accelerated by the pandemic, 88% of IT leaders say workloads have increased in the past 12 months. Many report an increase in demand for new applications and say they’re concerned about the workloads and how this might stifle their ability  ... ' 

Computer Chip Supply Chain Threatened

Supply chain quality issues

Counterfeit, Substandard Chips are Penetrating the Supply Chain, Industry Insiders Warn  By Computing (U.K.), September 20, 2021

Global markets are already seeing an increase in prices for component and electronics as a result of the ongoing chip production crisis, but there is another negative effect that semiconductor shortage could have on supply chains: a flood of fake, substandard chips.

As Nikkei Asia reports, Japanese electronics manufacturer Jenesis is one of those that have suffered the most as a result of the fake chip phenomena in recent months.

The firm was unable to procure microcomputers from its normal sources, and its division in southern China placed an order through a site run by Chinese e-commerce giant Alibaba. Unfortunately, when the chips arrived, they were faulty.

An expert who inspected the chips at Jenesis' request found the specifications of the chips sent from China differed completely from what Jenesis had ordered. Interestingly, the chip manufacturer's name on the packages seemed to be genuine.

Jenesis, which had already made payments for the order, was unable to contact the supplier later.

From Computing (U.K.)

Defying Cost Volatility

 In an era where pricing is important. 

Defying cost volatility: A strategic pricing response

Input cost increases provide an opportunity to restructure and improve pricing while also institutionalizing best practices. Yet margins will suffer if they are not done carefully.

Key takeaways

- The ongoing input cost increases and volatility, while representing a difficult challenge, provide an opportunity to improve pricing through adoption of best practices.

- Organizations that adopt a strategic approach to pricing actions can significantly minimize margin leakage during a pricing action.

- Our four-step approach can put your organization on a path to pricing excellence, allowing you to recover cost increases and minimize negative impacts to financial performance in a responsible, transparent, and customer-centric manner.  ... '

Expanding the Zoom of it

Been intrigued by what innovative things could be done with methods like Zoom to make them more valuable.  Here are some new features from Zoom; interesting, but not very remarkable.

Zoom will soon tell your boss if you’re late for a meeting    By Joel Khalili in TechRadar

New attendee status feature shows who isn't yet present

Zoom has unveiled a series of updates for its video conferencing and collaboration platforms, one of which may prove unpopular with the scatterbrains of this world.

Courtesy of integrations with Google and Outlook calendars, Zoom will soon begin to display a list of meeting invitees that have yet to join the call.

“Their invite response (accepted, declined, maybe) is listed with their name and the host can easily invite them to the meeting from the participants panel,” the company explained.

Although useful in a number of contexts (such as taking the register for a virtual class, for example), gone will be the days of slinking into a team meeting a few minutes late, with the host none the wiser. ... ' 

Why Investing in Technology Is No Longer a Choice

 Insightful  thoughts by Bain & Company, with good graphics.

Why Investing in Technology Is No Longer a Choice

In a rapidly changing business climate, our survey finds that only 14% of businesses are making the most of technology to stay ahead of the game. Here’s how the rest can catch up.

By Vishy Padmanabhan and Lauren Brom  .... ' 

Thursday, September 23, 2021

Cybersecurity Threat Vectors For AI Autonomous Cars

Cyber Hacking Trends for Autonomous Vehicles

Cybersecurity Threat Vectors For AI Autonomous Cars   September 16, 2021 in AiTrends

Over-the-air software updating for autonomous cars is one of many potential paths for cyber hackers targeting an AI driving system. ... 

Dr. Lance B. Eliot is a renowned global expert on AI. He is Chief AI Scientist at Techbrium Inc. and currently an invited Stanford Fellow at Stanford University; previously he was a professor at USC, headed a pioneering AI research lab, and was a top exec at a major VC. He serves as a longstanding regular contributor for AI Trends. His AI writings have amassed over 3.8 million views, his podcasts exceed 150,000 downloads, and he is the author of numerous top-ranked books on AI.  ... 

Follow Lance on Twitter @LanceEliot ... 

IBM, Biz Ready Quantum

More on this effort from Fortune.  The effort by IBM seems to be a considerable part of its quantum strategy.  I would want a good timeline for matching my problem to the specs likely to be developed, like number of qubits and error tolerance.  Following.

IBM is getting business ready for a future with quantum computing

BY JEREMY KAHN September 22, 2021 11:39 AM EDT in Fortune

IBM has launched a new service to help companies deploy quantum computers and train staff to use the emerging technology.

Called the Quantum Accelerator, the program will involve IBM experts working with companies to figure out the most valuable near-term use cases for quantum computers in their business and then help them research and deliver proof-of-concept projects around those use cases. It will also involve the development of what Katie Pizzolato, IBM’s head of quantum strategy, said are bespoke curriculums to train a customer’s staff on how to use quantum computers and quantum algorithms.

The company said it would tailor its educational offering to the needs of different groups within each company. Some high-level executives might need a basic grounding in what quantum computers are and how quantum algorithms differ from conventional ones and where these might help with business needs. Meanwhile, software engineers might require far more technical training in how to program quantum computers or write their own quantum algorithms.   ..  " 

Upcoming EmTech MIT

 Upcoming EmTech MIT     Tuesday, September 28, 2021 - Thursday, September 30, 2021 

Technology, trends, and the new rules of business: Immerse yourself in bold thinking and innovation. Discover which breakthrough technologies and global trends have staying power and get the trustworthy, actionable guidance you need for your strategic and visionary planning.

For more than 20 years, EmTech has brought together global innovators, change makers, thought leaders, technologists, and industry veterans to help leaders take advantage of what’s probable, plausible, and possible with the most significant technology trends.   ... " 

How Fast do Algorithms Improve?

 Could be a way to track where effort should be expended, matching the quality and value of solutions to the effort.

How Quickly Do Algorithms Improve?

MIT News, Rachel Gordon, September 20, 2021

Massachusetts Institute of Technology (MIT) researchers reviewed 57 textbooks and over 1,110 research papers to chart algorithms' historical evolution. They covered 113 algorithm families, sets of algorithms solving the same problem highlighted by textbooks as paramount; the team traced each family's history, tracking each time a new algorithm was proposed, and noting the more-efficient program. Each family averaged eight algorithms, of which a few upgraded efficiency. Forty-three percent of families dealing with large computing problems saw year-on-year gains equal to or larger than Moore's Law-dictated improvements, while in 14% of problems, algorithmic performance improvements overtook hardware-related upgrades. MIT's Neil Thompson said, “Through our analysis, we were able to say how many more tasks could be done using the same amount of computing power after an algorithm improved."

Wednesday, September 22, 2021

Conversational AI and Healthcare Chatbots

Used Eliza.  Still need to see better studies of chatbots in complete and varying contexts, with comparison to having a professional available, and to a professional in the room who can watch reactions to what they say.

Conversational AI Making Headway in Powerful Healthcare Chatbots   By John P. Desmond, AI Trends Editor  

Conversational AI has come a long way since ELIZA, which was intended by its creator in 1964 to be a parody of the responses of a psychotherapist to his patient, as a demonstration that communication between a human and a machine could only be superficial.  

What surprised Joseph Weizenbaum of the MIT AI lab was that many people, including his secretary, assigned human-like feelings to the computer program. It is acknowledged as the original chatbot.   

Pranay Jain, cofounder and CEO, Enterprise Bot

In the 50 years since then, chatbots have evolved first to engage users in dialogues for customer service in many fields, and now to dialogues on personal medication information. “With the advent of cognitive intelligence, chatbots were given a facelift. They were able to analyze context, process intent, and formulate adequate responses,” stated Pranay Jain, cofounder and CEO of Enterprise Bot, in a contribution to AI Trends. The Switzerland-based company was founded five years ago.   

Still, chatbots incorporating AI today are challenged to successfully process technical commands, to understand human intent, to exhibit conversational intelligence and understand different languages, accents and dialects.   

Today, “The ability to understand the subtle nuances of human tonalities, speech patterns, and mimic human empathy in the form of texts and voices is what makes a chatbot truly successful across industries and verticals,” Jain stated.   

Chatbots in healthcare had been perceived as high risk, with healthcare professionals skeptical that patients would provide confidential medical information to a virtual assistant. “Today, chatbots are being designed and deployed to perform preliminary pathology and aid healthcare professionals,” Jain stated, noting that chatbots now gather initial personal information and then ask about symptoms.   .... ' 

Reservoir Computing from Ohio State

OK, mostly new to me, sounds useful. Following up. 

Scientists Develop the Next Generation of Reservoir Computing

By Ohio State University, September 22, 2021

Researchers have found a way to make reservoir computing, a machine learning approach to processing that requires small training data sets and minimal computing resources, work between 33 and a million times faster than a traditional reservoir computer.

In one test of next-generation reservoir computing, researchers solved a complex computing problem in less than a second on a desktop computer, a task that would otherwise require a supercomputer, says Daniel Gauthier, professor of physics at Ohio State University. "We can perform very complex information processing tasks in a fraction of the time using much less computer resources compared to what reservoir computing can currently do," he says.

The system is described in "Next Generation Reservoir Computing," published in the journal Nature Communications. 

Using an artificial neural network, scientists fed data on a dynamical network into a "reservoir" of randomly connected artificial neurons in a network. The network produces useful output that the scientists can interpret and feed back into the network, building a more and more accurate forecast of how the system will evolve in the future.

From Ohio State University  
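The "next generation" approach replaces the random recurrent reservoir with features built directly from time-delayed observations and their products, plus a linear readout fit by ridge regression. This is a minimal sketch of that idea, with the delay depth and feature set simplified; the logistic map is chosen as training data because its dynamics are exactly quadratic.

```python
import numpy as np

def ngrc_fit_predict(series, delay=2, ridge=1e-6, horizon=5):
    """Fit a ridge-regularized linear readout over constant, linear, and
    quadratic features of the last `delay` observations, then free-run it."""
    def features(window):
        w = np.asarray(window)
        quad = np.outer(w, w)[np.triu_indices(len(w))]  # unique pairwise products
        return np.concatenate(([1.0], w, quad))

    X = np.array([features(series[i:i + delay]) for i in range(len(series) - delay)])
    y = np.array(series[delay:])
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)

    window = list(series[-delay:])  # free-run forecast from the end of training
    preds = []
    for _ in range(horizon):
        nxt = features(window) @ W
        preds.append(nxt)
        window = window[1:] + [nxt]
    return preds

# Training data: the chaotic logistic map x -> r*x*(1-x).
r, x = 3.8, 0.5
series = []
for _ in range(300):
    series.append(x)
    x = r * x * (1 - x)
preds = ngrc_fit_predict(series)
print(preds)
```

The training here is a single linear solve, which is the source of the large speedups reported over classical reservoir computing.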

Designing a Quantum Computer

 In ACM TechNews.  Implications?

Quantum Computer Helps Design Better Quantum Computer  By New Scientist, September 22, 2021

Researchers at the University of Science and Technology of China in Shanghai used a quantum computer to design a quantum bit (qubit) that could power better quantum systems.

The resulting plasonium qubit is smaller, less noisy, and can retain its state longer than the team's current qubit model.  The researchers used a variational quantum eigensolver algorithm to simulate particle behavior in quantum circuits and smooth out negative properties while developing advantageous features, without needing to build many physical prototypes.

Each plasonium qubit is only 240 micrometers long, or 40% the size of a transmon qubit, which will allow processor miniaturization.

The plasonium qubit's strong anharmonicity also means the additional states the qubits can reach are varied and less likely to be found accidentally, reducing the potential for computational errors. ... 

From New Scientist  
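For a feel of the variational quantum eigensolver mentioned above: a parameterized circuit prepares a trial state, and a classical optimizer tunes the parameters to minimize the measured energy. The sketch below simulates this exactly for a single qubit with an invented toy Hamiltonian; the actual qubit-design work is on a different scale entirely.

```python
import numpy as np

# Pauli matrices and a toy single-qubit Hamiltonian (invented for illustration).
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
H = Z + 0.5 * X

def energy(theta):
    """Expected energy of the trial state Ry(theta)|0> = [cos(t/2), sin(t/2)]."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return state @ H @ state

# Crude classical optimizer: scan the single circuit parameter.
thetas = np.linspace(0, 2 * np.pi, 2001)
best = min(thetas, key=energy)
exact = np.linalg.eigvalsh(H)[0]  # true ground-state energy
print(energy(best), exact)        # variational minimum approaches the exact value
```

The appeal for hardware design is the same loop: simulate candidate circuit properties on a quantum device, score them, and let a classical optimizer search the design space without building physical prototypes.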

Beware Food and Agriculture

 In ZDNet: 

FBI warns of ransomware attacks targeting food and agriculture sector  ... 

In addition to the May attack on JBS, the FBI listed dozens of ransomware incidents that have taken place over the last six months targeting the food sector.

By Jonathan Greig    ... 

Podcast: Future of the Office

 How well did working at home really accomplish things? 

What’s the Future of the Office?

Wharton’s Peter Cappelli talks about his new book, ‘Future of the Office: Work from Home, Remote Work, and the Hard Choices We All Face.’

Wharton management professor Peter Cappelli is the author of the new book, The Future of the Office: Work from Home, Remote Work, and the Hard Choices We All Face. Cappelli, who has for decades studied the forces shaping and changing the workplace, says the choices employees and employers must make about the future of work could be among the most important they face.

Brett LoGiurato, senior editor at Wharton School Press, sat down with Cappelli to talk about his new book. They discussed work during the COVID-19 pandemic, the complications with return-to-office hybrid models, and how employees and employers can make the best choices about what to do.

An edited transcript of the conversation follows. 

Brett LoGiurato: Could you share your overall message about what you believe is at stake for the future of the office?

Peter Cappelli: I don’t think it’s going to surprise many people to get the sense of how big an issue this is, about whether we go back to the office or not. If you think about the value of commercial real estate, what happens if we don’t need offices and all the supporting services and the little businesses and restaurants that support offices? And commuting? All those sorts of things matter. In addition to whether this might be better for employees, one of the things we know is that not everybody agrees that they want to work from home. There is the issue of whether it’s actually going to work for the employers, and that’s not completely clear.

Part of the message of the book is that we don’t know how well things worked during the pandemic’s work-from-home phase. A lot of organizations said that things were fine. A lot of employees said they got their own work done. But closer examination is suggesting that maybe it wasn’t quite so great and things didn’t work quite as well, and more to the point, there were a lot of things that were unique about the pandemic that are not going to carry over afterward.

For example, most people felt a special effort to pull together and try to get things done [because] we were keeping businesses together and keeping our jobs together. Is that going to continue afterward? Post-pandemic is unlikely to look much like what happened during the pandemic. We know a fair bit about that situation because we’ve studied it. We’ve studied telework for quite a while. That is regular businesses operating more or less as they did, with some people working at home and some people working in the office. The results there were not as nice as you might expect. People working remotely don’t do as well, and their careers don’t do as well, either.    ... ' 

Tuesday, September 21, 2021

Unlimited Digital Sensing

Not following this yet, but intriguing suggestion ...

Unlimited Digital Sensing Unleashed for Imaging, Audio, Driverless Cars

Imperial College London (U.K.), Caroline Brogan, September 17, 2021

A new method devised by researchers at the U.K.'s Imperial College London (ICL) and Germany's Technical University of Munich could support unlimited digital sensing. The team used modulo-sampling analog-to-digital converters (ADCs) to determine whether sensors could process more information via a different type of voltage; the researchers built a prototype with an algorithm that causes the ADC to switch to modulo voltage once a stimulus limit is reached, and folds these signals into smaller ones. Potential applications include enhanced imaging, audio perception, and improved driverless-car cameras. Said ICL's Ayush Bhandari, "By combining new algorithms and new hardware, we have fixed a common problem—one that could mean our digital sensors perceive what humans can, and beyond."
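The folding idea can be illustrated numerically. Below is a minimal sketch, under the simplifying assumption that the signal is sampled fast enough that consecutive samples change by less than the ADC threshold; the real recovery algorithms are more sophisticated.

```python
import numpy as np

def fold(signal, lam):
    """Model a modulo (self-reset) ADC: fold samples into [-lam, lam)."""
    return (signal + lam) % (2 * lam) - lam

def unfold(folded, lam):
    """Recover the signal from folded samples. Assumes consecutive samples
    differ by less than lam, so each large jump in the folded data is a wrap
    of 2*lam, which we detect and undo by cumulative summation."""
    diffs = np.diff(folded)
    wraps = fold(diffs, lam) - diffs            # multiples of 2*lam at each wrap
    return folded + np.concatenate(([0.0], np.cumsum(wraps)))

t = np.linspace(0, 1, 400)
sig = 5.0 * np.sin(2 * np.pi * 3 * t)  # amplitude far above the ADC range
lam = 1.0
rec = unfold(fold(sig, lam), lam)
print(np.max(np.abs(rec - sig)))       # tiny: the full-range signal is recovered
```

The point of the hardware is exactly this: a converter with range [-1, 1] can still capture a signal of amplitude 5, because the folds are invertible in software.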

3D Printing Gas Sensors

New sensor capability: small, cheap monitoring for the smart home and elsewhere.

Scientists 3D-Print Microscopic Gas Sensors

Trinity College Dublin (Ireland), September 15, 2021

Scientists at Ireland’s Trinity College Dublin, the AMBER (Advanced Materials and BioEngineering Research) Science Foundation Ireland (SFI) Research Center for Advanced Materials and BioEngineering Research, and GE Research in New York have three-dimensionally (3D)-printed microscopic gas sensors that can be monitored in real time. Trinity's Colm Delaney said the team produced the color-changing sensor materials via direct laser-writing, by focusing a laser to produce tiny 3D structures from laboratory-developed soft polymers. Trinity's Larisa Florea said, "The ability to print such an optically responsive material has profound potential for their incorporation into connected, low-cost sensing devices for homes, or into wearable devices for monitoring analytes."

Spot Robot Dog Gets an Upgrade

A considerable look from VentureBeat at the Boston Dynamics robot Spot:

Boston Dynamics has just released the latest update for its famous quadruped robot Spot, giving it better capability to make inspections and collect data without the need for human intervention.

Called Spot Release 3.0, the new update adds “flexible autonomy and repeatable data capture, making Spot the data collection solution you need to make inspection rounds safer and more efficient.”   ... 

Though Spot has been strongly criticized for anything that looks like it might support police use. Not that we want to be any safer. And Boston Dynamics suggests it will not be configured for home use any time soon. So it remains, as they say, an expensive niche device for industrial inspection, tracking, and protection.

Now we are Cooking with Lasers

Of interest to this long-time amateur chef.

Now We're Cooking With Lasers   By Columbia Engineering, September 20, 2021

Researchers at Columbia Engineering have digitized food creation and cooking processes, using 3D printing technology to tailor food shape and texture and lasers of various wavelengths to cook it.

"Precision Cooking for Printed Foods via Multiwavelength Lasers,"    Science of Food, explores various modalities of cooking. The researchers printed chicken samples as a test bed and assessed a range of parameters. They found that blue lasers (445 nm) are best for penetrative cooking, and infrared lasers (980 nm and 10.6 μm) best for browning. Laser-cooked meat shrinks 50% less and retains double the moisture as meat cooked in conventional ovens.

"Our two blind taste-testers preferred laser-cooked meat to the conventionally cooked samples, which shows promise for this burgeoning technology," says Jonathan Blutinger, a Ph.D. student in the Creative Machines Lab at Columbia University.

 "What we still don't have is what we call 'Food CAD,' sort of the Photoshop of food," says Professor Hod Lipson, lead author of the study. "We need a high level software that enables people who are not programmers or software developers to design the foods they want. And then we need a place where people can share digital recipes, like we share music."

From Columbia Engineering

Monday, September 20, 2021

City Wide Quantum Data Network

 China and Quantum Data Networks

City-Wide Quantum Data Network in China Is Largest Ever Built  By New Scientist, September 20, 2021

China has been running a city-wide quantum communications network in Hefei, the largest quantum network demonstration built to date, for almost three years.

Designed by scientists at the University of Science and Technology of China, the network's commercial fiber-optic hardware connects 40 computers at government buildings, banks, and universities clustered into three subnetworks, each separated by about 15 kilometers (9 miles). Smaller subnetworks and switches support routes between different users as needed, while three trusted relays streamline the architecture.

The researchers claim the network can be linked to other similar frameworks through long-distance quantum connections and satellite relays, clearing a path toward a global quantum network.

In New Scientist

Next Generation Brain-Computer Systems

This has been kicked around for a long time; how ready is it?

Researchers Take Step Toward Next-Generation Brain-Computer Interface System

Brown University, August 12, 2021

A team of scientists from Brown University, Baylor University, the University of California, San Diego, and wireless technology company Qualcomm has moved a step toward a future brain-computer interface system that uses a network of ultra-small sensors to record and stimulate brain activity. These silicon "neurograins" independently capture and transmit neural signals wirelessly to a central hub for coordination and processing; the hub is a thumbprint-sized patch attached to the scalp, which also wirelessly powers the tiny chips. The team used 48 neurograins to record signals in a rodent characteristic of spontaneous brain activity, and to stimulate activity through tiny electrical pulses. Brown's Arto Nurmikko said, "Our hope is that we can ultimately develop a system that provides new scientific insights into the brain and new therapies that can help people affected by devastating injuries."

Removing Location Tracking

Thwarting cellphone tracking.

Is Your Mobile Provider Tracking Your Location? This Technology Could Stop It.

By USC Viterbi School of Engineering, August 16, 2021

A new system devised by researchers at the University of Southern California Viterbi School of Engineering (USC Viterbi) and Princeton University can thwart the tracking of cellphone users by network operators while maintaining seamless connectivity.

The Pretty Good Phone Privacy software architecture anonymizes personal identifiers sent to cell towers, effectively severing phone connectivity from authentication and billing without altering network hardware.

The system transmits an anonymous, cryptographically signed token in place of a personally identifiable signal to the tower, using a mobile virtual network operator like Cricket or Boost as a substitute or intermediary.

USC Viterbi's Barath Raghavan said, "Now the identity in a specific location is separated from the fact that there is a phone at that location."

The system also ensures that location-based services still function normally.
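Conceptually, the trick is that the tower authenticates a token, not a subscriber. The sketch below is a loose stand-in: the real Pretty Good Phone Privacy design uses blind signatures so even the issuer cannot link a token to a user, whereas this toy version uses an HMAC purely to show the split between connectivity and billing identity.

```python
import hashlib
import hmac
import secrets

# Toy stand-in: the real PGPP scheme uses blind signatures so the
# issuer cannot link tokens to subscribers; an HMAC is used here only
# to illustrate that the tower verifies a signature, not an identity.
BILLING_KEY = secrets.token_bytes(32)   # held by the billing backend

def issue_token():
    """Billing backend issues an anonymous, signed connectivity token."""
    nonce = secrets.token_bytes(16)     # random: carries no identity
    tag = hmac.new(BILLING_KEY, nonce, hashlib.sha256).digest()
    return nonce, tag

def tower_accepts(nonce, tag):
    """Cell tower checks the signature, learning only that *a* paying
    subscriber is present, never which one."""
    expected = hmac.new(BILLING_KEY, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

nonce, tag = issue_token()
print(tower_accepts(nonce, tag))       # True: connectivity granted
print(tower_accepts(b"x" * 16, tag))   # False: forged token rejected
```

Because the nonce is random and unlinkable to a phone number, the location fact ("a phone is here") is separated from the identity fact ("whose phone"), which is exactly the split Raghavan describes.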

From USC Viterbi School of Engineering

Sunday, September 19, 2021

Finding Unexpected Training Data?

Can we find data that will ultimately lead to novel results? I like the idea of fitting it to possible applications. In a primitive way, we once looked at means to learn from the huge amounts of historical data we had. We note the 'at scale' issue.

Unveiling Unexpected Training Data in Internet Video   By Tali Dekel, Noah Snavely

Communications of the ACM, August 2021, Vol. 64 No. 8, Pages 69-79  10.1145/3431283 

One of the most important components of training machine-learning models is data. The amount of training data, how clean it is, its diversity, how well it reflects the real world—all can have a dramatic effect on the performance of a trained model. Hence, collecting new datasets and finding reliable sources of supervision have become imperative for advancing the state of the art in many computer vision and graphics tasks, which have become highly dependent on machine learning. However, collecting such data at scale remains a fundamental challenge.

In this article, we focus on an intriguing source of training data—online videos. The Web hosts a large volume of video content, spanning an enormous space of real-world visual and auditory signals. The existence of such abundant data suggests the following question: How can we imbue machines with visual knowledge by directly observing the world through raw video? There are a number of challenges faced in exploring this question. ... '   (Much more detail at link) 

Pitfalls of Binary Decisions

Good thoughts, but simple is also good. 

Five ways to avoid the pitfalls of binary decisions

Before you decide, check how the question is framed to ensure you have all the information you need and have considered all your options.

by Eric J. McNulty

Deciding is easy: true or false? The first challenge in answering this question is that it’s impossible to know without more information. Which decisions, with what stakes, and on what timeline—these are just a few of the contextual factors most of us would want to consider before answering. The second challenge is that it’s probably not a true-or-false proposition.  ... '

Ajit Jaokar on Digital Twins

Nicely presented background on analytical methods and design based on physical models.

The significance of Digital Twins in design and simulation     By Ajit Jaokar

Open in LinkedIn:


I have been tracking Digital Twins closely over the last year – based on the recent work from Paul Clarke, who is a mentor to our AI and Edge course at the University of Oxford, and to me personally   ...   Ajit Jaokar

The idea of Digital Twins itself is not new – but we see that this technology will have significant impact over the next few years – especially as a technique to unify AI and IoT.

 Specifically, in my course I am interested in the role of Digital Twins in an engineering context for enhancing Model-based Design and simulation where AI will drive IoT but also extend to AR (Augmented Reality) and VR (Virtual Reality)

We are launching a new course on Digital Twins: Enhancing Model-based Design with AR, VR, and MR which focusses on the engineering aspects of digital twins. If you are interested in it, please contact me.

In this post, I outline some ideas for Digital Twins in design and simulation in an engineering context. This is a complex topic, so I will revisit it in subsequent posts.


The idea of "Digital Twins" originated with NASA. Digital Twins were then adopted into the manufacturing industry as a conceptual version of the PLM (Product Lifecycle Management). However, the core idea behind Digital twins remains the same, i.e., a virtual model that incorporates all the necessary information about a physical ecosystem to solve a particular problem.

Engineering systems have always used abstraction techniques to model complex problems. 

But the Digital Twin takes this idea further by allowing you to model a problem and simulate it. A variety of machine learning and deep learning techniques (collectively referred to as artificial intelligence AI) play a part in the simulation aspects of the digital twin. AI helps to simulate scenarios via the Digital twin but also to make autonomous decisions. Further, we could also use Augmented reality (AR), Virtual Reality (VR), and other strategies for modelling engineering problems.

Collectively, the techniques described above are referred to as 'Model-based design.' Model-based design helps engineers and scientists design and implement complex dynamic systems using a set of virtual (digital) modelling technologies. As a result, you can iterate your design through fast, repeatable tests. In addition, you can automate the end-to-end lifecycle of your project by connecting virtual replicas of physical components in a digital space. Once the system is modelled as a twin, various existing and new engineering problems can be modelled and simulated, such as predictive maintenance, anomaly detection, etc.
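A minimal sketch of the pattern just described: a virtual replica that mirrors sensor readings from a physical asset and runs a simple predictive-maintenance check against them. All names, readings, and thresholds here are invented for illustration.

```python
from dataclasses import dataclass, field
from statistics import mean, stdev

@dataclass
class PumpTwin:
    """Toy digital twin of a physical pump: it mirrors sensor state
    and simulates an anomaly check against the learned baseline."""
    history: list = field(default_factory=list)

    def sync(self, vibration_mm_s: float):
        """Ingest a real sensor reading into the virtual model."""
        self.history.append(vibration_mm_s)

    def is_anomalous(self, reading: float, k: float = 3.0) -> bool:
        """Flag readings more than k standard deviations from the
        twin's baseline: a toy predictive-maintenance rule."""
        if len(self.history) < 10:
            return False   # not enough history to judge
        mu, sigma = mean(self.history), stdev(self.history)
        return abs(reading - mu) > k * sigma

twin = PumpTwin()
for r in [2.0, 2.1, 1.9, 2.0, 2.2, 1.8, 2.1, 2.0, 1.9, 2.1]:
    twin.sync(r)   # stream of normal vibration readings (mm/s)

print(twin.is_anomalous(2.05))   # False: within the normal band
print(twin.is_anomalous(9.0))    # True: flag for maintenance
```

A production twin would of course carry a physics model, not just statistics, but the loop is the same: sync real-world state in, simulate and decide in the virtual space.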


We consider the following terminology:

Model-based design: A set of technologies and techniques that help engineers and scientists to design and implement complex, dynamic, end-to-end systems using a set of virtual (digital) modelling technologies. Collectively, these technologies can simulate and model physical objects and processes in multiple industries.

A digital twin is a digital representation that functions as a shadow/twin of a physical object or process. Digital twins are designed to model and simulate a process to understand it and predict its behaviour. Digital twin originates from engineering and is related to model-based systems engineering (MBSE) and surrogate modelling. The usage of digital twins is now more mainstream in software development, especially for IoT. Digital twins can be combined with AR and VR to model physical processes.  

Virtual Reality (VR) creates an immersive experience through VR devices like headsets and simulates a three-dimensional world. VR is used in instructional content and educational material for field workers, oil and gas, defence, aviation, etc.

Augmented Reality (AR) overlays digital information on a physical world. Typically, AR uses conventional devices like mobile phones. Pokemon GO is an example of AR usage.

Mixed Reality (MR) allows the manipulation of both physical and digital objects in an immersive world. Hololens is an example of mixed reality.  ... '

Saturday, September 18, 2021

Biases in AI Systems

Excellent piece, broadly useful beyond AI applications.  

May 12, 2021, Volume 19, issue 2


Biases in AI Systems     

A survey for practitioners

Ramya Srinivasan and Ajay Chander   in CACM

A child wearing sunglasses is labeled as a "failure, loser, nonstarter, unsuccessful person." This is just one of the many systemic biases exposed by ImageNet Roulette, an art project that applies labels to user-submitted photos by sourcing its identification system from the original ImageNet database.7 ImageNet, which has been one of the instrumental datasets for advancing AI, has deleted more than half a million images from its "person" category since this instance was reported in late 2019.23 Earlier in 2019, researchers showed how Facebook's ad-serving algorithm for deciding who is shown a given ad exhibits discrimination based on race, gender, and religion of users.1 There have been reports of commercial facial-recognition software (notably Amazon's Rekognition, among others) being biased against darker-skinned women.6,22

These examples provide a glimpse into a rapidly growing body of work that is exposing the bias associated with AI systems, but biased algorithmic systems are not a new phenomenon. As just one example, in 1988 the UK Commission for Racial Equality found a British medical school guilty of discrimination because the algorithm used to shortlist interview candidates was biased against women and applicants with non-European names.17

With the rapid adoption of AI across a variety of sectors, including in areas such as justice and health care, technologists and policy makers have raised concerns about the lack of accountability and bias associated with AI-based decisions. From AI researchers and software engineers to product leaders and consumers, a variety of stakeholders are involved in the AI pipeline. The necessary expertise around AI, datasets, and the policy and rights landscape that collectively helps uncover bias is not uniformly available among these stakeholders. As a consequence, bias in AI systems can compound inconspicuously.

Consider, for example, the critical role of ML (machine learning) developers in this pipeline. They are asked to: preprocess the data appropriately, choose the right models from several available ones, tune parameters, and adapt model architectures to suit the requirements of an application. Suppose an ML developer was entrusted with developing an AI model to predict which loans will default. Unaware of bias in the training data, an engineer may inadvertently train models using only the validation accuracy. Suppose the training data contained too many young people who defaulted. In this case, the model is likely to make a similar prediction about young people defaulting when applied to test data. There is thus a need to educate ML developers about the various kinds of biases that can creep into the AI pipeline.
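The loan-default example above can be made concrete with a toy simulation: when the training pool over-represents young defaulters, even a trivial model absorbs and reproduces the skew. All data here is synthetic, and the "model" is just per-group base rates, purely for illustration.

```python
import random

random.seed(0)

def make_applicant(age_group, p_default):
    """Generate one synthetic loan applicant."""
    return {"age": age_group, "default": random.random() < p_default}

# The collected training set happened to capture mostly young
# defaulters (a made-up 60% rate), regardless of the true population.
train = ([make_applicant("young", 0.60) for _ in range(500)] +
         [make_applicant("older", 0.10) for _ in range(500)])

def fit_base_rates(rows):
    """A naive 'model': predict each group's observed default rate."""
    rates = {}
    for g in ("young", "older"):
        grp = [r for r in rows if r["age"] == g]
        rates[g] = sum(r["default"] for r in grp) / len(grp)
    return rates

model = fit_base_rates(train)
# The skew in data collection becomes a skew in predictions:
print(model["young"] > 2 * model["older"])   # True
```

Validation accuracy on a test split drawn from the same skewed pool would look fine, which is exactly why checking data provenance, not just metrics, matters.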

Defining, detecting, measuring, and mitigating bias in AI systems is not an easy task and is an active area of research.4 A number of efforts are being undertaken across governments, nonprofits, and industries, including enforcing regulations to address issues related to bias. As work proceeds toward recognizing and addressing bias in a variety of societal institutions and pathways, there is a growing and persistent effort to ensure that computational systems are designed to address these concerns.

The broad goal of this article is to educate nondomain experts and practitioners such as ML developers about various types of biases that can occur across the different stages of the AI pipeline and suggest checklists for mitigating bias. There is a vast body of literature related to the design of fair algorithms.4 As this article is directed at aiding ML developers, the focus is not on the design of fair AI algorithms but rather on practical aspects that can be followed to limit and test for bias during problem formulation, data creation, data analysis, and evaluation. Specifically, the contributions can be summarized as follows:

• Taxonomy of biases in the AI pipeline. A structural organization of the various types of bias that can creep into the AI pipeline is provided, anchored in the various phases from data creation and problem formulation to data preparation and analysis.

• Guidelines for bridging the gap between research and practice. Analyses that elucidate the challenges associated with implementing research ideas in the real world are listed, as well as suggested practices to fill this gap. Guidelines that can aid ML developers in testing for various kinds of biases are provided.

The goal of this work is to enhance awareness and practical skills around bias, toward the judicious use and adoption of AI systems. ... '

The Hacking of McD's Ice Cream Machine

A little surreal, but a reminder that anything can be hacked. I had heard of the problem from McDonald's, but had been unaware of some of the details involved. Just the intro below. Quite a story.

They Hacked McDonald’s Ice Cream Machines—and Started a Cold War  in Wired

Secret codes. Legal threats. Betrayal. How one couple built a device to fix McDonald's notoriously broken soft-serve machines—and how the fast-food giant froze them out.

Of all the mysteries and injustices of the McDonald's ice cream machine, the one that Jeremy O'Sullivan insists you understand first is its secret passcode.

Press the cone icon on the screen of the Taylor C602 digital ice cream machine, he explains, then tap the buttons that show a snowflake and a milkshake to set the digits on the screen to 5, then 2, then 3, then 1. After that precise series of no fewer than 16 button presses, a menu magically unlocks. Only with this cheat code can you access the machine’s vital signs: everything from the viscosity setting for its milk and sugar ingredients to the temperature of the glycol flowing through its heating element to the meanings of its many sphinxlike error messages.

“No one at McDonald’s or Taylor will explain why there’s a secret, undisclosed menu," O’Sullivan wrote in one of the first, cryptic text messages I received from him earlier this year.  ... '

Machine Learning Developed in Space

We have used computers in space for a long time; now, a continuation of developing code there.

Raspberry Pi Heading into Space for Python Programming Challenge

ZDNet, Liam Tung, September 14, 2021

Upgraded Raspberry Pi computers will return to the International Space Station (ISS) for use in what the European Space Agency (ESA) calls the Mission Zero and Mission Space Lab challenges. Mission Zero invites coders to write a Python algorithm to take a humidity reading onboard the ISS that is shown to the astronauts with a personalized message. ESA said Mission Space Lab challenges teams of young people "to design and write a program for a scientific experiment that enhances our understanding of either life on Earth or life in space." The new Astro Pi units are Raspberry Pi 4 Model B featuring 8 GB of memory, a camera, a machine learning accelerator, sensors, gyroscope, accelerometer, magnetometer, and a light-emitting diode matrix for visual feedback. ESA said the accelerator will allow teams "to develop machine learning models enabling high-speed, real-time processing."
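For flavor, a minimal Mission Zero-style program might look like the sketch below. The Sense HAT's `get_humidity()` and `show_message()` calls are the real library API; the fallback stub is an assumption added here so the same logic runs off the Astro Pi hardware.

```python
# A minimal Mission Zero-style sketch: read humidity, show a message.
try:
    from sense_hat import SenseHat   # available on the Astro Pi units
    sense = SenseHat()
except ImportError:
    # Desktop fallback stub so the logic is testable without hardware.
    class _StubHat:
        def get_humidity(self):
            return 42.0              # pretend reading, % relative humidity

        def show_message(self, msg):
            print(msg)               # stand-in for the LED matrix scroll

    sense = _StubHat()

humidity = sense.get_humidity()
message = f"Hello from Earth! Humidity: {humidity:.1f}%"
sense.show_message(message)          # scrolls across the LED matrix on ISS
```

The actual challenge adds constraints (runtime limits, no camera for Mission Zero), so treat this only as the shape of a submission, not a compliant one.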

Wal-Mart Testing Self Driving Cars with Ford

An unexpected pairing, with a delivery angle too.

Walmart to Test Self-Driving Cars with Ford, Argo AI

CNBC, Michael Wayland, September 15, 2021

Walmart, the world's largest retailer, is extending its self-driving vehicle program to include Ford Motor and Ford/Volkswagen-backed autonomous car startup Argo AI, in order to trial autonomous deliveries in Miami, the District of Columbia, and Austin, TX. Customers can order groceries and other items online for door-to-door autonomous delivery under the program. Walmart's Tom Ward said, "This collaboration will further our mission to get products to the homes of our customers with unparalleled speed and ease, and in turn, will continue to pave the way for autonomous delivery." An Argo AI spokesperson said the partners initially will deploy a small fleet of autonomous delivery vehicles in each of the three trial cities, which will expand over time. Argo AI's Bryan Salesky said the deployments will demonstrate "the potential for autonomous vehicle delivery services at scale."

Friday, September 17, 2021

Visirule Business Risk Advisor

I have never stopped looking at simplified rule-based systems to examine and model decisions. Some time ago we looked at some of Clive Spenser's work in this area: simplified and impressive. We used similar methods in the '90s at P&G.

Storing and using complex knowledge does not have to be overly complex or too much like 'AI'. Here it addresses risk, but it can be aimed at other knowledge-based applications. I plan to test it further; do take a look. I'd be glad to introduce you.

Clive writes:   The BRAT initiative is based on work we have been doing for a major retail bank in the area of Testing Risk Assessment.

We simply recast it for demonstration purposes into the area of risk associated with various business activities.

The table of artifacts (Risk Areas and Risk Topics) which underpins both systems is represented using a flex frame hierarchy

The questions are standard VisiRule questions and there's a handful of statement boxes which look at the answers and determine which Risk Areas are relevant, set their initial priority levels and calculate some risk levels.

The calculated risk levels are used to calibrate the priority levels for each Risk Topic within a triggered Risk Area. Different Risk Areas are triggered by the various Business Activities.
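As a rough illustration of the pattern Clive describes (this is not VisiRule code, and all area names, topics, and weights are invented), a few lines of Python can mimic the triggering and prioritization logic:

```python
# Toy knowledge base: Risk Areas, each holding Risk Topics and a weight.
RISK_AREAS = {
    "data_privacy":  {"topics": ["GDPR", "data retention"], "weight": 3},
    "outsourcing":   {"topics": ["vendor lock-in", "SLA gaps"], "weight": 2},
    "cash_handling": {"topics": ["fraud", "reconciliation"], "weight": 4},
}

# Which business activities trigger which Risk Areas.
TRIGGERS = {
    "online_sales":   ["data_privacy", "outsourcing"],
    "retail_banking": ["data_privacy", "cash_handling"],
}

def assess(activities, severity_answers):
    """Combine triggered areas with questionnaire answers (1-5 scale)
    into a prioritized list of (risk_area, topic, priority) tuples."""
    report = []
    for act in activities:
        for area in TRIGGERS.get(act, []):
            level = RISK_AREAS[area]["weight"] * severity_answers.get(area, 1)
            for topic in RISK_AREAS[area]["topics"]:
                report.append((area, topic, level))
    # Highest calculated priority first.
    return sorted(report, key=lambda r: -r[2])

report = assess(["retail_banking"],
                {"cash_handling": 5, "data_privacy": 2})
print(report[0])   # ('cash_handling', 'fraud', 20)
```

The real system expresses the questions as VisiRule flowcharts over a flex frame hierarchy, but the flow is the same: answers trigger areas, areas set and calibrate topic priorities.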

You can play with the demo on:
So the whole thing is pretty configurable and runs on our VisiRule/Flex/Prolog AWS web server using IIS/CGI  ...   

Regards,     Clive Spenser,  LPA VisiRule,  www.visirule.co.uk,    www.lpa.co.uk