
Wednesday, June 30, 2021

Fintech AI Applications

AI and its influence on financial moves, large and small.

How AI is Helping Mastercard, Siemens, John Deere   By AI Trends Staff 

AI is having an impact in business, government and healthcare. But nowhere is it having more impact than for the biggest companies with the most resources. 

The advantages big companies hold include access to large amounts of data, and the funds to buy smaller companies with the expertise to do something innovative and profitable with that data. Each company has had to decide on the best way to leverage AI for its business.

Ed McLaughlin, Chief Emerging Payments Officer, Mastercard

“The question is how do you use AI right or use it wisely,” stated Ed McLaughlin, Chief Emerging Payments Officer for Mastercard, at the recent EmTech Digital event on AI and big data, as reported in MIT Sloan Review. “The biggest lesson learned is how to take these powerful tools and start backwards from the problem,” McLaughlin stated. “What are the things you’re trying to solve for, and how can you apply these new tools and techniques to solve it better?” 

Mastercard is very focused on fraud prevention, while it wants as many good transactions as possible to go through. Mastercard built a data platform that holds over two billion card profiles and performs analysis tuned for accuracy. Using 13 AI technologies and some rules-based tools, the system makes decisions within 50 milliseconds.   
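Mastercard's actual decisioning stack is proprietary, but the idea described above (fast rule checks combined with a blend of model scores, all within a tight latency budget) can be sketched in toy form. All names, rules, and thresholds below are invented for illustration:

```python
# Toy sketch of layered transaction scoring: cheap deterministic rules run
# first, then several model scores are blended. Names and thresholds are
# illustrative stand-ins, not Mastercard's actual system.

def rule_flags(txn):
    """Cheap deterministic checks that can reject without a model call."""
    flags = []
    if txn["amount"] > 10_000:
        flags.append("high_amount")
    if txn["country"] not in txn["card_profile"]["usual_countries"]:
        flags.append("unusual_country")
    return flags

def blended_score(txn, models):
    """Average the scores of several models (stand-ins for multiple AI technologies)."""
    scores = [m(txn) for m in models]
    return sum(scores) / len(scores)

def decide(txn, models, threshold=0.8):
    flags = rule_flags(txn)
    if "high_amount" in flags and "unusual_country" in flags:
        return "decline"  # hard rule: no model evaluation needed
    return "decline" if blended_score(txn, models) > threshold else "approve"
```

In a real system the rules layer matters for latency: a deterministic check costs microseconds, leaving the remaining budget for the model ensemble.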

“We were able to have a three-time reduction in fraud and a six-time reduction in false positives using AI with that graded dataset,” McLaughlin stated.  

Mastercard has been investing in AI for fraud protection for years. Its 2017 acquisitions of technology vendors Brighterion and NuData Security were strategic.   ... " 

New Perspectives.  (Updated) 

Machine Learning Security,   By Ben Dickson  in bdtechtalks

 At this year’s International Conference on Learning Representations (ICLR), a team of researchers from the University of Maryland presented an attack technique meant to slow down deep learning models that have been optimized for fast and sensitive operations. The attack, aptly named DeepSloth, targets “adaptive deep neural networks,” a range of deep learning architectures that cut down computations to speed up processing.

Recent years have seen growing interest in the security of machine learning and deep learning, and there are numerous papers and techniques on hacking and defending neural networks. But one thing made DeepSloth particularly interesting: The researchers at the University of Maryland were presenting a vulnerability in a technique they themselves had developed two years earlier.

In some ways, the story of DeepSloth illustrates the challenges that the machine learning community faces. On the one hand, many researchers and developers are racing to make deep learning available to different applications. On the other hand, their innovations cause new challenges of their own. And they need to actively seek out and address those challenges before they cause irreparable damage.

Shallow deep networks

One of the biggest hurdles of deep learning is the computational cost of training and running deep neural networks. Many deep learning models require huge amounts of memory and processing power, and therefore can only run on servers with abundant resources. This makes them unusable for applications that require all computations and data to remain on edge devices, or that need real-time inference and can't afford the delay caused by sending their data to a cloud server.

In the past few years, machine learning researchers have developed several techniques to make neural networks less costly. One range of optimization techniques called “multi-exit architecture” stops computations when a neural network reaches acceptable accuracy. Experiments show that for many inputs, you don’t need to go through every layer of the neural network to reach a conclusive decision. Multi-exit neural networks save computation resources and bypass the calculations of the remaining layers when they become confident about their results.  ... ' 
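The early-exit idea can be illustrated with a minimal sketch: a side classifier after an early layer makes a prediction, and if it is confident enough, the remaining layers are skipped. The weights below are random stand-ins, not a trained model:

```python
import numpy as np

# Minimal sketch of a multi-exit ("early exit") network: after the first
# hidden block, a side classifier predicts; if its confidence clears a
# threshold, the remaining layers are skipped. Random weights, toy sizes.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))            # first block
W2 = rng.normal(size=(16, 16))           # remaining block
exit1_head = rng.normal(size=(16, 3))    # early-exit classifier
final_head = rng.normal(size=(16, 3))    # final classifier

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x, confidence_threshold=0.9):
    h1 = np.maximum(x @ W1, 0)                 # first block (ReLU)
    p1 = softmax(h1 @ exit1_head)              # early-exit prediction
    if p1.max() >= confidence_threshold:
        return p1.argmax(), "exit_1"           # confident: skip later layers
    h2 = np.maximum(h1 @ W2, 0)                # run the rest of the network
    p2 = softmax(h2 @ final_head)
    return p2.argmax(), "final_exit"
```

This also makes the DeepSloth attack easy to picture: perturb inputs so that the early-exit confidence stays below the threshold, forcing every layer to run on every input.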

Interaction of Law, Software, Evidence, Compliance

When software acts as evidence, and when it further acts in conjunction with compliance-required data. Brought up as part of a study of compliance-based systems and the integration of smart contract concepts.

The article below was pointed out to me by Bruce Schneier on his blog, under 'Risks of Evidentiary Based Software', where there are likely to be useful comments on the topic.  Other comments?   ....

Dangers Posed by Evidentiary Software—and What to Do About It  in Lawfareblog.com  By Susan Landau   ...   '

IKEA Robotic Furniture Assembly

I had heard a previous overview of this; if it could be done effectively, it would likely increase sales. As I understand it, this is still a proposal; tell me if otherwise.

Need help building IKEA furniture? This robot can lend a hand  by Caitlin Dawson, University of Southern California  in Techxplore

As robots increasingly join forces to work with humans—from nursing care homes to warehouses to factories—they must be able to proactively offer support. But first, robots have to learn something we know instinctively: how to anticipate people's needs.

With that goal in mind, researchers at the USC Viterbi School of Engineering have created a new robotic system that accurately predicts how a human will build an IKEA bookcase, and then lends a hand—providing the shelf, bolt or screw necessary to complete the task. The research was presented at the International Conference on Robotics and Automation on May 30, 2021.

"We want to have the human and robot work together—a robot can help you do things faster and better by doing supporting tasks, like fetching things," said the study's lead author Heramb Nemlekar. "Humans will still perform the primary actions, but can offload simpler secondary actions to the robot."

Nemlekar, a Ph.D. student in computer science, is supervised by Stefanos Nikolaidis, an assistant professor of computer science, and co-authored the paper with Nikolaidis and SK Gupta, a professor of aerospace, mechanical engineering and computer science who holds the Smith International Professorship in Mechanical Engineering.

Adapting to variations

In 2018, a robot created by researchers in Singapore famously learned to assemble an IKEA chair by itself. In this new study, the USC research team aims to focus instead on human-robot collaboration.

There are advantages to combining human intelligence and robot strength. In a factory for instance, a human operator can control and monitor production, while the robot performs the physically strenuous work. Humans are also more adept at those fiddly, delicate tasks, like wiggling a screw around to make it fit.

The key challenge to overcome: humans tend to perform actions in different orders. For instance, imagine you're building a bookcase—do you tackle the easy tasks first, or go straight for the difficult ones? How does the robot helper quickly adapt to variations in its human partners?  ... ' 

Machine Learning Giving Smarter Driving Advice

Interesting how this is guided assistance based on multiple goals. It makes me recall using process maps to drive goals as well.



Using ML to Build Maps That Give Smarter Driving Advice

By MIT Technology Review. June 29, 2021

Scientists at Qatar's Hamad Bin Khalifa University (HBKU) applied machine learning (ML) to develop QARTA, a new automatic mapping service that can enhance traffic management with greater intelligence. HBKU's Rade Stanojevic and colleagues collaborated with the taxi firm Karwa to collect full global positioning system data on its fleet's comings and goings, so QARTA can advise routes for drivers at Karwa and other operators.

Stanojevic said QARTA's deeper understanding of actual road and traffic conditions in the city of Doha helps drivers shorten trips, translating into 5% to 10% greater fleet-wide efficiency. Stanojevic speculates that ML-based routing advice could factor into holistic views of cities, and help fleets slash carbon emissions by avoiding traffic jams.
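QARTA's internals aren't published, but the core routing idea is familiar: once a model has estimated per-segment travel times from fleet GPS traces, route advice reduces to shortest-path search over those weights. A minimal sketch, with an invented graph and hard-coded times standing in for model output:

```python
import heapq

# Sketch: ML-estimated travel times become edge weights, and routing is
# shortest-path search (Dijkstra) over them. The graph and the minutes
# below are invented for illustration.

travel_time = {  # minutes per segment, as a model might estimate from GPS traces
    ("A", "B"): 4, ("B", "C"): 3, ("A", "C"): 9, ("C", "D"): 2, ("B", "D"): 7,
}

def neighbors(node):
    for (u, v), t in travel_time.items():
        if u == node:
            yield v, t

def fastest_route(src, dst):
    """Return (total_minutes, path) for the quickest route, or None."""
    queue, settled = [(0, src, [src])], {}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in settled and settled[node] <= cost:
            continue
        settled[node] = cost
        for nxt, t in neighbors(node):
            heapq.heappush(queue, (cost + t, nxt, path + [nxt]))
    return None
```

The "smarter" part is entirely in the weights: re-estimating `travel_time` from live fleet data changes the recommended route without changing the search.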

From MIT Technology Review

Tuesday, June 29, 2021

DHS Offers ML Funding

Examples of the funding that is out there. Note the detection and classification examples: fast and relatively simple applications. Interesting opportunities.

DHS Awards $2M for Small Businesses to Develop Machine Learning Tech   By U.S. Department of Homeland Security, June 28, 2021

The U.S. Department of Homeland Security recently awarded funding to two small businesses to develop non-contact, inexpensive machine learning training and classification technologies.

Physical Sciences Inc., Andover, Mass., and Alakai Defense Systems Inc., Largo, Fla., each received approximately $1 million in Phase II funding from the Small Business Innovation Research Program to develop technologies that can rapidly and accurately identify unknown spectrometer signals as safe or threatening.

Physical Sciences in Phase II will continue to develop its deep-learning algorithm for detection and classification of trace explosives, opioids, and narcotics on surfaces for optical spectroscopic systems. Alakai will continue development of the Agnostic Machine Learning Platform for Spectroscopy that rapidly and accurately detects trace quantities of hazardous and related chemicals from a variety of spectroscopic instruments.

"The SBIR Program provides the opportunity for S&T to partner with innovative small businesses and develop machine learning tools critical to addressing threat detection needs," says DHS Senior Official Kathryn Coulter Mitchell.

From U.S. Department of Homeland Security

Pose Detecting Carpet for Health Monitoring

Quite a remarkable set of hardware and AI software out of MIT to detect human 'poses'. See the image at the link for what this looks like and how it can be applied: detecting health states by ML matching against proper states. Might it be further used for athletic or exercise-form applications? A very interesting, unusual application.

Intelligent Carpet Gives Insight into Human Poses  By MIT Computer Science and Artificial Intelligence Laboratory    via CACM,    June 28, 2021

A new tactile sensing carpet assembled from pressure-sensitive film and conductive thread is able to calculate human poses without cameras.

Built by engineers at the Massachusetts Institute of Technology (MIT)'s Computer Science and Artificial Intelligence Laboratory, the system's neural network was trained on a dataset of camera-recorded poses; when a person performs an action on the carpet, it can infer the three-dimensional pose from tactile data.

More than 9,000 sensors are woven into the carpet, and convert the pressure of a person's feet on the carpet into an electrical signal.

The computational model can predict a pose with a less than 10-centimeter error margin, and classify specific actions with 97% accuracy.

MIT's Yiyue Luo said, "You can imagine leveraging this model to enable a seamless health monitoring system for high-risk individuals, for fall detection, rehab monitoring, mobility, and more."

From MIT Computer Science and Artificial Intelligence Laboratory

Food & Beverage Consumer Research and Prospective Marketing

Many new kinds of prospective marketing are enabled by AI methods.

How PepsiCo uses AI to create products consumers don’t know they want  in Venturebeat

By Sage Lazzaro  @sagelazzaro

If you imagine how a food and beverage company creates new offerings, your mind likely fills with images of white-coated researchers pipetting flavors and taste-testing like mad scientists. This isn’t wrong, but it’s only part of the picture today. More and more, companies in the space are tapping AI for product development and every subsequent step of the product journey.

At PepsiCo, for example, multiple teams tap AI and data analytics in their own ways to bring each product to life. It starts with using AI to collect intel on potential flavors and product categories, allowing the R&D team to glean the types of insights consumers don’t report in focus groups. It ends with using AI to analyze how those data-driven decisions played out. 

“It’s that whole journey, from innovation to marketing campaign development to deciding where to put it on shelf,” Stephan Gans, chief consumer insights and analytics officer at PepsiCo, told VentureBeat. “And not just like, ‘Yeah, let’s launch this at the A&P.’ But what A&P. Where on the shelf in that particular neighborhood A&P.”  ... ' 

Blinkist for Speeding up Content Acquisition

This brought to mind some work I was involved with at the University of Pennsylvania's Language Laboratory. We worked on more efficient ways to deliver books and reading material, and one of the experiments was to compress books and make them available for select classes; overall a quite simple technique. We then measured the effort/use/value that was achieved. In general this worked well with some kinds of class content, and we touched on neural-based techniques. The approach called Blinkist, mentioned below, is apparently doing something similar. I have not tested this, nor has Engadget, which originally posted the piece below. Also, I have not received any compensation for posting this, but am intrigued by the application. May take a further look.

Read bestselling books in 15 minutes with Blinkist   in Engadget

Blinkist Premium offers thousands of condensed nonfiction books and podcasts that you can process in just 15 minutes, with 70 new titles added every month.

Every year, we tell ourselves that we need to read more. Perhaps we’ll crack open that book we were gifted months ago. Maybe picking up an Amazon bestseller will finally get us into the habit. And yet, the reading list keeps growing.

Between your professional and personal life, there’s little time for intellectually engaging pursuits. That’s where Blinkist comes in handy. This app contains condensed ideas from thousands of bestselling nonfiction books, so you can stay up to date with your daily reading while you go about your busy schedule. Right now, you can purchase a two-year Blinkist Premium subscription for just $99 — that’s a $285 discount.

Blinkist identifies the main ideas from popular podcasts and nonfiction books and condenses them into digestible, 15-minute text and audio files. You can read or listen to over 4,500 summarized bestsellers ranging in topics from personal development to psychology. This subscription gives you unlimited access to everything in the Blinkist library, including 70 new titles that are added every month.  ... '

Monday, June 28, 2021

Dell Releases Open Source Suite Omnia

This was new to me; lots more detail at the link. I like the combination of AI, analytics, and process workloads, which is the way it should be.

Dell releases open source suite Omnia to manage AI, analytics workloads

By Kyle Wiggers  @Kyle_L_Wigger   in Venturebeat

Dell today announced the release of Omnia, an open source software package aimed at simplifying AI and compute-intensive workload deployment and management. Developed at Dell’s High Performance Compute (HPC) and AI Innovation Lab in collaboration with Intel and Arizona State University (ASU), Omnia automates the provisioning and management of HPC, AI, and data analytics to create a pool of hardware resources. ... '

Amazon's Ring Will Ask Police to Publicly Request User Videos

Making things as hard for police as you can. 

Amazon's Ring Will Ask Police to Publicly Request User Videos

By Bloomberg,  June 7, 2021

Starting this week, law enforcement agencies seeking videos from Ring Internet-connected doorbells will be required to submit a Request for Assistance post to Ring's Neighbors app/portal.

Amazon subsidiary and Internet-connected doorbell maker Ring said police departments that require help in investigations must publicly request home security video from doorbells and cameras.

Law enforcement agencies now must post such Requests for Assistance on Neighbors, Ring's video-sharing and safety-related community discussion portal; nearby users with potentially helpful videos can click a link within the post and select which videos they wish to submit.

Ring, which has been accused of having a too-cozy relationship with law enforcement, explained on its blog that it has been working with independent third-party experts to help give people greater insight into law enforcement's use of its technology.

From Bloomberg


UK Adopts Alan Turing Banknote


New 50-pound British Banknote Honors Computer Pioneer Alan Turing

By Centrum Wiskunde & Informatica (Netherlands)

The Bank of England this week will release a new 50-pound banknote bearing an image of computer pioneer Alan Turing.

On Wednesday 23 June, the Bank of England releases a new polymer 50 pound banknote featuring mathematician, computer pioneer and codebreaker Alan Turing (1912-1954). The banknote contains lots of geeky features from Turing's pioneering work in mathematics, computer science, code breaking, and even mathematical biology. Centrum Wiskunde & Informatica (CWI) researchers and professors Lynda Hardman and Jurgen Vinju comment on the meaning of Turing's work for present day computer science and society.

In 2019, Alan Turing was selected from a shortlist of British scientists to be featured on the new banknote. The release date of 23 June was chosen because it is Turing's birthday. CWI researcher professor Jurgen Vinju is delighted with the Alan Turing banknote: "It emphasizes the fact that information technology in general and computers in particular are such fundamental infrastructures for the modern society. The honour is due to Turing who gave the field its theoretical foundation and the corresponding motivation to make digital computers a reality."

Turing's work is still highly relevant, says Vinju. "I just finished writing an academic paper in which I refer to the Turing machine from 1936. In the paper I designed a new programming language. I was looking for the limits to calculate certain numbers, and the limits are defined by the Turing machine."

From Centrum Wiskunde & Informatica (Netherlands)

Extending Zero Trust Security

Security in Industrial Networks

Extending Zero Trust Security to Industrial Networks

Ruben Lobo, Cisco

Recent cyber attacks on industrial organizations and critical infrastructures have made it clear: operational and IT networks are inseparably linked. With digitization, data needs to seamlessly flow between enterprise IT and industrial OT networks for the business to function. This tighter integration between IT, OT, and Cloud domains has increased the attack surface of both – the industrial and the enterprise networks.

The traditional security perimeter that industrial organizations have built over the years by installing industrial demilitarized zone (IDMZ) is no longer sufficient. While this is still the mandatory first step to protect operations, embracing the digital industry revolution requires additional security measures, assuming that no user, application, or connected device are trustworthy anymore.

The Zero Trust Security model that many are now implementing to secure the enterprise workforce, workloads, and the workplace must be extended to industrial operations. It continuously verifies resources to establish trust and compliance in every access request. It identifies not just users, but endpoints, and applications to grant them the absolute minimum access they need.

I recently presented a webinar explaining the specific Zero Trust requirements for IoT/OT networks:  ... '

Emotion AI creator Affectiva acquires Smart Eye

Another example of the acquisition of advanced AI capabilities.

Emotion AI Creator Affectiva Acquires Smart Eye for $73.5M

By Ryan Daws | May 26, 2021 | TechForge Media


MIT spin-out Affectiva, one of our most innovative companies to watch in 2021, has acquired Smart Eye for $73.5 million.

Affectiva is the creator of Emotion AI, a category of artificial intelligence which can understand human emotions, cognitive states, activities, and the objects people use through analysing facial and vocal expressions. The AI was trained using more than 10 million face videos from 90 countries.

Smart Eye are pioneers in eye-tracking that understands, assists, and predicts human intentions and actions to “bridge the gap between man and machine for a better, sustainable tomorrow.” .... 

Sunday, June 27, 2021


Given what we have seen China and Europe doing, it makes lots of sense to continue to push AI.

NHS Gets £36-million Boost for AI Technologies

By Scientific Computing World, June 22, 2021

Thousands of patients and staff at the U.K.'s National Health Service (NHS) will benefit from dozens of new pioneering projects awarded a share of £36 million to test state-of-the-art artificial intelligence (AI) technology. The projects will help the NHS to transform the quality of care and the speed of diagnoses for conditions such as lung cancer.

At the CogX Festival last week, U.K. Health and Social Care Secretary Matt Hancock announced the winners of the second wave of the NHS AI Lab's AI in Health and Care Award. The 38 projects backed by NHSX — a joint unit of NHS England and the Department of Health and Social Care and Accelerated Access Collaborative (AAC) — include:... '

Simulation with Python

By far my biggest effort in the enterprise was doing simulations, often integrated with optimizations, and later with AI applications, both knowledge-based and neural approaches. Nice to see how it can be integrated with Python. We used IBM's forms of GPSS and Simscript, sometimes specified by clients. Below an intro; more at the link.

Monte Carlo Simulation and Variants with Python

Your Guide to Monte Carlo Simulation and Must Know Statistical Sampling Techniques With Python Implementation

By Tatev Karen

Monte Carlo Simulation is based on repeated random sampling. The underlying concept of Monte Carlo is to use randomness to solve problems that might be deterministic in principle. Monte Carlo simulation is one of the most popular techniques to draw inferences about a population without knowing the true underlying population distribution. This sampling technique becomes handy especially when one doesn’t have the luxury to repeatedly sample from the original population. Applications of Monte Carlo Simulation range from solving problems in theoretical physics to predicting trends in financial investments.

Monte Carlo has three main uses: estimating parameters or statistical measures, examining the properties of estimates, and approximating integrals.   ..... ' 
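The classic first example of the repeated-random-sampling idea is estimating pi: sample points uniformly in the unit square and count the fraction falling inside the quarter circle. A minimal version:

```python
import numpy as np

# Minimal Monte Carlo example: estimate pi by sampling points in the unit
# square and counting the fraction that land inside the quarter circle
# (area pi/4). Accuracy improves roughly as 1/sqrt(n_samples).

rng = np.random.default_rng(42)

def estimate_pi(n_samples):
    xy = rng.uniform(0, 1, size=(n_samples, 2))
    inside = (xy ** 2).sum(axis=1) <= 1.0   # x^2 + y^2 <= 1
    return 4 * inside.mean()
```

The same pattern (sample, evaluate, average) covers the article's other uses: replacing the indicator with any function of the samples turns this into integral or expectation estimation.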

China and AI

From McKinsey. China needs watching on this; it has fewer constraints.

Forward Thinking on China and artificial intelligence  with Jeffrey Ding

This researcher is making sure more AI information flows back from China to the West, and his insights are surprising  .... '

Saturday, June 26, 2021

AI at the Edge

 UK and elsewhere, attempts at merging advanced tech efforts.

Towards Broad Artificial Intelligence (AI) & The Edge in 2021  by 7wData

Artificial intelligence (AI) has quickened its progress in 2021. 

A new administration is in place in the US and the talk is about a major push for Green Technology and the need to stimulate next generation infrastructure including AI and 5G to generate economic recovery with David Knight forecasting that 5G has the potential - the potential - to drive GDP growth of 40% or more by 2030. The Biden administration has stated that it will boost spending in emerging technologies that includes AI and 5G to $300Bn over a four year period.

On the other side of the Atlantic Ocean, the EU have announced a Green Deal and also need to consider the European AI policy to develop next generation companies that will drive economic growth and employment. It may well be that the EU and US (alongside Canada and other allies) will seek ways to work together on issues such as 5G policy and infrastructure development. The UK will be hosting COP 26 and has also made noises about AI and 5G development.

The world needs to find a way to successfully end the Covid-19 pandemic and in the post pandemic world move into a phase of economic growth with job creation. An opportunity exists for a new era of highly skilled jobs with sustainable economic development built around next generation technologies.  ... '

China Does Quantum Data Link

This apparently uses 'quantum key distribution', a method we have also recently examined. The distances reported are interestingly beyond limits we have seen.

Quantum Data Link Established Between 2 Distant Chinese Cities

By New Scientist, June 23, 2021

Researchers at the University of Science and Technology of China have created a secure quantum link extending 511 kilometers (almost 320 miles) between two Chinese cities.

The researchers strung a fiber-optic connection between Jinan and Qingdao, with a central receiver located between the two cities in Mazhan. Lasers at both ends of the cable send photons toward each other. The relay in the middle does not read the data, checking only whether the two signals matched. The researchers found the two ends could exchange a quantum key that could be used to encrypt data sent over traditional networks.

University of Sussex's Peter Kruger said, "Single photons over hundreds of kilometers is quite remarkable."
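The Jinan-Qingdao link reportedly uses a twin-field scheme (hence the relay that only compares signals), but the basic idea of quantum key distribution is easier to see in the simpler BB84 protocol. Below is a classical toy simulation of BB84's sifting step only; it is an illustrative assumption, not the protocol the article describes:

```python
import numpy as np

# Toy classical simulation of the sifting step of BB84 quantum key
# distribution (NOT the twin-field protocol in the article): sender and
# receiver each choose random measurement bases, then keep only the bit
# positions where their bases happened to agree.

rng = np.random.default_rng(7)

def bb84_sift(n_photons):
    alice_bits  = rng.integers(0, 2, n_photons)
    alice_bases = rng.integers(0, 2, n_photons)   # 0 = rectilinear, 1 = diagonal
    bob_bases   = rng.integers(0, 2, n_photons)
    # With no eavesdropper or noise, Bob reads Alice's bit when bases match
    # and a random bit otherwise; mismatched positions are discarded anyway.
    match = alice_bases == bob_bases
    bob_bits = np.where(match, alice_bits, rng.integers(0, 2, n_photons))
    return alice_bits[match], bob_bits[match]

a_key, b_key = bb84_sift(1000)   # on average, half the photons survive sifting
```

In a real deployment, the parties would additionally sacrifice a sample of sifted bits to estimate the error rate, since a high rate reveals eavesdropping; the surviving bits become the shared encryption key.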

From New Scientist

Smart Tires

Sensors and adapting AI to drive better maintenance. 

Smart Tires Hit the Road

By The Wall Street Journal, June 21, 2021

Tire manufacturers Goodyear Tire & Rubber and Bridgestone are launching new smart tire features for last-mile delivery vehicles transporting packages from e-commerce sites like Amazon.com.

Goodyear's SightLine solution runs data from a sensor through proprietary machine learning algorithms to capture tire wear, pressure, road-surface conditions, and other variables to forecast flats or other problems days ahead of time.
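SightLine's algorithms are proprietary, but the flavor of "forecasting problems days ahead" from sensor data can be sketched very simply: fit a trend to recent tread-depth readings and extrapolate when it crosses a minimum-safe threshold. Everything below (function name, threshold, data) is an invented illustration, not Goodyear's method:

```python
import numpy as np

# Illustrative sketch: fit a linear wear trend to tread-depth readings and
# extrapolate the day it crosses a minimum-safe-depth threshold. A real
# system would use richer models and more signals (pressure, road surface).

def days_until_threshold(days, depths_mm, min_safe_mm=3.0):
    """Predict the day index when tread depth reaches min_safe_mm."""
    slope, intercept = np.polyfit(days, depths_mm, 1)
    if slope >= 0:
        return None  # no measurable wear trend to extrapolate
    return (min_safe_mm - intercept) / slope
```

For example, readings wearing steadily from 8 mm would yield a predicted crossing day, letting a fleet schedule the tire change before a roadside failure.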

Goodyear's Chris Helsel said SightLine could detect 90% of tire-related issues ahead of time in a test that involved about 1,000 vehicles operated by 20 customers.

Meanwhile, Bridgestone Americas is developing an intelligent tire system that combines sensors, artificial intelligence algorithms, and digital twins to predict tire wear and readiness for retreading.

From The Wall Street Journal

AI Contributing to Marketing

Some of our earliest work touched on this, but it's hard.

How Much Can AI Contribute to Increasing Efficiency for the Marketing Department?  by 7wData

Many businesses — and their marketing teams — are increasingly adopting intelligent technology solutions to boost operational efficiency while enhancing the customer experience (CX). With these platforms, marketers can obtain a more nuanced, comprehensive understanding of their target audiences.

The insights collected through this process can then be employed to drive conversions while lessening the marketing teams' workload.

In a nutshell, artificial intelligence (AI) is a technology meant to imitate human psychology and intelligence. It is a computer science field focused on creating machines that seem to possess human intelligence. We call these machines’ intelligence “artificial” because humans create it, and it does not exist naturally.

Machine learning (ML) is a popular subset of AI. ML algorithms are computer-implementable instructions. They take data as input and perform computations to discover patterns within that data and use those patterns to predict the future.

An ML model improves its performance over time as it encounters more and more data and self-corrects on making mistakes to reduce the chance of repeating them in the future. ML is mostly used in systems that capture huge volumes of data. In marketing, this data is of your customers.

A successful marketing campaign’s mark is a great user experience. There’s a higher chance of prospects’ conversion when they can resonate with the content. It is what turns loyal customers into brand evangelists. And this is where AI can assist in improving customer experience.

Marketers can determine which form of content is most relevant for their target audience by analyzing AI-generated data. Factors such as historical data, past behavior, and location can recommend the most useful content for the users.

You can observe an instance of this capability in your online shopping experiences. Everyone knows how Amazon shares relevant products with buyers according to their views, purchases, and previous searches. That is AI at work!  ... ' 

Friday, June 25, 2021

Deep Learning in AI Lecture

 Of considerable interest, pointers to the future?  

"Deep Learning in AI," the Turing Lecture of Yoshua Bengio, Yann LeCun, and Geoffrey Hinton, describes the origins and recent advances of deep learning, and its future challenges. The 2018 ACM A.M. Turing Award recipients discuss efforts to bridge the gap between machine learning and human intelligence in an original video at https://bit.ly/35Dvwfs.    in CACM.       Reading ...

Detecting, Recognizing Voices at a Distance

Hmm, talk about loss of privacy.   Will this do it through a mask? Will we be wearing new kinds of 'voice' masks?  Note the training data.  Probably will be quickly restricted for use in the West.   But elsewhere?  

Biomimetic Resonant Acoustic Sensor Detecting Far-Distant Voices Accurately to Hit the Market

KAIST (South Korea), June 14, 2021  in CACM

Researchers at South Korea's Korea Advanced Institute of Science and Technology (KAIST) have developed a bioinspired flexible piezoelectric acoustic sensor with a multi-resonant ultrathin piezoelectric membrane that acts like the basilar membrane of the human cochlea to achieve accurate and far-distant voice detection. The miniaturized sensor can be embedded into smartphones and artificial intelligence speakers for machine learning-based biometric authentication and voice processing. Compared to a MEMS condenser microphone, the researchers found the speaker identification error rate for their resonant mobile acoustic sensor was 56% lower after it experienced 150 training datasets, and 75% lower after 2,800 training datasets. ... ' 

Software Verification

Not something we did formally in the enterprise, beyond a couple of experiments with externally developed systems, but it's high time it were included.

Formal Software Verification Measures Up   By Samuel Greengard

Communications of the ACM, July 2021, Vol. 64 No. 7, Pages 13-15  10.1145/3464933

The modern world runs on software. From smartphones and automobiles to medical devices and power plants, executable code drives insight and automation. However, there is a catch: computer code often contains programming errors—some small, some large. These glitches can lead to unexpected results—and systematic failures.

"In many cases, software flaws don't make any difference. In other cases, they can cause massive problems," says Kathleen Fisher, professor and chair of the computer science department at Tufts University and a former official of the U.S. Defense Advanced Research Projects Agency (DARPA).

For decades, computer scientists have imagined a world where software code is formally verified using mathematical proofs. The result would be applications free from coding errors that introduce bugs, hacks, and attacks. Software verification would ratchet up device performance while improving cybersecurity and public safety. By applying specialized algorithms and toolkits, "It's possible to show that code completely meets predetermined specifications," says Bryan Parno, an associate professor in the computer science and electrical and computer engineering departments at Carnegie Mellon University.

At last, the technique is being used to verify the integrity of code in a growing array of real-world applications. The approach could fundamentally change computing. Yet, it is not without formidable obstacles, including formulating algorithms that can validate massive volumes of code at the speed necessary for today's world. The framework also suffers from the same problem every computing model does: if a verified proof is based on the wrong assumptions, it can validate invalid code and produce useless, and even dangerous, results.

"Formal verification is simply a way to up the ante," Fisher explains. "It's a way to modernize and improve the way software is written and ensure that it runs the way it is supposed to operate."  ... ' 
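To make the spec-versus-implementation idea concrete, here is a minimal sketch in Python. Runtime contract checks like these are far weaker than the mathematical proofs the article describes, since they only cover the inputs actually tried, but they illustrate what a "predetermined specification" looks like. The integer-square-root example is my own, not from the article.

```python
def isqrt_spec(n: int, result: int) -> bool:
    """The specification: result is the integer square root of n."""
    return result >= 0 and result * result <= n < (result + 1) * (result + 1)

def isqrt(n: int) -> int:
    """Integer square root via Newton's method, with contract checks."""
    assert n >= 0, "precondition: n must be non-negative"
    x = n
    y = (x + 1) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    assert isqrt_spec(n, x), "postcondition violated"
    return x

# Runtime checks only cover the inputs we actually try; a formal proof
# would establish the postcondition for every possible n.
for n in range(1000):
    isqrt(n)
print(isqrt(1_000_000))  # 1000
```

A formal verifier such as Dafny or Coq would instead prove, once and for all, that the postcondition holds for every input.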

Causal Machine Learning

Another area we experimented with: causal elements in the AI knowledge being used. I would have liked to experiment with ALICE.

Microsoft Research Podcast

Econ2: Causal machine learning, data interpretability, and online platform markets featuring Hunt Allcott and Greg Lewis

Published June 2, 2021

Episode 122 | June 2, 2021 

In the world of economics, researchers at Microsoft are examining a range of complex systems—from those that impact the technologies we use to those that inform the laws and policies we create—through the lens of a social science that goes beyond the numbers to better understand people and society. 

In this episode, Senior Principal Researcher Dr. Hunt Allcott speaks with Microsoft Research New England office mate and Senior Principal Researcher Dr. Greg Lewis. Together, they cover the connection between causal machine learning and economics research, the motivations of buyers and sellers on e-commerce platforms, and how ad targeting and data practices could evolve to foster a more symbiotic relationship between customers and businesses. They also discuss EconML, a Python package for estimating heterogeneous treatment effects that Lewis has worked on as part of the ALICE (Automated Learning and Intelligence for Causation and Economics) project at Microsoft Research. ... "
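As a rough illustration of heterogeneous treatment effects, the quantity EconML estimates, here is a minimal "T-learner" sketch in plain NumPy: fit separate outcome models for the treated and control groups, and take the difference. The data and models below are invented for illustration; EconML's actual estimators (double machine learning, causal forests) are far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic experiment: the treatment effect depends on a customer
# feature x, i.e., it is heterogeneous. All numbers are invented.
n = 4000
x = rng.uniform(-1.0, 1.0, n)
t = rng.integers(0, 2, n)                     # random treatment assignment
y = 0.5 * x + t * (1.0 + 2.0 * x) + rng.normal(0.0, 0.1, n)

# T-learner: fit separate (here, linear) outcome models per arm.
coef_treated = np.polyfit(x[t == 1], y[t == 1], 1)
coef_control = np.polyfit(x[t == 0], y[t == 0], 1)

def cate(x_new):
    """Estimated conditional average treatment effect at x_new."""
    return np.polyval(coef_treated, x_new) - np.polyval(coef_control, x_new)

# True effect is 1 + 2x, so roughly 1.0 at x=0 and 2.0 at x=0.5.
print(round(float(cate(0.0)), 1), round(float(cate(0.5)), 1))
```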

Thursday, June 24, 2021

Choosing Software for a Critical Future

I am inclined to think the critical future will be low-code and no-code rather than old languages and methods.

Code as Infrastructure

The Next Critical Talent Shortage Won’t Be Fortran  via O'Reilly and ACM

By Mike Loukides, June 8, 2021

A few months ago, I was asked if there were any older technologies other than COBOL where we were in serious danger of running out of talent. They wanted me to talk about Fortran, but I didn’t take the bait. I don’t think there will be a critical shortage of Fortran programmers now or at any time in the future. But there’s a bigger question lurking behind Fortran and COBOL: what are the ingredients of a technology shortage? Why is running out of COBOL programmers a problem?

The answer, I think, is fairly simple. We always hear about the millions (if not billions) of lines of COBOL code running financial and government institutions, in many cases code that was written in the 1960s or 70s and hasn’t been touched since. That means that COBOL code is infrastructure we rely on, like roads and bridges. If a bridge collapses, or an interstate highway falls into disrepair, that’s a big problem. The same is true of the software running banks.

Fortran isn’t the same. Yes, the language was invented in 1957, two years earlier than COBOL. Yes, millions of lines of code have been written in it. (Probably billions, maybe even trillions.) However, Fortran and COBOL are used in fundamentally different ways. While Fortran was used to create infrastructure, software written in Fortran isn’t itself infrastructure. (There are some exceptions, but not at the scale of COBOL.) Fortran is used to solve specific problems in engineering and science. Nobody cares anymore about the Fortran code written in the 60s, 70s, and 80s to design new bridges and cars. Fortran is still heavily used in engineering—but that old code has retired. Those older tools have been reworked and replaced.  Libraries for linear algebra are still important (LAPACK), some modeling applications are still in use (NEC4, used to design antennas), and even some important libraries used primarily by other languages (the Python machine learning library scikit-learn calls both NumPy and SciPy, which in turn call LAPACK and other low level mathematical libraries written in Fortran and C). But if all the world’s Fortran programmers were to magically disappear, these libraries and applications could be rebuilt fairly quickly in modern languages—many of which already have excellent libraries for linear algebra and machine learning. The continued maintenance of Fortran libraries that are used primarily by Fortran programmers is, almost by definition, not a problem. ...  '

(Much more at the link)  ....
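The Fortran heritage Loukides describes is easy to see from Python: a routine call like the one below is ultimately serviced by LAPACK's gesv linear-system solver, part of that decades-old Fortran numerical stack.

```python
import numpy as np

# Solving A x = b from Python; NumPy dispatches this to LAPACK's
# gesv routine, part of the Fortran numerical stack described above.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)
print(x)  # [2. 3.]
```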

Finalists for P&G Innovation Challenge

 P&G Ventures: 

Procter & Gamble : P&G Ventures Announces Four Finalists in Latest Innovation Challenge

06/24/2021 | 01:31pm EDT

CINCINNATI, June 24, 2021 /PRNewswire/ -- P&G Ventures, the early stage startup studio within P&G (NYSE:PG), has announced the four finalists in its latest Innovation Challenge, which will be held online on Wednesday, July 14 at 1pm ET.

These four finalists were chosen from a competitive pool of applicants whose products are solving consumer pain points in categories such as active aging, safe and effective germ protection, non-toxic home and garden, as well as other consumer packaged goods spaces where P&G doesn't currently compete.

The finalists will pitch their businesses to a panel of expert judges who will determine the winner. The panel includes Alex Betancourt, Vice President, P&G Ventures; Anu Duggal, Founding Partner, Female Founders Fund; Mike Jensen, Senior Vice President of Research & Development, P&G Ventures; Michael Olmstead, Chief Revenue Officer, Plug and Play; and Clarence Wooten, Co-founder and General Partner, Revitalize Venture Studio.

The winner will receive a $10,000 prize and the opportunity to continue developing their product with P&G Ventures.

The 2021 P&G Ventures Innovation Challenge finalists are:

NanoSpun Technologies, based in Los Altos, CA and represented by Founder & CEO Ohad Bendror (Bendas), develops and produces disruptive, first-of-its-kind, live-active biological tissues for skincare, medical and industrial applications.

One Skin, based in San Francisco, CA and represented by Co-Founder Juliana Carvalho, is the first topical supplement designed to extend your skin's lifespan on a molecular level, improving skin health and strength, giving users youthful skin for longer.

Ready, Set, Food!, based in Los Angeles, CA and represented by CEO & Co-Founder Daniel Zakowski, is a ground-breaking solution to early allergen introduction, making it easy for families to follow new food allergy prevention medical guidelines.

Wellesley Pharmaceuticals, based in Newtown Grant, PA and represented by CEO & President David Dill, designed Nocturol™ – a pill designed to provide 8 hours of protection for those who suffer from frequent bathroom trips overnight.  ..... ' 

Pharma and Quantum Computing

 McKinsey writes: 

Pharma’s digital Rx: Quantum computing in drug research and development

Quantum computing could be game-changing for drug development in the pharmaceutical industry. Businesses should start preparing now. ... " 

Changing Someone's Mind

(Click through to the podcast for the complete article)

Changing Someone’s Mind: A Powerful New Approach

Jun 22, 2021 North America


Nano Tools for Leaders® — a collaboration between Wharton Executive Education and Wharton’s Center for Leadership and Change Management — are fast, effective leadership tools that you can learn and start using in less than 15 minutes, with the potential to significantly impact your success as a leader and the engagement and productivity of the people you lead.

Contributor: Jonah Berger, Wharton marketing professor and author of The Catalyst: How to Change Anyone’s Mind.

The Goal:

To change minds, organizations, industries, and the world — stop trying to persuade, and instead, encourage people to persuade themselves.

Nano Tool:

No one likes to feel like someone is trying to influence them. The natural reaction to being told what to do, whether it’s to support a change initiative, accept a starting salary, or stop smoking, is to resist by pushing back. The more you allow for autonomy and allow people to participate in the process, the more effective you’ll be. Use one or a combination of four proven tactics for helping guide people to make the choice you prefer.

Provide a menu: Allow people to choose a path from a selection of your choice, giving them more say in how they’ll get where you are hoping they’ll go. Advertising agencies do this when presenting work to clients. Instead of offering one idea, which the client could then spend the rest of the meeting poking holes in, they offer two or three. This “bounded choice” provides autonomy and the greater likelihood one of the ideas will be chosen.

Ask, don’t tell: Statements like “junk food makes you fat” and “smoking causes cancer” don’t change minds. Asking questions instead shifts the listener’s role, much like providing a menu does. Rather than counterarguing or thinking about all the reasons they disagree with a statement, they’re occupied with the task of answering the question (voicing their opinion about the issue — which most people are more than happy to do). Questions also increase buy-in. Because the answer they give is theirs, it’s more likely to drive them to action. Think about how “Do you think junk food is good for you?” would work better than the statement.   .... '

Faster Neural Networks

Speed in finding solutions, and speed in maintaining and updating models.

Latest Neural Nets Solve World’s Hardest Equations Faster Than Ever Before

Two new approaches allow deep neural networks to solve entire families of partial differential equations, making it easier to model complicated systems and to do so orders of magnitude faster.

Partial differential equations, such as the ones governing the behavior of flowing fluids, are notoriously difficult to solve. Neural nets may be the answer.

Alexander Dracott for Quanta Magazine,  Anil Ananthaswamy

In high school physics, we learn about Newton’s second law of motion — force equals mass times acceleration — through simple examples of a single force (say, gravity) acting on an object of some mass. In an idealized scenario where the only independent variable is time, the second law is effectively an “ordinary differential equation,” which one can solve to calculate the position or velocity of the object at any moment in time.
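The single-force scenario described above can be integrated numerically in a few lines; this simple Euler scheme for free fall is my own illustration of what solving an ordinary differential equation amounts to.

```python
# Minimal sketch: integrate Newton's second law for free fall,
# an ordinary differential equation in time alone.
g = 9.81                     # acceleration from the single force, gravity
steps = 20_000
dt = 2.0 / steps             # integrate over two seconds

pos, vel = 0.0, 0.0
for _ in range(steps):
    vel += g * dt            # dv/dt = F/m = g  (semi-implicit Euler)
    pos += vel * dt          # dx/dt = v

print(round(pos, 2))  # analytic answer: 0.5 * g * t**2 = 19.62 m
```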

But in more involved situations, multiple forces act on the many moving parts of an intricate system over time. To model a passenger jet scything through the air, a seismic wave rippling through Earth or the spread of a disease through a population — to say nothing of the interactions of fundamental forces and particles — engineers, scientists and mathematicians resort to “partial differential equations” (PDEs) that can describe complex phenomena involving many independent variables.

The problem is that partial differential equations — as essential and ubiquitous as they are in science and engineering — are notoriously difficult to solve, if they can be solved at all. Approximate methods can be used to solve them, but even then, it can take millions of CPU hours to sort out complicated PDEs. As the problems we tackle become increasingly complex, from designing better rocket engines to modeling climate change, we’ll need better, more efficient ways to solve these equations.

Now researchers have built new kinds of artificial neural networks that can approximate solutions to partial differential equations orders of magnitude faster than traditional PDE solvers. And once trained, the new neural nets can solve not just a single PDE but an entire family of them without retraining.

To achieve these results, the scientists are taking deep neural networks — the modern face of artificial intelligence — into new territory. Normally, neural nets map, or convert data, from one finite-dimensional space (say, the pixel values of images) to another finite-dimensional space (say, the numbers that classify the images, like 1 for cat and 2 for dog). But the new deep nets do something dramatically different. They “map between an infinite-dimensional space and an infinite-dimensional space,” said the mathematician Siddhartha Mishra of the Swiss Federal Institute of Technology Zurich, who didn’t design the deep nets but has been analyzing them mathematically.  ... '
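For a sense of what the traditional solvers being displaced actually do, here is a minimal explicit finite-difference scheme for the one-dimensional heat equation, a textbook PDE. Real engineering problems involve far finer grids in more dimensions, which is where the millions of CPU hours go; this toy example is mine, not from the article.

```python
import numpy as np

# Explicit finite-difference scheme for the 1D heat equation
# u_t = alpha * u_xx on [0, 1] with zero boundary temperature.
alpha, nx, nt = 0.01, 51, 2000
dx = 1.0 / (nx - 1)
dt = 0.4 * dx * dx / alpha   # obeys the stability limit dt <= dx**2 / (2 * alpha)

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)        # initial temperature profile

for _ in range(nt):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print(u.max() < 0.1)  # heat has diffused away: True
```

A neural operator of the kind described here would learn the map from initial profile to later profile once, then evaluate it for a whole family of initial conditions without stepping through time.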

How Companies Impact Households

 Quite a considerable piece here from McKinsey, with lots of graphics

A new look at how corporations impact the economy and households

May 31, 2021 | Discussion Paper  (84 pages) 

By James Manyika, Michael Birshan, Sven Smit, Jonathan Woetzel, Kevin Russell, and Lindsay Purcell


We map the different ways in which the economic value that large companies create flows to households in OECD economies and examine what has changed in the past quarter-century.

The role of companies in the economy and their responsibilities to stakeholders and society at large has become a major topic of debate. Yet there is little clarity or consensus about how the business activity of companies impacts the economy and society. In this discussion paper, the first in a series, we assess how the economic value that companies create flows to households in the 37 OECD countries, and how these flows have shifted over the past 25 years. We identify patterns in what different types of companies do and how they do it, and how the mix of these companies and their patterns of economic impact have changed.

At its core are two analyses: The first maps all the pathways through which a dollar of company revenue reaches households—not just traditional measures of labor and capital income but also less-discussed aspects such as consumer surplus and supplier payments. The second is an algorithmic clustering of companies into eight “archetypes,” based on what they do and their impacts on society. This clustering transcends traditional sectoral views and highlights the similarities and differences between companies in how they affect households. For both analyses, we seek to understand the situation today and how it has changed over the past 25 years.  .... '
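McKinsey doesn't publish the details of its clustering in this excerpt, but the general technique is familiar. A hypothetical sketch with a plain NumPy k-means, grouping firms on made-up impact features, shows the idea of assigning each company to one of eight archetypes; none of the features or numbers below come from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical firm features (invented): labor share of revenue, capital
# income share, R&D intensity, standing in for the paper's impact measures.
firms = rng.uniform(0.0, 1.0, size=(400, 3))

def kmeans(data, k, iters=50):
    """Plain k-means: alternate assignment and center updates."""
    centers = data[rng.choice(len(data), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each firm to its nearest archetype center.
        dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned firms.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(firms, k=8)
print(len(np.unique(labels)))  # number of populated archetypes (at most 8)
```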

Wednesday, June 23, 2021

Machine Learning Security

A long look at the topic, covering many aspects of the idea.

Machine learning security needs new perspectives and incentives  By Ben Dickson  in BDTechTalks

At this year’s International Conference on Learning Representations (ICLR), a team of researchers from the University of Maryland presented an attack technique meant to slow down deep learning models that have been optimized for fast and sensitive operations. The attack, aptly named DeepSloth, targets “adaptive deep neural networks,” a range of deep learning architectures that cut down computations to speed up processing.

Recent years have seen growing interest in the security of machine learning and deep learning, and there are numerous papers and techniques on hacking and defending neural networks. But one thing made DeepSloth particularly interesting: The researchers at the University of Maryland were presenting a vulnerability in a technique they themselves had developed two years earlier.

In some ways, the story of DeepSloth illustrates the challenges that the machine learning community faces. On the one hand, many researchers and developers are racing to make deep learning available to different applications. On the other hand, their innovations cause new challenges of their own. And they need to actively seek out and address those challenges before they cause irreparable damage.  ...  '
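To see what DeepSloth exploits, consider how an early-exit ("adaptive") network works: each stage has a small classifier head, and inference stops at the first head that is confident enough. The sketch below is a hypothetical illustration with random, untrained weights, not the Maryland team's architecture; DeepSloth crafts inputs that keep confidence low at every head, so each sample runs the full, most expensive path.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Random, untrained weights stand in for a trained backbone: three
# feature-extraction layers, each with a small early-exit classifier head.
layers = [rng.normal(size=(16, 16)) for _ in range(3)]
heads = [rng.normal(size=(16, 4)) for _ in range(3)]

def predict(x, threshold=0.9):
    """Return (predicted class, index of the exit that fired)."""
    h = x
    for i, (w, head) in enumerate(zip(layers, heads)):
        h = np.tanh(h @ w)
        p = softmax(h @ head)
        if p.max() >= threshold:   # confident enough: stop computing here
            return int(p.argmax()), i
    # No head was confident: the full (most expensive) path was used.
    return int(p.argmax()), len(layers) - 1

cls, exit_at = predict(rng.normal(size=16))
print(0 <= exit_at <= 2)  # True
```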

Flawed Algorithms

Always looking at the data and context involved when algorithms fail. Like any kind of predictive approach, it's rarely perfect. The same goes for classic analytics and for machine learning-based methods, and for that matter for human predictive methods as well. And that can change as data and context change over time. So these kinds of test examples are useful.

Algorithm That Predicts Deadly Infections Is Often Flawed   By Wired

A study using data from nearly 30,000 patients in University of Michigan hospitals suggests Epic Systems' early warning system for sepsis infections performs poorly.

An algorithm designed by U.S. electronic health record provider Epic Systems to forecast sepsis infections is significantly lacking in accuracy, according to an analysis of data on about 30,000 patients in University of Michigan (U-M) hospitals.

U-M researchers said the program overlooked two-thirds of the approximately 2,500 sepsis cases in the data, rarely detected cases missed by medical staff, and was prone to false alarms.

The researchers said Epic tells customers its sepsis alert system can correctly differentiate two patients with and without sepsis with at least 76% accuracy, but they determined it was only 63% accurate.

U-M's Karandeep Singh said the study highlights wider shortcomings with proprietary algorithms increasingly used in healthcare, noting that the lack of published science on these models is "shocking."

From Wired
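The 76% and 63% figures quoted are essentially AUC: the probability that a randomly chosen sepsis patient gets a higher alert score than a randomly chosen non-sepsis patient. A sketch with synthetic scores (invented for illustration, nothing to do with Epic's actual model) shows how the metric is computed:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic alert scores, invented for illustration: sepsis patients score
# higher on average, but the two distributions overlap heavily.
sepsis = rng.normal(0.6, 0.3, 2500)
no_sepsis = rng.normal(0.4, 0.3, 27500)

def auc(pos, neg):
    """AUC via the rank (Mann-Whitney) formulation."""
    scores = np.concatenate([pos, neg])
    ranks = scores.argsort().argsort() + 1          # 1-based ranks
    u = ranks[:len(pos)].sum() - len(pos) * (len(pos) + 1) / 2
    return u / (len(pos) * len(neg))

# Probability a random sepsis case outranks a random non-case.
print(round(auc(sepsis, no_sepsis), 2))
```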

Creating 'Digital Twins' at Scale

Relates to something I am thinking about now.  Good find.

Creating Precise Computer Simulation Twins at Scale

The Massachusetts Institute of Technology (MIT)'s Michael Kapteyn and colleagues have designed a model for generating digital twins—precise computer simulations—at scale. The researchers tested the probabilistic graphical model in scenarios involving an unpiloted aerial vehicle (UAV). The model mathematically characterizes a pair of physical and digital dynamic systems connected via two-way data streams as they evolve; the parameters of the UAV's digital twin are initially aligned with data collected from the physical counterpart, to accurately reflect the original at the onset. This ensures the digital twin matches any changes the physical asset undergoes over time, and can anticipate the physical asset's future changes. Kapteyn said this simplifies the production of digital twins for a large number of similar physical assets.  .... '
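The two-way data stream the MIT model formalizes can be caricatured as a recursive Bayesian update: the twin's belief about a physical parameter is repeatedly blended with noisy sensor data so it tracks the drifting asset. The sketch below, including the "stiffness" parameter and all numbers, is entirely my own illustration and far simpler than the probabilistic graphical model in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# A hypothetical structural parameter of the physical UAV drifts slowly.
true_stiffness = 1.0
twin_mean, twin_var = 0.8, 0.5     # digital twin's initial belief
sensor_var = 0.01                  # noise in onboard measurements

for _ in range(100):
    true_stiffness *= 0.999                        # physical asset evolves
    z = true_stiffness + rng.normal(0.0, sensor_var ** 0.5)
    # Kalman-style blend of prior belief and new sensor data.
    gain = twin_var / (twin_var + sensor_var)
    twin_mean += gain * (z - twin_mean)
    twin_var *= 1.0 - gain
    twin_var += 1e-4                               # allow for continuing drift

print(abs(twin_mean - true_stiffness) < 0.15)  # twin tracks the asset
```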

Wharton says Robots are Coming, but Just for Your Management

Robots Are Coming, Is Your Firm Ready?

Wharton’s Lynn Wu talks about her research on how automation is reshaping the workplace in unexpected ways.

Audio Player at the link above.


If you’re worried that robots are coming for your job, you can relax — unless you’re a manager.

A new survey-based study explains how automation is reshaping the workplace in unexpected ways. Robots can improve efficiency and quality, reduce costs, and even help create more jobs for their human counterparts. But more robots can also reduce the need for managers.

The study is titled “The Robot Revolution: Managerial and Employment Consequences for Firms.”   The co-authors are Lynn Wu, professor of operations, information and decisions at Wharton; Bryan Hong, professor of entrepreneurship and management at the University of Missouri Kansas City’s Bloch School of Management; and Jay Dixon, an economist with Statistics Canada. The researchers said the study, which analyzed five years’ worth of data on businesses in the Canadian economy, is the most comprehensive of its kind on how automation affects employment, labor, strategic priorities, and other aspects of the workplace.

Wu recently spoke with Knowledge@Wharton about the paper and its implications for firms. (Listen to her full interview in the podcast at the top of this page.)

More Robots, More Workers

Contrary to popular belief, robots are not replacing workers. While there is some shedding of employees when firms adopt robots, the data show that increased automation leads to more hiring overall. That’s because robot-adopting firms become so much more productive that they need more people to meet the increased demand in production, Wu explained.

“Any employment loss in our data we found came from the non-adopting firms,” she said. “These firms became less productive, relative to the adopters. They lost their competitive advantage and, as a result, they had to lay off workers.”  ... '

Smartphone Camera Can Illuminate Bacteria Causing Acne, Dental Plaque

Diagnosis Application.

Smartphone Camera Can Illuminate Bacteria Causing Acne, Dental Plaque

By University of Washington News, June 17, 2021

Researchers at the University of Washington have developed a technique to identify potentially harmful bacteria on skin and in the mouth using images taken by conventional smartphone cameras.

Their cost-effective approach — which could become the basis for home-based methods to assess basic skin and oral health — combines a smartphone-case modification with image-processing methods.

The researchers attached a three-dimensional printed ring featuring 10 LED black lights around the smartphone case's camera opening, which researcher Qinghua He said serve to "'excite' a class of bacteria-derived molecules called porphyrins, causing them to emit a red fluorescent signal that the smartphone camera can then pick up."

Researcher Ruikang Wang explained, "If you have bacteria producing a different byproduct that you want to detect, you can use the same image to look for it — something you can't do today with conventional imaging systems."

From University of Washington News ... 
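The image-processing side of this could plausibly be as simple as isolating pixels where the red channel dominates. The sketch below is only a guess at the flavor of such a step, using a synthetic image; the actual UW pipeline is not described in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 64x64 RGB frame under black light: dim background plus a
# 10x10 patch of strong red fluorescence standing in for porphyrins.
img = rng.uniform(0.0, 0.2, size=(64, 64, 3))
img[20:30, 40:50, 0] = 0.9

red, green, blue = img[..., 0], img[..., 1], img[..., 2]
# Flag pixels where the red channel clearly dominates the others.
fluorescent = (red > 0.5) & (red > 2 * green) & (red > 2 * blue)

print(int(fluorescent.sum()))  # the 10x10 patch: 100 pixels
```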

Impact of the Waves of COVID on Internet Traffic

Quite an interesting piece on how changes in external context can alter the ebb and flow of the Internet, and an attempt to make sense of the numbers. Useful for future planning and redesign, for recognizing that the Internet is now a real part of our key infrastructure, and for determining the nature of "normal." Worth reading.

A Year in Lockdown: How the Waves of COVID-19 Impact Internet Traffic  in CACM

By Anja Feldmann, Oliver Gasser, Franziska Lichtblau, Enric Pujol, Ingmar Poese, Christoph Dietzel, Daniel Wagner, Matthias Wichtlhuber, Juan Tapiador, Narseo Vallina-Rodriguez, Oliver Hohlfeld, Georgios Smaragdakis

Communications of the ACM, July 2021, Vol. 64 No. 7, Pages 101-108 10.1145/3465212

In March 2020, the World Health Organization declared the Corona Virus 2019 (COVID-19) outbreak a global pandemic. As a result, billions of people were either encouraged or forced by their governments to stay home to reduce the spread of the virus. This caused many to turn to the Internet for work, education, social interaction, and entertainment. With the Internet demand rising at an unprecedented rate, the question of whether the Internet could sustain this additional load emerged. To answer this question, this paper will review the impact of the first year of the COVID-19 pandemic on Internet traffic in order to analyze its performance. In order to keep our study broad, we collect and analyze Internet traffic data from multiple locations at the core and edge of the Internet. From this, we characterize how traffic and application demands change, to describe the "new normal," and explain how the Internet reacted during these unprecedented times .....' 

Tuesday, June 22, 2021

China Claims to Exceed GPT-3 Language

Continued push on more powerful language models.

China outstrips GPT-3 with even more ambitious AI language model

By Anthony Spadafora in TechRadar

WuDao 2.0 model was trained using 1.75tn parameters

A Chinese AI institute has unveiled a new natural language processing (NLP) model that is even more sophisticated than those created by both Google and OpenAI.

The WuDao 2.0 model was created by the Beijing Academy of Artificial Intelligence (BAAI) and developed with the help of over 100 scientists from multiple organizations. What makes this pre-trained AI model so special is the fact that it uses 1.75tn parameters to simulate conversations, understand pictures, write poems and even create recipes.

Parameters are variables that are defined by machine learning models and as these models evolve, the parameters themselves also improve to allow an algorithm to get better at finding the correct outcome over time. Once a model has been trained on a specific data set like human speech samples, the outcome can then be applied to solving other similar problems.  ... " 
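The article's description of parameters improving over training can be shown with the smallest possible example: a model with a single parameter, fit by gradient descent. WuDao 2.0 has 1.75 trillion of these; the principle is the same. The data and learning rate below are invented for illustration.

```python
# Ten (x, y) pairs generated by the "true" model y = 3x.
data = [(x, 3.0 * x) for x in range(1, 11)]

w = 0.0            # the model's single parameter, initially wrong
lr = 0.001         # learning rate

for epoch in range(200):
    for x, y in data:
        grad = 2.0 * (w * x - y) * x   # gradient of squared error wrt w
        w -= lr * grad                 # the parameter improves each step

print(round(w, 2))  # converges to the true value: 3.0
```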

Measuring Trust in AI

Accurate measures in context would be very useful.

NIST Wants to Measure Trust in AI   By Wired, June 22, 2021

The National Institute of Standards and Technology (NIST) wants to quantify user trust in artificial intelligence.

"Trust and Artificial Intelligence," by Brian Stanton at NIST's Information Technology Laboratory and Theodore Jensen at the University of Connecticut, aims to help businesses and developers who deploy AI systems make informed decisions and identify areas where people don't trust AI.

Without trust, Stanton says, adoption of AI will slow or halt.

NIST views the initiative as an extension of its more traditional work establishing trust in measurement systems. Public comment is being accepted until July 30.

From Wired  

Improving the Virtual Classroom

Presentation and paper on efforts at improving the virtual classroom.

How To Improve the Virtual Classroom

By University of California, San Diego, June 21, 2021

Researchers at the University of California, San Diego studied the shift to virtual learning precipitated by the COVID pandemic to better understand where remote education fell short and how it might be improved.

"'It Feels Like I am Talking into a Void': Understanding Interaction Gaps in Synchronous Online Classrooms,"   presented at ACM's 2021 CHI Conference on Human Factors in Computing Systems, examines faculty and student attitudes towards virtual classrooms and proposes several technological refinements that could improve the experience, such as flexible distribution of student video feeds and enhanced chat functions.

"We wanted to understand instructor and student perspectives and see how we can marry them," says senior author Nadir Weibel, associate professor at UC San Diego's Department of Computer Science and Engineering. "How can we improve students' experience and give better tools to instructors?"

First author on the study is Matin Yarmand. Co-authors are UC San Diego Professor Scott Klemmer and Carnegie Mellon University Ph.D. student Jaemarie Solyst.

From University of California, San Diego


Stop the Ransomware Pandemic

Excerpt from The Economist via ACM: 

Stop the Ransomware Pandemic, Start with the Basics  By The Economist, June 22, 2021

TWENTY YEARS ago, it might have been the plot of a trashy airport thriller. These days, it is routine. On May 7th cyber-criminals shut down the pipeline supplying almost half the oil to America's east coast for five days. To get it flowing again, they demanded a $4.3m ransom from Colonial Pipeline Company, the owner. Days later, a similar "ransomware" assault crippled most hospitals in Ireland.

Such attacks are evidence of an epoch of intensifying cyber-insecurity that will impinge on everyone, from tech firms to schools and armies. One threat is catastrophe: think of an air-traffic-control system or a nuclear-power plant failing. But another is harder to spot, as cybercrime impedes the digitisation of many industries, hampering a revolution that promises to raise living standards around the world.

The first attempt at ransomware was made in 1989, with a virus spread via floppy disks. Cybercrime is getting worse as more devices are connected to networks and as geopolitics becomes less stable. The West is at odds with Russia and China and several autocracies give sanctuary to cyber-bandits.

Trillions of dollars are at stake. Most people have a vague sense of narrowly avoided fiascos: from the Sony Pictures attack that roiled Hollywood in 2014, to Equifax in 2017, when the details of 147m people were stolen. The big hacks are a familiar but confusing blur: remember SoBig, or SolarWinds, or WannaCry?

From The Economist


Laundry in Space

Former employer activity. Not new; I recall some similar activity in the 1970s.

How do you do laundry in space? NASA taps P&G to find solution

By Andy Brownfield  –  Staff Reporter, Cincinnati Business Courier

Dec 9, 2020, 7:16am EST

The National Aeronautics and Space Administration has tapped Procter & Gamble for a project that takes the consumer goods company back to its roots, albeit in a space-age way.

NASA has signed a contract with P&G (NYSE: PG) to develop the first detergent for washing clothes in space. Currently, astronauts' dirty laundry is packaged up and ejected alongside other waste in a capsule to burn in the atmosphere, or returned to Earth as garbage.

It's not just laundry though. According to the contract between NASA and P&G, the space agency wants Procter & Gamble to create and improve the best methods for cleaning clothing and crew quarters for space exploration missions. .... " 

Single Use Face Masks

Former employer tech activity

Circular economy for plastics

Fraunhofer, SABIC, and Procter & Gamble join forces in closed-loop recycling pilot project for single-use face-masks

Press Release / June 16, 2021

The Fraunhofer Cluster of Excellence Circular Plastics Economy CCPE and its Institute for Environmental, Safety and Energy Technology UMSICHT have developed an advanced recycling process for used plastics. The pilot project with SABIC and Procter & Gamble serves to demonstrate the feasibility of closed-loop recycling for single-use facemasks.  ... ' 

Monday, June 21, 2021

China Blocking Banks

 Complete implications unclear.

China Says Banks Must Block Crypto Transactions; Market Falls

China's central bank says institutions must not provide trading, clearing, and settlement for crypto transactions ... 

By Omkar Godbole, Jun 21, 2021 at 6:30 a.m. EDT


The People’s Bank of China (PBOC) on Monday told the country’s major financial institutions to stop facilitating virtual-currency transactions, increasing the negative sentiment in crypto markets.  ... '

Sensing Buried Items

I remember an agricultural problem where we needed to sense buried cables in sandy soil. I recall there were easy methods to sense them; I'm not sure how this differs, but it might be used in conjunction with other methods, perhaps for determining the density of roots. There are other applications for buried objects in granular matter.

Slender Robotic Finger Senses Buried Items

MIT News, Daniel Ackerman, May 26, 2021

Massachusetts Institute of Technology (MIT) researchers have developed a slender robot finger with a sharp tip and tactile sensing capabilities that can help identify buried objects. Dubbed "Digger Finger," the robot can sense the shape of items submerged within granular materials like sand or rice. The researchers adapted their GelSight tactile sensor for the Digger Finger, and added vibration to make it easier for the robot to clear jams in the granular media that occur when particles lock together. MIT’s Radhen Patel said the Digger Finger's motion pattern must be adjusted based on the type of media in which it is searching, and the size and shape of its grains. MIT’s Edward Adelson said the Digger Finger “would be helpful if you’re trying to find and disable buried bombs, for example.”

Sunday, June 20, 2021

World's Smallest Computer at Work

An unusual example of embedded computing using sensors.

Snails Carrying World's Smallest Computer Help Solve Mass Extinction Survivor Mystery

University of Michigan News Service

By Katherine McAlpine and Catharine June, University of Michigan, June 15, 2021

University of Michigan (U-M) biologists and engineers used the world's smallest computer to learn how the South Pacific Society Islands tree snail Partula hyalina survived a mass extinction. Former U-M researcher Inhee Lee adapted the Michigan Micro Mote (M3) sensor to test the theory that P. hyalina survived the deliberate introduction of the predatory rosy wolf snail to its environment thanks to its light-reflecting white shell. The researchers deployed 50 M3s in Tahiti, gluing some to rosy wolf snails while others were stuck on leaves harboring the P. hyalina, which rests in daytime. Lee wirelessly downloaded data from each M3 at the end of the day. Based on that data, the researchers suspect P. hyalina avoids predation because the rosy wolf snail will not venture far into its sunlight-heavy habitat.

Technical Artifacts as Non-Fungible Tokens

I still find the notion of non-fungible tokens (NFTs) bizarre: purely digital artifacts being put up for sale at non-trivial sums. But the concept continues to expand.

WWW Code That Changed the World Up for Auction as NFT

Reuters,   By Guy Faulconbridge,   June 15, 2021

Computer scientist Tim Berners-Lee's original source code for what would become the World Wide Web now is part of a non-fungible token (NFT) that Sotheby's will auction off, with bidding to start at $1,000. The digitally signed Ethereum blockchain NFT features the source code, an animated visualization, a letter by Berners-Lee, and a digital poster of the code from the original files, which include implementations of the three languages and protocols that Berners-Lee authored: Hypertext Markup Language (HTML), Hypertext Transfer Protocol (HTTP), and Uniform Resource Identifiers. Berners-Lee said the NFT is "a natural thing to do ... when you're a computer scientist and when you write code and have been for many years. It feels right to digitally sign my autograph on a completely digital artifact."  ... '

Why Python as Most Common AI Language?

I still think low-code and no-code are advancing, but for now Python reigns.

Three Reasons Python Is The AI Lingua Franca

By Calvin Hendryx-Parker   in Datanami

Earlier this year, Python celebrated its 30th anniversary as a programming language. For any software language to last three decades and maintain relevance to developers of all stripes is something special.

Much of what made Python a spectacular achievement when Guido van Rossum released version 0.9.0 in 1991 informs its success today. Python has always been simple and consistent, offering readable code and an entry ramp for developers learning a new language. These aspects of the language, along with its “batteries included” philosophy, paved the way for amateurs and professionals alike to push the boundaries of open source software programming over the last 30 years.

Recently, this has meant integration of artificial intelligence (AI) and machine learning (ML). Python’s initial release came before AI was a broadly accessible business tool, but quite a lot has changed since 1991. The 1997 chess match between IBM’s Deep Blue and Grand Champion Garry Kasparov demonstrated that AI was capable of complex algorithmic problem solving at levels well above even the most skilled human beings. Thereafter, the business of AI began to boom. The market for AI/ML in software development is growing at a rapid pace as AI streamlines industries as diverse as insurance and higher education. According to a Fortune Business Insights report from July 2020, the market size of the global AI market was valued at about $27 billion in 2019 and is projected to reach more than $250 billion by 2027.

Developers should expect AI/ML projects to comprise a greater and greater amount of their overall work in the coming years, and the time is now to learn the best language for artificial intelligence: Python. What makes Python so well-suited to AI and ML? Here are three reasons why Python can be the most important tool in your AI toolbox.    ... 
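As a small illustration of the readability the article credits Python with, a complete nearest-neighbor classifier fits in a few lines of standard-library Python (the data and function names here are my own, purely for illustration):

```python
import math

def nearest_neighbor(train, query):
    """Label a query point with the label of its closest training example."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Two tiny clusters: label 0 near the origin, label 1 near (5, 5).
train = [((0, 0), 0), ((1, 0), 0), ((5, 5), 1), ((4, 5), 1)]
print(nearest_neighbor(train, (0.5, 0.2)))  # prints 0
```

The same algorithm in many other languages takes several times the code, which is part of why Python became the default on-ramp for ML work.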

Saturday, June 19, 2021

Google's AI-Designed Chip

Designing the next generation of Tensor processing units.  Sounds like a kind of layout game. 

What Google’s AI-designed chip tells us about the nature of intelligence

By Ben Dickson   @BenDee983  in Venturebeat

In a paper published in the peer-reviewed scientific journal Nature last week, scientists at Google Brain introduced a deep reinforcement learning technique for floorplanning, the process of arranging the placement of different components of computer chips.

The researchers managed to use the reinforcement learning technique to design the next generation of Tensor Processing Units, Google’s specialized artificial intelligence processors.

The use of software in chip design is not new. But according to the Google researchers, the new reinforcement learning model “automatically generates chip floorplans that are superior or comparable to those produced by humans in all key metrics, including power consumption, performance and chip area.” And it does it in a fraction of the time it would take a human to do so.

The AI’s superiority to human performance has drawn a lot of attention. One media outlet described it as “artificial intelligence software that can design computer chips faster than humans can” and wrote that “a chip that would take humans months to design can be dreamed up by [Google’s] new AI in less than six hours.”

Another outlet wrote, “The virtuous cycle of AI designing chips for AI looks like it’s only just getting started.”

But while reading the paper, what amazed me was not the intricacy of the AI system used to design computer chips but the synergies between human and artificial intelligence.

The paper describes the problem as such: “Chip floorplanning involves placing netlists onto chip canvases (two-dimensional grids) so that performance metrics (for example, power consumption, timing, area and wirelength) are optimized, while adhering to hard constraints on density and routing congestion.”

Basically, what you want to do is place the components in the most optimal way. However, like any other problem, as the number of components in a chip grows, finding optimal designs becomes more difficult.

Existing software helps to speed up the process of discovering chip arrangements, but it falls short when the target chip grows in complexity. The researchers decided to draw on the way reinforcement learning has solved other complex spatial problems, such as the game Go.

“Chip floorplanning is analogous [emphasis mine] to a game with varying pieces (for example, netlist topologies, macro counts, macro sizes and aspect ratios), boards (varying canvas sizes and aspect ratios) and win conditions (relative importance of different evaluation metrics or different density and routing congestion constraints),” the researchers wrote.... ' 
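Google's actual system is a deep reinforcement learning agent trained on netlists; as a much simpler illustration of floorplanning as an optimization problem, here is a toy hill-climbing placer that minimizes half-perimeter wirelength on a small grid. The netlist format, grid size, and acceptance rule are all my own illustrative choices, not from the paper:

```python
import itertools
import random

def wirelength(positions, nets):
    """Total half-perimeter wirelength over all nets (lists of component ids)."""
    total = 0
    for net in nets:
        xs = [positions[c][0] for c in net]
        ys = [positions[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def greedy_floorplan(n_components, nets, grid=4, iters=2000, seed=0):
    """Place components on a grid, greedily accepting moves that cut wirelength."""
    rng = random.Random(seed)
    all_cells = list(itertools.product(range(grid), range(grid)))
    pos = dict(enumerate(rng.sample(all_cells, n_components)))
    best = wirelength(pos, nets)
    for _ in range(iters):
        c = rng.randrange(n_components)
        old, new = pos[c], rng.choice(all_cells)
        if new in pos.values():
            continue                      # keep one component per cell
        pos[c] = new
        cost = wirelength(pos, nets)
        if cost <= best:
            best = cost                   # keep a non-worsening move
        else:
            pos[c] = old                  # revert a worsening move
    return pos, best
```

A reinforcement learning agent differs in that it learns a placement policy that transfers across chips, rather than re-searching from scratch each time, which is where the speedup the paper reports comes from.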

Optimize Your Omnichannel Marketing Strategy

Useful look at the space via Wharton. 

How to Optimize Your Omnichannel Marketing Strategy


Wharton’s Raghuram Iyengar talks about his research on how firms can harness the full benefits of omnichannel marketing.

Audio Player (at the link) 



Omnichannel marketing seems like a simple enough concept. Consumers like to shop online, offline, and across different channels, so firms need to meet them wherever they are. But coming up with an omnichannel marketing strategy is a lot more complicated than just collecting cookies and tracking purchases. A new study that appears in a special issue of the Journal of Marketing in collaboration with the Marketing Science Institute explains why omnichannel is not a panacea.

There are three big challenges to making it work. Those challenges are outlined in the study, along with some solutions that include using machine learning and blockchain technology to harness the full benefits of omnichannel marketing. Wharton marketing professor Raghuram Iyengar is a co-author of the paper, titled “Informational Challenges in Omnichannel Marketing: Remedies and Future Research.” The other co-authors are: Tony Haitao Cui, marketing professor at the University of Minnesota’s Carlson School of Management; Anindya Ghose, marketing professor at New York University’s Stern School of Business; Hanna Halaburda, technology, operations and statistics professor also at NYU Stern; Koen Pauwels, marketing professor at Northeastern University’s D’Amore-McKim School of Business; S. Sriram, marketing professor at the University of Michigan’s Stephen M. Ross School of Business; Catherine Tucker, management and marketing professor at MIT Sloan School of Management; and Sriram Venkataraman, marketing professor at the University of North Carolina’s Kenan-Flagler Business School. 

Iyengar joined Knowledge@Wharton to talk about the findings. Listen to the full podcast at the top of this page or keep reading for an edited transcript of the conversation.

Knowledge@Wharton: Not only are firms trying to execute omnichannel marketing better, but researchers like you are trying to understand it better, even while the rapid evolution of technology makes that a moving target. What does this study add to the literature?

Raghuram Iyengar: Omnichannel certainly is a very hot topic. When companies are thinking about omnichannel, they sometimes want to think about distinguishing from multichannel. The big distinguishing aspect of it is multichannel has different ways in which you’re reaching the customer. Omnichannel is that as well, but it should be in synergy.

If you are, for example, a customer of REI, you might have a mobile application, you might have emails coming in. And if they are pursuing an omnichannel strategy, they are hoping that the customer is seeing different pieces of information in conjunction with each other and, in some sense, are complementary to each other.

Carrying that out is not that easy because you need to have a good sense of what the data is like — all the different touchpoints that the customer has had with REI or any other company — and then be able to execute it on the back end. Putting it all together is not as simple as it seems. ... ' 

Fusion Plants Rolling

I like to see this getting started, and the experience required to build and run it. What is the comparative cost of the power?

Nuclear energy: Fusion plant backed by Jeff Bezos to be built in UK   By Matt McGrath

A company backed by Amazon's Jeff Bezos is set to build a large-scale nuclear fusion demonstration plant in Oxfordshire.

Canada's General Fusion is one of the leading private firms aiming to turn the promise of fusion into a commercially viable energy source. The new facility will be built at Culham, home to the UK's national fusion research programme.  It won't generate power, but will be 70% the size of a commercial reactor. General Fusion will enter into a long-term commercial lease with the UK Atomic Energy Authority following the construction of the facility at the Culham campus.

While commercial details have not been disclosed, the development is said to cost around $400m. ... ' 

Emotions to Drive Autonomous Vehicles?

Some interesting things have come out of FAU; we participated in some of them. Here is another. But can human emotions be used to drive autonomously, even in part? Test it well.

Invention Uses Machine-Learned Human Emotions to 'Drive' Autonomous Vehicles  By Florida Atlantic University

Florida Atlantic University (FAU)'s Mehrdad Nojoumian has designed and patented new technology for autonomous systems that uses machine-learned human moods to respond to human emotions. Nojoumian's adaptive mood control system employs non-intrusive sensory solutions in semi- or fully autonomous vehicles to read the mood of drivers and passengers.

In-vehicle sensors collect data based on facial expressions and other emotional cues of the vehicle's occupants, then use real-time machine learning mechanisms to identify occupants' moods over time. The vehicle responds to perceived emotions by selecting a suitable driving mode (normal, cautious, or alert).

FAU's Stella Batalama said Nojoumian's system overcomes self-driving vehicles' inability to accurately forecast the behavior of other self-driving and human-driven vehicles.
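The article does not publish the system's logic, so this is only a hypothetical sketch of the final step it describes: mapping machine-perceived occupant mood scores to one of the three driving modes. The score keys and thresholds are invented for illustration, not taken from the patent:

```python
def select_driving_mode(mood_scores):
    """Map perceived occupant mood scores (0..1) to a driving mode.

    mood_scores: dict with hypothetical keys 'stress' and 'distress',
    assumed to come from an upstream ML model reading facial cues.
    Thresholds are illustrative, not from the patented system.
    """
    stress = mood_scores.get("stress", 0.0)
    distress = mood_scores.get("distress", 0.0)
    if distress > 0.7 or stress > 0.8:
        return "alert"       # occupants clearly uncomfortable: maximum caution
    if stress > 0.4:
        return "cautious"    # mild unease: soften the driving style
    return "normal"
```

The hard part, of course, is the upstream model that produces reliable scores in real time; the mode mapping itself is the easy end of the pipeline.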


Synthetic Biology

I touched on synthetic biology way back, though it was never used directly in the enterprise. A revisit to something that is apparently reemerging. Beyond Biomimicry.

Biodesign and Synthetic Biology

What Is Biodesign?

By Daniel Grushkin  in Issues.org

In 2009 Nature Biotechnology asked a group of synthetic biologists to define “synthetic biology.” None of the scientists could agree on a definition. Yet today the synthetic biology market—a field evidently without a widely accepted understanding of itself—is worth $9.5 billion. When Nature Biotechnology posed its question I was a reporter covering the emerging discipline. I soon realized that definitions are less important than the groups of people who gather around and advance a particular set of ideas.

So what, then, is “biodesign”? Today I would say it’s a big tent where everyone who self-identifies as a biodesigner can hang out. Of course before I founded the Biodesign Challenge in 2015, I probably would have said that it’s a design practice that incorporates biotechnology, or one that uses design to critique the biotech industry. Both of these definitions are accurate, but today I see the unbounded potential of the community of people as much as the possibilities within the ideas themselves.  .... ' 

AI Protecting Privacy

AI as a countermeasure to privacy loss.

AI Technology Protects Privacy

Technical University of Munich (Germany)

May 24, 2021

Technology developed by researchers at Germany's Technical University of Munich (TUM) ensures that the training of artificial intelligence (AI) algorithms does not infringe on patients' personal data. The team, collaborating with researchers at the U.K.'s Imperial College London and the OpenMined private AI technology nonprofit, integrated AI-based diagnostic processes for radiological image data that preserve privacy. TUM's Alexander Ziller said the models were trained in various hospitals on local data, so "data owners did not have to share their data and retained complete control." The researchers also used data aggregation to block the identification of institutions where the algorithm was trained, while a third technique was utilized to guarantee differential privacy. TUM's Rickmer Braren said, "It is often claimed that data protection and the utilization of data must always be in conflict. But we are now proving that this does not have to be true."  .... ' 
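As a rough sketch of the differential privacy idea mentioned above (not the TUM team's actual code), the classic Laplace mechanism adds calibrated noise to a query such as a mean, so no single record can be inferred from the released result:

```python
import random

def private_mean(values, epsilon, lower, upper, seed=None):
    """Epsilon-differentially-private mean via the Laplace mechanism (sketch).

    After clipping each value to [lower, upper], the mean of n values has
    sensitivity (upper - lower) / n, so Laplace noise with scale
    sensitivity / epsilon makes this single query epsilon-DP.
    """
    rng = random.Random(seed)
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    scale = (upper - lower) / (n * epsilon)
    # Laplace(0, scale) sampled as an exponential variate with a random sign.
    noise = rng.choice([-1, 1]) * rng.expovariate(1 / scale)
    return sum(clipped) / n + noise
```

Smaller epsilon means more noise and stronger privacy; the training-time guarantee the article describes composes many such noisy steps, which is considerably more involved than this one-query sketch.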

Friday, June 18, 2021

Tracking Missing Packages with AI

Good to see USPS in on this.

USPS Uses Edge AI Apps to Help Track Down Missing Packages Faster  By FedTech Magazine

If your package gets lost in the mail, artificial intelligence may soon be coming to the rescue to speed up its delivery.

The U.S. Postal Service has partnered with the technology company NVIDIA to roll out edge artificial intelligence applications that are helping the service track down packages in a matter of hours instead of days, according to an NVIDIA blog post.

Together with NVIDIA, the U.S.P.S. has stood up the Edge Computing Infrastructure Program to analyze the billions of images the postal service's processing centers generate. The program, which leverages the NVIDIA EGX platform, can process vast quantities of data and quickly help the U.S.P.S. hunt for packages.


Google's Method For Software Supply Chain Attacks

Quite interesting ... a 'software supply chain attack' inserts malware into software during its creation, transport, or inclusion in some system.

Google dishes out homemade SLSA, a recipe to thwart software supply-chain attacks  in TheRegister

Thomas Claburn in San Francisco Fri 18 Jun 2021 // 00:05 UTC

Google has proposed a framework called SLSA for dealing with supply chain attacks, a security risk exemplified by the recent compromise of the SolarWinds Orion IT monitoring platform.

SLSA – short for Supply chain Levels for Software Artifacts and pronounced "salsa" for those inclined to add convenience vowels – aspires to provide security guidance and programmatic assurance to help defend the software build and deployment process.

"The goal of SLSA is to improve the state of the industry, particularly open source, to defend against the most pressing integrity threats," said Kim Lewandowski, Google product manager, and Mark Lodato, Google software engineer, in a blog post on Wednesday. "With SLSA, consumers can make informed choices about the security posture of the software they consume."

Supply chain attacks – attempting to exploit weaknesses in the software creation and distribution pipeline – have surged recently. Beyond the SolarWinds incident and the exploitation of vulnerabilities in Apache Struts, there have been numerous attacks on software package registries like npm, PyPI, RubyGems, and Maven Central that house code libraries developers rely on to support complex applications.

According to security biz Sonatype [PDF], attacks on open source projects increased 430 per cent during 2020. One of the various plausible reasons is that compromising a dependency in a widely used library ensures broad distribution of malware. As noted in a 2019 TU Darmstadt research paper, the top five npm packages in 2018 "each reach between 134,774 and 166,086 other packages, making them an extremely attractive target for attackers."    ... ' 
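SLSA itself is a framework of requirements rather than code. As the simplest integrity check in its spirit, here is a sketch of verifying a downloaded artifact against a pinned SHA-256 digest; note that hash pinning only detects tampering in transit or at the registry, not a compromised build, which is exactly the gap SLSA's higher levels aim to close:

```python
import hashlib

def verify_artifact(path, expected_sha256):
    """Return True if the file at `path` matches the pinned SHA-256 digest.

    Streaming in chunks keeps memory flat for large artifacts.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

Package managers that support lockfiles do a version of this automatically; SLSA layers build-provenance attestations on top so you also know *how* and *where* the artifact was built.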

Differences Between Simulation and Digital Twins

Interesting. I have done lots of Monte Carlo-style simulation modeling, much less with digital twins. Is this a good description, and for what contexts? We also did agent-based modeling, which has similarities to digital twins. Does anyone with similar experiences have good thoughts? Point me to other resource examples of value. Want to collaborate on this, say on the choice of models or their combination?

What is the Difference Between a Simulation and a Digital Twin Model?

 15 Apr, 2020  in Exorint

The increasing use of emerging technology to simplify complex tasks has proved rewarding across every industry in diverse ways. This includes increased operational efficiency, automating manual tasks, training and validation, as well as data analysis. It is a known fact that the integration of emerging tech has brought on the fourth industrial revolution, in which data analytics and automation are important components. The digital transformation of traditional processes is also another aspect of Industrie 4.0 and, here, simulation and the digital twin play starring roles. But what are these roles?

This article will discuss:

The definition and application of simulation technology and the digital twin

The differences between a simulation and a digital twin

The symbiotic relationship between simulations and the digital twin

What is simulation?

In computing, simulations refer to digital models that imitate the operations or processes within a system. Such simulations are used for analyzing the performances of systems and the testing and implementation of new ideas. Engineers and technicians make use of simulations across a variety of industries to test products, systems, processes, and concepts.

In many circles, simulations are run using computer-aided design software applications. But for more advanced simulations with many variables, specialized simulation software is used. Typical examples of how simulations function include their use in finite element analysis and stress analysis. In the real world, these tests involve analyzing the effect of external pressure on metals or products to enhance their design or features.

Other types of simulations include discrete event simulations, stochastic simulations, and deterministic simulations. In these types, the variables used in running the simulation are either known or random. To run simulations, some level of digitization is needed. This process may involve only mathematical concepts or the design of 2D or 3D models representing assets within a process or a product. The simulation is then run by introducing variables into the digital environment or interface.
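A minimal example of the stochastic kind of simulation described above: a hypothetical single-server queue in discrete time, with arrival and service probabilities of my own choosing. Random variables drive the model, and repeated runs estimate a performance metric:

```python
import random

def simulate_queue(arrival_p, service_p, steps, seed=0):
    """Stochastic simulation of a single-server queue in discrete time.

    Each step, a customer arrives with probability arrival_p and the
    customer in service finishes with probability service_p.  Returns the
    average queue length, a typical performance metric such a model estimates.
    """
    rng = random.Random(seed)
    queue = 0
    total = 0
    for _ in range(steps):
        if rng.random() < arrival_p:
            queue += 1
        if queue > 0 and rng.random() < service_p:
            queue -= 1
        total += queue
    return total / steps
```

A deterministic simulation of the same system would replace the random draws with fixed rates; a discrete event simulation would jump between arrival and departure events instead of stepping uniformly in time.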

What is a digital twin?

In its basic form, a digital twin is the digital representation of physical or non-physical processes, systems, or objects. The digital twin also integrates all data produced or associated with the process or system it mirrors. Thus, it enables the transfer of data within its digital ecosystem, mirroring the data transfer that occurs in the real world. The data used in digital twins are generally collected from Internet of Things devices, edge hardware, HMIs, sensors, and other embedded devices. Thus, the captured data represents high-level information that integrates the behavioral pattern of digitized assets in the digital twin.

The real-time digital representation a digital twin provides serves as a world of its own. Within this digital world, all types of simulation can be run. It can also be used as a planning and scheduling tool for training, facility management, and the implementation of new ideas. This highlights the fact that a digital twin is a virtual environment; thus it must consist of either 2D or 3D assets or the data they produce or are expected to produce. In the modeled virtual environment, individuals can do what they choose, with few limitations, including running simulations.
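A minimal sketch of the distinction, using a hypothetical pump asset invented here for illustration: the twin continuously mirrors live telemetry, so any what-if simulation starts from the asset's current state rather than from a fixed model (the thermal constants below are made up):

```python
class PumpTwin:
    """Toy digital twin of a hypothetical pump (not from the article)."""

    def __init__(self):
        self.state = {"rpm": 0.0, "temp_c": 20.0}

    def ingest(self, reading):
        # Mirror the physical asset: update twin state from sensor data.
        self.state.update(reading)

    def simulate_temp(self, minutes, load=1.0):
        # What-if simulation seeded from the live state: heating from the
        # current rpm, cooling toward a 20 C ambient, per simulated minute.
        temp = self.state["temp_c"]
        for _ in range(minutes):
            temp += 0.05 * load * self.state["rpm"] / 1000 - 0.01 * (temp - 20.0)
        return temp
```

A standalone simulation is `simulate_temp` alone; the `ingest` loop, fed by real sensors, is what makes it a twin.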


Highlighting the differences between simulations and digital twins

Although the definitions of both concepts already highlight key differences, the use of case studies makes these differences more relatable. In 2019, CKE Holdings Inc., the parent company of Hardee’s and Carl’s Jr., was interested in enhancing productivity levels within these facilities. The idea was to make order picking by staff easier and to reduce shop-floor traffic through better layout designs.

While simulations can be used to analyze the shortest distance between workstations or the effects of more storage facilities within the restaurant, a digital twin can do much more. Using a digital twin, CKE Holdings Inc. was able to recreate digital representations of existing shop floors and run multiple simulations, design, and scheduling ideas to enhance productivity. This resulted in improvements in every aspect of the facility’s operation from staff training, scheduling, and meeting customer demands more efficiently.

This shows that while simulations may help with understanding what may happen when changes are introduced, a digital twin helps with understanding both what is currently happening and what may happen within a process. Some key differences include:

Real-time simulations – Traditional simulations are run in virtual environments that may be representations of physical environments but do not integrate real-time data. The regular transfer of information between a digital twin and its corresponding physical environment makes real-time simulation possible. This increases the accuracy of predictive analytical models and the management and monitoring policies of enterprises.

Enhancing product design – Advanced simulations have the capacity to analyze thousands of variables to provide diverse answers, but a digital twin can be used to achieve more. Boeing’s integration of digital-twin technology in aircraft design and production is an example of its capabilities. In this case, a digital twin was used to simulate parts of an aircraft to analyze how diverse materials will fare throughout the life-cycle of the aircraft. With these calculations, Boeing was able to achieve 40% improvement in the quality of certain parts it designed.

Optimize real-world products and processes – Every Tesla automobile running today has a digital twin that captures the large data sets each car produces. The captured data is used in optimizing design, predictive analytics, enhancing self-driving initiatives, and maintenance. This highlights how a digital twin immediately or directly affects the physical entity it represents unlike the theoretical results simulations provide.


Regardless of the path taken, the digital transformation of assets and processes enhances industrial effort in many ways. This includes refining product design, real-time troubleshooting, and implementing new ideas. To achieve a comprehensive digital transformation of existing or planned entities, systems, and processes, accurate data capture is required. Enter smart edge technology or devices.

The accuracy of a simulation or a digital twin relies heavily on the accuracy of the data used in designing its models. In today’s shop floors, data capture is being made possible by smart edge devices and human-machine interfaces, and only with these types of data can a digital transformation occur. ... '