
Saturday, July 31, 2021

Pepper Robot Shelved

And the much-touted Pepper robot from Japan's SoftBank Group has also been shelved. It was touted at one time as human-friendly in many soft domains. I saw this impressively demonstrated in Japan.

SoftBank Robotics Shelved   By John P. Desmond, Editor, AI Trends  

SoftBank has stopped production of its humanoid robot, named Pepper, and is downsizing its robotics staff from an acquisition in France.  

Some observers saw that in putting a human-looking robot on the market, SoftBank suggested that the robot could display more human intelligence than it was actually capable of, creating unreasonable expectations.   

Production of Pepper was stopped last year, according to a report from Reuters.     SoftBank produced 27,000 units of Pepper. Chief Executive Masayoshi Son planned to make SoftBank the leader in the robotics industry, to produce human-like machines that could serve customers and help babysit children.     ... ' 

Status of Autonomous Last-Mile

I had wondered about the status of this after I saw some neighborhood bans reported. I have even volunteered to look at the Nuro example and report on it in situ. There is risk in sharing sidewalks with people. Coming back? 

Last-Mile Delivery Robots Making a Comeback After Initial Bans  

July 22, 2021 , By John P. Desmond, Editor, AI Trends  

The last-mile delivery market for autonomous delivery robots is poised to make a comeback, with startups raising money and partnerships working to get needed permission from local governments.  

The autonomous vehicle delivery market was interrupted in 2017, when the city of San Francisco instituted a ban. Some pedestrians had complained that the delivery robots crowded the sidewalks and posed a hazard to humans.  

About a month after the first bot rolled down the sidewalk, San Francisco Supervisor Norman Yee proposed a ban on the use of the technology, citing public safety concerns, according to an account in ZDNet.    

“I resolutely believe that our sidewalks should be prioritized for humans,” Yee stated to the San Francisco Examiner. “We do not allow bicycles and Segways on our sidewalks.” (Segways were banned on city sidewalks in 2002, an action criticized as heavy-handed at the time.) Yee did not have the votes to pass the ban, so he settled for strict limitations at that time.  

He had heard from many pedestrians and some community activists about the risks of the delivery vehicles.  

“Sidewalks, I believe, are not playgrounds for the new remote controlled toys of the clever to make money and eliminate jobs,” stated Lorraine Petty, an activist with the community group Senior and Disability Action, at the hearing on Yee’s proposed rules. “They’re for us to walk.”  

“Not every innovation is all that great for society,” Yee stated at the hearing.  

Two years later, in 2019, grocery delivery company Postmates was given the first permit in San Francisco to test sidewalk delivery robots. The company worked with Yee for two years to get it done, according to a report in TechCrunch. 

The Postmates rover, called Serve, is semi-autonomous, with a human pilot monitoring the fleet and able to interact with customers via video chat. The robot, which has Velodyne Lidar sensors and an Nvidia Xavier processor, can carry 50 lbs and travel 30 miles on a single battery charge.  

The company worked on making Serve friendly. “We are spending a lot of time going in and refining and inventing new ways that Serve can communicate,” stated Ken Kocienda, an Apple veteran who had joined Postmates in 2019. (He now is product architect at Humane of San Francisco.) “We want to make it socially intelligent. We want people, when they see Serve going down the street, to smile at it and to be happy to see it there.”     ... ' 

Bayesian Keyboards

Nice piece on how keyboards are made smart.

A Look Inside the Bayesian Keyboard

A visual tour of the data-driven features that make modern smartphone keyboards smart

Daniel Buschek  in Towards Data Science

What you type is what you get? Not with modern touch keyboards. This article visually explains four features at the heart of your smartphone’s keyboard — including personalisation, auto-correction, and word prediction. Based on material I created for my “Intelligent User Interfaces” lecture, we examine the inner workings of these features in our daily typing, and conclude with takeaways for inspiring, evaluating, and critically reflecting on data-driven and “intelligent” user interfaces.  ... '
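The "Bayesian" in the title refers to the noisy-channel view of auto-correction: rank candidate words by P(word | typed), which is proportional to P(typed | word) * P(word). A minimal sketch of the idea, with a hypothetical vocabulary and frequency counts (not the article's actual model):

```python
from collections import Counter

# Hypothetical word-frequency counts standing in for a language model P(word).
counts = Counter({"the": 500, "them": 60, "then": 80, "than": 40, "thee": 2})
total = sum(counts.values())

def edit_distance(a, b):
    """Classic Levenshtein distance, used here as a crude likelihood proxy."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def correct(typed):
    # P(word | typed) is proportional to P(typed | word) * P(word); model the
    # noisy-channel likelihood P(typed | word) as decaying with edit distance.
    def score(word):
        prior = counts[word] / total
        likelihood = 0.1 ** edit_distance(typed, word)
        return prior * likelihood
    return max(counts, key=score)

print(correct("thwn"))  # "then": close in edit distance and reasonably frequent
```

Real keyboards use far richer language models and per-key touch-point likelihoods, but the ranking structure is the same.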

On Communication with Animals

Are we closer to communication with parrots, chimps, dolphins?  Will it be a key aspect of AI? 

On Communication By Vinton G. Cerf  in CACM.

Communications of the ACM, August 2021, Vol. 64 No. 8, Page 5  10.1145/3472146

As I write this, summer is upon us in the Northern Hemisphere. I have just attended an online lecture about non-human species communication, sponsored by the Interspecies Internet project (interspecies.io). While the primary objective of the project is to determine experimentally whether it is possible to demonstrate communication between non-human species, there is also considerable interest in understanding the nature of intraspecies communication. The lecturer, Ofer Tchernichovski, explored years of experience with zebra finches. Of particular interest were their songs and how they propagated through generations of "tutors" and "pupils" among families of finches. Among the interesting observations he made was a concern that we sometimes bring preconceived but unwarranted notions to science. For example, consider the way in which we might analyze bird songs. We make audio recordings and spectral Fourier diagrams of the songs. We segment these vocalizations as if they might represent phonemes, but our segmentation could be inappropriately influenced by what we know of human speech.

Linguists have learned a great deal about human speech, how it is produced, and how the phonemes give structure to utterances. Whether we can apply such structural assumptions to bird songs is a matter for research. Tchernichovski points out that an alien arriving on planet Earth, even if it is capable of sensing human speech, might not have any idea how to segment sounds into phonemes and words. Language is a concept that organizes sound into phonemes, words, and sentences representing structures that follow grammatical rules and from which semantic content can be derived. The alien might not have any a priori clue as to how human languages are expressed, parsed, and give rise to semantic meaning. If the alien itself has language, it might adopt a protocol for human language discovery, starting, for example, with self-identification. .... "

Friday, July 30, 2021

Ear-Worn eBP (Blood Pressure) Sensor

I had understood that such BP measurement methods were hard to do; my apologies if I was mistaken. Considerable detail is provided at the link.


eBP: An Ear-Worn Device for Frequent and Comfortable Blood Pressure Monitoring

We developed eBP to measure blood pressure from inside a user's ear aiming to minimize the measurement's impact on normal activities while maximizing its comfort...

By Nam Bui, Nhat Pham, Jessica Jacqueline Barnitz, Zhanan Zou, Phuc Nguyen, Hoang Truong, Taeho Kim, Nicholas Farrow, Anh Nguyen, Jianliang Xiao, Robin Deterding, Thang Dinh, Tam Vu

From Communications of the ACM | August 2021

Machine Learning and Science

 Good intro and positioning.   A bit of an overreach perhaps. 

How will machine learning change science?   by Tanya Petersen, Ecole Polytechnique Federale de Lausanne    in Techxplore

Machine learning has burst onto the scene in the past two decades and will be a defining technology of the future. It is transforming large sectors of society, including healthcare, education, transport, and food and industrial production, as well as having an enormous impact on science and research.

A subset of artificial intelligence, machine learning is a process that helps computers to learn without direct instruction, and from experience. It does this by using algorithms to identify patterns within data, which are then used to create models that can make predictions. And data is the key. Machine learning, and the spiraling availability of vast amounts of data, promises to revolutionize the production of knowledge. Indeed, today's exponential and virtuous cycle of growth in deep learning, among other technologies, has been compared to the Cambrian Explosion of half a billion years ago when life on Earth experienced a short period of very rapid diversification. ... ' 
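The pipeline the article describes (algorithms find patterns in data, which become a model that makes predictions) can be sketched with the simplest possible case, a least-squares line fit over hypothetical observations:

```python
# Toy "learning from data" loop: fit y = a*x + b from examples, then predict.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]          # hypothetical observations, roughly y = 2x

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

predict = lambda x: a * x + b       # the "model" distilled from the data
print(predict(5.0))                 # extrapolates; about 9.85 for these points
```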

A New Era for Mechanical CAD

A considerable piece. We actively worked with computer-aided design, and used Sketchpad too. I had always thought there would be better models of 'assistance' for its use. Early on we examined automated measures of design efficiency, and means to more readily integrate designs with existing contexts and uses. Technical. 

A New Era for Mechanical CAD

Time to move forward from decades-old design

Jessie Frazelle  in Queue.acm.org

CAD (computer-aided design) has been around since the 1950s. The first graphical CAD program, called Sketchpad, came out of MIT [designworldonline.com]. Since then, CAD has become essential to designing and manufacturing hardware products. Today, there are multiple types of CAD. This column focuses on mechanical CAD, used for mechanical engineering.

Digging into the history of computer graphics reveals some interesting connections between the most ambitious and notorious engineers. Ivan Sutherland, who won the Turing Award for Sketchpad in 1988, had Edwin Catmull as a student. Catmull and Pat Hanrahan won the Turing award for their contributions to computer graphics in 2019. This included their work at Pixar building RenderMan [pixar.com], which was licensed to other filmmakers. This led to innovations in hardware, software, and GPUs. Without these innovators, there would be no mechanical CAD, nor would animated films be as sophisticated as they are today. There wouldn't even be GPUs.

Modeling geometries has evolved greatly over time. Solids were first modeled as wireframes by representing the object by its edges, line curves, and vertices. This evolved into surface representation using faces, surfaces, edges, and vertices. Surface representation is valuable in robot path planning as well. Wireframe and surface representation contains only geometrical data. Today, modeling includes topological information to describe how the object is bounded and connected, and to describe its neighborhood. (A neighborhood of a point consists of a set of points containing that point where one can move some distance in any direction away from that point without leaving the set.)

OpenCascade, Parasolid, and ACIS are all boundary-representation (B-rep) kernels. A B-rep model is composed of geometry and topology information. The topology information differs depending on the program used. B-rep file formats include STEP (Standard for the Exchange of Product Model Data), IGES (Initial Graphics Exchange Specification), NX's prt, Solid Edge's par and asm, Creo's prt and asm, SolidWorks' sldprt and sldasm, Inventor's ipt and iam, and AutoCAD's dwg.

Visual representation (vis-rep) models tend to be much smaller in data size than B-rep models. This is because they do not contain as much structural or product management information. Vis-rep models are approximations of geometry and are composed of a mass of flat polygons. Vis-rep file formats include obj, STL, 3D XML, 3D PDF, COLLADA, and PLY.

CAD programs tend to use B-rep models, while animation, game development, augmented reality, and virtual reality tend to use vis-rep models. However, the two are interchanged frequently. For example, if you were using a B-rep model for manufacturing but wanted to load it into Apple's ARKit for some animations, you would first convert it to COLLADA, a vis-rep file format. The file should already be a bit smaller from dropping all the CAD data, but if you wanted to make it even smaller, you could tweak the polygon counts on each of the meshes for the various parts.
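The trade-off between exact B-rep geometry and polygonal vis-rep approximation can be made concrete with a toy calculation: a circle's B-rep is just a center and radius, while a tessellated version's accuracy depends entirely on polygon count. Illustrative numbers only; no real kernel works this simply.

```python
import math

def tessellation_error(radius, segments):
    """Max radial error when a circle (exact B-rep: just a radius) is
    approximated by a regular polygon (vis-rep: many flat edges)."""
    # The midpoint of each chord lies at radius * cos(pi / segments) from center.
    return radius * (1 - math.cos(math.pi / segments))

for segments in (8, 32, 128):
    err = tessellation_error(1.0, segments)
    print(f"{segments:4d} segments -> max error {err:.5f}")
```

Tweaking polygon counts, as in the ARKit export example above, is exactly this knob: fewer segments means a smaller file and a coarser approximation.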

The tools used to build with today are supported on the shoulders of giants, but a lot could be done to make them even better. At some point, mechanical CAD lost some of its roots of innovation. Let's dive into a few of the problems with the CAD programs that exist today and see how to make them better. .... ' 

Is the Dream of the Self-Driving Car Over?

Are we slipping on complete autonomy? I don't think so. It may take longer than expected, but it will be here. It has too many long-range benefits to ignore. It will also be delivered overseas, and the pressure to catch up will be large. 

Why Self-Driving Cars Could be Going the Way of the Jetpack By New Scientist, July 29, 2021

As revolutions go, this one has been rather lacking in revs. For the past decade or so, there have been confident predictions that gas-guzzling cars driven by accident-prone humans would soon be on the slip road to oblivion. The future of mobility was to be all-electric – and all-autonomous.

Electric cars are already on the move, although we must go much further and faster if we are to meet climate goals. Meanwhile, however, the "autonomous" bit seems to be stuttering, to say the least.

To be sure, some of the latest commercially available cars come with ever more computing smarts, such as adaptive cruise control, which allows for occasional hands-free use in very specific road conditions. But beyond a few small-scale tests of truly autonomous vehicles, drivers must keep their eyes and minds on the road at all times. A future where the average motorist can sit back, relax, even take a nap and let the car's computer get them all the way from home to work and back, say, seems barely on the horizon.

Some observers are now openly saying the dream of full autonomy is a mirage: creating robot vehicles able to tackle any kind of road or traffic situation is just too tough a nut to crack. Are they right? And if so, what exactly is keeping down the self-driving car?  ...

A series of high-profile accidents involving fatalities both inside and outside driverless cars has shaken faith in the idea that they are safer overall.  .... 

From New Scientist:      

Thursday, July 29, 2021

Cost of Data Breaches

Useful data on what it costs to have a data breach.

IBM Report: Data-Breach Costs Hit 17-Year High of $4.24M

Nancy Chenyizhi Liu | Editor in SdxCentral

July 28, 2021 12:01 AM

Data-breach costs jumped nearly 10% from an average of $3.86 million to $4.24 million per incident over the past year, according to IBM’s latest Cost of a Data Breach Report.    It marks the highest average total cost in this report’s 17-year history and the largest single-year increase in the last seven years. 

The 2021 Cost of a Data Breach Report  is based on analysis of 537 real-world data breaches in 17 different industries across 17 countries and regions that occurred between May 2020 and March 2021.

Despite the overall cost growth, organizations with more mature security postures that deployed tools including artificial intelligence (AI), automation, zero trust, and cloud security saw significantly lower costs.

IBM’s report indicates that around 35% of the surveyed organizations had implemented a zero-trust security approach, and 48% of those were in the mature stage. The average data breach cost for companies with a mature zero-trust strategy was $3.28 million, which was $1.76 million less than the ones without zero trust.    ... ' 
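The report's headline figures are easy to sanity-check:

```python
# Figures quoted from the IBM report above.
prev_cost, new_cost = 3.86, 4.24          # $M per incident
growth = (new_cost - prev_cost) / new_cost * 0 + (new_cost - prev_cost) / prev_cost
print(f"year-over-year increase: {growth:.1%}")   # about 9.8%, i.e. "nearly 10%"

mature_zero_trust = 3.28                  # $M average with mature zero trust
savings = 1.76                            # $M less than without zero trust
print(f"implied average without zero trust: ${mature_zero_trust + savings:.2f}M")
```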

IBM Weather Company Best Weather Forecast

We actively used weather prediction data and programs for supply chain models. I now use IBM's Weather Company almost daily. Quite good, very accurate, not perfect. Often small errors can make a big difference, and the risk of these small errors needs to be considered. Forecasting is still hard unless you can find and accurately quantify all inputs. Weather is usually only one of a number of things that need to be forecast to optimize supply chains.

IBM's Weather Company the Most Accurate Forecaster Overall  in PRNewswire.

NEW YORK and ATLANTA, July 29, 2021 /PRNewswire/ -- IBM (NYSE: IBM) and its subsidiary The Weather Company, which includes The Weather Channel digital properties, were determined to be "the overall most accurate provider globally" by ForecastWatch, a premier organization for evaluating the accuracy of weather forecasts. In its latest comprehensive study of forecast accuracy released today, Global and Regional Weather Forecast Accuracy Overview, 2017-2020, commissioned by IBM, ForecastWatch   ... '

Disrupting Ransomware by Disrupting Cryptocurrency

Good piece in Schneier.  Disrupt the chain of reward.  

Disrupting Ransomware by Disrupting Bitcoin

Ransomware isn’t new; the idea dates back to 1986 with the “Brain” computer virus. Now, it’s become the criminal business model of the internet for two reasons. The first is the realization that no one values data more than its original owner, and it makes more sense to ransom it back to them — sometimes with the added extortion of threatening to make it public — than it does to sell it to anyone else. The second is a safe way of collecting ransoms: bitcoin. ... 

With as usual some good comments.   ...

This essay was written with Nicholas Weaver, and previously appeared in Slate.com. 

Amex Uses Synthetic Data for Rare Fraud Patterns

Something we modeled even before machine learning to test 'nearness' to rare,  or even nonexistent patterns. 

Companies Beef Up AI Models with Synthetic Data  By The Wall Street Journal, July 28, 2021

Companies are building synthetic datasets when real-world data is unavailable to train artificial intelligence (AI) models to identify anomalies.

Dmitry Efimov at American Express (Amex) said researchers have spent several years researching synthetic data in order to enhance the credit-card company's AI-based fraud-detection models.  Amex is experimenting with generative adversarial networks to produce synthetic data on rare fraud patterns, which then can be applied to augment an existing dataset of fraud behaviors to improve general AI-based fraud-detection models.

Efimov said one AI model is used to generate new data, while a second model attempts to determine the data's authenticity. Efimov said early tests have demonstrated that the synthetic data improves the AI-based model's ability to identify specific types of fraud.
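The article describes generative adversarial networks; as a much simpler illustration of the same goal (synthesizing extra examples of a rare class to augment a dataset), here is a SMOTE-style interpolation sketch. This is not a GAN, and the feature vectors are hypothetical:

```python
import random

# Hypothetical numeric feature vectors for a rare fraud pattern.
rare_fraud = [[0.9, 0.1, 0.7], [0.8, 0.2, 0.9], [0.95, 0.15, 0.8]]

def synthesize(samples, n, seed=0):
    """SMOTE-style augmentation: interpolate between random pairs of
    real rare-class samples to create plausible synthetic ones."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        a, b = rng.sample(samples, 2)
        t = rng.random()
        out.append([x + t * (y - x) for x, y in zip(a, b)])
    return out

synthetic = synthesize(rare_fraud, 5)
print(len(synthetic), len(synthetic[0]))   # 5 synthetic vectors of 3 features
```

A GAN replaces the fixed interpolation rule with a learned generator that a second network constantly challenges, as the article describes.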

From The Wall Street Journal 

View Full Article - May Require Paid Subscription

What3Words Location Database

And we continue to follow the complexity, security and implications of determining location.  Here with a system called What3words, which I had not heard of before.  Described quickly below.  See also: What3words.com

Lost in L.A.? Fire Department Can Find You with What3words Location Technology

CNet, Stephen Shankland, July 22, 2021  in CACM

The Los Angeles Fire Department (LAFD) has entered into a partnership with digital location startup What3words, which assigns a unique three-word name to each of 57 trillion 10-foot-square spots on Earth. The department had been testing the application since last year, using it to locate places that emergency crews needed to reach even if the sites lacked conventional addresses. LAFD receives What3words locations through 911 calls on Android phones or iPhones, or through text messages sent by dispatchers with links that retrieve the three-word addresses. People also can use the What3words app to pinpoint their own locations. Increasing numbers of signs identify locations with their What3words designations, particularly in wildlands.
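The actual What3words algorithm and word list are proprietary; the core idea, though, is just mixed-radix encoding of a grid cell index over a word list. A toy sketch with a hypothetical four-word vocabulary:

```python
# Toy three-word addressing: map a grid cell index to a word triple and back.
# Not the real What3words scheme, which is proprietary; this only shows the
# mixed-radix idea with a tiny hypothetical vocabulary.
WORDS = ["apple", "river", "stone", "cloud"]
N = len(WORDS)

def cell_to_words(cell_index):
    a, rest = cell_index % N, cell_index // N
    b, c = rest % N, rest // N
    assert c < N, "cell index out of range for this tiny vocabulary"
    return (WORDS[a], WORDS[b], WORDS[c])

def words_to_cell(triple):
    a, b, c = (WORDS.index(w) for w in triple)
    return a + N * b + N * N * c

addr = cell_to_words(27)
print(addr, words_to_cell(addr))   # round-trips back to 27
```

With a vocabulary of 40,000 words, 40,000 cubed is about 6.4e13 triples, which is why a few tens of trillions of squares can each get a unique three-word name.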

Wednesday, July 28, 2021

Concerns about QR Codes

Good points. I had not known that some of this data was always gathered. We did formally test the use of QR codes in store to get information about products being purchased. But at the time, things like location data were not captured. Thinking back, it would have been easy to do that. And as mentioned in a previous comment here: 'Location data is always personal.' Good example below.

QR Codes Are Here to Stay. So Is the Tracking They Allow.   By The New York Times, July 27, 2021  in the CACM

When people enter Teeth, a bar in San Francisco's Mission neighborhood, the bouncer gives them options. They can order food and drinks at the bar, he says, or they can order via a QR code.

Each table at Teeth has a card emblazoned with the code, a pixelated black-and-white square. Customers simply scan it with their phone camera to open a website for the online menu. Then they can input their credit card information to pay, all without touching a paper menu or interacting with a server.

A scene like this was a rarity 18 months ago, but not anymore. "In 13 years of bar ownership in San Francisco, I've never seen a sea change like this that brought the majority of customers into a new behavior so quickly," said Ben Bleiman, Teeth's owner.

QR codes — essentially a kind of bar code that allows transactions to be touchless — have emerged as a permanent tech fixture from the coronavirus pandemic. Restaurants have adopted them en masse, retailers including CVS and Foot Locker have added them to checkout registers, and marketers have splashed them all over retail packaging, direct mail, billboards and TV advertisements. ... 

QR codes can store digital information such as when, where, and how often a scan occurs. They can also open an app or a website that then tracks people's personal information, or requires them to input it .... 
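The tracking described above typically rides in the URL the code opens. A hypothetical example, parsed with Python's standard library, shows how much a scan can reveal before any page script even runs:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical menu URL of the kind a restaurant QR code might encode: the
# path identifies the venue and table, and the query string carries
# campaign/tracking parameters the server logs on every scan.
url = "https://menu.example.com/teeth/table/12?utm_source=qr&utm_campaign=patio&visit=8841"

parts = urlparse(url)
params = {k: v[0] for k, v in parse_qs(parts.query).items()}
print(parts.path)    # which venue and table scanned
print(params)        # the tracking parameters riding along
```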

From The New York Times 

View Full Article

How are those Drone Swarm Light Shows Done?

I mentioned that we proposed drone and robotic swarms for industrial trials way back. Now we are seeing the same kind of thing in shows that replace fireworks, such as in the opening ceremonies of the Olympics. Spectacular entertainment. How are these shows produced, effectively and technically? Do they point to other 'swarm' activity beyond entertainment? I was pointed to some resources:

Technical Overview:  https://verge.aero/everything-about-drone-light-shows/ 

https://www.sciencedirect.com/search?qs=drone%20swarms    General articles on Drone Swarms

Intel services in creating and running Drone Swarm shows:
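Setting aside any particular vendor's tooling, the core of a show is choreography: each drone is assigned timed waypoints forming successive shapes. A toy sketch of a single formation, with illustrative numbers only:

```python
import math

def circle_formation(n_drones, radius, center=(0.0, 0.0), altitude=50.0):
    """Assign each drone an evenly spaced target point on a circle.
    A real show plans collision-free timed trajectories between many
    such formations; this only computes one formation's waypoints."""
    cx, cy = center
    pts = []
    for k in range(n_drones):
        theta = 2 * math.pi * k / n_drones
        pts.append((cx + radius * math.cos(theta),
                    cy + radius * math.sin(theta),
                    altitude))
    return pts

waypoints = circle_formation(8, radius=20.0)
print(len(waypoints), waypoints[0])   # 8 drones; first sits at (20.0, 0.0, 50.0)
```

Production systems layer on trajectory planning, GPS/RTK positioning, and per-drone light cues, but formations-as-waypoints is the underlying abstraction.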


Thermal Sensor on a Smartphone

Always looking for data, and thus for good sensors; we often used these for plant systems. Now smartphones give the opportunity for better portable applications. Here, a thermal sensor. Ultimately such IoT devices will become very important.

Team's Sensor Fit for Smartphones, Autonomous Vehicles

By National Research Council of Science & Technology

July 28, 2021

A joint research team from the Korea Institute of Science and Technology (KIST) and Sungkyunkwan University has developed a thermal-imaging sensor that overcomes existing problems of price and operating-temperature limitations. The sensor can operate at temperatures up to 100°C without a cooling device and could pave the way for applications in smartphones and autonomous vehicles.

To be integrated with the hardware of smartphones and autonomous vehicles, sensors must operate stably without difficulties at high temperatures of 85°C and 125°C, respectively. Conventional thermal-imaging sensors do not meet this criterion without a costly independent cooling device.

The team developed a device using a vanadium dioxide (B) film that is stable at 100°C. The device detects and converts the infrared light generated by heat into electrical signals, which eliminates the need for cooling devices.
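The detection principle described, heat shifting a material's electrical properties, can be sketched with a toy microbolometer readout. The numbers below are illustrative, not the paper's device parameters:

```python
# Toy microbolometer readout: absorbed IR warms the pixel, shifting its
# resistance by roughly R0 * TCR * dT; a bias current turns that into a
# voltage signal. Values are illustrative, not from the KIST device.
R0 = 100_000.0      # ohms, pixel resistance at reference temperature
TCR = -0.02         # per kelvin, a typical sign/magnitude for vanadium oxide
I_bias = 1e-6       # amps

def signal_voltage(dT):
    dR = R0 * TCR * dT
    return I_bias * dR

print(f"{signal_voltage(0.05) * 1e6:.1f} uV")   # a 50 mK scene change: -100.0 uV
```

The engineering challenge the paper addresses is keeping that resistance response stable at high ambient temperatures without a cooler.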

The work is described in "Wide-Temperature (Up to 100°C) Operation of Thermostable Vanadium Oxide Based Microbolometers with Ti/MgF2 Infrared Absorbing Layer for Long Wavelength Infrared (LWIR) Detection," published in the journal Applied Surface Science.

From National Research Council of Science & Technology

Full article.

Process Hacker Detects Intrusions

Some of my work now examines systems security and related issues. The podcast below covers 'Process Hacker', a tool that detects and notifies you when background services are added to your system. This is an 'expert' thing, and I am not recommending anyone else utilize it, but it appears very useful for detecting the kinds of malware now rampant. Will add later experiences.

https://twit.tv/shows/security-now/episodes/829?autostart=false  Security Now Podcast

Below from: https://www.grc.com/sn/SN-829-Notes.pdf   By Steve Gibson 

Windows’ Process Hacker

The Sentinel Labs guys discovered this whole HP printer driver mess when a tool they had running at the time, known as “Process Hacker” popped up a notification that a new “SSPORT” service had just been created as a result of something they were doing. I, for one, would love the idea of being proactively notified when something has just added a background service or driver to my system. So I wanted to take a moment to shine a light on the tool they used, known as “Process Hacker.”  ... '

Training as Maintenance

Much of my early work in the enterprise dealt with predictive maintenance in plant systems, so this article is of interest. McKinsey's take is reasonable. I can in particular agree that it makes sense to capture lots of time-based data and to continually upgrade the way this data is mined. We even tested some of the methods with human data ... can training be considered a maintenance task?  

 Prediction at scale: How industry can get more value out of maintenance  from McKinsey

Machines can now tell you when they aren’t feeling well. The challenge for today’s industrial players lies in applying advanced predictive-maintenance technologies across the full scope of their operations.  ... ' 
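One concrete version of "capture time-based data and mine it" is baseline-drift detection on sensor streams. A bare-bones sketch with hypothetical vibration data; real predictive-maintenance models are far richer:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, k=3.0):
    """Flag readings more than k sigma from a rolling-window baseline:
    a bare-bones stand-in for the predictive-maintenance models discussed."""
    alerts = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sd = mean(base), stdev(base)
        if sd > 0 and abs(readings[i] - mu) > k * sd:
            alerts.append(i)
    return alerts

# Hypothetical vibration readings: steady, then a spike worth inspecting.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 3.5, 1.0, 1.1]
print(flag_anomalies(vibration))   # index 7, the spike, is flagged
```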

Bamboo Children's Learning Skill

Taking a look at this. Alexa has not had enough children's voice learning skills.

Bamboo Learning Launches Comprehensive Educational Alexa Skill   By Eric Hal Schwartz in Voicebot.ai

Voice-based educational technology startup Bamboo Learning has debuted a new eponymous Alexa Skill for teaching kids from kindergarten through fifth grade. The new skill combines lessons on reading, language arts, and math, as opposed to the individual Alexa skills for those subjects already available, with families able to enroll multiple kids in different grades to take part in the lessons.


Bamboo Learning already offers Alexa skills and Google Assistant actions on several subjects. The new Bamboo Learning Alexa skill merges the Bamboo Math, Bamboo Books, and Bamboo English skills into a single curriculum containing millions of activities narrated by a Panda teacher. The Panda guides the children through the lessons, noting when they are correct and helping point them in the right direction if they get it wrong.  A family can enroll up to six users to take part in personalized lessons, with each child having their own unique animal avatar, to mark their educational progress. The lessons are designed to be purely audio but include images and text on Echo Show smart displays and Fire TVs. .. ' 

Tuesday, July 27, 2021

Detecting Deception with Machine Learning

Quite interesting, but my guess is that if something like this sees much use, it will be heavily regulated. Intro below; more at the link. 

Detecting Deception

By Sandrine Ceurstemont,   Commissioned by CACM Staff  in CACM

 People are not good at detecting when someone is lying. Studies have shown that our ability to perceive deception is barely greater than chance.  Wasiq Khan, a senior lecturer in artificial intelligence and data sciences at Liverpool John Moores University in the U.K., thinks that is partly because it requires the ability to identify complex clues in speech, facial movements, and gestures, attributes that he says "cannot be observed by humans easily."

Automated systems that use machine learning may be able to do better. Khan and his colleagues developed such a system while working on a project for the EU, where the aim was to explore new technologies that could be put in place to improve border control. They examined whether deception could be detected automatically from eye and facial cues, such as blink rate and gaze aversion. "I wanted to investigate whether face movements or eye movements are important," says Khan.

The team recorded videos of 100 participants to use as their dataset. The volunteers were filmed while role-playing a scenario that might occur at a nation's port of entry, in which they are asked about what they had packed in their suitcase by an avatar controlled by the researchers in another room. Half of the participants were asked to lie, and the other half were told to be truthful.   

The videos were then analyzed using an automated system called Silent Talker. It examined each video frame and used an algorithm to extract information from the interviewees about 36 face and eye movements. Results were noted in binary format where 1 could be assigned when the person's eyes were closed, for example, and 0 if they were open.  The team then tried to determine which facial and eye features were correlated with deception by using various clustering algorithms. "The video analysis is complex," says Khan.

The algorithms identified features that seemed to be most important for detecting deception, which all involved tiny eye movements. The team then trained three machine learning algorithms using both the more significant features and the total set of attributes. Eighty percent of the dataset was used for training, including 40 truthful and 40 deceitful interviews, while the remaining 20% was held for testing.

Khan and his colleagues found the machine learning methods were all able to predict deception quite well from the identified features. Overall accuracy ranged from 72% to 78% depending on the method, where the greatest accuracy was obtained by focusing solely on eye movements. "We identified that eye features are important and contain significant clues for deception," says Khan.  ... ' 
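The experimental pipeline described, binary feature vectors, an 80/20 split, and a simple learner, can be sketched end to end on synthetic data. This is not the Silent Talker system or its data, just the shape of the workflow:

```python
import random

def make_sample(rng, deceptive, n_features=36):
    """Synthetic stand-in for the 36 binary face/eye features: deceptive
    samples switch a few features on slightly more often."""
    p = 0.6 if deceptive else 0.4
    return [1 if rng.random() < p else 0 for _ in range(n_features)], deceptive

rng = random.Random(42)
data = [make_sample(rng, d) for d in [True, False] * 50]   # 100 "interviews"
train, test = data[:80], data[80:]                          # 80/20 split

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

c_lie = centroid([x for x, y in train if y])
c_true = centroid([x for x, y in train if not y])

def predict(x):
    # Nearest-centroid classification on the binary feature vector.
    d_lie = sum((a - b) ** 2 for a, b in zip(x, c_lie))
    d_true = sum((a - b) ** 2 for a, b in zip(x, c_true))
    return d_lie < d_true

accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"accuracy: {accuracy:.0%}")   # well above chance on this synthetic data
```

The 72-78% accuracy the researchers report sits in the same regime: useful signal, far from reliable enough to act on alone.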

IBM and Japan: Quantum System One

 More moves forward in Quantum Computing:

IBM and the University of Tokyo Unveil Japan's Most Powerful Quantum Computer in PRNewsWire

TOKYO, July 26, 2021 /PRNewswire/ -- IBM (NYSE: IBM) and the University of Tokyo today unveiled Japan's most powerful quantum computer as part of their ongoing collaboration to advance Japan's exploration of quantum science, business and education. "IBM is committed to the growth of the global quantum ecosystem," said Dr. Dario Gil, Director of IBM Research. The IBM Quantum System One is now operational for researchers at both scientific institutions and businesses in Japan, with access ... '

Designing Optimal Auctions Through Deep Learning

I have long been involved in auction protocols, so this interested me. Not quite sure how it would work in context, but intriguing. Technical.

Optimal Auctions Through Deep Learning  By Paul Dütting, Zhe Feng, Harikrishna Narasimhan, David C. Parkes, Sai S. Ravindranath

Communications of the ACM, August 2021, Vol. 64 No. 8, Pages 109-116  10.1145/3470442

Designing an incentive compatible auction that maximizes expected revenue is an intricate task. The single-item case was resolved in a seminal piece of work by Myerson in 1981. Even after 30–40 years of intense research, the problem remains unsolved for settings with two or more items. We overview recent research results that show how tools from deep learning are shaping up to become a powerful tool for the automated design of near-optimal auctions. In this approach, an auction is modeled as a multilayer neural network, with optimal auction design framed as a constrained learning problem that can be addressed with standard machine learning pipelines. Through this approach, it is possible to recover to a high degree of accuracy essentially all known analytically derived solutions for multi-item settings and obtain novel mechanisms for settings in which the optimal mechanism is unknown.  ... " 
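For the solved single-item case the abstract mentions, Myerson's answer is a second-price auction with a reserve price set where the virtual value v - (1 - F(v))/f(v) crosses zero. For values uniform on [0, 1] that reserve is 0.5, which a short numeric check confirms:

```python
# Myerson's single-item result, checked numerically for v ~ Uniform[0, 1]:
# virtual value phi(v) = v - (1 - F(v)) / f(v) = 2v - 1, which is zero at 0.5.
def virtual_value(v):          # uniform[0, 1]: F(v) = v, f(v) = 1
    return v - (1 - v) / 1.0

# Bisect for the root of phi, i.e. the optimal reserve price.
lo, hi = 0.0, 1.0
for _ in range(50):
    mid = (lo + hi) / 2
    if virtual_value(mid) < 0:
        lo = mid
    else:
        hi = mid
reserve = (lo + hi) / 2
print(round(reserve, 6))   # 0.5
```

The deep-learning approach in the paper effectively searches for such mechanisms in multi-item settings where no closed form like this is known.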

Cobot Designer for Robot Safety

A free available application.

Web-based design tool for better job safety

Research News / July 01, 2021  Fraunhofer

The safety of people interacting with robots has top priority, especially when humans and robots are working side by side instead of being separated from each other by safety fencing. The Fraunhofer Institute for Factory Operation and Automation IFF’s web-based design tool helps companies design their cobots. The Cobot Designer helps minimize the risk of accidents and increases employee safety. The tool is available as a free web application.

Humans and robots are sharing workspace in more and more sectors, whether they be manufacturing, logistics or medicine. Safety plays a major role in this. Up to now, range-finders on robots have prevented severe impacts or crushing when collisions occur but these sensors do not function when humans and machines have to stand close to each other, e.g. in subassembly. This requires other solutions. Teams of Fraunhofer IFF research scientists have developed a web-based application, the Cobot Designer, which ascertains the robot speeds that ensure safe collaboration. The design tool helps programmers design cobots safely. The project was contracted by the German Social Accident Insurance Institution for the Woodworking and Metalworking Industries (BGHM).  .... ' 

Action at a Distance Works

An approach that's already being used for some kinds of encrypted transmission.  Here is a largely non-technical explanation using Bell's Theorem.  

How Bell’s Theorem Proved ‘Spooky Action at a Distance’ Is Real

The root of today’s quantum revolution was John Stewart Bell’s 1964 theorem showing that quantum mechanics really permits instantaneous connections between far-apart locations. Spookiness indeed.  ... '

Monday, July 26, 2021

Towards a Battery Free Internet of Things

Some notes about how this might work.  Think it's inevitable that we will have many kinds of IoT devices delivering AI.  Further, how can we ensure these devices get security updates?

A Battery-Free Internet of Things,  By Esther Shein

Communications of the ACM, July 2021, Vol. 64 No. 7, Pages 16-18  10.1145/3464937

Introductory video:  https://youtu.be/gX9cbxLSOkE 

When NVIDIA purchased mobile-chip designer Arm Holdings from SoftBank last year, NVIDIA CEO Jensen Huang made the bold prediction that in the years ahead, there will be trillions of artificial intelligence (AI)-enabled Internet of Things (IoT) devices. Regardless of whether that holds true, it is safe to say the growth of IoT devices is exploding. All those devices will require power sources, and the way Josiah Hester sees it, that's problematic for the environment and society.

"When I see the 'trillion' number, I see a trillion dead batteries, basically," says Hester, an assistant professor of computer engineering at Northwestern University. "There's piles of batteries in landfills in China and elsewhere sitting there unrecycled; or they're put in furnaces and melted down, which is not a carbon-neutral event."

As a native Hawaiian, Hester also is concerned about the impact of micro-plastics and dead batteries turning up in oceans, and about lithium mining, which uses water supplies that people depend on to live. That got him thinking about how to design computer systems without batteries that instead harvest energy, thus reducing their carbon footprint and the impact on the environment.

Hester and other researchers at Northwestern designed a battery-free Nintendo Game Boy that is powered by button presses and sunlight, harvesting energy from the movement of tiny magnets and through tightly wound coils every time a user presses a button.

Now, the team is working on smart face masks that are powered by a person's breathing or movement, that will be able to capture heart or respiration rates, and also to determine whether the person is wearing the mask correctly.   ... ' 

Better Security Through Obfuscation

Have always been told, in essence, that this is a bad way to go: better to keep everything visible and open, exposing it to as many expert hackers as possible to examine and test, so any problems surface.  Though we have had examples where, even under such testing, bugs opening threats went unfound for years.  Is this a better way?  Technical.  Reading. 

Better Security Through Obfuscation   By Chris Edwards

Communications of the ACM, August 2021, Vol. 64 No. 8, Pages 13-15  10.1145/3469283

Last year, three mathematicians published a viable method for hiding the inner workings of software. The paper was a culmination of close to two decades of work by multiple teams around the world to show that concept could work. The quest now is to find a way to make indistinguishability obfuscation (iO) efficient enough to become a practical reality.

When it was first proposed, the value of iO was uncertain. Mathematicians had originally tried to find a way to implement a more intuitive form of obfuscation intended to prevent reverse engineering. If achievable, virtual black box (VBB) obfuscation would prevent a program from leaking any information other than the data it delivers from its outputs. Unfortunately, a seminal paper published in 2001 showed that it is impossible to guarantee VBB obfuscation for every possible type of program.

In the same paper, though, the authors showed that a weaker form they called iO was feasible. While iO does not promise to hide all the details of a logic circuit, as long as they are scrambled using iO, different circuits that perform the same function will leak the same information as each other; an attacker would not be able to tell which implementation is being used to provide the results they obtain.

"Our motivation in defining the notion of iO was that it escaped the impossibility result for VBB. However, we had no idea if iO could be constructed, and even if it could be constructed, would it be useful for applications," says Boaz Barak, George McKay professor of computer science in the John A. Paulson School of Engineering and Applied Sciences at Harvard University, and co-author of the 2001 paper on VBB.

Whatever its utility, for more than a decade iO seemed to be out of reach. A major breakthrough came in 2013, when a team came up with a candidate construction and described a functional-encryption protocol that could be built on top of it. This was quickly followed by a slew of proposals for applications that could make use of iO.

One possible application is functional encryption, which makes it possible to selectively hide parts of the same program or data from different users through the use of different decryption keys. This could provide far more fine-grained protection than conventional encryption, where a single key unlocks everything encrypted with it. Other more exotic forms of encryption enabled by iO include deniable encryption, where a user could provide a false key that appears to work but does not reveal information secured by a true key.   ... '
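A toy illustration of the property iO is built on: structurally different circuits that compute the same function. The snippet demonstrates only the functional equivalence, not the obfuscation itself, which is the hard mathematical part.

```python
# Two structurally different "circuits" computing the same 3-input majority
# function. An indistinguishability obfuscator guarantees that obfuscations
# of functionally equivalent circuits are computationally indistinguishable,
# so an attacker cannot tell which implementation produced the results.
from itertools import product

def circuit_a(x, y, z):
    # majority built from AND/OR gates
    return (x and y) or (y and z) or (x and z)

def circuit_b(x, y, z):
    # same majority function, different structure: count the ones
    return (x + y + z) >= 2

assert all(bool(circuit_a(*bits)) == bool(circuit_b(*bits))
           for bits in product([0, 1], repeat=3))
print("circuits are functionally equivalent on all 8 inputs")
```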

One related paper: https://www.boazbarak.org/Papers/obfuscate.pdf more at the link.

Border Gateway Protocol

 New to me, a useful description here.

Fixing the Internet  By Keith Kirkpatrick

Video description:  https://youtu.be/A1KXPpqlNZ4  

Communications of the ACM, August 2021, Vol. 64 No. 8, Pages 16-17  10.1145/3469287

Few people pay much attention to how the electrical grid works until there is an outage. The same is often true for the Internet.

Yet unlike the electrical grid, where direct attacks are infrequent, vulnerabilities and security issues with the Internet's routing protocol have led to numerous, frequent malicious attacks that have resulted in widespread service outages, intercepted and stolen personal data, and the use of seemingly legitimate Web sites to launch massive spam campaigns.

The Internet is an interconnected global network of autonomous systems or network operators, like Internet service providers (ISPs), corporate networks, content delivery networks (such as Hulu or Netflix), and cloud computing companies such as Google and Microsoft Cloud. The Border Gateway Protocol (BGP) is used to ensure data can be directed between networks along the most efficient path, similar to how a GPS navigation system maintains a database of street addresses and can assess distance and congestion when selecting the optimal route to a destination.

Each autonomous system connected to the Internet has an Internet Protocol (IP) address, which is its network interface, and provides the location of the host within the network; this allows other networks to establish a path to that host. BGP routers managed by an ISP control the flow of data packets containing content between networks, and maintains a standard routing table used to direct packets in transit. BGP makes routing decisions based on paths, rules, or network policies configured by each network's administrator.

BGP was first described in a document assembled by the Internet Society's Network Working Group in June 1989 and was first put into use in 1994. BGP is extremely scalable, allowing tens of thousands of networks around the world to be connected together, and if a router or path becomes unavailable, it can quickly adapt to send packets through another reconnection. However, because the protocol was designed and still operates on a trust model that accepts that any information exchanged by networks is always valid, it remains susceptible to issues such as information exchange failures due to improperly formatted or incorrect data. BGP can also be at the mercy of routers too slow to respond to updates, or that run out of memory or storage, situations that can cause network timeouts, bad routing requests, and processing problems.   ....   ' 
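A toy sketch of one of BGP's path-selection rules, preferring the shortest AS path among advertised routes. Real BGP applies local preference and other policy rules first; the prefix, addresses, and AS numbers here are illustrative.

```python
# Minimal illustration of BGP best-path selection by AS-path length.
# Each advertised route carries the next hop and the chain of autonomous
# systems the announcement traversed.
routes = {
    "203.0.113.0/24": [
        {"next_hop": "10.0.0.1", "as_path": [64512, 64513, 64514]},
        {"next_hop": "10.0.0.2", "as_path": [64515, 64514]},
    ],
}

def best_route(prefix, table):
    # Shorter AS path wins (one of several tie-breakers in real BGP).
    return min(table[prefix], key=lambda r: len(r["as_path"]))

best = best_route("203.0.113.0/24", routes)
print(best["next_hop"])  # the two-hop path wins: 10.0.0.2
```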

More Money Moving into Autonomous Trucking

Why?  More directly calculable ROI; less, and more specific, regulation; less training of the users.  Leads toward simplification. 

Trucks Move Past Cars on the Road to Autonomy

By Wired, July 26, 2021, in CACM

The makers of self-driving trucks are focusing on comparatively simpler highway driving, and expect human drivers to handle local streets.

In 2016, three veterans of the still young autonomous vehicle industry formed Aurora, a startup focused on developing self-driving cars. Partnerships followed with major automakers, including Hyundai and Volkswagen. CEO Chris Urmson said at the time that the link-ups would help the company bring "mobility as a service" to urban areas—Uber-like rides without a human behind the wheel.

But by late 2019, Aurora's emphasis had shifted. It said self-driving trucks, not cars, would be quicker to hit public roads en masse. Its executives, who had steadfastly refused to provide a timeline for their self-driving-car software, now say trucks equipped with its "Aurora Driver" will hit the roads in 2023 or 2024, with ride-hail vehicles following a year or two later. This month, the company announced it would go public via a reverse merger, raising $2 billion in the process. "We have a team that really understands how hard this problem is," says Urmson.

The move points to a growing consensus in the industry: If self-driving vehicles are going to happen, trucks will likely arrive before cars.

From Wired 

Origami Comes to Robotic Life

Soft animated modeling built in origami.

Origami Comes to Life with Shape-Changing Materials

University of Colorado Boulder, Daniel Strain, July 20, 2021

University of Colorado Boulder (CU Boulder) researchers have developed paper-thin, shape-changing objects that could lead to books in which origami figures fly off the page. These "Electriflow" designs, which include origami cranes, flowers, and butterflies, were inspired by soft robotic artificial muscles developed at CU Boulder, which do not require motors or mechanical parts. Eric Acome of Artimus, commercial supplier of the artificial muscles, said, "They're just pouches, but depending on the shape of that pouch, you can generate different kinds of movement." CU Boulder's Purnendu said, "This system is very close to what we see in nature. We're pushing the boundaries of how humans and machines can interact."

Sunday, July 25, 2021

DeepMind Releases Accurate Picture of Human Proteome

Was briefly involved in a discussion of protein folding structure prediction, so I have an appreciation of the complexity.  Have been told this is a very big deal, and also a future direction.  Good thing to watch, as I do.

DeepMind Releases Accurate Picture of the Human Proteome   By SciTechDaily,  July 23, 2021

DeepMind today announced its partnership with the European Molecular Biology Laboratory (EMBL), Europe's flagship laboratory for the life sciences, to make the most complete and accurate database yet of predicted protein structure models for the human proteome. This will cover all ~20,000 proteins expressed by the human genome, and the data will be freely and openly available to the scientific community. The database and artificial intelligence system provide structural biologists with powerful new tools for examining a protein's three-dimensional structure, and offer a treasure trove of data that could unlock future advances and herald a new era for AI-enabled biology.

AlphaFold's recognition in December 2020 by the organizers of the Critical Assessment of protein Structure Prediction (CASP) benchmark as a solution to the 50-year-old grand challenge of protein structure prediction was a stunning breakthrough for the field. The AlphaFold Protein Structure Database builds on this innovation and the discoveries of generations of scientists, from the early pioneers of protein imaging and crystallography, to the thousands of prediction specialists and structural biologists who've spent years experimenting with proteins since. The database dramatically expands the accumulated knowledge of protein structures, more than doubling the number of high-accuracy human protein structures available to researchers. Advancing the understanding of these building blocks of life, which underpin every biological process in every living thing, will help enable researchers across a huge variety of fields to accelerate their work.... 

The ability to predict a protein's shape computationally from its amino acid sequence is already helping scientists to achieve in months what previously took years. .... 

"...  The proteome is the entire set of proteins that is, or can be, expressed by a genome, cell, tissue, or organism at a certain time. It is the set of expressed proteins in a given type of cell or organism, at a given time, under defined conditions. Proteomics is the study of the proteome. ... "  WKP

From SciTechDaily

See further in ScienceMag: 

Protein structures. Public Database of AI-Predicted Protein Structures Could Transform Biology

By Robert F. Service

Non-Fungible Tokens

I heard about NFTs perhaps a year ago.  Someone I know was pushing investment in them.  NFTs are blockchain-style cryptographic 'tags' for unique digital things.  The tag acts as a certification of uniqueness.  See Wikipedia's description:  https://en.wikipedia.org/wiki/Non-fungible_token  

I see that the fabled Andreessen Horowitz is investing in a marketplace of NFTs.

https://a16z.com/2021/04/02/nfts-readings-resources/      Their reading list (canon) about NFTs

https://www.coindesk.com/nft-marketplace-opensea-valued-at-1-5b-in-100m-funding-round-led-by-a16z     - Their investment in a marketplace of  NFTs. 

Mean anything?
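A toy data model of the idea, with hypothetical names throughout. Real NFTs live on a blockchain via smart-contract standards such as ERC-721; this only shows the core: a token ID derived from the item's content hash, plus a registry recording who owns each token.

```python
# Toy NFT registry: mint a token for a digital item and transfer ownership.
import hashlib

registry = {}  # token_id -> owner

def mint(item_bytes, owner):
    token_id = hashlib.sha256(item_bytes).hexdigest()  # content hash as unique tag
    if token_id in registry:
        raise ValueError("token already minted")
    registry[token_id] = owner
    return token_id

def transfer(token_id, sender, recipient):
    if registry.get(token_id) != sender:
        raise PermissionError("sender does not own this token")
    registry[token_id] = recipient

art = b"a unique digital artwork"
tid = mint(art, "alice")
transfer(tid, "alice", "bob")
print(registry[tid])  # bob
```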

Hiding Malware in Artificial Neurons

 My areas of interest have always included machine learning, neural networks, steganography and security.  So I read this with interest.  Not sure how it would work in practice.  Following up.

(When I say I am following up, I may or may not include findings in later posts at my discretion.  I will if I think it's particularly useful.  Do let me know if you have interest.) 


Researchers Hid Malware Inside an AI's 'Neurons' And It Worked Scarily Well   July 23, 2021

The authors concluded that a 178MB AlexNet model can have up to 36.9MB of malware embedded into its structure without being detected using a technique called steganography. ... 

Neural networks could be the next frontier for malware campaigns as they become more widely used, according to a new study. 

According to the study, which was posted to the arXiv preprint server  on Monday, malware can be embedded directly into the artificial neurons that make up machine learning models in a way that keeps them from being detected. The neural network would even be able to continue performing its set tasks normally.

"As neural networks become more widely used, this method will be universal in delivering malware in the future," the authors, from the University of the Chinese Academy of Sciences, write.

View Full Article
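A toy sketch of the general idea, not the paper's exact technique: hide payload bytes in the least-significant mantissa byte of float32 weights. Changing only those low-order bits barely perturbs each weight, so the model's behavior is almost unchanged while the payload rides along undetected by casual inspection.

```python
# Hide bytes in the low-order byte of float32 weights, then extract them.
import numpy as np

weights = np.random.default_rng(2).normal(size=16).astype(np.float32)
payload = b"malware?"  # 8 bytes -> needs 8 weights

original = weights.copy()
raw = weights.view(np.uint8).copy().reshape(-1, 4)  # 4 bytes per float32
raw[: len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)  # overwrite LSBs
stego = raw.reshape(-1).view(np.float32)

extracted = stego.view(np.uint8).reshape(-1, 4)[: len(payload), 0].tobytes()
print(extracted)                         # the payload survives intact
print(np.max(np.abs(stego - original)))  # perturbation is tiny
```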

Real Time AI

Thinking about this. 

CoCoPIE: Enabling Real-Time AI on Off-the-Shelf Mobile Devices via Compression-Compilation Co-Design   By Hui Guan, Shaoshan Liu, Xiaolong Ma, Wei Niu, Bin Ren, Xipeng Shen, Yanzhi Wang, Pu Zhao

Communications of the ACM, June 2021, Vol. 64 No. 6, Pages 62-68  10.1145/3418297

Many believe the company that enables real intelligence on end devices (such as mobile and IoT devices) will define the future of computing. Racing toward this goal, many companies, whether tech giants such as Google, Microsoft, Amazon, Apple and Facebook, or startups, spend tens of billions of dollars each year on R&D.

Assuming hardware is the major constraint for enabling real-time mobile intelligence, more companies dedicate their main efforts to developing specialized hardware accelerators for machine learning and inference. Billions of dollars have been spent to fuel this intelligent hardware race.

This article challenges the view. By drawing on a recent real-time AI optimization framework CoCoPIE, it maintains that with effective compression-compiler co-design, a large potential is yet left untapped in enabling real-time artificial intelligence (AI) on mainstream end devices.

The principle of compression-compilation co-design is to design the compression of deep learning models and their compilation to executables in a hand-in-hand manner. This synergistic method can effectively optimize both the size and speed of deep learning models, and also can dramatically shorten the tuning time of the compression process, largely reducing the time to the market of AI products. When applied to models running on mainstream end devices, the method can produce real-time experience across a set of AI applications that had been broadly perceived possible only with special AI accelerators.

Foregoing the need for special hardware for real-time AI has some profound implications, thanks to the multifold advantages of mainstream processors over special hardware:  ...'
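A sketch of the "compression" half of the co-design. CoCoPIE itself uses pattern-based pruning matched to its compiler; this shows only generic magnitude pruning plus linear 8-bit quantization, the two standard ingredients for shrinking and speeding up a model.

```python
# Compress a weight matrix: magnitude pruning, then 8-bit quantization.
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(64, 64)).astype(np.float32)

# 1. Magnitude pruning: zero out the 80% of weights smallest in magnitude.
threshold = np.quantile(np.abs(W), 0.8)
pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 2. Linear 8-bit quantization of the surviving weights.
scale = np.abs(pruned).max() / 127
q = np.round(pruned / scale).astype(np.int8)   # what gets stored/executed
dequant = q.astype(np.float32) * scale          # reconstruction for comparison

sparsity = (pruned == 0).mean()
err = np.abs(dequant - pruned).max()
print(f"sparsity={sparsity:.0%}, max quantization error={err:.4f}")
```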

Power Shift to Employees?

Tech Pay Survey Shows Power Shift to Employees, By The Information, July 23, 2021

The technology-business boom is giving white-collar workers more leverage with employers.

At tech companies, the traditional practice of allowing employees' shares to vest only after a worker reaches a full year of employment appears to be eroding. More than 20% of surveyed managers and employees said their stock begins to vest right away or after the end of their third month of employment.

The findings from a survey about compensation and workplace practices show how the balance of power has started to shift toward employees amid the surge of capital for startups and record earnings of bigger companies.

Separately, the majority of respondents said their companies rarely use hiring practices that are shown to improve diversity ....

From The Information  

Which Companies are Transforming Work?

Useful thoughts.    What have we really transformed to? 

Readers Ask: Which Companies Are Transforming Work?   by Kristen Senz  in HBSWK

Joseph Fuller answers readers' questions about automation, virtual internships, and the future of work on Working Knowledge’s “Office Hours” series.

The COVID-19 pandemic accelerated workforce shifts that had been gaining momentum before the public health crisis, thrusting employers and workers into a new era within months.

Joseph Fuller, a professor at Harvard Business School and co-leader of the School’s Managing the Future of Work initiative, recently answered reader questions on Instagram, as part of our ongoing “Office Hours” series.

Fuller’s research probes the "skills gap" and the paradox that many employers struggle to fill jobs while millions of Americans remain unemployed, underemployed, or have left the workforce. He has also written about the caregiving roles of employees and strengthening the education-to-work pipeline.

The following is a transcript of questions posed by Instagram users and Fuller’s answers:

Should we be worried about the current “worker shortage”?

"We should absolutely be worried about the current worker shortage, for three or four reasons. One is it reflects in part stagnation and actually decline of the workforce participation rate. We have a large number of prime working age adults, particularly males, who are neither in school, in employment, or somehow involved in the criminal justice system. They’re just not working.


"The second is that a lot of the shortage of workers is coming in the form of digital natives who can deal with the types of technologies that are going to be more and more prevalent in the workplace. We do a very poor job of teaching that in the United States—giving people a basic familiarity with digital technology that they’re going to need going forward.

"The final thing is that a lot of this shortage covers up the fact that there’s a significant population of lower-skilled, lower educational-attainment workers out there, many who speak English as a second language, who have worked in service industries that have been very, very badly damaged by COVID. They’re going to have a hard time finding work that accommodates their skills and pays them decently going forward."

Are there any companies standing out from the others in how they’re transforming work?

"Through COVID, the two companies that jump out at me as being most innovative in their work practices and in mobilizing people are companies of great size that have been hugely affected by COVID positively—namely, Walmart and Amazon.

"I know those are not always companies that people view favorably in terms of work, but in the last 10 years, Walmart has been an amazing innovator in the space in terms of skills development and upskilling its own workforce. Amazon has shown incredible resilience in scaling up its workforce and using technology to make people more productive and to enhance workplace safety."

Is automation a threat to jobs? If so, how should government, business, and employees respond? ... 

Saturday, July 24, 2021

Securing Industrial Operations at Scale


Now is the time to secure industrial operations at scale!   Internet of Things (IoT)  

By Fabien Maisl   in Cisco.

Cyber attacks on industrial organizations and critical infrastructures are now making headlines regularly. As Talos pointed out in a recent research blog, we’ve entered a new world of critical infrastructure security where threat actors are structured businesses.

Nevertheless, few industrial organizations have implemented comprehensive security programs to protect their operational technologies (OT) and even fewer have deployed at scale. At the same time, the pandemic is highlighting how digital transformation can help industries be more agile and transform their infrastructure to operate in the new normal. But for this to happen, industrial networks must have a strong security foundation.

For all these reasons, we’re seeing heightened demand from industrial organizations all over the world for a new generation of OT security solutions. They all include a mix of similar requirements:

Easy to deploy throughout the industrial network, without added costs or complexity to the existing infrastructure

Provide comprehensive visibility into OT devices so security policies can be built for the industrial network

Help teams focus on immediate threats so they can prioritize actions and quickly improve the organization’s security posture even if they are not experts in OT or cybersecurity

Scale to massive deployments so that the entire organization can be protected properly

Integrate seamlessly with existing IT security tools so that a converged IT/OT security strategy can be implemented

With the release of Cyber Vision 4.0, Cisco is offering a unique OT security solution that addresses these requirements, that will be made available beginning July 2021.  ... ' 

Understanding Behavior and What May Happen Next

It is good to know what is going to happen next.  Prediction, though, can be hard.

Metropolis Spotlight: Viisights Uses AI to Understand Behavior and Predict What Might Happen Next

By Shiri Gordon and Debraj Sinha   in NVIDIA Developer

Developers can use the new viisights wise application powered by GPU-accelerated AI technologies to help organizations and municipalities worldwide avoid operational hazards and risk in their most valuable spaces. 

With conventional imaging and analytics solutions, AI vision is used to detect the presence of people and objects and their basic attributes to better understand how a physical space is being used. For example, is there a pedestrian, cyclist, or vehicle on a roadway? While understanding what’s happening on a road or in a retail store is important, predicting what might happen next is critical to taking this vision to the next level and building even smarter spaces. 

viisights — an NVIDIA Metropolis partner — helps cities and enterprises make better decisions faster by offering an AI-enabled platform that analyzes complex behavior within video content. viisights uses the NVIDIA Deepstream SDK to accelerate their development workflow for its video analytics pipeline, cutting development time by half. DeepStream is a path to developing highly optimized video analysis pipelines including video decoding, multi-stream processing and accelerated image transformations in real-time, critical for high performance video analytics applications. For deployment, viisights uses NVIDIA TensorRT to further accelerate their application’s inference throughput at the edge with INT8 calibration. 

The viisights Wise application recognizes and classifies over 200 different types of objects and dozens of different types of actions and events. The ability to recognize that a person is running, falling, or throwing an object provides a far richer and more holistic understanding of how spaces are being used. By using viisights Wise, operations teams can gain deeper insights and identify behavioral patterns that can predict what might happen next. viisights Wise can process over 20 video streams on a single GPU in near real-time.

viisights technology protects public privacy, as it only analyzes general behavior patterns of individuals and groups and does not identify specifics like faces or license plates. Because the application only analyzes behavior patterns, viisights technology is incredibly easy to use, further increasing ROI.

Real-world situations and use cases for viisights Wise include:    ... 

Total Artificial Heart Implanted

 Seems quite a considerable coup. 

Total Artificial Heart Successfully Transplanted in U.S.

By Interesting Engineering, July 23, 2021

Duke University Hospital surgeons successfully transplanted a total artificial heart (TAH) developed by France's CARMAT into a 39-year-old patient who had suffered sudden heart failure.

The TAH both resembles and functions like the human heart. Actuator fluid carried in a bag outside the body is responsible for the heartbeat, and sensors and microprocessors on the heart trigger its micropumps based on patient need.  The TAH is connected to the aorta and the pulmonary artery through two outlets.

To keep the heart powered, the patient will need to carry a nearly nine-pound bag containing a controller and two chargeable battery packs.  The TAH has received primary approval for testing from the U.S. Food and Drug Administration, and was approved for use in Europe for patients expected to receive a heart transplant within 180 days.

From Interesting Engineering 

Friday, July 23, 2021

Inside the Industry that Unmasks People at Scale

Article on this brought to my attention by Bruce Schneier  ... 

Inside the Industry that Unmasks People at Scale

Unique IDs linked to phones are supposed to be anonymous. But there’s an entire industry that links them to real people and their address.

By Joseph Cox

Tech companies have repeatedly reassured the public that trackers used to follow smartphone users through apps are anonymous or at least pseudonymous, not directly identifying the person using the phone. But what they don't mention is that an entire overlooked industry exists to purposefully and explicitly shatter that anonymity.

They do this by linking mobile advertising IDs (MAIDs) collected by apps to a person's full name, physical address, and other personal identifiable information (PII). Motherboard confirmed this by posing as a potential customer to a company that offers linking MAIDs to PII.  ... ' 

Quoted from the comments:  ' ... Multi-dimensional data is never anonymous. Location data is the worst kind. The simple truth is, there can only be a few persons in a certain spot. If you know a few spots, you have a close to 100% certainty who that person was. ...'  
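A small simulation of the commenter's point, with made-up users and places: a handful of known (time, place) observations usually narrows ten thousand traces down to a single person.

```python
# How many of 10,000 simulated users match k known (time, place) points?
import random

random.seed(4)
n_users, n_obs, n_places = 10_000, 50, 1_000
# Each user's trace: the place visited at each of 50 time slots.
traces = [[random.randrange(n_places) for _ in range(n_obs)]
          for _ in range(n_users)]

target = traces[0]
for k in (1, 2, 3, 4):
    known = [(t, target[t]) for t in range(k)]  # k known observations
    matches = sum(all(tr[t] == p for t, p in known) for tr in traces)
    print(f"{k} known points -> {matches} matching user(s)")
```

With one known point, roughly a dozen users match; by four points, almost certainly only the target remains.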

5G Telemedicine and the Military

Brought to my attention by Walter Riker.  Will ask for more comment from the VA, sure it is being looked at. 

Telemedicine with 5G could be a gamechanger for military health

FederalNewsNetwork, July 22, 2021

Telehealth became an even bigger industry during COVID-19. Doctors were forced to think of creative ways to see patients as people were forced to stay home to avoid the spread of the virus.

However, as 5G is starting to roll out, telehealth may be breaking into a completely new plane. At Joint Base San Antonio (JBSA) the Air Force is testing capabilities that could be the future of medicine.

“5G brings a whole new paradigm and architecture to the table. From what we’ve seen before even up through the current 5G  non-standalone that you see advertised on TV today,” Jody Little, executive program manager for 5G NextGen at JBSA, said during a Federal Insights discussion sponsored by Verizon. “Now you can bring large amounts of data forward or back to it and operate in the forward edge. You can virtualize these applications and get very ultra-low latency. And now you’re supporting lots of sensors. Whereas in, say, 4g, you could support maybe 100. Here, you can support 1000s.”  ... ' 

Scaling AI

Good thoughts, intro below.

Scaling AI and data science – 10 smart ways to move from pilot to production

VB Staff by Venturebeat,  Presented by Intel

“Fantastic! How fast can we scale?” Perhaps you’ve been fortunate enough to hear or ask that question about a new AI project in your organization. Or maybe an initial AI initiative has already reached production, but others are needed — quickly.

At this key early stage of AI growth, enterprises and the industry face a bigger, related question: How do we scale our organizational ability to develop and deploy AI?  Business and technology leaders must ask: What’s needed to advance AI (and by extension, data science) beyond the “craft” stage, to large-scale production that is fast, reliable, and economical?

The answers  are crucial to realizing ROI, delivering on the vision of “AI everywhere”, and helping the technology mature and propagate over the next five years.  ... ' 

Is Programming Theory a Waste of Time?

Gave me a reminder of what PT was: it's the theoretical introduction to coding, and not ONLY the specific, in-context how-to of using a particular coding method. For hiring, of course, the specific practical skill is usually seen as the most important.  Skill vs. theory.  Mechanic vs. engineer.  The rest of it is the theory, which may add background and future flexibility.  You would also expect 'theory' to change more quickly as emergent tech does.  Not a waste of time, as long as you still end up with the skill too.

Is Programming Theory A Waste of Time? | Careers | Communications of the ACM

Thursday, July 22, 2021

Sweat Powered Wearables

 Given current power needs, this would supply much less than a typical smartphone requires.  Perhaps enough for a small IoT device. 

Your Sweaty Fingertips Could Help Power the Next Generation of Wearable Electronics

By Science, July 19, 2021

The small beads of sweat your fingertips produce while you sleep could power wearable sensors that measure glucose, vitamin C, or other health indicators. That's the promise of a new advance—a thin, flexible device that wraps around fingertips like a Band-Aid—that its creators say is the most efficient sweat-powered energy harvester yet.  

"The ability to harvest tiny amounts of sweat from the fingertips is really unique," says Roozbeh Ghaffari, a biomedical engineer at Northwestern University who was not involved with the work.

Researchers around the world are currently developing wearable sensors to measure anything from a runner's acceleration to a diabetic's glucose levels.

From Science   View Full Article   

Nathan Benaich, Air Street Capital

I have been subscribing to this monthly report for some time; worth a look.  Below is the most recent.

Your guide to AI,  By Nathan Benaich, Air Street Capital

Monthly analysis of AI technology, geopolitics, research, and startups.

http://newsletter.airstreet.com/issues/your-guide-to-ai-june-2021-653276     .... 

Amazon Alexa Live 2021

Attended some technical portions of this; it was informative. Here is a followup on things presented and announced.

Everything new announced at Amazon Alexa Live 2021

By Michael Bizzaco, July 21, 2021 9:00AM

Amazon Alexa is one of today’s go-to voice assistants. Available on hundreds of devices, from smart speakers to displays and thermostats, Alexa grows more popular every day, with over 100 million device owners and 900,000 registered developers producing Alexa-powered products. Speaking of the latter, Alexa Live 2021 has finally arrived. This year’s free virtual symposium is a great place to learn about all of the new developer tools and services that Alexa will be capable of in the near future. Here’s everything announced at this year’s event.   ... '  

Algorithm Helping Demystifying Networks

Interesting, though I don't understand it fully yet; still thinking about it. Boolean models are easy to grasp.

Algorithm May Help Scientists Demystify Complex Networks  By Penn State News,  July 21, 2021

(Note: the PSU article explains it better.)

A new algorithm capable of analyzing models of biological systems can lead to greater understanding of their underlying decision-making mechanisms, with implications for studying how complex behaviors are rooted in relatively simple actions.

Pennsylvania State University (Penn State)'s Jordan Rozum said the modeling framework includes Boolean networks.

Said Penn State's Reka Albert, "Boolean models describe how information propagates through the network," and the nodes' on/off states eventually slip into repeating patterns that correspond to the system's stable long-term behaviors.

Complexity can scale up dramatically as the system incorporates more nodes, particularly when events in the system are asynchronous. The researchers used parity and time-reversal transformations to boost the efficiency of the Boolean network analysis.
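The on/off dynamics the excerpt describes are easy to sketch in code. Below is a minimal synchronous Boolean network in Python (my own toy example with made-up update rules, not the Penn State algorithm), iterated until the state sequence repeats, which reveals the attractor:

```python
# A 3-node synchronous Boolean network. The update rules are
# invented for illustration only.

def step(state):
    a, b, c = state
    # Each node's next value is a Boolean function of the
    # current node states.
    return (b and c, not a, a or b)

def find_attractor(state):
    seen = {}  # state -> time step first visited
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    # The repeating cycle is every state visited at or after the
    # first occurrence of the revisited state.
    start = seen[state]
    return [s for s, i in sorted(seen.items(), key=lambda kv: kv[1]) if i >= start]

print(find_attractor((True, False, False)))  # a 5-state cycle
```

The combinatorics the article mentions show up immediately: n nodes give 2^n possible states, and asynchronous updating multiplies the possible trajectories, which is why transformations that shrink the analysis matter.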

Full PSU article.

Hard-Rock Mining

Once consulted in this space.  McKinsey discusses the underlying economics of mining.

Digging deeper: Trends in underground hard-rock mining for gold and base metals

July 13, 2021 | Commentary

While underground mining methods show higher cost than open pit, their complexity almost always means that there is opportunity in both productivity and cost improvement.


Underground hard-rock mining accounts for 40 percent of global mining operations but only 12 percent of run-of-mine (ROM) production. Underground mines also tend to be more targeted, more costly, and less productive than open-pit mines. Because the choice of which underground method to deploy is predominantly driven by the geology of the deposit being mined, the operator has little flexibility in choosing the mining method given that the objective is to maximize net asset value over the life of the mine. But given the inherent complexity in underground mining, we frequently uncover improvement opportunities in both productivity and cost. Of these methods, stoping not only is the most common but also delivers the highest overall production share, at almost 50 percent; block caving is one of the least-used methods but is responsible for an outsize share of overall production, at almost 25 percent (Exhibit 1). Again, these underground mining methods are often determined by the deposits and the economics of mining and are, thus, somewhat out of the operator’s control. ... '

Future of Brain Computer Interfaces

Back to the classic problem of machine-human interfaces.

Scientists Warn of 'Bleak Cyborg Future' From Brain-Computer Interfaces

By SciTechDaily, July 20, 2021

Surpassing the biological limitations of the brain and using one's mind to interact with and control external electronic devices may sound like the distant cyborg future, but it could come sooner than we think.

Researchers from Imperial College London conducted a review of modern commercial brain-computer interface (BCI) devices, and they discuss the primary technological limitations and humanitarian concerns of these devices in APL Bioengineering, from AIP Publishing.

The most promising method to achieve real-world BCI applications is through electroencephalography (EEG), a method of monitoring the brain noninvasively through its electrical activity. EEG-based BCIs, or eBCIs, will require a number of technological advances prior to widespread use, but more importantly, they will raise a variety of social, ethical, and legal concerns.

From SciTechDaily

View Full Article 

Pentagon Hacks Itself

A former employer of mine tests a by-now classic approach.  Note it is aimed at AI efforts here in particular.  

The Pentagon Is Bolstering Its AI Systems—by Hacking Itself

By Wired, July 21, 2021

The Pentagon sees artificial intelligence as a way to outfox, outmaneuver, and dominate future adversaries. But the brittle nature of AI means that without due care, the technology could perhaps hand enemies a new way to attack.

The Joint Artificial Intelligence Center, created by the Pentagon to help the US military make use of AI, recently formed a unit to collect, vet, and distribute open source and industry machine learning models to groups across the Department of Defense. Part of that effort points to a key challenge with using AI for military ends. A machine learning "red team," known as the Test and Evaluation Group, will probe pretrained models for weaknesses. Another cybersecurity team examines AI code and data for hidden vulnerabilities.

Machine learning, the technique behind modern AI, represents a fundamentally different, often more powerful, way to write computer code. Instead of writing rules for a machine to follow, machine learning generates its own rules by learning from data. The trouble is, this learning process, along with artifacts or errors in the training data, can cause AI models to behave in strange or unpredictable ways.
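To make that contrast concrete, here is a toy illustration (nothing to do with the JAIC's actual systems): instead of hand-writing a rule such as "flag values above 10," the rule's threshold is learned from labeled examples.

```python
# Learn a 1-D classification threshold from labeled data,
# rather than hand-coding it.

def learn_threshold(examples):
    """Pick the candidate threshold that classifies the most
    labeled points correctly, predicting True when x >= t."""
    best_t, best_correct = None, -1
    for t in sorted(x for x, _ in examples):
        correct = sum((x >= t) == label for x, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

data = [(1, False), (3, False), (8, True), (12, True)]
print(learn_threshold(data))  # -> 8
```

Note that the learned rule depends entirely on the training data: shift or poison a few examples and the threshold moves, which is one flavor of the brittleness the article describes.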

"For some applications, machine learning software is just a bajillion times better than traditional software," says Gregory Allen, director of strategy and policy at the JAIC. But, he adds, machine learning "also breaks in different ways than traditional software."

From Wired     View Full Article   

Sero! Concept Mapping: Free Trial

Mentioned here many times before: the concept mapping tool Sero!

The Evidence Basis for Concept Mapping: 50 Years and Still Growing

More research-backed reasons to use CM in the classroom

Concept maps' early signs of promise as a learning and assessment tool have helped inspire a 50-year history of evidence building. 

In the U.S., the evidence base proved solid enough for policymakers to recommend concept mapping in the National Assessment of Educational Progress’ 2019 Science Framework.  Indeed, countries as diverse as Costa Rica and India have called for broader inclusion of concept mapping in national-level curricula.

And the evidence continues to grow. Here are some findings from around the world, just since 2020. They illuminate uses for assessment and some of the challenges for implementation that are being effectively addressed by new software innovations.    

1. “Concept maps can be used to replace rote learning with meaningful and enjoyable learning”

The purpose of this study was to find ways to help India’s National Education Policy meet its educational goals with concept mapping. The outcome of this review of the evidence base was highly favorable:

The outcome of the review is that Concept mapping can offer an effective tool in education for both the teaching and learning process [...] Concept maps provide a unique graphical view of how students organize, connect, and synthesize information [...] which develops critical thinking of the learners. Further, it provides a platform for collaboration and discussion, arriving at shared understandings among members of groups.


"A journey towards the commitments of national education policy 2020 through concept mapping"

Indian Journal of Science and Technology


See the benefits of concept mapping for yourself

Sero! is an engaging concept map-based assessment platform. It efficiently harnesses the cognitive benefits of concept mapping to help learners make connections and assessors identify gaps and misconceptions.

Try Sero! for free    

Questions? Email brian@serolearn.com.

More Novel Drone Use

Makes one wonder though, how secure are these drones?

As Spain's Beaches Fill Up, Seaside Resort Sends in Drones

Reuters, Horaci Garcia, July 15, 2021

Officials in the town of Sitges in northeastern Spain are using drones for real-time crowd monitoring along 18 km of beach as COVID-19 cases rise. Ricardo Monje of Annunzia, the company that developed the project, said, "We can take photos, pass them through some software and with the software we can count how many people are on the beach." Local official Guillem Escola said, "If we see the beach is very crowded, we can pass that information on to the beach monitors who will make checks and ensure people are keeping their distance. If people don't take notice, then we send in the police." Officials said the project complies with all data-protection laws, and images of people would remain anonymous.

Wednesday, July 21, 2021

Joking Chatbots Help Learning?

 Brought the below up before; here is more detail. Humor is powerful, but how do we harness it? We need a good way.  Thoughts?  See my Humor link.

Joke-Cracking Chatbots Boost Learning Levels

By Paul Marks, Commissioned by CACM Staff, July 20, 2021

Technology has brought us many wonderful things, but chatbots are not one of them. On banking and e-commerce sites, for instance, where these text-based conversational agents have been pressed into service to replace customer-support staff, even simple requests are often met with baffling arrays of options.

For example, a bank's online chatbot recently asked me which of four types of savings account I was interested in – but it did not explain how they differed from each other. When I typed in "I don't know" the bankbot replied tersely: "That is not an option". It then looped me back to those original options. No wonder some consumer commentators – like this one at Forbes – say such clumsy chatbot implementations are "killing customer service".

So, when I heard researchers in Canada had decided to add a sense of humor to chatbots, I feared the worst. Surely making light of giving people inadequate information would be adding insult to injury? Well, apparently not: the researchers have found that comedy-capable chatbots could have an important role, although in education – not customer support.

Speaking at the virtual Conference on Human Factors in Computing Systems (CHI2021) in May, human-computer interaction researchers led by Jessy Ceha and Ken Jen Lee of the University of Waterloo in Ontario, Canada, described how their team set out to investigate whether a particular way of studying a subject, called learning-by-teaching, might be enhanced by a witty chatbot.

In garden variety learning-by-teaching, typically three students research a subject using certain teacher-approved resources like books, Websites, diagrams, and photographs. They then prepare a lesson and teach that subject to another small group of students, the process of lesson-planning having helped them not only improve their own factual recall of the subject at hand, but also, in having to work out how to teach the subject, develop their problem-solving skills.

The Waterloo team wondered, what if the teaching group got to coach a smart chatbot instead of fellow students? Further, they wondered whether a chatbot with a sense of humor, in getting a few laughs out of its teachers, might better motivate them, lower their stress and anxiety levels, and help them learn more?  ... ' 

AI and Media Buying

 Early on, we considered this option.  The key was to also include the options available, the risks in choices, and knowledge from previous choices. 

How custom algorithms will shape the future of media buying

By John Wittesaele | July 14, 2021 | Categories: Marketing

John Wittesaele is EMEA CEO at Xaxis, a global provider of innovative AI technologies, data-driven creativity, and programmatic expertise.

The digital advertising industry ingests and processes millions of data signals per second, generating immense volumes of data. While the industry is hyper-focused on cookie deprecation, the third-party cookie is actually only one marketing input; there are many other data signals, both on- and offline, available to optimise media buying.

Algorithms based on artificial-intelligence (AI) can be tailored to brands’ unique goals, allowing marketers to find pockets of performance within vast amounts of data and optimise media buying to drive real business outcomes. By combining custom AI approaches that integrate a brand’s key performance indicators (KPIs), and shaking off our third-party cookie dependence, we can welcome a new era of transparent and effective programmatic media.

User matching via first-party data signals

One way AI and custom algorithms will shape media buying is by matching converted consumers with prospects that have similar digital patterns. Rather than focussing on who consumers are – their age and gender, or where they live – AI looks beyond basic characteristics to focus on the most important behavioural signals of a likely customer. Two consumers can have completely different profiles but ultimately want the same thing. Where traditional audience targeting would miss this opportunity, algorithmic matching enables brands to identify and take advantage of these similar needs.
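As a rough sketch of what such matching can look like (an illustration only, not Xaxis's actual method), prospects can be ranked by the similarity of their behavioural feature vectors to a known converter, with demographic fields ignored entirely. The feature names below are invented.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical behavioural signals per user:
# [pages per session, night-time visits, price-comparison clicks]
converter = [8.0, 2.0, 5.0]
prospects = {"p1": [7.5, 1.8, 4.6], "p2": [1.0, 9.0, 0.2]}

# Rank prospects by similarity to the converter, most similar first.
ranked = sorted(prospects, key=lambda p: cosine(converter, prospects[p]),
                reverse=True)
print(ranked)
```

Two users with different demographics can still land next to each other in this feature space, which is the point the article makes about looking past age and location.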

Algorithmic consumer matching is currently based on first-party data signals, from retailers, brands or publishers. Moving forward, an explosion in new types of data is expected from connected cars and homes, internet-of-things devices, virtual and augmented reality, and biometrics, which will all feed into this process. AI will be vital to manage this data, and there must always be an emphasis on balancing the relationship between AI and ethics to ensure advertising works better for everyone while individual identities are protected. .... '

Tracking Human Mobility

A somewhat unusual look at different kinds of mobility and how it relates to pandemic challenges.

App Tracks Human Mobility, COVID-19

University of Miami, Deserae E. del Campo,  June 14, 2021

The COVID-19 vs. Human Mobility Web application can map the coronavirus pandemic's global impact on human movement. The University of Miami's Shouraseni Sen Roy and Christopher Chapin based the interactive app on Apple Maps' dataset on human movement through walking, driving, and public transit; Oxford University's COVID-19 Government Response Tracker, detailing government policies deployed during the pandemic; and Johns Hopkins University's compiled global cases of COVID-19. Users can choose a country, or a U.S. state or county, and compare human mobility and coronavirus cases over time, as well as data on government policies associated with COVID-19's spread. Sen Roy said, "Understanding historic mobility patterns, both under normal circumstances and in response to extreme events like a pandemic or a natural disaster, is surely needed for policymakers to make informed decisions regarding transportation systems and more.”

A Strategy to Understand Machine Learning and Deep Learning

Below, Ajit Jaokar puts together an excellent introductory newsletter post (one of many; join his newsletter for much more). To get the full text, all illustrations, and commentary by him and others, click through to LinkedIn below. Note this is maths-based, but relatively non-technical. Anyone can get the gist.


Artificial Intelligence  By Ajit Jaokar

Open this article on LinkedIn to see what people are saying about this topic.     Open on LinkedIn

Artificial Intelligence #13: An easy maths-based strategy to understand machine learning and deep learning

Welcome to Artificial Intelligence #13

For this episode, I was originally going to post on a different theme, but I got quite a few comments on a post I made about maths on LinkedIn.

Because a few people found that post useful, I thought of expanding it a bit more into my approach of teaching AI using a maths-based approach.

I use a similar approach in my teaching #artificialintelligence at the #universityofoxford  

Previously, I discussed the significance of maths in learning AI.

So, to recap, there are mainly four things you need to understand machine learning and deep learning:

·      Probability theory

·      Statistics

·      Linear Algebra

·      Optimization

So, in this post, I am going to show you a simple approach to understanding machine learning and deep learning based on maths knowledge that most of you already know (as a student in year 12 / A levels, if you took a maths/science-based degree).

Here is a chain of thought I use

The idea is you start with simple concepts and gradually add to them using familiar maths

Considering the limits of this article, I will illustrate a small number of steps – but even these can hopefully be useful to you.   ...... " 
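In the same spirit, the "optimization" item on the list can be shown with school-level maths: fitting y = w*x to data by gradient descent on the squared error L(w) = sum((w*x - y)^2), using only the derivative dL/dw = sum(2*x*(w*x - y)). This small example is mine, not from the newsletter.

```python
# Fit the slope w of y = w*x to noisy data by gradient descent.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # roughly y = 2x

w, lr = 0.0, 0.01  # initial guess and learning rate
for _ in range(500):
    # Derivative of the squared-error loss with respect to w.
    grad = sum(2 * x * (w * x - y) for x, y in data)
    w -= lr * grad

print(round(w, 2))  # -> 1.99, close to the true slope 2.0
```

This is the whole pattern behind deep learning training loops: a loss, its derivative, and repeated small steps downhill. Only the number of parameters changes.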

Tuesday, July 20, 2021

Paying Older Audience Attention

Brought to my attention; being in the older, more experienced audience myself, this struck me as an area rarely paid attention to.   A good first look at this kind of problem.

Your Messaging to Older Audiences Is Outdated   by Hal Hershfield and Laura Carstensen

July 02, 2021

Summary.   Given a rapidly aging population, effective messaging to older people holds national importance for public health as well as marketing of goods and services. Older people make up an incredibly diverse demographic that varies in terms of physical and cognitive ability ...

One of the most pressing concerns in the early days of the Covid-19 pandemic was how to best communicate information to those who were at greatest risk — particularly, the elderly. Unfortunately, many attempts were riddled with stereotyped depictions of older people as frail, lonely, and incompetent. In doing so, messages from advertisers, public health officials, and policymakers may have failed to resonate with large swaths of their targeted audience. Given a rapidly aging population, effective messaging to older people holds national importance for public health as well as marketing of goods and services.

Arguably, the greatest challenge is market segmentation. Older people make up an incredibly diverse demographic that varies in terms of physical and cognitive ability, economic power, and social connection. Aging is also changing over historical time. Several studies have shown that the incidence of dementia appears to be decreasing over time; some research suggests this is due to higher educational attainment and improvements in cardiovascular health. Today’s older generations are less lonely and happier than their younger counterparts. As a result, market segmentation based on chronological age is becoming increasingly difficult, if not futile.

A more telling predictor of behavior and a better approach to age segmentation may be time left in life rather than time since birth. Healthy versus sick offers more meaningful insight than whether someone is in their 70s or their 80s.  ... 

RiskIQ Joins Microsoft: Good

 Risk is ultimately 'the thing', both in terms of analyzing how what you do is risky in various contexts, and in terms of external threats that increase your risk.   Both the risk of what you plan to do, and the risk of what others plan to do to you.   These days the two work together, which is the way we described it in some of our analytics efforts.  Microsoft needs both of these.  I hope they learn this from RiskIQ.   Will watch how this evolves.  

Joining Microsoft is the Next Stage of the RiskIQ Journey


Today Microsoft announced its intent to acquire RiskIQ, representing the next stage of our journey that's been more than a decade in the making. We couldn't be more excited to join forces to enable the global community to defend against the rising tide of cyberattacks. 

RiskIQ was conceived to preserve the original promise of the Internet—bringing people together. Connecting people across the world and making sure those connections are safe is something worth defending every single day. That hasn’t changed.

When RiskIQ first launched, the digital enterprise was shifting to the Internet, the start of digital transformation. SaaS and mobile apps were suddenly everywhere; the cloud was becoming the basis of development—essentially, the Internet was becoming the network, and the extended enterprise was born. ...'