
Monday, February 28, 2022

Shrinking AI?

AI, computational demand, and complex contextual scenarios.  Can we make it happen?

 Shrinking AI?   By Chris Edwards

Communications of the ACM, January 2022, Vol. 65 No. 1, Pages 12-14, doi:10.1145/3495562

The computational demand made by artificial intelligence (AI) has soared since the introduction of deep learning more than 15 years ago. Successive experiments have demonstrated the larger the deep neural network (DNN), the more it can do. In turn, developers have seized on the availability of multiprocessor hardware to build models now incorporating billions of trainable parameters.

The growth in DNN capacity now outpaces Moore's Law, at a time when relying on silicon scaling for cost reductions is less assured than it used to be. According to data from chipmaker AMD, cost per wafer for successive nodes has increased at a faster pace in recent generations, offsetting the savings made from being able to pack transistors together more densely (see Figure 1). "We are not getting a free lunch from Moore's Law anymore," says Yakun Sophia Shao, assistant professor in the Electrical Engineering and Computer Sciences department of the University of California, Berkeley.

Though cloud servers can support huge DNN models, the rapid growth in size causes a problem for edge computers and embedded devices. Smart speakers and similar products have demonstrated inferencing can be offloaded to cloud servers and still seem responsive, but consumers have become increasingly concerned over having the contents of their conversations transferred across the Internet to operators' databases. For self-driving vehicles and other robots, the round-trip delay incurred by moving raw data makes real-time control practically impossible.

Specialized accelerators can improve the ability of low-power processors to support complex models, making it possible to run image-recognition models in smartphones. Yet a major focus of R&D is to try to find ways to make the core models far smaller and more energy efficient than their server-based counterparts. The work began with the development of DNN architectures such as ResNet and Mobilenet. The designers of Mobilenet recognized the filters used in the convolutional layers common to many image-recognition DNNs require many redundant applications of the multiply-add operations that form the backbone of these algorithms. The Mobilenet creators showed that by splitting these filters into smaller two-dimensional convolutions, they could cut the number of calculations required by more than 80%.
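
As a rough illustration (not from the article) of where those savings come from, the sketch below counts multiply-add operations for a standard convolution versus a depthwise-separable one, the factorization MobileNet popularized. The layer sizes are made-up examples.

    # Rough multiply-add counts for a standard vs. depthwise-separable convolution.
    # Layer dimensions are illustrative assumptions, not values from the article.

    def standard_conv_macs(h, w, c_in, c_out, k):
        """Multiply-adds for a k x k standard convolution over an h x w feature map."""
        return h * w * c_in * c_out * k * k

    def depthwise_separable_macs(h, w, c_in, c_out, k):
        """Depthwise k x k filtering per channel, then a 1x1 pointwise convolution."""
        depthwise = h * w * c_in * k * k
        pointwise = h * w * c_in * c_out
        return depthwise + pointwise

    if __name__ == "__main__":
        h = w = 56                       # feature-map size (assumed)
        c_in, c_out, k = 128, 128, 3
        std = standard_conv_macs(h, w, c_in, c_out, k)
        sep = depthwise_separable_macs(h, w, c_in, c_out, k)
        print(f"standard: {std:,} MACs, separable: {sep:,} MACs "
              f"({100 * (1 - sep / std):.1f}% fewer)")

For these assumed sizes the separable version needs roughly 88% fewer multiply-adds, consistent with the "more than 80%" figure quoted above.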

A further optimization is layer-fusing, in which successive operations funnel data through the weight calculations and activation operations of more than one layer. Though this does not reduce the number of calculations, it helps avoid repeatedly loading values from main memory; instead, they can sit temporarily in local registers or caches, which can provide a big boost to energy efficiency.
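
A minimal way to see the memory effect of fusion (my own toy example, not an accelerator implementation): the unfused version below materializes a full intermediate array between two element-wise "layers," while the fused version keeps each intermediate in a small tile that can stay in registers or cache.

    import numpy as np

    # Toy "layers": a per-element scale (stand-in for a weight op) and a ReLU activation.
    def unfused(x, scale):
        tmp = x * scale                  # full intermediate tensor written to memory
        return np.maximum(tmp, 0.0)

    def fused(x, scale, tile=4096):
        out = np.empty_like(x)
        flat_in, flat_out = x.ravel(), out.ravel()
        for start in range(0, flat_in.size, tile):
            t = flat_in[start:start + tile] * scale   # intermediate stays in a small tile
            flat_out[start:start + tile] = np.maximum(t, 0.0)
        return out

    x = np.random.randn(1024, 1024).astype(np.float32)
    assert np.allclose(unfused(x, 0.5), fused(x, 0.5))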

More than a decade ago, research presented at the 2010 International Symposium on Computer Architecture by a team from Stanford University showed the logic circuits that perform computations use far less energy compared to what is needed for transfers in and out of main memory. With its reliance on large numbers of parameters and data samples, deep learning has made the effect of memory far more apparent than with many earlier algorithms.

Accesses to caches and local scratchpads are less costly in terms of energy and latency than those made to main memory, but making best use of these local memories is difficult. Gemmini, a benchmarking system developed by Shao and colleagues, shows even the decision to split execution across parallel cores affects hardware design choices. On one test of ResNet-50, Shao notes convolutional layers "benefit massively from a larger scratchpad," but in situations where eight or more cores are working in parallel on the same layer, simulations showed larger level-two cache as more effective.

Reducing the precision of the calculations that determine each neuron's contribution to the output both cuts the required memory bandwidth and energy for computation. Most edge-AI processors now use many 8-bit integer units in parallel, rather than focusing on accelerating the 32-bit floating-point operations used during training. More than 10 8-bit multipliers can fit into the space taken up by a single 32-bit floating-point unit.
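
A minimal sketch of the idea behind 8-bit inference, not from the article: scale 32-bit floating-point weights into the int8 range, do the arithmetic in integers, and rescale afterward. Real toolchains add calibration, per-channel scales, and zero-points; this is only the symmetric per-tensor case, on made-up data.

    import numpy as np

    def quantize_int8(w):
        """Symmetric per-tensor quantization of float32 weights to int8."""
        scale = np.abs(w).max() / 127.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(256, 256).astype(np.float32)   # toy weight matrix
    q, s = quantize_int8(w)
    err = np.abs(w - dequantize(q, s)).max()
    print(f"int8 storage: {q.nbytes} bytes vs float32: {w.nbytes} bytes, max error {err:.4f}")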

With its reliance on large numbers of parameters and data samples, deep learning has made the effect of memory far more apparent than with earlier algorithms.

To try to reduce memory bandwidth even further, core developers such as Cadence Design Systems have put compression engines into their products. "We focus a lot on weight compression, but there is also a lot of data coming in, so we compress the tensor and send that to the execution unit," says Pulin Desai, group director of business development at Cadence. The data is decompressed on the fly before being moved into the execution pipeline.

Compression and precision reduction techniques try to maintain the structure of each layer. More aggressive techniques try to exploit the redundancy found in many large models. Often, the influence of individual neurons on the output of a layer is close to zero; other neurons are far more important to the final result. Many edge-AI processors take advantage of this to cull operations that would involve a zero weight well before they reach the arithmetic units. Some pruning techniques force weights with little influence on the output of a neuron to zero, to provide even more scope for savings. ... ' 
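
The pruning described above is often simple magnitude pruning: weights whose absolute value falls below a threshold are forced to zero so that sparse hardware or kernels can skip them. A minimal sketch (my illustration, with an arbitrary sparsity target rather than anything tuned):

    import numpy as np

    def magnitude_prune(w, sparsity=0.8):
        """Zero out the smallest-magnitude weights until `sparsity` of them are zero."""
        threshold = np.quantile(np.abs(w), sparsity)
        mask = np.abs(w) >= threshold
        return w * mask, mask

    w = np.random.randn(512, 512).astype(np.float32)
    pruned, mask = magnitude_prune(w, sparsity=0.8)
    print(f"non-zero weights kept: {mask.mean():.0%}")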

Kirkland Brand Has Done Well

Costco's Kirkland Signature brand has done very well.  A comparison to CPG giants, by Tom Ryan.

The Kirkland Signature brand exceeded $59 billion in sales in Costco’s 2021 fiscal year, up 13.4 percent year-over-year and accounting for 31 percent of the wholesale club’s total revenue. How did it get so big?

The size of the Kirkland brand surpasses all but a few CPG giants — only Nestlé, Procter & Gamble, PepsiCo and Unilever appear bigger — and it is by far the largest CPG private label. Walmart in 2020 put its Great Value brand at somewhat over $27 billion globally.

Costco formed the Kirkland brand in 1995 and quickly expanded it across categories ranging from diapers to toilet paper, tires, golf clubs, luggage, wines and rotisserie chickens.  .... ' 

Sunday, February 27, 2022

Pretraining for Autonomous Systems

More on Microsoft's pretraining system 

Microsoft Research Blog

COMPASS: COntrastive Multimodal Pretraining for AutonomouS Systems

Published February 23, 2022

By Shuang Ma, Senior Researcher; Sai Vemprala, Senior Researcher; Wenshan Wang, Project Scientist; Jayesh Gupta, Senior Researcher; Yale Song, Senior Researcher; Daniel McDuff, Principal Researcher; and Ashish Kapoor, Partner Research Manager

Humans have the fundamental cognitive ability to perceive the environment through multimodal sensory signals and utilize this to accomplish a wide variety of tasks. It is crucial that an autonomous agent can similarly perceive the underlying state of an environment from different sensors and appropriately consider how to accomplish a task. For example, localization (or “where am I?”) is a fundamental question that needs to be answered by an autonomous agent prior to navigation, often addressed via visual odometry. Highly dynamic tasks, such as vehicle racing, necessitate collision avoidance and understanding of the temporal evolution of their state with respect to the environment. Agents must learn perceptual representations of geometric and semantic information from the environment so that their actions can influence the world.

Task-driven approaches are appealing, but learning representations that are suitable only for a specific task limits their ability to generalize to new scenarios, thus confining their utility. For example, as shown in Figure 1, to achieve tasks such as drone navigation and vehicle racing, people usually need to specifically design different models to encode representations from very different sensor modalities, e.g., different environments, sensory signals, sampling rates, etc. Such models must also cope with different dynamics and controls for each application scenario. Therefore, we ask whether it is possible to build general-purpose pretrained models for autonomous systems that are agnostic to tasks and individual form factors.

In our recent work, COMPASS: Contrastive Multimodal Pretraining for Autonomous Systems, we introduce a general-purpose pretraining pipeline, built to overcome such limitations arising from task-specific models. The code can be viewed on GitHub.   ....' 
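
For readers unfamiliar with contrastive pretraining, the core ingredient is usually an InfoNCE-style loss that pulls embeddings of matched modality pairs together and pushes mismatched pairs apart. The sketch below is a generic NumPy version of that loss, not COMPASS's actual implementation; the embeddings are random placeholders.

    import numpy as np

    def info_nce(z_a, z_b, temperature=0.1):
        """Contrastive (InfoNCE) loss between two batches of embeddings.

        z_a, z_b: (batch, dim) embeddings of the same scenes from two modalities;
        row i of z_a is the positive pair of row i of z_b.
        """
        z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
        z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
        logits = z_a @ z_b.T / temperature             # pairwise similarities
        logits -= logits.max(axis=1, keepdims=True)    # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))            # matched pairs sit on the diagonal

    rgb = np.random.randn(32, 128)     # e.g., image embeddings (toy)
    depth = np.random.randn(32, 128)   # e.g., depth embeddings (toy)
    print(f"loss on random embeddings: {info_nce(rgb, depth):.3f}")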

Hypergraphs Examined

Always interested in better ways to use visuals to come to useful conclusions in context.  Is this one?  Does anyone have some good concrete examples?

A Group Effort   By Chris Edwards  in the ACM

Communications of the ACM, March 2022, Vol. 65 No. 3, Pages 12-14, doi:10.1145/3510550

(more graphic examples at the link) 

Fifty years ago, mathematician Paul Erdős posed a problem to friends at one of his regular tea parties. The trio thought they would be able to come up with a solution the same afternoon.

It took 49 years for other mathematicians to provide an answer.

The Erdős-Faber-Lovász conjecture focused on a familiar question in mathematics, one of graph coloring. However, this was not on a conventional graph, but on another, more complex structure: a hypergraph. Unlike graphs, where the connections or edges between nodes are point-to-point links, the edges of a hypergraph can enclose any number of points. Groups can overlap and even enclose others, so the factors that helped ensure the solution to any coloring problem, including the one set by Erdős, turn out to be quite different from those for conventional graphs.

The differences continue into practically all other aspects of hypergraph mathematics. There are many analogies between the two types of structure: it is entirely possible to treat a conventional graph as a special case of the richer hypergraph family. It is essentially a hypergraph for the case in which each edge is allowed to span only two vertices. The variety of hypergraphs presents much bigger challenges that mathematicians are trying to tackle on multiple fronts.

When trying to generalize the properties of hypergraphs, "Everything is surprising," says Raffaella Mulas, group leader at the Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany. "It's often surprising when something can be generalized, and I'm also surprised when things that seem to be trivial can't be generalized."

Mulas' current area of research focuses on the spectra of hypergraphs. These spectra, like the spectra of graphs, are generated from transpositions and other manipulations of the matrix that describes how the vertices are connected. However, that is where most of the similarities end.

In a graph, the edges are represented by non-zero weights on either side of the matrix diagonal: the rows and columns both represent vertices.

When represented by a matrix, the hypergraph has a quite different structure. Typically, vertices are listed in the rows and each edge grouping has its own column. That, in turn, leads to different mechanisms not just for constructing the spectra, but also for determining what derived properties such as eigenvalues mean.
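
As a concrete aside (my example, not the article's): a graph is usually stored as a square vertex-by-vertex adjacency matrix, while a hypergraph is naturally stored as a rectangular vertex-by-hyperedge incidence matrix.

    import numpy as np

    # Hypergraph on 5 vertices with 3 hyperedges (a made-up example):
    #   e0 = {0, 1, 2},  e1 = {1, 3},  e2 = {2, 3, 4}
    hyperedges = [{0, 1, 2}, {1, 3}, {2, 3, 4}]
    n_vertices = 5

    # Incidence matrix: rows are vertices, columns are hyperedges.
    H = np.zeros((n_vertices, len(hyperedges)), dtype=int)
    for j, edge in enumerate(hyperedges):
        for v in edge:
            H[v, j] = 1

    print(H)
    # Vertex degrees and hyperedge sizes fall straight out of the incidence matrix.
    print("vertex degrees:", H.sum(axis=1))
    print("edge sizes:    ", H.sum(axis=0))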

Spectra are important tools for applications such as clustering in machine learning because, in principle, they provide computationally cheap ways of finding natural groupings in the data. By focusing on the connectivity between vertices judged by the number of shared edges, spectral clustering can prove far more effective at finding partitions than simpler methods such as k-means clustering.
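
As a reference point for the ordinary graph case the article compares against (and not the hypergraph case, where, as Mulas notes below, the story is more complicated), spectral clustering typically builds a graph Laplacian, takes the eigenvectors for the smallest eigenvalues, and runs k-means on those coordinates. A minimal sketch on a toy adjacency matrix:

    import numpy as np
    from sklearn.cluster import KMeans

    def spectral_clusters(adjacency, k):
        """Cluster graph vertices using the k smallest eigenvectors of the Laplacian."""
        degree = np.diag(adjacency.sum(axis=1))
        laplacian = degree - adjacency
        eigvals, eigvecs = np.linalg.eigh(laplacian)    # symmetric, so eigh
        embedding = eigvecs[:, :k]                      # one row of coordinates per vertex
        return KMeans(n_clusters=k, n_init=10).fit_predict(embedding)

    # Two triangles joined by a single edge: a toy graph with an obvious 2-way split.
    A = np.array([
        [0, 1, 1, 0, 0, 0],
        [1, 0, 1, 0, 0, 0],
        [1, 1, 0, 1, 0, 0],
        [0, 0, 1, 0, 1, 1],
        [0, 0, 0, 1, 0, 1],
        [0, 0, 0, 1, 1, 0],
    ], dtype=float)
    print(spectral_clusters(A, k=2))    # e.g., [0 0 0 1 1 1]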

"The most basic thing that a spectrum can do for a graph is count the connected components," Mulas says, but this does not work for hypergraphs. "The situations where you can recover the connectivity spectra for hypergraphs are only for very structured cases."

Such differences have slowed acceptance of the hypergraph as a tool for analyzing data, though there are many situations that have emerged where a hypergraph is the most natural representation. For example, in the early 1990s, Richard Shi, then a post-doctoral researcher at Canada's University of Waterloo and now a professor of electrical engineering at the University of Washington in Seattle, proposed using directed hypergraphs to build layout tools for integrated circuits that would keep related components closer together.

The hypergraphs Shi conceived later evolved into what are now known as oriented hypergraphs, which have been joined by chemical hypergraphs, so named because they readily represent reactions using catalysts, and a further variant, the hypergraph with real weights. Mulas has collaborated with other groups to develop tools for modeling biological interactions using the real-weight variants.

In recent years, hypergraphs have become the focus of attention for interpreting behavior and connections in social networks because they can show many more types of group activity than simple pair-wise relationships. Yet the available tools for analyzing groups and simplifying the structures to cut processing time when dealing with large amounts of data have been found wanting. Because of the mathematical differences between graphs and hypergraphs, applied research in the past has often had to fall back on the more established tools from graph theory, often with the help of extensions such as clique and star expansions.  ... '  (more and visuals at the link above) 


Thinking Innovatively Like a Kid

McKinsey writes

Kids have a special way of cultivating creativity--and some even see their crafty ideas develop into lucrative business opportunities. Did you know that kids invented the trampoline, popsicles, and earmuffs?

"On #KidInventorsDay, explore these thought-provoking insights along with your favorite kid, and tap into your own imagination. And bookmark our McKinsey for Kids collection page to see the latest in our interactive series.  ... " 

Good examples, but they need help in creativity as well. ... 

Driverless Cars Building Virtual Cities

Virtual city models developed from car imaging, used to create tests and training data from virtual models. 

Google, Waymo Used Driverless Cars to Make Virtual San Francisco

By New Scientist, February 24, 2022

Researchers at driverless car company Waymo and Google Research used self-driving vehicles to create a virtual model of San Francisco from photos processed by Block-NeRF software.

The software is based on Neural Radiance Fields (NeRFs), a tool that builds three-dimensional (3D) models of small objects from a collection of stills that are viewable from all angles, even those that have no existing photos.

The method uses artificial intelligence trained to generate accurate 3D models from large image sets, which includes data on the exact location from which each image was captured.

Block-NeRF weaves together models of city blocks photographed by the driverless cars to keep the size of the overall model sufficiently small to run on modest hardware ...

Block-NeRF, created by researchers at driverless car company Waymo and Google Research, uses vast numbers of photos taken by cameras mounted atop Waymo’s autonomous cars to build small 3D models, each covering just over one city block. These models... 

From New Scientist   View Full Article

NHTSA Approval of LED Headlights

Further automotive directions and regulations. 

Adaptive LED Headlights Get NHTSA Approval in the U.S.

By motor1.com, February 18, 2022

The U.S. National Highway Traffic Safety Administration issued a new rule permitting the use of adaptive light-emitting diode (LED) headlights on U.S. roads.

The $1-trillion infrastructure bill passed in November contained a measure allowing the rule against use of such headlights in the U.S. to be modified.

The housings of adaptive headlights contain computer-controlled LEDs that can be aimed in specific locations; they can illuminate the road ahead of the driver similar to the way high beams can, while also focusing the light away from oncoming traffic.

Luxury automakers like Mercedes-Benz and Audi have been using adaptive headlights in their vehicles for years; while the legal hurdles have been cleared, their U.S. deployment will likely not happen quickly ...

Adaptive headlights feature banks of computer-controlled LEDs that can be aimed in very specific locations; they can illuminate the distant road in front of the driver similar to high beams, and also aim the light away from oncoming traffic ... 

From motor1.com  

Saturday, February 26, 2022

Ransomware used in Ukraine Attacks

Such malware will likely become a tool in other kinds of cyberattacks.

Ransomware Used as Decoy in Destructive Cyberattacks on Ukraine, in SecurityWeek, by Ionut Arghire

The cyberattacks employed HermeticWiper, a piece of malware that was designed solely to damage the Master Boot Record (MBR) of the target system, rendering the machine unusable.

Once executed, the wiper adjusts its settings to gain read access control to any file, then gains the privileges required to load and unload device drivers, disables crash dumps to cover its tracks, disables the Volume Shadow Service (VSS), and loads a benign partition manager which it abuses to corrupt the MBR.

The wiper uses different corruption methods based on the version of Windows running on the machine and partition type (FAT or NTFS). HermeticWiper can damage both MBR and GPT drives and triggers a system reboot to complete the data wiping process, researchers with Cisco’s Talos division note.

Although executed on February 23, hours before Russia launched an invasion of Ukraine, the attacks appear to have been in preparation for months.

The network of one organization in Ukraine was compromised on December 23, 2021, with a web shell installed on January 16, more than one month before HermeticWiper was deployed, Symantec reports. .... ' 

A Cyber Security Social Contract

The current state of affairs, and the related threats, point to a need for this.

The Cyber Social Contract  in Foreign Affairs

How to Rebuild Trust in a Digital World

By Chris Inglis and Harry Krejsa, February 21, 2022

In the spring of 2021, a Russia-based cybercrime group launched a ransomware attack against the largest fuel pipeline in the United States. According to the cybersecurity firm Mandiant, the subsequent shutdown and gas shortage across the East Coast likely originated from a single compromised password. That an individual misstep might disrupt critical services for millions illustrates just how vulnerable the United States’ digital ecosystem is in the twenty-first century.

Although most participants in the cyber-ecosystem are aware of these growing risks, the responsibility for mitigating systemic hazards is poorly distributed. Cyber-professionals and policymakers are too often motivated more by a fear of risk than by an aspiration to realize cyberspace’s full potential. Exacerbating this dynamic is a decades-old tendency among the large and sophisticated actors who design, construct, and operate digital systems to devolve the cost and difficulty of risk mitigation onto users who often lack the resources and expertise to address them.

Too often, this state of affairs produces digital ecosystems where private information is easily accessible, predatory technology is inexpensive, and momentary lapses in vigilance can snowball into a continent-wide catastrophe. Although individually oriented tools like multifactor authentication and password managers are critical to solving elements of this problem, they are inadequate on their own. A durable solution must involve moving away from the tendency to charge isolated individuals, small businesses, and local governments with shouldering absurd levels of risk. Those more capable of carrying the load—such as governments and large firms—must take on some of the burden, and collective, collaborative defense needs to replace atomized and divided efforts. Until then, the problem will always look like someone else’s to solve.

The United States needs a new social contract for the digital age—one that meaningfully alters the relationship between public and private sectors and proposes a new set of obligations for each. Such a shift is momentous but not without precedent. From the Pure Food and Drug Act of 1906 to the Clean Air Act of 1963 and the public-private revolution in airline safety in the 1990s, the United States has made important adjustments following profound changes in the economy and technology.  ...

See also comments by Schneier: https://www.schneier.com/blog/archives/2022/02/a-new-cybersecurity-social-contract.html 


Predictions from O'Reilly Group

From the O'Reilly Radar: AI and emerging tech.

What’s ahead for AI, VR, NFTs, and more?

Mike Loukides makes some predictions.

Thoughtful looks.

Synthetic vs Real Training Data?

The real world is messy and needs direction.

Are You Still Using Real Data to Train Your AI?

In IEEE Spectrum, by Eliza Strickland

It may be counterintuitive. But some argue that the key to training AI systems that must work in messy real-world environments, such as self-driving cars and warehouse robots, is not, in fact, real-world data. Instead, some say, synthetic data is what will unlock the true potential of AI. Synthetic data is generated instead of collected, and the consultancy Gartner has estimated that 60 percent of data used to train AI systems will be synthetic. But its use is controversial, as questions remain about whether synthetic data can accurately mirror real-world data and prepare AI systems for real-world situations.

Nvidia has embraced the synthetic data trend, and is striving to be a leader in the young industry. In November, Nvidia founder and CEO Jensen Huang announced the launch of the Omniverse Replicator, which Nvidia describes as “an engine for generating synthetic data with ground truth for training AI networks.” To find out what that means, IEEE Spectrum spoke with Rev Lebaredian, vice president of simulation technology and Omniverse engineering at Nvidia.

Rev Lebaredian on...

What Nvidia hopes to achieve with Omniverse

Why today’s real-world data isn’t good enough

Why autonomous vehicles need synthetic data

Overfitting, algorithmic bias, and adversarial attacks

The Omniverse Replicator is described as “a powerful synthetic data generation engine that produces physically simulated synthetic data for training neural networks.” Can you explain what that means, and especially what you mean by “physically simulated”?  .... ' 

Friday, February 25, 2022

Medical Digital Twins

 I worked at the University of Florida on modeling medical procedures.   This would have been a very useful modeling extension. 

Medical Digital Twins: a New Frontier  By Allyn Jackson, Commissioned by CACM Staff, February 24, 2022

Last year, a new product for the treatment of type-1 diabetes came on the market: a "digital twin" of the human pancreas. The patient is outfitted with a bloodstream sensor and an insulin pump. The sensor continuously sends data about glucose levels to a device that looks a bit like a cellphone and that runs a mathematical model of glucose metabolism. The model is calibrated to the patient's health status and individual characteristics, such as gender, age, weight, and activity level. The model is linked to a closed-loop control algorithm to drive the pump, which when needed injects the required amount of insulin.

Not only does the digital twin free the patient from the need to pinprick for blood samples several times a day, it also optimizes the amount of insulin administered—just like a healthy human pancreas.

With the success of this kind of model, researchers are starting to envision development of a full-blown "medical digital twin," a software instantiation of the total health status of a person.  One leader in this effort is Reinhard Laubenbacher, director of the Laboratory for Systems Medicine at the University of Florida.

The challenges of medical digital twins are enormous, but Laubenbacher, who received his Ph.D. in mathematics from Northwestern University in 1985 and has spent the past 20 years in systems biology, is ready for it.  "As they say, go big or go home," he said.  "At this stage in my career, my life, that's what I need to do."

Digital twins are used extensively in industry. For example, a digital twin of a jet engine draws on real-time data from sensors in the physical engine to make short-term predictions about the engine's functioning. The twin can make adjustments to head off failure or optimize performance, and can identify faulty or failing components to be checked at the next maintenance. The most sophisticated digital twins are able to self-improve, by learning from situations in which their predictions diverge from what actually happens.

A medical digital twin would take health information about an individual, including data from sensors attached to the person's body, and feed that into a model comprising all major biological systems, from the organ to the cellular and even to the molecular level. Doctors could use the digital twin for a variety of purposes, such as predicting how that particular individual might respond to a given treatment.

However, such comprehensive, detailed models are far in the future. As Laubenbacher put it, "We are at step -1."   ... '

Institute for the Future

Some time ago I wrote for the Institute for the Future in Palo Alto. 

Here is their current online presence, just revisited. 

MAKING THE FUTURE WITH FORESIGHT ... 

Someone from the IFTF interested in collaboration?  Contact me.     Franz 

Automating Sense of Smell

 A sensory space we also experimented with

E-Nose Sniffs Out the Good Whiskey  

IEEE Spectrum, Michelle Hampson, February 18, 2022

New research describes an electronic nose (e-nose) that can analyze whiskies and recognize a whiskey’s brand with over 95% accuracy after a single whiff. Researchers at Australia's University of Technology Sydney based NOS.E on an e-nose originally developed to detect illegal animal parts sold on the black market. NOS.E has a vial to insert the whiskey sample, and a gas sensor chamber to contain the whiskey's scent; the chamber reads the various odors and transmits the data to a computer, then key scent features are extracted and analyzed by machine learning algorithms to identify brand, region, and style of whiskey. The researchers compared NOS.E's analysis of a half-dozen whiskies to that of a state-of-the-art gas chromatography device, and found both methods yielded similar accuracy.
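
The pipeline described above, sensor readings in, extracted scent features, then a trained classifier predicting the brand, maps onto a very ordinary supervised-learning setup. A hedged sketch with invented feature vectors follows; this is the general shape of such a pipeline, not the NOS.E code.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Toy data: each row is a vector of scent features extracted from one sniff,
    # each label is the whiskey brand that produced it (all values invented).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 8))        # 120 sniffs, 8 scent features each
    y = rng.integers(0, 6, size=120)     # 6 hypothetical brands

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print(f"held-out accuracy on random toy data: {clf.score(X_test, y_test):.2f}")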

Decentralized Science

 Decentralized science, new to me as a concept.

A Guide to DeSci, the Latest Web3 Movement   By Sarah Hamburg

A growing number of scientists and entrepreneurs are leveraging blockchain tools, including smart contracts and tokens, in an attempt to improve modern science. Collectively, their work has become known as the decentralized science movement, or DeSci.

Still in its infancy, DeSci lies at the intersection of two broader trends: 1) efforts within the scientific community to change how research is funded and knowledge is shared, and 2) efforts within the crypto-focused movement to shift ownership and value away from industry intermediaries. But what exactly does DeSci entail? 

I’m a neuroscientist and cofounder of a startup that uses the blockchain to provide users of wearables with full ownership and control over their biometric data (including brain data). I recently published a short letter in the journal Nature encouraging scientists across all disciplines to join DeSci. As the movement grows, so does the need for open public discussion. To that end, I’ve put together an introductory guide that covers how DeSci came to be, what its defining features are, what the major debates and open questions are within the movement, and where the greatest opportunities and challenges lie.

DeSci drivers

The DeSci movement aims to enhance scientific funding; unleash knowledge from silos; eliminate reliance on profit-hungry intermediaries such as publisher conglomerates; and increase collaboration across the field. 

Funding is an especially acute pain point for scientists, who spend up to half their time writing grant proposals. Success in getting funding is heavily tied to metrics such as the h-index, which quantifies the impact of a scientist’s published work. The resulting pressure to “publish or perish” incentivizes the pursuit of novel research over work that’s critical but less likely to grab headlines. Ultimately, inadequate and unreliable funding not only reduces the amount of science being done, but also biases which projects scientists choose, contributing to issues such as the replication crisis.
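
For readers who have not run into it, the h-index mentioned above is simple to compute: a scientist has index h if h of their papers each have at least h citations. A quick sketch:

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(counts, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))    # 4
    print(h_index([25, 8, 5, 3, 3]))    # 3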

Information access is another much-lamented problem. Despite the fact that science is the epitome of a global public good, a lot of scientific knowledge is trapped behind journal paywalls and inside private databases. Making all types of data more accessible is the main objective of the Open Science movement, which emerged over a decade ago.

Open Science initiatives have had far-reaching effects, including mandates by the National Institutes of Health and other funding sources to publish open-access findings. But the extent to which science has improved as a result is a matter of debate. For example, journals responded to these mandates with pay-to-publish business models. Now, instead of paying to read other people’s studies, publicly funded scientists pay to publish their own research. (Nature charges over $11,000 per paper.) Some academics have argued that open access mandates increasingly concentrate power in the hands of major publishers.

Where DeSci comes in:  ...

Thursday, February 24, 2022

Zugzwang! Compelling Moves

Game strategy 

I remember this from learning about tactical interactions.  Good overview here. 

Play First and Lose: Zugzwang in Chess, Math and Pizzas, in Quanta Magazine

How to win games by going second and leaving your opponent with no good options.

In most two-player games, it is generally better to win the toss and go first. And if you are sharing a pizza with someone and want to have a larger portion, it’s usually better to grab the first slice and pick a really large one. But there are situations in which it might be better to go second. In chess, these situations have a dramatic-sounding name: zugzwang! Our puzzles today explore this up-is-down phenomenon in four different contexts.

Let’s start with chess, which is possible to win whether you go first or second. In the most recent World Chess Championship, the Norwegian reigning champion Magnus Carlsen demolished the Russian grandmaster Ian Nepomniachtchi by a score of 7.5 to 3.5. Out of the 11 games played, Carlsen won four, twice as white and twice as black. The other seven games were drawn.

But though it is possible to win as black in chess, it is well known that getting to move first with the white pieces is advantageous — it’s a little like serving first in tennis. Statistically, based on a large number of games, the odds are about 54%-46% in favor of the player going first in chess. In most chess positions, you can improve your position by making a move. But there are times when the player who has to move can only worsen their position and will eventually lose. This is an example of zugzwang — a German word whose literal meaning is “move compulsion.” Here’s a simple example:    .... 

Amazon Alexa Contextual Skills Kit

Amazon skill development reports a new set of area-specific skills for use.  How well does this support contextually good assistance?  Taking a closer look.  More at the link below.

Build a Voice Skill That’s Just Right for You  .... 

With the Alexa Skills Kit, you’re only limited by your imagination. You can build engaging, interactive voice experiences for everything from games and food to Smart Home devices. Our built-in voice interaction models make it easy.

Custom Skills: Use your imagination to create your own voice experiences

Game Skills: Build everything from interactive adventures to quizzes

Music & Audio Skills: Empower listeners to access your audio streaming services

Food Ordering Skills: Enable hungry customers to find restaurants and order

Smart Home Skills: Enable customers to control your cloud-connected devices

Video Skills: Let customers control your video device and content

Flash Briefing & News Skills: Make it easy for customers to get your latest content updates

Connected Vehicle Skills: Enable customers to control their connected vehicles

Knowledge Skills: Build Q&A skills in minutes without writing code

Get started »

Self Configuring Robotic Cubes

Quite interesting application for space, and perhaps other complex environments for sensors?

 Robotic cubes: Self-reconfiguring ElectroVoxels use embedded electromagnets to test applications for space exploration

by Rachel Gordon, Massachusetts Institute of Technology

If faced with the choice of sending a swarm of full-sized, distinct robots to space, or a large crew of smaller robotic modules, you might want to enlist the latter. Modular robots, like those depicted in films such as "Big Hero 6," hold a special type of promise for their self-assembling and reconfiguring abilities. But for all of the ambitious desire for fast, reliable deployment in domains extending to space exploration, search and rescue, and shape-shifting, modular robots built to date are still a little clunky. They're typically built from a menagerie of large, expensive motors to facilitate movement, calling for a much-needed focus on more scalable architectures—both up in quantity and down in size.

Scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) called on electromagnetism—electromagnetic fields generated by the movement of electric current—to avoid the usual stuffing of bulky and expensive actuators into individual blocks. Instead, they embedded small, easily manufactured, inexpensive electromagnets into the edges of the cubes that repel and attract, allowing the robots to spin and move around each other and rapidly change shape.

The "ElectroVoxels" have a side length of about 60 millimeters, and the magnets consist of ferrite core (they look like little black tubes) wrapped with copper wire, totaling a whopping cost of just 60 cents. Inside each cube are tiny printed circuit boards and electronics that send current through the right electromagnet in the right direction  .... ' 

Wednesday, February 23, 2022

Details on Microsoft Singularity for AI Workflow

Worth a look, and I will take one.  Note the focus on scheduling AI workloads.

Microsoft goes public with details on its 'Singularity' AI infrastructure service

Microsoft's Azure and Research teams are working together on the 'Singularity' AI infrastructure service.

By Mary Jo Foley

Posted in All About Microsoft on February 21, 2022 | Topic: AI & Robotics

Microsoft's Azure and Research teams are working together to build a new AI infrastructure service, codenamed "Singularity." The Singularity team is working to build what Microsoft describes in some of its job postings for the group as "a new AI platform service ground-up from scratch that will become a major driver for AI, both inside Microsoft and outside."

A group of those working on the project have published a paper entitled "Singularity: Planet-Scale, Preemptible and Elastic Scheduling of AI Workloads,"  which provides technical details about the Singularity effort. The Singularity service is about providing data scientists and AI practitioners with a way to build, scale, experiment and iterate on their models on a Microsoft-provided distributed infrastructure service built specifically for AI.

Authors listed on the newly published paper include Azure Chief Technical Officer Mark Russinovich; Partner Architect Rimma Nehme, who worked on Azure Cosmos DB until moving to Azure to work on AI and deep learning in 2019; and Technical Fellow Dharma Shukla. From that paper:

"At the heart of Singularity is a novel, workload-aware scheduler that can transparently preempt and elastically scale deep learning workloads to drive high utilization without impacting their correctness or performance, across a global fleet of accelerators (e.g., GPUs, FPGAs)."  (Details at her ZDnet site linked to above) .... ' 

Deep Learning and Quantitative Finance

Of special interest, plan to attend:  

March 16 Talk, "Deep Learning for Sequences in Quantitative Finance" with David Kriegman

Register now (and find more info) for the next free ACM TechTalk, "Deep Learning for Sequences in Quantitative Finance," presented on Wednesday, March 16 at 12:00 PM ET/17:00 UTC by David Kriegman, Professor of Computer Science & Engineering at the University of California, San Diego. Andrew Rosenfeld, Senior Vice President at Two Sigma, will moderate the question-and-answer session following the talk.

Leave your comments and questions with our speaker now and any time before the live event on ACM's Discourse Page. And check out the page after the webcast for extended discussion with your peers in the computing community, as well as further resources on deep learning, machine learning, and more.

(If you'd like to attend but can't make it to the virtual event, you still need to register to receive a recording of the TechTalk when it becomes available.)

Note: You can stream this and all ACM TechTalks on your mobile device, including smartphones and tablets.

The quantitative investment process can be viewed as one that takes in raw data at one end and executes trades that buy and sell financial instruments at the other end. The process naturally decomposes into steps of feature extraction, forecasting the returns of individual instruments, portfolio allocation to decide quantities to trade, and trading execution. Many of the steps in this process are readily expressed as machine learning problems that can be addressed using deep learning sequence methods. This talk will provide an overview of this pipeline and deep learning for sequences. No background knowledge in finance or deep learning is required. .... ' 
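
To give a flavor of the "forecasting returns from sequences" step the talk covers, the sketch below trains a small LSTM on a synthetic return series. It is a generic PyTorch example on invented data, not material from the talk, and certainly not investment advice.

    import torch
    import torch.nn as nn

    class ReturnForecaster(nn.Module):
        """Tiny LSTM that maps a window of past returns to a one-step-ahead forecast."""
        def __init__(self, hidden_size=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, 1)

        def forward(self, x):                  # x: (batch, window, 1)
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])    # use the last hidden state

    # Synthetic AR(1)-style returns, purely to make the example runnable.
    torch.manual_seed(0)
    series = torch.zeros(2000)
    for t in range(1, 2000):
        series[t] = 0.3 * series[t - 1] + 0.01 * torch.randn(1).item()

    window = 20
    X = torch.stack([series[i:i + window] for i in range(len(series) - window - 1)]).unsqueeze(-1)
    y = series[window + 1:].unsqueeze(-1)

    model = ReturnForecaster()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for epoch in range(20):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    print(f"final training MSE: {loss.item():.6f}")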

Paying Attention to Attention: Meta-Awareness

The case of the Segway struck me here.  Reasonable?  

University of Miami neuroscientist Amishi Jha explains why and how leaders should hone their meta-awareness.  by Theodore Kinni   in Strategy-Business

Once upon a time, the Segway was going to revolutionize the transportation industry. Steve Jobs reportedly said that Dean Kamen’s invention had the transformative potential of the personal computer, and venture capitalist John Doerr predicted that Kamen’s startup would reach US$1 billion in sales—a lot of money in 2001, when nobody but tweens believed in unicorns—at record speed. Instead, sightseeing tours and mall cop beats were nearly the only things the two-wheeled, self-balancing personal transporter transformed.

There are many reasons why the Segway never achieved its purported promise, but a lot of them track back to the misplaced focus of Dean Kamen. He didn’t see the forest for the trees. He was so intently focused on one narrow aspect of the Segway—the innovative technology that enabled its intuitive, automatic balance and operation—that he and his early boosters were unaware that its markets were extremely limited. Where in a nation of cities and towns that considered skateboards too dangerous for the sidewalks would hundreds of thousands of Segway riders be allowed to zip around? And short of that, who was going to pay $5,000 to take a Segway for a spin in the driveway? ... ' 

VR 'Shopping Task' Could Help Test for Cognitive Decline in Adults

Does not sound surprising, but is it the best measure? 

VR 'Shopping Task' Could Help Test for Cognitive Decline in Adults

By King's College London (U.K.) News Center, February 4, 2022

A virtual reality (VR) test developed by researchers at the U.K.'s King's College London can be used to assess an individual's functional cognition, and eventually may be used to test for age-related cognitive decline.

Using the VR shopping task called "VStore," 142 healthy individuals aged 20 to 79 years were asked to verbally recall a list of 12 items; they then were assessed based on the time it took to collect those items in a virtual store, select them on a virtual self-checkout machine, pay for them, and then order coffee. The researchers found VStore simultaneously engaged several key neuropsychological functions, potentially accessing a greater range of cognitive domains than standard assessments.

Said King's College's Lilla Porffy, "These are promising findings adding to a growing body of evidence showing that [VR] can be used to measure cognition and related everyday functioning effectively and accurately."

From King's College London (U.K.): https://www.kcl.ac.uk/news/a-virtual-reality-shopping-task-could-help-test-for-cognitive-decline-in-adults  .... ' 

Tuesday, February 22, 2022

How Will China Regulate AI?

Regulation will be a good indication of the direction and seriousness of putting automated 'intelligence' into IT functionality. 

China Is About to Regulate AI—and the World Is Watching   By Wired, February 22, 2022

Wen Li, a Shanghai marketer in the hospitality industry, first suspected that an algorithm was messing with her when she and a friend used the same ride-hailing app one evening.

Wen's friend, who less frequently ordered rides in luxury cars, saw a lower price for the same ride. Wen blamed the company's algorithms, saying they wanted to squeeze more money from her.

Chinese ride-hailing companies say prices vary because of fluctuations in traffic. But some studies and news reports claim the apps may offer different prices based on factors including ride history and the phone a person is using. "I mean, come on—just admit you are an internet company and this is what you do to make extra profit," Wen says.

On March 1, China will outlaw this kind of algorithmic discrimination as part of what may be the world's most ambitious effort to regulate artificial intelligence. Under the rules, companies will be prohibited from using personal information to offer users different prices for a product or service .... 

Among other things, China's new regulations prohibit fake accounts, manipulating traffic numbers, and promoting addictive content. They also provide protections for delivery workers, ride-hail drivers, and other gig workers. ... 

From Wired

View Full Article  

Ruling the Metaverse

Or will it be one of the current big IT players?

Will Best Buy Rule the Metaverse?

 Matthew Stern in Retailwire

With more people than ever talking about an impending virtual reality revolution and speculating about an eventual consumer move to a shared VR metaverse, some say Best Buy is primed to emerge as the go-to retailer for the hardware that people need to get plugged in.

Brokerage and advisory firm Loop Capital Markets takes the stance that Best Buy is positioned to leverage emerging enthusiasm for non-fungible tokens (NFTs) and virtual gaming/socializing, according to CNBC. In Loop’s approximation, the chain possesses advantages in its physical store environment, which allows customers to try on/test out new VR technology, and in the presence of in-store experts who set up technology (with an add-on cost attached). Loop sees a financial opportunity for Best Buy in becoming an integral part of the upgrade cycle for VR technology, selling higher-end and pricier PCs, displays, VR helmets and ancillary equipment as they come to market and replace outmoded ones.

The metaverse was all over the news late last year, with Facebook  just before the holiday season renaming itself Meta to reflect an emerging VR focus. Over Christmas, the company’s Oculus app — required to operate the VR hardware of the same name — emerged as the top seller on Apple’s App Store for the first time during the holiday, according to another CNBC report.

Not all VR news has been positive. A recent Wall Street Journal article detailing rather severe VR-related injuries may give pause to those considering immersing themselves in the technology and the loss of spatial and situational awareness that comes with it.

Best Buy has actively searched for emerging trends in the tech and gadget space to stay on top of since successfully pursuing a turnaround in the mid-2010s.

Late last year the chain announced the acquisition of Current Health, a remote patient monitoring and telehealth company. The purchase marked a continuation of the retailer’s moves into healthcare IT.  Earlier, when the tech world was buzzing about the prospects of IoT technology, Best Buy launched a program called Assured Living meant to facilitate older adults aging at home with the help of customized, integrated suites of in-home wired devices.  ... '

Non-Human Expression Ineligible for Copyright Protection

A U.S. finding, at least in this case. 

The US Copyright Office says an AI can’t copyright its art

By Adi Robertson (@thedextriarchy)

The US Copyright Office has rejected a request to let an AI copyright a work of art. Last week, a three-person board reviewed a 2019 ruling against Steven Thaler, who tried to copyright a picture on behalf of an algorithm he dubbed Creativity Machine. The board found that Thaler’s AI-created image didn’t include an element of “human authorship” — a necessary standard, it said, for protection.

Creativity Machine’s work, seen above, is named “A Recent Entrance to Paradise.” It’s part of a series Thaler has described as a “simulated near-death experience” in which an algorithm reprocesses pictures to create hallucinatory images and a fictional narrative about the afterlife. Crucially, the AI is supposed to do this with extremely minimal human intervention, which has proven a dealbreaker for the Copyright Office.

“COURTS HAVE BEEN CONSISTENT IN FINDING THAT NON-HUMAN EXPRESSION IS INELIGIBLE FOR COPYRIGHT PROTECTION”

The board’s decision calls “the nexus between the human mind and creative expression” a vital element of copyright. As it notes, copyright law doesn’t directly outline rules for non-humans, but courts have taken a dim view of claims that animals or divine beings can take advantage of copyright protections. A 1997 decision says that a book of (supposed) divine revelations, for instance, could be protected if there was (again, supposedly) an element of human arrangement and curation. More recently, a court found that a monkey couldn’t sue for copyright infringement. “The courts have been consistent in finding that non-human expression is ineligible for copyright protection,” the board says.

This doesn’t necessarily mean any art with an AI component is ineligible. Thaler emphasized that humans weren’t meaningfully involved because his goal was to prove that machine-created works could receive protection, not simply to stop people from infringing on the picture. (He’s unsuccessfully tried to establish that AIs can patent inventions in the US as well.) The board’s reasoning takes his explanation for granted. So if someone tried to copyright a similar work by arguing it was a product of their own creativity executed by a machine, the outcome might look different. A court could also reach an alternate conclusion on Thaler’s work if he follows his rejection with a lawsuit.... ' 


Shape Shifting Robots

Saw early work in Japan leading in this direction.  Notable for their human interaction. 

Shape-shifting Robots Adapt With Cleverly Designed Bodies, Grippers: Robots with shape-shifting grippers and bodies that snap into different shapes can do more with less.

By Evan Ackerman, in IEEE Spectrum

Robots have all kinds of ways to change their shapes, in the sense that you can use rigid components along with actuators to design a robot that can go from one shape to another. Such a system is inevitably highly complex, though, and typically requires a lot of mass plus a lot of energy to switch to and then maintain the shape that you’d like it to.

This week, we saw a couple of papers highlighting different shape-shifting robotic systems that rely on clever origami-inspired designs to rapidly change between different configurations, getting the maximum amount of usability out of the minimum amount of hardware.

The first paper, “Shape Morphing Mechanical Metamaterials Through Reversible Plasticity” from researchers at Virginia Tech and published in Science Robotics, demonstrates a composite material that’s able to transition from a flat sheet to a complex shape using a phase-change metal skeleton for switchable rigidity. The material is made of an elastomer with a pattern of cuts in it (an origami-like technique called kirigami, which uses cuts instead of folds) that determines what shape the elastomer deforms into. Sandwiched inside the elastomer sheet is a skeleton made of a metal alloy that melts at 62 °C, along with a flexible heating element. Heating the skeleton to the point where it liquifies allows the sheet to deform, and then it freezes again when the skeleton cools off and solidifies, a process that can take a few minutes. But once it’s done, it’s stable until you want to change it again.  ...'

Monday, February 21, 2022

Firefox Browser Losing its Users?

Is it being influenced by browsers like MS Edge pushing theirs?

Is Firefox Okay?

By Wired, February 17, 2022

At the end of 2008, Firefox was flying high. Twenty percent of the 1.5 billion people online were using Mozilla's browser to navigate the web. In Indonesia, Macedonia, and Slovenia, more than half of everyone going online was using Firefox. "Our market share in the regions above has been growing like crazy," Ken Kovash, Mozilla's president at the time, wrote in a blog post. Almost 15 years later, things aren't so rosy.

Across all devices, the browser has slid to less than 4 percent of the market—on mobile it's a measly half a percent. "Looking back five years and looking at our market share and our own numbers that we publish, there's no denying the decline," says Selena Deckelmann, senior vice president of Firefox. Mozilla's own statistics show a drop of around 30 million monthly active users from the start of 2019 to the start of 2022. "In the last couple years, what we've seen is actually a pretty substantial flattening," Deckelmann adds.

In the two decades since Firefox launched from the shadows of Netscape, it has been key to shaping the web's privacy and security, with staff pushing for more openness online and better standards. But its market share decline was accompanied by two rounds of layoffs at Mozilla during 2020. Next year, its lucrative search deal with Google—responsible for the vast majority of its revenue—is set to expire. A spate of privacy-focused browsers now compete on its turf, while new-feature misfires have threatened to alienate its base. All that has left industry analysts and former employees concerned about Firefox's future.

From Wired

View Full Article  

Economists Look at Robotics Surge

More on the topic of robotics influencing employment.

Economists Are Revising Their Views on Robots and Jobs

By The Economist, February 16, 2022

When the pandemic first struck, unemployment soared. Not since the Depression had American joblessness surpassed 14%, as it did in April 2020. But fears of a prolonged period of high unemployment did not come to pass. According to the latest available data, for November, the unemployment rate for the OECD club of mostly rich countries was only marginally higher than it was before the pandemic. By now it may even have drawn level. The rich world's labour-market bounceback is the latest phenomenon provoking economists to look again at a foundational question in the discipline: whether robots help or harm workers.

The gloomy narrative, which says that an invasion of job-killing robots is just around the corner, has for decades had an extraordinary hold on the popular imagination. Warning people of a jobless future has, ironically enough, created plenty of employment for ambitious public intellectuals looking for a book deal or a speaking opportunity. Shortly before the pandemic, though, other researchers were starting to question the received wisdom. The world was supposedly in the middle of an artificial-intelligence and machine-learning revolution, but by 2019 employment rates across advanced economies had risen to all-time highs. Japan and South Korea, where robot use was among the highest of all, happened to have the lowest rates of unemployment....

Two years into the pandemic, the evidence for automation-induced unemployment is scant, even as global investment spending surges...

From The Economist

View Full Article 

MIT's Twist Quantum Programming Language

Meet Twist: MIT’s Quantum Programming Language. Keeping tabs on data entanglement keeps reins on buggy quantum code. By Rina Diane Caballar

A team of researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created Twist, a new programming language for quantum computing. Twist is designed to make it easier for developers to identify which pieces of data are entangled, thereby allowing them to create quantum programs that have fewer errors and are easier to debug.

Twist’s foundations lie in identifying entanglement, a phenomenon wherein the states of two pieces of data inside a quantum computer are linked to each other. “Whenever you perform an action on one piece of an entangled piece of data, it may affect the other one. You can implement powerful quantum algorithms with it, but it also makes it unintuitive to reason about the programs you write and easy to introduce subtle bugs,” says Charles Yuan, a Ph.D. student in computer science at MIT CSAIL and lead author on the paper about Twist, published in the journal Proceedings of the ACM on Programming Languages.

“What Twist does is it provides features that allow a developer to say which pieces of data are entangled and which ones aren’t,” Yuan says. “By including information about entanglement inside a program, you can check that a quantum algorithm is implemented correctly.”

One of the language’s features is a type system that enables developers to specify which expressions and pieces of data within their programs are pure. A pure piece of data, according to Yuan, is free from entanglement, and thereby free from possible bugs and unintuitive effects caused by entanglement. Twist also has purity assertion operators to affirm that an expression lacks entanglement with any other piece of data, as well as static analyses and run-time checks to verify these assertions.
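
Twist's own syntax isn't shown in this excerpt, but the bookkeeping idea, tracking which pieces of quantum data may be entangled so that purity claims can be checked, can be illustrated with a small conceptual Python sketch. This is my illustration only; it is not Twist, not a quantum simulator, and it deliberately over-approximates (it never "un-entangles" anything), whereas Twist's analyses are far more precise.

    class EntanglementTracker:
        """Conceptual illustration: track which qubits *may* be entangled.

        Two-qubit gates merge the groups of their operands; a qubit is treated
        as 'pure' when its group contains only itself.
        """
        def __init__(self, n):
            self.group = {q: {q} for q in range(n)}

        def two_qubit_gate(self, a, b):
            merged = self.group[a] | self.group[b]
            for q in merged:
                self.group[q] = merged

        def assert_pure(self, q):
            others = self.group[q] - {q}
            if others:
                raise AssertionError(f"qubit {q} may be entangled with {others}")

    t = EntanglementTracker(3)
    t.assert_pure(0)            # fine: nothing has touched qubit 0 yet
    t.two_qubit_gate(0, 1)      # e.g., a CNOT that can entangle qubits 0 and 1
    t.assert_pure(2)            # still fine
    try:
        t.assert_pure(0)        # fails: qubit 0 may now be entangled with qubit 1
    except AssertionError as e:
        print("purity check failed:", e)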

Sunday, February 20, 2022

Ordering Sat Imagery

Satellite Imagery for Everyone

Here’s how you can order up a high-resolution image of any place on Earth, in IEEE Spectrum

Every day, satellites circling overhead capture trillions of pixels of high-resolution imagery of the surface below. In the past, this kind of information was mostly reserved for specialists in government or the military. But these days, almost anyone can use it.

That’s because the cost of sending payloads, including imaging satellites, into orbit has dropped drastically. High-resolution satellite images, which used to cost tens of thousands of dollars, now can be had for the price of a cup of coffee.

What’s more, with the recent advances in artificial intelligence, companies can more easily extract the information they need from huge digital data sets, including ones composed of satellite images. Using such images to make business decisions on the fly might seem like science fiction, but it is already happening within some industries.  .... '  

Encoding Time for Machine Learning

Time encoding from NVIDIA.  Time is often the most valuable component of any model, even if it is only the time at which the data was gathered.  

Three Approaches to Encoding Time Information as Features for ML Models

By Eryk Lewinson  from NVIDIA

Imagine you have just started a new data science project. The goal is to build a model predicting Y, the target variable. You have already received some data from the stakeholders/data engineers, did a thorough EDA, and selected some variables you believe are relevant for the problem at hand. Then you finally built your first model. The score is acceptable, but you believe you can do much better. What do you do?

There are many ways in which you could follow up. One possibility would be to increase the complexity of the machine-learning model you have used. Alternatively, you can try to come up with some more meaningful features and continue to use the current model (at least for the time being).

For many projects, both enterprise data scientists and participants of data science competitions like Kaggle agree that it is the latter – identifying more meaningful features from the data – that can often make the most improvement to model accuracy for the least amount of effort.

You are effectively shifting the complexity from the model to the features. The features do not have to be very complex. But, ideally, we find features that have a strong yet simple relationship with the target variable.

Many data science projects contain some information about the passage of time. And this is not restricted to time series forecasting problems. For example, you can often find such features in traditional regression or classification tasks. This article investigates how to create meaningful features using date-related information. We present three approaches, but we need some preparation first.

Setup and data

For this article, we mostly use very well-known Python packages, as well as relying on a relatively unknown one, scikit-lego, a library containing numerous useful functionalities that expand scikit-learn’s capabilities. We import the required libraries as follows:   (useful intro above, more at the link)  ... ' 
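
As a flavor of the feature engineering the article goes on to describe, here is a minimal sketch of one widely used approach, cyclical sine/cosine encoding of the month, which preserves the fact that December is adjacent to January. The column names and date range are illustrative only; the article itself works through three approaches at the link.

```python
import numpy as np
import pandas as pd

# Minimal sketch: cyclical (sine/cosine) encoding of a date's month, so the
# model sees month 12 and month 1 as neighbors rather than far apart.
df = pd.DataFrame({"date": pd.date_range("2021-01-01", "2021-12-31", freq="D")})
month = df["date"].dt.month                      # 1..12

df["month_sin"] = np.sin(2 * np.pi * month / 12)
df["month_cos"] = np.cos(2 * np.pi * month / 12)

print(df.head())
```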

Saturday, February 19, 2022

Log4j Security Challenge

A good overview and prediction of its influence.

A Source of Security Challenges for Years to Come

By David Geer, Commissioned by CACM Staff, February 17, 2022

"Log4Shell is probably the most dangerous software vulnerability of its type on record," says Sandeep Lahane, CEO of cloud security firm Deepfence.

On November 24, 2021, Chen Zhaojun of the Alibaba Cloud Security Team discovered the critical software vulnerability Log4Shell in the open-source Java logging utility, Log4J. Log4J comes as components in Java archive (JAR) files that software developers easily insert into their software projects without writing extra code. The security community also knows Log4Shell by its Common Vulnerabilities and Exposures (CVE) ID Number, CVE 2021-44228. The Apache Software Foundation, which supports Log4J, has given the Log4Shell vulnerability a critical severity rating of 10, which is its highest rating.

Log4Shell is severe partly because the Log4J utility is commonplace, appearing in multitudes of software. "Java is undisputedly the most common language for enterprise software applications developed over the last 10-15 years. Logging is a core application requirement, and Log4J is the standard choice for logging," explains Lahane.

Log4Shell is as trivial for cybercriminals to abuse as it is ubiquitous. A look at how Log4Shell compares with other vulnerabilities puts it into perspective. "Log4Shell is easier to exploit than OpenSSL Heartbleed, and vulnerable components are significantly more widely distributed than Apache Struts—two other highly-dangerous vulnerabilities from the last decade," says Lahane.

To locate and leverage Log4Shell, attackers scan networks for vulnerable log4j components. Once they locate the vulnerability, they send a malicious command string to the server using any protocol (TCP, HTTP, or others) that allows them to do so.

Bogus Log4J lookup commands in the malicious command strings lead Log4J to connect to malicious servers to execute remote, malicious Java code. The potential damage from Log4Shell attacks is severe; Remote Code Execution attacks like these enable an attacker to trigger malware such as worms over the Internet.

"It's important to remember that threat actors can use the same open-source scanners to detect the vulnerability that security analysts use. Many remote scanners are currently available on open-source sites like GitHub," says Karen Walsh, CEO of content marketing firm Allegro Solutions.

Log4Shell is also difficult for organizations to mitigate. An enterprise may not know whether its software uses Log4J. If the software has a dependency on a vulnerable Log4J component, it's more of a direct relationship, and it's not so difficult to find it. .... ' 
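
As a flavor of what that discovery problem looks like in practice, here is a minimal Python sketch that walks a directory tree and flags log4j-core JARs by the version embedded in their filenames. It is an illustration only: real audits must also inspect shaded/fat JARs and transitive dependencies, and the version cutoff below is a placeholder, not a substitute for the current Apache Log4j security advisories.

```python
import os
import re

# Illustrative sketch: flag log4j-core JARs by filename version.
# The 2.17.1 cutoff is an illustrative placeholder; always check the current
# Apache Log4j security advisories for the actual fixed versions.
JAR_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")
SAFE_VERSION = (2, 17, 1)

def scan(root):
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            match = JAR_PATTERN.search(name)
            if match:
                version = tuple(int(part) for part in match.groups())
                if version < SAFE_VERSION:
                    findings.append((os.path.join(dirpath, name), version))
    return findings

for path, version in scan("."):
    print(f"Review: {path} (log4j-core {'.'.join(map(str, version))})")
```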

Robots not Coming?

Takeover no, but they will be here, collaborating. 

The Robots Are Not Coming  (?) 

Those predictions of a robot takeover may not come to fruition.

By  Efraim Benmelech  in Kellogg

In 1987, at the beginning of the IT-driven technological revolution, the Nobel Prize–winning economist Robert Solow famously quipped that “you can see the computer age everywhere but in the productivity statistics.”

More than 30 years later, another technological revolution seems imminent. In what is called “the Fourth Industrial revolution,” attention is devoted to automation and robots. Many have argued that robots may significantly transform corporations, leading to massive worker displacement and a significant increase in firms’ capital intensity. Yet, despite these omnipresent predictions, it is hard to find robots not only in aggregate productivity statistics but also anywhere else.

While investment in robots has increased significantly in recent years, it remains a small share of total investment. The use of robots is almost zero in industries other than manufacturing, and even within manufacturing, robotization is very low for all but a few poster-child industries, such as automotive. For example, in the manufacturing sector, robots account for around 2.1 percent of total capital expenditures. For the economy as a whole, robots account for about 0.3 percent of total investment in equipment. Moreover, recent increases in sales of robotics are driven mostly by China and other developing nations as they play catch-up in manufacturing, rather than by increasing robotization in developed countries. These low levels of robotization cast doubt on doomsday projections in which robots will cut demand for human employees.  .... ' 

Friday, February 18, 2022

More Laws of Computing

Not so much laws as general indications of past trends that may usefully hold in the future, or not. Here are some proposals for new laws. Good for tracking against new data.

Moore’s Not Enough: 4 New Laws of Computing. Moore’s and Metcalfe’s conjectures are taught in classrooms every day—these four deserve consideration, too. By ADENEKAN DEDEKE in IEEE Spectrum. Below an intro, much more at the link.


I teach technology and information-systems courses at Northeastern University, in Boston. The two most popular laws that we teach there—and, one presumes, in most other academic departments that offer these subjects—are Moore’s Law and Metcalfe’s Law. Moore’s Law, as everyone by now knows, predicts that the number of transistors on a chip will double every two years. One of the practical values of Intel cofounder Gordon Moore’s legendary law is that it enables managers and professionals to determine how long they should keep their computers. It also helps software developers to anticipate, broadly speaking, how much bigger their software releases should be.

Metcalfe’s Law is similar to Moore’s Law in that it also enables one to predict the direction of growth for a phenomenon. Based on his observations and analysis, Robert Metcalfe, co-inventor of Ethernet and a pioneering innovator in the early days of the Internet, postulated that the value of a network grows in proportion to the square of the number of its users. A limitation of this law is that a network’s value is difficult to quantify. Furthermore, it is unclear whether every network’s value actually grows quadratically. Nevertheless, this law, as well as Moore’s Law, remains a centerpiece in both the IT industry and academic computer-science research. Both provide tremendous power to explain and predict the behaviors of some seemingly incomprehensible systems and phenomena in the sometimes inscrutable information-technology world.
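
For reference, both rules as usually stated reduce to very simple formulas; a small Python sketch with illustrative numbers only:

```python
# Rough sketch of the two growth rules as usually stated:
# Moore: capacity doubles roughly every two years; Metcalfe: value ~ n^2.

def moore_transistors(start_count, years, doubling_period=2.0):
    """Transistor count after `years`, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

def metcalfe_value(users, unit_value=1.0):
    """Network 'value' proportional to the square of the number of users."""
    return unit_value * users ** 2

print(moore_transistors(1e9, 10))   # ~32x after a decade
print(metcalfe_value(1_000_000))    # value scales with n^2
```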

King Camp Gillette reduced the price of the razors, and the demand for razor blades increased. The history of IT contains numerous examples of this phenomenon, too.

I contend, moreover, that there are still other regularities in the field of computing that could also be formulated in a fashion similar to that of Moore’s and Metcalfe’s relationships. I would like to propose four such laws.

Law 1. Yule’s Law of Complementarity

I named this law after George Udny Yule, the statistician who in 1912 proposed the seminal equation for explaining the relationship between two attributes. I formulate this law as follows:   .... ' 
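
The article's own formulation of Law 1 is at the link. For background only: Yule's classic coefficient of association Q for two attributes arranged in a 2x2 contingency table is (ad - bc) / (ad + bc). A small sketch of that background formula, not the article's law:

```python
# Background sketch (not the article's Law 1 itself): Yule's coefficient of
# association Q for a 2x2 table of two attributes.
# a, b, c, d are the four cell counts of the table.
def yule_q(a, b, c, d):
    return (a * d - b * c) / (a * d + b * c)

# Attributes that tend to occur together give Q close to +1.
print(yule_q(a=40, b=10, c=10, d=40))   # ~0.88
```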

Voice Assistant use on Smart Phones

 Less use? Is it a matter of context and environment, or capability? 

Smartphone Voice Assistant Use Stalls Out But Consumers Want More Voice Features in Mobile Apps – New Report

BRET KINSELLA on February 9, 2022 at 4:51 pm   in Voicebot.ai

The use of general-purpose voice assistants such as Siri, Alexa, and Google Assistant contracted over the past 12 months, according to data in a new report by Voicebot Research. A national consumer survey of over 1,100 U.S. adults found that fewer said they were using their smartphone-based voice assistant, while monthly active users (MAU) also declined as a percent of the population   ... ' 

Deepspeed Inference and Training for AI Scale, Mixture of Experts.

A considerable technical piece from Microsoft.  I much like the idea of a 'mixture of experts' to broaden contextual results; I could often have used it back in the day.

DeepSpeed: Advancing MoE inference and training to power next-generation AI scale

Published January 19, 2022, by the DeepSpeed Team and Andrey Proskurin, Corporate Vice President of Engineering

DeepSpeed-MoE for NLG: Reducing the training cost of language models by five times
PR-MoE and Mixture-of-Students: Reducing the model size and improving parameter efficiency
DeepSpeed-MoE inference: Serving MoE models at unprecedented scale and speed
Looking forward to the next generation of AI Scale

In the last three years, the largest trained dense models have increased in size by over 1,000 times, from a few hundred million parameters to over 500 billion parameters in Megatron-Turing NLG 530B (MT-NLG). Improvements in model quality with size suggest that this trend will continue, with larger model sizes bringing better model quality. However, sustaining the growth in model size is getting more difficult due to the increasing compute requirements.

There have been numerous efforts to reduce compute requirements to train large models without sacrificing model quality. To this end, architectures based on Mixture of Experts (MoE) have paved a promising path, enabling sub-linear compute requirements with respect to model parameters and allowing for improved model quality without increasing training cost.  ... ' 
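
For readers new to the idea, here is a minimal sketch of top-1 Mixture-of-Experts routing: a gate picks one expert per token, so only a fraction of the total parameters runs for any given input, which is where the sub-linear compute comes from. The sizes and weights are random placeholders; this illustrates the general MoE idea, not DeepSpeed-MoE itself.

```python
import numpy as np

# Minimal sketch of Mixture-of-Experts routing: a gate sends each token to
# one expert, so compute grows sub-linearly with total parameter count.
rng = np.random.default_rng(0)
d_model, n_experts, n_tokens = 8, 4, 5

gate_w = rng.normal(size=(d_model, n_experts))            # gating weights
experts = rng.normal(size=(n_experts, d_model, d_model))  # one weight matrix per expert
tokens = rng.normal(size=(n_tokens, d_model))

def moe_forward(x):
    scores = x @ gate_w                        # (n_tokens, n_experts)
    chosen = scores.argmax(axis=1)             # top-1 routing per token
    out = np.empty_like(x)
    for i, expert_idx in enumerate(chosen):
        out[i] = x[i] @ experts[expert_idx]    # only one expert runs per token
    return out, chosen

out, routing = moe_forward(tokens)
print(routing)   # which expert handled each token
```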

Is Nuclear Power still Phasing Out?

A long-time follower of nuclear power, a sometime analyzer of its value, and a believer that its danger was over-hyped.  Is it really now over?  Or will it survive?  Here, from IEEE Spectrum, is one look; overview below:

Is Europe’s Nuclear Phaseout Starting to Phase Out? France re-ups aggressive fission policy; Poland and Romania expand theirs; EU frameworks will treat some nukes as sustainable. By RAHUL RAO, 11 FEB 2022

In the depths of the 1970s oil crisis, French prime minister Pierre Messmer saw an opportunity to transform his country’s energy supply. His plan’s legacy is the dozens of cooling towers rising from the French landscape, marking the nuclear power stations that produce over two-thirds of France’s electricity, by far the highest proportion of any country on Earth.

Yet in a world where Chernobyl and Fukushima Daiichi smolder in recent memories, France’s cooling towers might seem like hopeless relics. Philippsburg, an old fortress town in Germany just 40 kilometers from the French border, once hosted a nuclear power plant with two towers just like them. A demolition crew brought both down on an overcast day in early 2020. The event was abrupt and unceremonious, its time kept secret to prevent crowds from gathering amidst the first wave of COVID-19.

Following somewhat in Messmer's footsteps, French president Emmanuel Macron announced a plan earlier this month to build at least six new reactors to help the country decarbonize by 2050.

At first glance, there’s little life to be found in the nuclear sectors of France’s neighbors. Germany’s coalition government is today forging ahead with a publicly popular plan to shutter the country’s remaining nuclear reactors by the end of 2022. The current Belgian government plans to shut down its remaining reactors by 2025. Switzerland is doing the same, albeit with a hazy timetable. Spain plans to start phasing out in 2027. Italy hasn’t hosted nuclear power at all since 1990.  ... ' 

Thursday, February 17, 2022

Reinforcement Learning for Healthcare

Interesting example of the use of reinforcement learning. 

Using reinforcement learning to identify high-risk states and treatments in healthcare

Published February 2, 2022

By Mehdi Fatemi, Senior Researcher; Taylor Killian, PhD student; and Marzyeh Ghassemi, Assistant Professor; from Microsoft Research. 

As the pandemic overburdens medical facilities and clinicians become increasingly overworked, the ability to make quick decisions on providing the best possible treatment is even more critical. In urgent health situations, such decisions can mean life or death. However, certain treatment protocols can pose a considerable risk to patients who have serious medical conditions and can potentially contribute to unintended outcomes.

In this research project, we built a machine learning (ML) model that works with scenarios where data is limited, such as healthcare. This model was developed to recognize treatment protocols that could contribute to negative outcomes and to alert clinicians when a patient’s health could decline to a dangerous level. You can explore the details of this research project in our research paper, “Medical Dead-ends and Learning to Identify High-risk States and Treatments,” which was presented at the 2021 Conference on Neural Information Processing Systems (NeurIPS 2021).

Reinforcement learning for healthcare

To build our model, we decided to use reinforcement learning—an ML framework that’s uniquely well-suited for advancing safety-critical domains such as healthcare. This is because at its core, healthcare is a sequential decision-making domain, and reinforcement learning is the formal paradigm for modeling and solving problems in such domains. In healthcare, clinicians base their treatment decisions on an overall understanding of a patient’s health; they observe how the patient responds to this treatment, and the process repeats. Likewise, in reinforcement learning, an algorithm, or agent, interprets the state of its environment and takes an action, which, coupled with the internal dynamics of the environment, causes it to transition to a new state, as shown in Figure 1. A reward signal is then assigned to account for the immediate impact of this change. For example, in a healthcare scenario, if a patient recovers or is discharged from the intensive care unit (ICU), the agent may receive a positive reward. However, if the patient does not survive, the agent receives a negative reward, or penalty.   .... ' 
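
To make that loop concrete, here is a toy Python sketch of the agent-environment interaction described above: observe a state, take an action, receive a reward, repeat. The "patient" environment and its numbers are entirely invented for illustration; this is not the paper's method, which learns to identify medical dead-ends from recorded data.

```python
import random

# Generic sketch of the reinforcement learning loop described above
# (not the paper's algorithm): observe state, act, receive reward.
class ToyPatientEnv:
    """Toy stand-in for a patient trajectory: state is an abstract health score."""
    def reset(self):
        self.health = 5
        return self.health

    def step(self, action):
        # action 0 = conservative treatment, action 1 = aggressive treatment
        self.health += random.choice([-1, 0, 1]) + (1 if action == 1 else 0)
        done = self.health <= 0 or self.health >= 10
        reward = 1 if self.health >= 10 else (-1 if self.health <= 0 else 0)
        return self.health, reward, done

env = ToyPatientEnv()
state, done = env.reset(), False
while not done:
    action = random.choice([0, 1])      # placeholder policy
    state, reward, done = env.step(action)
print("final state:", state, "reward:", reward)
```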

Advances in Brain Inspired Computing

Computers taking hints from brain designs? Neuromorphic. Note that spiking neural networks are new to me; I will examine them.

AI Overcomes Stumbling Block on Brain-Inspired Hardware

Allison Whitten, Contributing Writer,   Quanta Magazine

Algorithms that use the brain’s communication signal can now work on analog neuromorphic chips, which closely mimic our energy-efficient brains.

Today’s most successful artificial intelligence algorithms, artificial neural networks, are loosely based on the intricate webs of real neural networks in our brains. But unlike our highly efficient brains, running these algorithms on computers guzzles shocking amounts of energy: The biggest models consume nearly as much power as five cars over their lifetimes.

Enter neuromorphic computing, a closer match to the design principles and physics of our brains that could become the energy-saving future of AI. Instead of shuttling data over long distances between a central processing unit and memory chips, neuromorphic designs imitate the architecture of the jelly-like mass in our heads, with computing units (neurons) placed next to memory (stored in the synapses that connect neurons). To make them even more brain-like, researchers combine neuromorphic chips with analog computing, which can process continuous signals, just like real neurons. The resulting chips are vastly different from the current architecture and computing mode of digital-only computers that rely on binary signal processing of 0s and 1s.

With the brain as their guide, neuromorphic chips promise to one day demolish the energy consumption of data-heavy computing tasks like AI. Unfortunately, AI algorithms haven’t played well with the analog versions of these chips because of a problem known as device mismatch: On the chip, tiny components within the analog neurons are mismatched in size due to the manufacturing process. Because individual chips aren’t sophisticated enough to run the latest training procedures, the algorithms must first be trained digitally on computers. But then, when the algorithms are transferred to the chip, their performance breaks down once they encounter the mismatch on the analog hardware.

Now, a paper published last month in the Proceedings of the National Academy of Sciences has finally revealed a way to bypass this problem. A team of researchers led by Friedemann Zenke at the Friedrich Miescher Institute for Biomedical Research and Johannes Schemmel at Heidelberg University showed that an AI algorithm known as a spiking neural network — which uses the distinctive communication signal of the brain, known as a spike — could work with the chip to learn how to compensate for device mismatch. The paper is a significant step toward analog neuromorphic computing with AI.  ... "
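
For those to whom spiking networks are also new: the "spike" is simply a binary event emitted when a neuron's accumulated input crosses a threshold. A minimal leaky integrate-and-fire sketch in Python, with illustrative parameters not taken from the paper:

```python
import numpy as np

# Sketch of the "spike" the article refers to: a leaky integrate-and-fire
# neuron accumulates input current and emits a binary spike at threshold.
# Parameter values are illustrative, not taken from the paper.
def lif_neuron(input_current, threshold=1.0, leak=0.9, steps=50):
    v = 0.0
    spikes = []
    for t in range(steps):
        v = leak * v + input_current[t]   # leaky integration
        if v >= threshold:
            spikes.append(t)              # emit a spike
            v = 0.0                       # reset membrane potential
    return spikes

rng = np.random.default_rng(1)
print(lif_neuron(rng.uniform(0, 0.3, size=50)))   # spike times
```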

Need for Outside Perspective for Innovative Breakthroughs

 Outside and economically relevant.

Why Outside Perspectives Are Critical for Innovation Breakthroughs

Lessons from the story of Dr. Patricia Bath, the inventor of modern cataract surgery and the first African American woman to receive a medical patent.

Jean-Louis Barsoux, Cyril Bouquet, and Michael Wade 

Innovation is widely viewed as an engine of progress — not only for driving economic growth, but also for bringing vital improvements in a variety of domains, from science and medicine to inequality and sustainability.

Anyone can have a good idea, so you could expect the distribution of U.S. patents to resemble the demographics of the workplace. Of course, this is far from the case. Multiple studies have shown that two groups lag far behind in terms of leadership in innovation: women and African Americans.  .... '

McDonald's Automating Drive Throughs

Mentioned this previously, and our connection with the project, long pre-Covid.  This should heavily influence such automated takeout capabilities.  The Watson involvement is of interest. 

McDonald's and IBM could bring AI-powered drive-thrus to more restaurants  In Engadget By Jon Fingas @jonfingas 

Expect fewer humans taking your orders.

McDonald's might not be the only restaurant experimenting with AI-based order taking in the near future. Restaurant Dive reports McDonald's is selling its McD Tech Labs to IBM in order to "further accelerate" work on its automated voice ordering systems. The deal will help apply the technology to a wider variety of countries, languages and menus, McDonald's said, while bolstering IBM's Watson-powered customer service offerings.

The deal is expected to close in December. McD Tech Labs will join IBM's Cloud & Cognitive Software team.   .... "

Wednesday, February 16, 2022

Underground Fashion Maps

Mapping fashion senses; uses in advertising? Beyond? 

'Underground maps' segment cities using fashion, AI

by Cornell University  in Techxplore

Cornell computer scientists have developed a new artificial intelligence framework to automatically draw "underground maps," which accurately segment cities into areas with similar fashion sense and, thus, interests.

How people dress in an area can tell a lot about what happens there, or is happening at a particular time, and knowing the fashion sense of an area can be a very useful tool for visitors, new residents and even anthropologists.

"The question I've been interested in is, can we use millions of images from social media or satellite images to discover something interesting about the world?" said Utkarsh Mall, a doctoral student in the lab of Kavita Bala, professor of computer science and dean of the Cornell Ann S. Bowers College of Computing and Information Science.

Mall is lead author of "Discovering Underground Maps from Fashion," which he presented at the Winter Conference on Applications of Computer Vision, Jan. 4-8 in Waikoloa, Hawaii.  .... ' 

See Underground fashion maps   for more. 

Krafton Virtual Beings?

 An advance in virtual beings in the Metaverse?

Korean Gaming Giant Krafton Introduces AI-Powered Virtual Beings for New Metaverse Platform  by Eric Hal Schwartz

Korean video game developer Krafton has debuted hyperrealistic virtual humans to use within the digital worlds of the metaverse. As seen in the demo video above, the new characters leverage AI to move and interact with humans as virtual friends and game show hosts.

Krafton Virtual Beings

Krafton, best known for competitive online games like Player Unknown’s Battlegrounds, augmented the Unreal engine with its own AI to generate virtual human characters and their realistic skin, hair, and body movements. They even squint in reaction to bright lights. The characters are built from motion-capture videos that the graphics technology can use to reconstruct a digital skeleton that moves like a human body. The virtual flesh layered over it then uses text-to-speech, speech-to-text and voice-to-face tech to imitate real people. Krafton wants to deploy the characters on its new metaverse platform, called “Interactive Virtual World,” one of several digital spaces rolling out from major tech companies worldwide.

“Krafton’s virtual human demo showcases the sort of high-end content that can be realized with hyper-realism technology,” creative director Josh Seokjin Shin said. “This demo represents the initial steps we’re taking to realize an incredible and interactive virtual world (metaverse). In the meantime, we will continue introducing more advanced versions of virtual humans and content based on the belief in the infinite scalability of such technologies.”

Metaverse Race

The metaverse and virtual beings to populate the digital space have become one of the hottest conversational AI and virtual reality trends. The platforms and tech developers working on aspects of it don’t lack for investors. Startups like Hour One, Supertone, Resemble AI, Veritone, and DeepBrain are all grabbing cash and clients. Meta in the U.S., Baidu’s XiRang metaverse in China, the Jio-financed metaverse in India, and Nvidia’s new Omniverse might all do very well; Jio already has a lot of customers, and Two Platforms may help bring them to a new digital world. LG recently announced that its virtual spokesperson would be recording a music album, following in CoCo Hub’s virtual footsteps. On Russian TV, Sber has even deployed a virtual show host. Gaming companies like Krafton are an obvious vehicle to bring more interactive versions of the virtual characters to virtual spaces.  ... 

Examples of AI as Business Imperative

Examples of AI in action:

Your AI strategy’s secret ingredient,  by 7wData

AI is increasingly becoming a business imperative. Nine in 10 Fortune 1000 companies are not only investing in AI, but are increasing those investments, with 92% reporting measurable business benefits from their current AI use — up from 72% in 2020 and just 28% in 2018, according to a 2022 NewVantage Partners executive survey.

Still, only 26% of companies say their AI initiatives have actually moved into widespread production. The biggest obstacle? Cultural barriers, with executives 11 times more likely to say culture is the greatest impediment to AI success than to cite technology limitations as the biggest barrier.

And the cultural challenges have actually gotten worse, with 92% of executives citing cultural factors this year vs. 81% in 2018.

The upshot? Companies are finding that the key to successfully operationalizing AI comes down to people, and putting them at the center of their initiatives.

When Michael DiMascola, safety business partner at Herr’s Foods, wanted to reduce accidents for its delivery trucks, the first thought was to install surveillance cameras to watch drivers.

The Pennsylvania-based maker of potato chips, cheese curls, and other snacks operates a fleet of 640 vehicles to distribute products in the eastern United States and Canada, and drivers already had a bad taste in their mouths from a previous attempt to install cameras in their cabs.

“The stigma was that Big Brother was watching,” DiMascola says. “And they lit up like a Christmas tree when an event happened, so it was more of a distraction.”

If the problem is that drivers are too distracted, then adding yet another distraction isn’t going to help, he concluded. Plus, the old cameras only triggered after something bad happened, such as a collision or sudden braking or acceleration. “We needed to get ahead of those events,” says DiMascola, who saw distracted driving as a top priority to address. .... ' 

Nuclear Fusion Breakthrough Claimed

More fusion progress reported.

Scientists make breakthrough with nuclear fusion record  in DW

European researchers have leaped closer to making nuclear fusion a practical energy source for humanity. It's the same power-generating process that makes stars, including our own sun, shine. Scientists announced progress on Wednesday in the mission to make nuclear fusion a safe, practical, and clean energy source — smashing the record for the amount of nuclear fusion energy produced.

The experiment at the Joint European Torus (JET) facility near Oxford, England, set a record of generating 59 megajoules of sustained fusion energy in a five-second period — well over double the previous amount.
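
For scale, a quick back-of-the-envelope conversion (mine, not from the article): 59 megajoules sustained over five seconds corresponds to roughly 11.8 megawatts of average fusion power during the pulse.

```python
# 59 MJ released over 5 s: average power during the pulse.
energy_joules = 59e6
duration_seconds = 5
print(energy_joules / duration_seconds / 1e6, "MW")   # ~11.8 MW
```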

What is nuclear fusion?

The fusion process is a reverse of what happens in existing nuclear power plants — nuclear fission — where energy is released when large atoms are broken down into smaller ones. Nuclear fusion comes from bashing together two small atomic nuclei at such high temperatures that they fuse — and release energy.

The nuclei would normally repel one another, so unimaginably high temperatures are needed to make them move quickly enough to actually collide. It's the same basic process that sees hydrogen in the sun converted into helium, generating sunlight and making life on Earth possible.

Fusion offers the prospect of climate-friendly, abundant energy without pollution or radioactive waste.  ... ' 

Metadata – The Magic Behind Data Fabric

Topquadrant talks active metadata.

Metadata – The Magic Behind Data Fabric

by Irene Polikoff | Feb 7, 2022 | Blog from Topquadrant

The main goal of creating an enterprise data fabric is not new. It is the ability to deliver the right data at the right time, in the right shape, and to the right data consumer, irrespective of how and where it is stored. Data fabric is the common “net” that stitches integrated data from multiple data and application sources and delivers it to various data consumers. 

So, what makes the data fabric approach different from previous, more traditional data integration architectures? The key differentiator of a data fabric is its fundamental reliance on metadata to accomplish this goal. Implementing a data fabric means establishing a metadata-driven architecture capable of delivering integrated and enriched data to data consumers. To emphasize this point, Gartner coined the term active metadata. 

Data fabric relies on active metadata. 

Metadata describes different aspects of data. The more comprehensive the sets of metadata we collect, the better they will be able to support our application scenarios. Traditionally, metadata categories have included:

Business metadata – provides the meaning of data through mappings to business terms.

Technical metadata – provides  information on the format and structure of the data such as physical database schemas, data types, data models.

Operational metadata – describes details of the processing and accessing of data such as data sharing rules, performance, maintenance plans, archive and retention rules.

More recently, a new category of metadata became important – Social metadata. It typically includes discussions and feedback on the data from its technical and business users. Business metadata has evolved beyond just mapping to terms to now encompass ontologies that better assist with interpreting data’s context and meaning.

How is active metadata different from passive metadata? Gartner defines passive metadata as any metadata that is collected. Some Gartner analysts equate active metadata with metadata that is being used. By use, we mean use of the metadata by software (such as software components within the data fabric) in support of a broad range of data integration, analysis, reporting and other data processing scenarios. Other analysts push this concept further and say that active metadata is created by the data fabric by analyzing passive metadata and using the results to recommend or automate tasks.

Irrespective of the exact definition of active metadata, the underlying premise of the data fabric is that the optimal solution for the delivery of the right data in the right shape is to leverage its metadata. For example, the data fabric may use metadata to:  ... ' 
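
To ground the categories above, here is a minimal Python sketch of a metadata record carrying business, technical, operational, and social metadata, plus one trivially "active" use in which software consults the metadata to decide whether a dataset may be delivered to a consumer. The field names and the rule are invented for illustration and are not TopQuadrant's or Gartner's definitions.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a metadata record with the categories listed
# above, plus a simple "active" use: software consulting the metadata to
# decide whether a dataset may be delivered to a given consumer.
@dataclass
class DatasetMetadata:
    business: dict = field(default_factory=dict)      # business terms, ontology links
    technical: dict = field(default_factory=dict)     # schema, data types
    operational: dict = field(default_factory=dict)   # sharing rules, retention
    social: dict = field(default_factory=dict)        # user feedback, discussions

def may_deliver(meta: DatasetMetadata, consumer_region: str) -> bool:
    allowed = meta.operational.get("allowed_regions", [])
    return consumer_region in allowed

meta = DatasetMetadata(
    technical={"format": "parquet"},
    operational={"allowed_regions": ["EU", "US"]},
)
print(may_deliver(meta, "EU"))   # True
```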

Tuesday, February 15, 2022

Chips Like the Brain

Chips and the Brain

Researchers Make Chip That Can Be Rewired Like the Human Brain

By Silicon Republic, February 11, 2022

The chip is made from perovskite nickelate, which is very sensitive to hydrogen.  A multi-institutional research effort created a reprogrammable-on-demand electronic chip, which could eventually lead to the creation of a computer that learns continuously, like the human brain.   The chip is made from hydrogen-sensitive perovskite nickelate in order to adapt and learn in a way similar to the brain.

The researchers applied electrical impulses at different voltages to refine the concentration of hydrogen ions on the chip, generating states that could be mapped out to corresponding brain functions.   "Using our reconfigurable artificial neurons and synapses, simulated dynamic networks outperformed static networks for incremental learning scenarios," the researchers explained. 

"The ability to fashion the building blocks of brain-inspired computers on demand opens up new directions in adaptive networks."

From Silicon Republic
