
Friday, December 31, 2021

Brain Cells Learn Pong

Seems quite remarkable, if it is what I think it is. How far could it be extended or repeated? Is it ethical to use human brain cells for this?

ACM TECHNEWS

Human Induced Cells Grown in Petri Dish Learn to Play Pong Faster Than AI

By DailyMail.com, December 30, 2021

Researchers at Cortical Labs in Australia demonstrated that human-induced neurons grown in a petri dish can be taught to play the retro videogame Pong in only five minutes.

The DishBrain system is made of brain cells grown on microelectrode arrays that can stimulate the cells. The researchers sent electrical signals either to the right or left of the array to indicate the video game ball's location, and the brain cells would fire to move the paddle accordingly.

DishBrain learned the game in 10 to 15 rallies (gaming sessions that last 15 minutes), whereas it takes an artificial intelligence 5,000 rallies to learn the game.

"Using this DishBrain system, we have demonstrated that a single layer of in vitro cortical neurons can self-organize and display intelligent and sentient behavior when embodied in a simulated game-world," the researchers said in their published report.

From DailyMail.com  Full Article

MGM Trains with VR

Have seen similar things tried with VR in retail environments a decade ago without much success. Is it enough just to be innovative in a clever and fun way? Or is it just a way to attract potential employees?

ACM TECHNEWS

Desperate for Workers, MGM Resorts Lets Applicants Role Play With VR

By Gizmodo, December 30, 2021

MGM Resorts will use VR to simulate checking guests into hotels as part of the hiring process.

MGM Resorts will adopt virtual reality headsets that allow job applicants to experience various customer-service roles before making a decision. The headsets will enable applicants to simulate various front-of-house roles, including operating casino games and checking in hotel guests.

The company also is considering using immersive technology at career fairs, and has partnered with enterprise VR training firm Strivr to create a VR employee training module.

The module will cover "difficult guest interactions," said MGM Resorts' chief human resources officer Laura Lee. "It can be very difficult just to verbally explain the types of positions or show a video," and VR lets applicants "throw a headset on and really experience the job," Lee said.

From Gizmodo

On Cross Company AI

Very interesting, another example. Which leads us to the usual difficulty of adequately modeling different corporate goals.

Intelligence Across Company Borders

By Olga Fink, Torbjørn Netland, Stefan Feuerriegel

Communications of the ACM, January 2022, Vol. 65 No. 1, Pages 34-36   10.1145/3470449

Artificial intelligence (AI) has potential to increase global economic activity in the industrial sector by $13 trillion by 2030.6 However, this potential remains largely untapped because of a lack of access to, or a failure to effectively leverage, data across company borders.10 AI technologies benefit from large amounts of representative data—often more data than a single company possesses. It is especially challenging to achieve good AI performance in industrial settings with unexpected events or critical system states that are, by definition, rare. Industrial examples are early detection of outages in power systems or predicting machine faults and remaining useful life, for which robust inference is often precluded.

A solution is to implement cross-company AI technologies that have access to data from a large cross-company sample. This approach can effectively compile large-scale representative datasets that allow for robust inference. In principle, this could be achieved through data sharing. However, due to confidentiality and risk concerns, many companies remain reluctant to share data directly—despite the potential benefits.10 In some cases, data sharing is also precluded by privacy laws (for example, when involving data from individuals). Likewise, sharing code for AI models among companies has other drawbacks. In particular, it prevents the AI from learning from large-scale, cross-company data, and hence potential performance gains from cross-company collaboration would be restrained.

To overcome the limitations of direct data sharing, we discuss a potential remedy by using federated learning with domain adaptation. This approach can enable inference across company borders without disclosing the proprietary data. Earlier works discuss the importance of AI in interorganizational settings (for example, via meta learning or transfer learning). For instance, in Hirt et al.,3 a prediction ensemble across different interorganizational entities is built, which is effective when all entities solve the same task. What makes federated learning combined with domain adaptation appealing is its flexibility when operating conditions vary across companies: it allows one to train a collaborative model that is tailored to the specific application and the specific conditions of a company.

Hurdles in Collaborating on Artificial Intelligence

Two prime hurdles hinder companies from collaborating in AI initiatives. First, a privacy-preserving solution is required so that inference can be made without disclosing the underlying data.10 Physical sharing of data could disclose proprietary information on operational processes or other intellectual property to competitors. This is particularly problematic whenever companies seek AI collaboration with suppliers, customers, or competitors. For example, data from manufacturing plants could reveal parameter settings, product compositions, throughput rates, yield, routing, and machine uptimes. If such data is revealed, it can be misused by customers in negotiations or help competitors improve their productivity or products. Besides intellectual property, a number of further constraints are reducing companies' propensity to share data. Examples include trust, cybersecurity risks, ethical constraints, and laws for ensuring a user's right to privacy.

The second hurdle is that collaborating companies need to account for the possibility of domain shifts. A domain shift refers to discrepancies among the data distributions collected for systems with different configurations or operating conditions.9 For example, machine data from one company may not be representative of operating conditions observed in another company. A domain shift presents hurdles to the underlying inferences: a model that was trained on data from one company is likely to perform poorly when deployed at another company with distinctly different settings or conditions.

Toward Artificial Intelligence Across Companies

Recent advances in AI research can help overcome these two hurdles. Specifically, we review how cross-company AI can be achieved through a combination of federated learning to address the privacy-preserving data-sharing hurdle and domain adaptation to address the domain shift hurdle (see the accompanying figure). Such a combination is typically referred to as federated transfer learning.4,a  ..... 
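The core mechanic of federated learning is that raw data never leaves a company: each participant trains locally and only model parameters travel. A minimal sketch of that loop in Python, assuming a toy linear model and synthetic per-company datasets (the model, data, and round counts are hypothetical illustrations, not the authors' implementation):

```python
# Minimal federated-averaging sketch: each company runs local training on
# its private data; a coordinator averages the resulting weights. Only
# weights cross company borders -- never the underlying data.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.01, epochs=5):
    """A few SGD steps of linear regression on one company's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three hypothetical companies, each holding a private dataset.
companies = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

w_global = np.zeros(3)
for _ in range(20):                          # communication rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in companies]
    w_global = np.mean(local_ws, axis=0)     # coordinator averages weights

print("collaboratively trained weights:", w_global)
```

Domain adaptation would be layered on top of this loop, for example by fine-tuning the shared model on each company's operating conditions, which is what makes the combination suitable when data distributions differ across participants.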

IEEE Spectrum Looks at Likely Emergent Milestones

12 Exciting Engineering Milestones to Look for in 2022: An electric aircraft race, a new dark-matter detector, and a permanent Chinese space station await. By Michael Koziol .... Only the first example is written out below.

Psyche’s Deep-Space Lasers

In August, NASA will launch the Psyche mission, sending a deep-space orbiter to a weird metal asteroid orbiting between Mars and Jupiter. While the probe’s main purpose is to study Psyche’s origins, it will also carry an experiment that could inform the future of deep-space communications. The Deep Space Optical Communications (DSOC) experiment will test whether lasers can transmit signals beyond lunar orbit. Optical signals, such as those used in undersea fiber-optic cables, can carry more data than radio signals can, but their use in space has been hampered by difficulties in aiming the beams accurately over long distances. DSOC will use a 4-watt infrared laser with a wavelength of 1,550 nanometers (the same used in many optical fibers) to send optical signals at multiple distances during Psyche’s outward journey to the asteroid.  .....  (11 more at the link) 

The Real-World Dilemma of Security and Privacy by Design

Security remains hard.  

Technical Perspective: The Real-World Dilemma of Security and Privacy by Design  By Ahmad-Reza Sadeghi     Communications of the ACM, October 2021, Vol. 64 No. 10, Page 84 10.1145/3481040

The Roman historian Tacitus (55 A.D.–120 A.D.) once said "the desire for safety stands against every great and noble enterprise."

In the digital era, providing security and privacy is a noble enterprise, and the entanglement between security and safety systems is increasing. Smart devices have already become an integral part of our daily lives, providing access to a vast number of mobile services. Indeed, many people are glued to their smart devices. Hence, it seems almost natural to use them in the context of critical emergency and disaster alerts, from life-threatening weather to pandemic diseases. However, despite all the convenience they offer, smart devices expose us to many security and privacy threats.

The following paper investigates real-world attacks on the current implementation of Wireless Emergency Alerts (WEA), which comprises different emergency categories such as AMBER Alerts in child-abduction cases, or alerts issued by the U.S. president.

The 3rd Generation Partnership Project (3GPP) standardization body, consisting of seven telecommunications standard development organizations, has specified and released a standard to deliver WEA messages over Commercial Mobile Alert Service (CMAS) in LTE networks. According to the authors, 3GPP made a design choice to provide the best possible coverage for legitimate emergency alerts, regardless of the availability of working SIM cards required for setting up a secure channel to a network base station. However, this realization leaves every phone vulnerable to spoof alerts. Consequently, all modem chipsets that fully comply with the 3GPP standard show the same behavior, that is, fake Presidential Alerts (and other types of alerts) are received without authentication.

The paper applies the art of engineering and demonstrates as well as extensively evaluates a real-world base station spoofing attack (that is, disguising a rogue base station as genuine). Basically, the attacker sets up its own rogue base station in the vicinity of the victim(s).

The rogue base station will most probably present a better signal strength to the victims' devices than benign stations, leading the victim's device to try to connect to the rogue station. While the phone has failed or is still failing to connect to the (malicious) fake base station, the CMAS message will still be received by the device because the standardized protocol allows it. The attack was simulated in a sports arena using four 1-watt malicious base stations located outside the four corners of the stadium, with a 90% success rate (coverage of 49,300 of 50,000 seats). This sounds cool and creepy.  ... '

Thursday, December 30, 2021

ShakeAlert Earthquake Warning System

We were loosely involved in the testing of this warning system. Up to 10 seconds of pre-alert noted.


ACM TECHNEWS

Seconds Before Earthquake Rattled California, Phones Got Vital Warning  By The Guardian (U.K.)

An early-alert system managed by the U.S. Geological Survey (USGS) on Monday warned Californians of a 6.2-magnitude earthquake by phone, seconds before it struck.

The ShakeAlert system issues warnings through various agencies and applications, including Google's Android operating system.   USGS sensors feed information bundled into a data package that is displayed on phones within seconds; some alert apps are available for download, but even some who had no such app on their phones received alerts.

ShakeAlert sent quake warnings to about 500,000 phones before the tremors began.  "We got some reports from folks that they got up to 10 seconds' warning before they felt shaking. That's pretty darn good," said the USGS' Robert de Groot.

From The Guardian (U.K.)

View Full Article  

See also:

Using Sparse Data to Predict Lab Quakes  By Los Alamos National Laboratory,  December 30, 2021

Augmented versus Virtual Meta-Verses

A space we also explored, in both senses. Harder, yes, but not sure the user cares as long as the value is provided. And it does not matter otherwise.

Why AR, not VR, will be the heart of the metaverse


This article was contributed by Louis Rosenberg, CEO and chief scientist at Unanimous AI

My first experience in a virtual world was in 1991 as a PhD student working in a virtual reality lab at NASA. I was using a variety of early VR systems to model interocular distance  (i.e. the distance between your eyes) and optimize depth perception in software. Despite being a true believer in the potential of virtual reality, I found the experience somewhat miserable. Not because of the low fidelity, as I knew that would steadily improve, but because it felt confining and claustrophobic to have a scuba mask strapped to my face for any extended period.

Even when I used early 3D glasses (i.e. shuttering glasses for viewing 3D on flat monitors), the sense of confinement didn’t go away. I still had to keep my gaze forward, as if wearing blinders to the real world. There was nothing I wanted more than to take the blinders off and allow the power of virtual reality to be splattered across my real physical surroundings.

This sent me down a path to develop the Virtual Fixtures system for the U.S. Air Force, a platform that enabled users to manually interact with virtual objects that were accurately integrated into their perception of a real environment. This was before phrases like “augmented reality” or “mixed reality” had been coined. But even in those early days, watching users enthusiastically experience the prototype system, I was convinced the future of computing would be a seamless merger of real and virtual content displayed all around us.


Augmented Reality Research 1992 (USAF - L Rosenberg)

Cut to 30 years later, and the phrase “metaverse” has suddenly become the rage. At the same time, the hardware for virtual reality is significantly cheaper, smaller, lighter, and has much higher fidelity. And yet, the same problems I experienced three decades ago still exist. Like it or not, wearing a scuba mask is not pleasant for most people, making you feel cut off from your surroundings in a way that’s just not natural.

This is why the metaverse, when broadly adopted, will be an augmented reality environment accessed using see-through lenses. This will hold true even though full virtual reality hardware will offer significantly higher fidelity. The fact is, visual fidelity is not the factor that will govern broad adoption. Instead, adoption will be driven by which technology offers the most natural experience to our perceptual system. And the most natural way to present digital content to the human perceptual system is by integrating it directly into our physical surroundings.

Of course, a minimum level of fidelity is required, but what’s far more important is perceptual consistency. By this, I mean that all sensory signals (i.e. sight, sound, touch, and motion) feed a single mental model of the world within your brain. With augmented reality, this can be achieved with relatively low visual fidelity, as long as virtual elements are spatially and temporally registered to your surroundings in a convincing way. And because our sense of distance (i.e. depth perception) is relatively coarse, it’s not hard for this to be convincing.

But for virtual reality, providing a unified sensory model of the world is much harder. This might sound surprising because it’s far easier for VR hardware to provide high-fidelity visuals without lag or distortion. But unless you’re using elaborate and impractical hardware, your body will be sitting or standing still while most virtual experiences involve motion. This inconsistency forces your brain to build and maintain two separate models of your world — one for your real surroundings and one for the virtual world that is presented in your headset.

When I tell people this, they often push back, forgetting that regardless of what’s happening in their headset, their brain still maintains a model of their body sitting on their chair, facing a particular direction in a particular room, with their feet touching the floor (etc.). Because of this perceptual inconsistency, your brain is forced to maintain two mental models. There are ways to reduce the effect, but it’s only when you merge real and virtual worlds into a single consistent experience (i.e. foster a unified mental model) that this truly gets solved. .... '

D-Wave and Gate Model Quantum

Recall we talked to D-Wave regarding applications for quantum computing, and now continue to follow them. They have often been mentioned here. Here is an update in VentureBeat:

D-Wave opens up to Gate-model Quantum Computing    by Jack Vaughan  @JackIVaughan

Recent advances in quantum computing show progress, but not enough to live up to years of hyperbole. An emerging view suggests the much-publicized quest for more quantum qubits and quantum supremacy may be overshadowed by a more sensible quest to make practical use of the qubits we have now.

The latter view holds particularly true at D-Wave Systems Inc., the Vancouver, B.C., Canada-based quantum computing pioneer that recently disclosed its roadmap for work on logic gate-model quantum computing systems.


D-Wave’s embrace of gates is notable. To date, the company has focused solely on quantum annealing processors. Using this probabilistic approach, it has achieved superconducting qubit processor counts that it claims outpace most others. Its latest Advantage system boasts 5,000 qubits. That’s well ahead of the 127-qubit device IBM reported in November.

There is an important caveat, as followers of the quantum business know. D-Wave’s annealing qubits don’t have the general quantum qualities that competitive quantum gate-model systems have, and the degree of processing speed-up they provide has been questioned.

Questions arise despite D-Wave placing its systems in research labs at Google, NASA, Los Alamos National Laboratory, and elsewhere. D-Wave’s qubit counts have been faulted by critics for relying on a purpose-built approach aimed at a certain class of optimization problems.

Bring on the NISQ

Still, the company has a leg-up with its experience compared to most competitors, having fabricated and programmed superconducting parts since at least 2011.

For that matter, the gate-model quantum computing crew’s benchmarks have come under attack, too, and its battles with scaling and quantum error (or “noise”) correction have spawned the term “noisy intermediate-scale quantum” (or “NISQ”) to describe the present era, where users have to begin to do what they can with whatever working qubits they have. ... ' 

Wednesday, December 29, 2021

Pitch Perception

 Perfecting Pitch Perception

MIT News, Jennifer Michalowski, December 17, 2021

Neuroscientists at the Massachusetts Institute of Technology (MIT) developed a computational model trained using music, voices, and other naturalistic sounds to determine how humans perceive pitch. Their findings could help researchers reproduce pitch perception in cochlear implants. The researchers asked a deep neural network to identify the repetition rate of sounds in a training set to train it to estimate pitch. MIT's Mark Saddler said, "We very nicely replicated many characteristics of human perception ... suggesting that it's using similar cues from the sounds and the cochlear representation to do the task." Among other things, they determined that nerve cells fire in time with the sound vibrations that reach the inner ear. Said MIT's Josh McDermott, "For cochlear implants to produce normal pitch perception, there needs to be a way to reproduce the fine-grained timing information in the auditory nerve."
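Pitch, in this setting, is essentially the repetition rate (fundamental frequency) of a sound. As a rough illustration of the task the network was trained on, here is a minimal autocorrelation-based pitch estimator in Python; the test signal and parameter choices are hypothetical, and this is of course not the MIT model:

```python
# Minimal sketch: estimate a sound's repetition rate (pitch) from the
# strongest autocorrelation peak -- the classic baseline for the task
# the MIT network learned from naturalistic sounds.
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=50.0, fmax=500.0):
    """Return an F0 estimate in Hz within [fmin, fmax]."""
    sig = signal - signal.mean()
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]  # lags >= 0
    lo, hi = int(sample_rate / fmax), int(sample_rate / fmin)
    lag = lo + np.argmax(ac[lo:hi])     # best lag in the plausible range
    return sample_rate / lag

# Hypothetical test: a 220 Hz tone plus one harmonic.
sr = 16000
t = np.arange(0, 0.5, 1 / sr)
tone = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
print(f"estimated pitch: {estimate_pitch(tone, sr):.1f} Hz")  # ~220 Hz
```

The cochlear-implant point in the article is that such fine-grained timing cues must survive the auditory nerve's representation, which is why reproducing them matters for normal pitch perception.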


Implications of Google Visual Search: Bringing Eyeballs to Shopping

Been thinking of the implications of Google Lens for some time; will it now emerge more generally?

Will visual search bring eyeballs to Google Shopping?


Oct 14, 2021, by Tom Ryan   in Retailwire  with further expert discussion: 

Google is updating the look and format of product search pages so that pages featuring images better resemble a digital store than a long list of links and text.

One method involves the use of Google Lens, an image recognition technology that lets a smartphone camera conduct digital searches by identifying real objects. The “search images” button on the Google app makes all the images on a page searchable through Google Lens, providing information on those products as well as views of similar products.

In a blog entry, Bill Ready, president of commerce, payments & NBU at Google, said, “Whether it’s an image that you see online, a photo you saved on your phone, or something in the real world that catches your eye, Google Lens makes the products you see instantly shoppable.”

Google Lens will soon be extended to Chrome on desktops to provide product search results for website images, video and text.

A second method (demonstrated in the video below) enables visual searches directly from mobile search. A search for “cropped jackets,” for example, shows a visual feed of jackets in various colors and styles, alongside other information like local shops, style guides and videos. Users can filter the search down to style, department and brand and access ratings, reviews and price comparisons.

Wrote Bill Ready in his blog article, “This new experience is powered by Google’s Shopping Graph, a comprehensive, real-time dataset of products, inventory and merchants with more than 24 billion listings. This not only helps us connect shoppers with the right products for them, it also helps millions of merchants and brands get discovered on Google every day.”

Google has overhauled its shopping search over the last year as Amazon has surpassed the company as the leader in product search and lately has also taken share in digital advertising. Google’s changes include no longer requiring merchants to advertise to have their products listed in shopping searches, as well as eliminating commission fees. Another newer update has been adding in-store inventory checks to product search.   ..... '

Robotic Process Automation

Well-put intro piece in VentureBeat by Peter Wayner.

Robotic Process Automation in 2022

In 2021, enterprise teams turned to robotic process automation (RPA) to simplify workflows and bring some order to office tasks. The next year promises to bring more of the same sophisticated artificial intelligence and task optimization so more offices can liberate their staff from repetitive chores.

The product area remains one of the more poorly named buzzwords in enterprise computing. There are no robots in sight. The tools are generally deployed to fix what was once known as paperwork, but they rarely touch much paper. They do their work gluing together legacy systems by pushing virtual buttons and juggling multiple data formats so that the various teams can keep track of the work moving through their offices.

Here are 10 ways the RPA marketplace will shift and adapt in 2022:

Better Integration

The main job for RPA is to knit together some hundreds of legacy systems that now make up the backbone of many companies. The main challenge for each RPA company will be strengthening the connections between systems. That means more modules or bots in the marketplaces and better versions of the existing ones.

Lower Code

One of the major selling points for many RPA vendors is that their tools can come close to programming themselves through what some call “process discovery.” While this may never be as magical as anyone wants, the tools will continue to simplify this job. It may even approach “no-code” level automation for some simple tasks.

Higher Code

It seems contradictory to imagine that RPA platforms will simultaneously get easier to program and harder, but these changes will be seen in different levels of tasks. While the interns and managers will be able to automate more simple tasks, the developers will be called to customize the RPAs for more complex integrations. In many cases, RPA tools make good frameworks that sophisticated programmers can revise and extend. The RPA handles 95% of the work and the development team handles the last 5%. This is why some companies are reporting that RPAs are more complicated and expensive to maintain than they thought. Companies are asking them to do more and more sophisticated jobs, and that means bringing in better programming talent.

More AI

Craig Le Clair at Forrester Research predicts that every RPA company will either embrace AI or “become a dinosaur”. While this may never become strictly true, there’s no doubt that RPA is one of the simpler vectors for inserting AI into corporate DNA. The standard modules tackle tasks like optical character recognition, machine learning, and machine vision. RPA firms that ship better, smarter AI modules will be able to win more contracts. The accuracy and depth of the AI algorithms will rise in importance.

Divergence

Some firms need all the cleverness that AI scientists can deliver. Some firms, though, do not. Many of the AI options are aimed at dealing with older, paper interfaces or other tasks that require adaptability. One popular job for AI is to convert paper documents into digital form and then search for relevant data like the invoice number or the expiration date for a driver’s license. Some workflows, though, are pretty mature and don’t need this extra dose of smarts. Companies that process little paper or don’t need the extra intelligence may find they’re not as interested in AI-based innovations.   ... ' 

Tuesday, December 28, 2021

Explainable AI: Machine Learning

A technical primer for explainability methods.

Explainable AI (XAI) Methods Part 1 — Partial Dependence Plot (PDP)

Primer on Partial Dependence Plot, its advantages and disadvantages, how to make use and interpret it

By Seungjun (Josh) Kim  in TowardsDataScience

Explainable Machine Learning (XAI)

Explainable Machine Learning (XAI) refers to efforts to make sure that artificial intelligence programs are transparent in their purposes and how they work. [1] It has been one of the hottest keywords in the Data Science and Artificial Intelligence community in recent years. This is understandable because a lot of SOTA (State of the Art) models are black boxes that are difficult to interpret or explain despite their top-notch predictive power and performance. For many organizations and corporations, a several-percentage-point increase in classification accuracy may not be as important as answers to questions like “how does feature A affect the outcome?” This is why XAI has been receiving more of the spotlight, as it greatly aids decision making and causal inference.

In the next series of posts, I will cover various XAI methodologies that are in wide use nowadays in the Data Science community. The first method I will cover is the Partial Dependence Plot, or PDP for short.

Partial Dependence Plot (PDP)

Partial Dependence (PD) is a global and model-agnostic XAI method. Global methods give a comprehensive explanation of the entire data set, describing the impact of feature(s) on the target variable in the context of the overall data. Local methods, on the other hand, describe the impact of feature(s) at the level of an individual observation. Model-agnostic means that the method can be applied to any algorithm or model.

Simply put, PDP shows the marginal effect or contribution of individual feature(s) to the predictive value of your black box model [2]. For a more formal definition, the partial dependence function for regression can be defined as:  ... '
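The standard definition: for a feature subset S, the partial dependence is the model's prediction averaged over the other features, f_S(x_S) = E[f(x_S, X_C)], estimated by sweeping x_S while averaging over the training rows. A minimal sketch with scikit-learn, using a synthetic dataset and model as hypothetical stand-ins:

```python
# Minimal PDP sketch: sweep one feature while averaging the black-box
# model's predictions over the rest of the data -- the marginal effect
# the article describes.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Partial dependence of the prediction on features 0 and 2.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 2])
plt.show()
```

A standard caveat: because PD averages over the data, it can mislead when features are strongly correlated, since the sweep then evaluates the model on unrealistic feature combinations.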


Paradox Olivia Recruiting

Pointed out to me:

Paradox software firm hits unicorn status with new $200M capital raise

By Andy Blye - Reporter,   December 27, 2021, 09:37am MST

Paradox, a Scottsdale company that makes conversational recruiting software, announced Monday that it had raised $200 million in series C financing that includes a company valuation of $1.5 billion, vaulting it to unicorn status.

The funding came from a bevy of investors, co-led by Stripes, Sapphire Ventures, and Thoma Bravo with participation from Workday Ventures, Indeed, Willoughby Capital, Twilio Ventures, Blue Cloud Ventures, Geodesic, Principia Growth, DLA Piper Venture Fund and current investor Brighton Park Capital.... ' 

Directed Laser Light Attacks

I have a connection at Karlsruhe, seeking to connect on this. 

IT Security: Computer Attacks With Laser Light

By Karlsruhe Institute of Technology

Researchers at Germany's Karlsruhe Institute of Technology (KIT), the Technical University of Braunschweig, and the Technical University of Berlin demonstrated that physically isolated computer systems can be hacked using a directed laser.

The researchers found that hackers can communicate secretly with air-gapped computer systems over several meters of distance, using a directed laser to transmit data to the light-emitting diodes of traditional office devices without additional hardware at the attacked device. Their work was presented at ACSAC '21, the Annual Computer Security Applications Conference.

"The LaserShark project demonstrates how important it is to additionally protect critical IT systems optically next to conventional information and communication technology security measures," says KIT Professor Christian Wressnegger.

From Karlsruhe Institute of Technology

View Full Article  
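The covert channel itself is conceptually simple: bits become light intensities. A toy on-off keying sketch in Python (pure simulation; the LaserShark hardware details, data rates, and thresholds are not modeled here):

```python
# Toy on-off keying (OOK): the modulation idea behind optical covert
# channels like LaserShark -- a laser toggles the received light level,
# and thresholding recovers the bits. Pure simulation, no hardware.
import numpy as np

rng = np.random.default_rng(3)

def transmit(bits, high=1.0, low=0.0, noise=0.1):
    """Map bits to light intensities and add receiver noise."""
    levels = np.where(np.array(bits) == 1, high, low)
    return levels + rng.normal(scale=noise, size=len(bits))

def receive(samples, threshold=0.5):
    return (samples > threshold).astype(int).tolist()

message = [1, 0, 1, 1, 0, 0, 1, 0]
print(receive(transmit(message)))  # recovers [1, 0, 1, 1, 0, 0, 1, 0]
```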

Monday, December 27, 2021

Palm Oil Agriculture

An example of complex and labor-intensive agricultural harvesting.


ACM TECHNEWS

'Intelligent' Cutters for Trees to Ease Malaysia Palm Oil Labor Crunch

By Bloomberg, December 21, 2021

Automation in the palm oil industry has been slow, in part due to the sector's difficult and dangerous conditions.

IRGA, a precision farming solutions provider in Malaysia, has developed a palm oil harvesting tool that could help the industry manage a chronic labor shortage. The intelligent palm tree cutter, called HARVi, relies on digital sensors to identify the location of the worker and the tree and determine whether the worker is cutting fruit or pruning fronds.

A mobile app makes the data easily accessible and eliminates the need to manually count fruit bunches.

Said IRGA's Girish Ramachandran, "From the tree to the mill there is very, very low digitalization. People are still running manual processes that are absurd."

The industry's grueling conditions, with fruit growing as high as 40 feet off the ground in bunches that weigh up to 55 pounds, have posed a challenge for automation. With HARVi, workers can harvest fruit bunches up to 20 feet high, versus 12 to 14 feet currently.

From Bloomberg

View Full Article  

Insights into Brain Functions

STANN Reveals Insights Into How the Brain Functions

Baylor College of Medicine, Ana Maria Rodriguez, December 21, 2021

Baylor College of Medicine (BCM)'s Dr. Abul Hassan Samee and colleagues developed the Spatial Transcriptomics cell-types Assignment using Neural Networks (STANN) model to obtain novel insights into brain function. Samee said the team applied STANN and other computational techniques to brain datasets of the mouse olfactory bulb, and determined "the precise location of different cell types, whether they communicated with each other, and by which means." The researchers theorize that the brain's morphological layers contain different spatially localized clusters of cell types, with distinct subtypes executing location-specific functions. BCM's Dr. James Martin said STANN offers an "instruction manual" for scientists to analyze other brain regions or organs.

Full Article.

3D Printed Home

 Virginia Family Gets Keys to Habitat for Humanity's First 3D-Printed Home in U.S.

CNN, Sara Smart, December 26, 2021

A Virginia family purchased Habitat for Humanity's first three-dimensionally (3D)-printed home in the U.S., constructed in partnership with 3D printing company Alquist. The 1,200-square-foot, three-bedroom, two-bathroom concrete house was built in just 12 hours for April Stringfield and her son to move into. Janet V. Green with Habitat for Humanity Peninsula and Greater Williamsburg said the organization hopes to continue partnering and developing the technology used with the 3D printing. "We would love to build more with this technology, especially because it's got that long-term savings for the homeowners," Green said.  Full article.

Fusion: No Magnets

New paths to efficient fusion.

MAGNETIC-CONFINEMENT FUSION WITHOUT THE MAGNETS

Zap Energy’s new Z-pinch fusion reactor promises a simpler approach to an elusive goal

Tokamaks, which use magnets to contain the high-temperature plasma in which atomic nuclei fuse and release energy, have captured the spotlight in recent months, due to tremendous advances in superconducting magnets. Despite these gains, though, traditional magnetic-confinement fusion is still years away from fulfilling nuclear fusion’s promise of generating abundant and carbon-free electricity.

But tokamaks aren’t the only path to fusion power. Seattle-based Zap Energy’s FuZE-Q reactor, scheduled to be completed in mid-2022, bypasses the need for costly and complex magnetic coils. Instead, the machine sends pulses of electric current along a column of highly conductive plasma, creating a magnetic field that simultaneously confines, compresses, and heats the ionized gas. This Z-pinch approach—so named because the current pinches the plasma along the third, or Z, axis of a three-dimensional grid—could potentially produce energy in a device that’s simpler, smaller, and cheaper than the massive tokamaks or laser-fusion machines under development today.

Z-pinched plasmas have historically been plagued by instabilities. In the absence of a perfectly uniform squeeze, the plasma wrinkles and kinks and falls apart within tens of nanoseconds—far too short to produce useful amounts of electricity.

[Figure: artist renderings of the interior and exterior of Zap Energy's cylindrical reactor, with glowing plasma inside the vessel. Credit: Zap Energy. Caption: Zap Energy's Z-pinch design generates magnetic fields without using complex magnetic coils.]

Zap Energy’s approach, which it calls sheared-flow stabilization, tames these instabilities by varying the flow of plasma along the column. The design sheathes the plasma near the column’s central axis with faster-flowing plasma—imagine a steady stream of cars traveling in the center lane of a highway, unable to change lanes because heavy traffic is whizzing by on both sides. That arrangement keeps the fusion-reactive plasma corralled and compressed longer than previous Z-pinch configurations could.

“We think our reactor is the least expensive, most compact, most scalable solution with the shortest path to commercially viable fusion power,” says Ben Levitt, Zap Energy’s director of research and development. Levitt predicts that Zap will reach Q=1, or scientific breakeven—the point at which the energy released by the fusing atoms is equal to the energy required to create the conditions for fusion—by mid-2023, which would make it the first fusion project to do so.

Given the long history of broken promises in fusion-energy research, that’s the sort of claim that warrants skepticism. But Zap’s ascent of a forbiddingly steep technology curve has been swift and impressive. The startup was founded in 2017 as a spin-off of the FuZE (Fusion Z-pinch Experiment) research team at the University of Washington. The company produced its first fusion reactions the very next year. Before the company’s founding, the university team had collaborated with Lawrence Livermore National Laboratory researchers. They won a series of U.S. Department of Energy grants that enabled them to test the sheared-flow approach at progressively higher energy levels. To date, the company has raised more than US $40 million.  .... ' 

Sunday, December 26, 2021

Exploring Emotions with VR

Emotional Engagement in VR.  Neuromarketing connection?

Exploring Emotions with VR

By Max Planck Gesellschaft (Germany), December 22, 2021

Researchers at Germany's Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) used virtual reality (VR) to evoke emotions as realistically as possible.  Study participants experienced rollercoaster rides in VR as the researchers explored their brain activity via electroencephalography (EEG).

Results indicated the degree to which a person is emotionally engaged is observable in their brain's alpha oscillations.  MPI CBS' Felix Klotzsche said, "Using alpha oscillations, we were able to predict how strongly a person experiences a situation emotionally. Our models learned which brain areas are particularly important for this prediction."

The researchers demonstrated that the link between EEG signals and emotional feelings is verifiable under naturalistic conditions.

From Max Planck Gesellschaft (Germany)   Full Article 
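Alpha oscillations are EEG activity in roughly the 8 to 12 Hz band. A minimal sketch of extracting alpha-band power from a single EEG channel with SciPy; the synthetic signal and exact band edges are illustrative, not the MPI CBS pipeline:

```python
# Minimal sketch: estimate alpha-band (8-12 Hz) power from one EEG
# channel -- the kind of feature the study linked to emotional engagement.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 250                                  # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
# Hypothetical EEG: a 10 Hz alpha rhythm buried in broadband noise.
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)   # power spectral density
band = (freqs >= 8) & (freqs <= 12)
alpha_power = np.trapz(psd[band], freqs[band])   # integrate PSD over band
print(f"alpha-band power: {alpha_power:.3f}")
```

The study's models presumably went further, learning which electrodes' alpha activity best predicts reported emotional engagement; this sketch only shows the basic feature-extraction step.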

Aging in Place

Clever marketing play.   Is there enough in it?

Lowe’s wants to help customers age in place, with expert comments

Dec 23, 2021,  by Matthew Stern

Lowe’s recently formed a two-year partnership with AARP to provide strategies and information for older adults aging in their homes.

“For the past 18 months, the home has increased in importance for all of us and perhaps especially for our baby boomer customers, who are increasingly interested in aging in place in their own homes,” said Lowe’s CEO Marvin Ellison on the retailer’s third-quarter conference call.

AARP will provide the Lowe’s Livable Home initiative with educational content, including stories and videos, to help people make major and minor changes to living spaces that will make stairways more navigable, bathrooms and kitchens more user-friendly and support family caregivers seeking to make home updates.

Mr. Ellison said the partnership would offer solutions such as “walk-in bathtub, grab bars, stairlifts, nonslip floors, pull-down cabinets and wheelchair ramps.”

The partnership comes as pandemic-related cocooning in the U.S. inspired a huge number of households to tackle DIY home improvement projects.

AARP survey data also shows 70 percent of people 50 and older want to remain in their current homes as they age. In addition, households headed by people age 65 and older are expected to grow from 34 million to 48 million in the next 20 years, according to the Urban Institute.

“People are living longer and they want to live their best lives at every age,” said AARP CEO Jo Ann Jenkins, in a statement. “Ageless homes that work for older adults are good for people of all ages, but most houses weren’t built to support our needs long term.”  .....' 

Thursday, December 23, 2021

Fusion Energy Update by PPPL

PPPL Unravels Puzzle to Speed Development of Fusion Energy

Princeton Plasma Physics Laboratory, John Greenwald, December 17, 2021

A computational technique developed by researchers at the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) can model the movement of free electrons during fusion-harnessing experiments. The algorithm simulates pitch-angle scattering without losing the energy of the speeding electrons. PPPL's Yichen Fu said, "By solving the trajectories we can know the probability of electrons choosing every path, and knowing that enables more accurate simulations that can lead to better control of the plasma." PPPL's Hong Qin said the research findings offer a strong mathematical proof of the first working algorithm for solving the equation.

Full article
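Pitch-angle scattering changes the direction of an electron's velocity while, ideally, preserving its speed and hence its kinetic energy. A toy Monte Carlo sketch of that constraint in Python (a conceptual illustration of the conservation property, not the PPPL algorithm):

```python
# Toy sketch: pitch-angle scattering as small random rotations of the
# velocity vector. Rotations preserve |v|, so kinetic energy is conserved
# by construction -- the property the PPPL algorithm guarantees numerically.
import numpy as np

rng = np.random.default_rng(42)

def scatter_step(v, dtheta=0.05):
    """Tilt velocity v by a small random angle about a random axis."""
    speed = np.linalg.norm(v)
    axis = np.cross(v, rng.normal(size=3))   # axis perpendicular to v
    axis /= np.linalg.norm(axis)
    angle = rng.normal(scale=dtheta)
    # Rodrigues' rotation formula.
    v = (v * np.cos(angle)
         + np.cross(axis, v) * np.sin(angle)
         + axis * np.dot(axis, v) * (1 - np.cos(angle)))
    return v / np.linalg.norm(v) * speed     # re-fix speed exactly

v = np.array([1.0, 0.0, 0.0])
for _ in range(10_000):
    v = scatter_step(v)
print("speed after 10,000 scatterings:", np.linalg.norm(v))  # stays 1.0
```

Naive stochastic integrators tend to drift in energy over many steps; the point of a structure-preserving algorithm is to keep that invariant exact, as the rotation trick does here.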

Seismic Sensors Data Science

Seismic sensing, data capture, and analysis are also an area to watch.

Researchers Weave Optical Fibers into Seismic Sensors

Science, Paul Voosen, December 9, 2021

Researchers at Switzerland's ETH Zurich used a fiber optic cable extended to a volcano in Iceland to study its interior fluctuations and eruptions. They tapped the fiber with an "interrogator" box that sends a laser pulse and records the pattern of reflections coming back from defects along the cable. This method allows the researchers to generate an image of a passing seismic wave at a distance of 100 kilometers or more. Researchers across the globe are laying fiber optic cables on glaciers, volcanoes, permafrost, and earthquake fault zones due to their low cost, ruggedness, and density. Such fiber has enabled researchers to find previously unknown earthquake faults, study the interior workings of volcanoes and the movements of glaciers and avalanches, and detect shifting pressures from ocean tides and currents.

Full article
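The interrogator's trick is that the fiber's defects give each pulse a repeatable backscatter "fingerprint"; a passing seismic wave strains the fiber and perturbs that fingerprint at the corresponding distance. A toy sketch of the idea (synthetic traces; real distributed acoustic sensing interrogators measure optical phase, which this deliberately simplifies away):

```python
# Toy sketch of distributed fiber sensing: compare backscatter traces from
# successive laser pulses; where the fiber is strained, the trace changes,
# localizing the disturbance along the cable.
import numpy as np

rng = np.random.default_rng(7)
n = 1000                               # sample points along the fiber
baseline = rng.normal(size=n)          # static "fingerprint" from defects

trace = baseline.copy()
trace[400:420] += 0.8 * rng.normal(size=20)   # hypothetical strain event

diff = np.abs(trace - baseline)
print("disturbance near index:", int(np.argmax(diff)))  # within 400-419
```

Because every stretch of fiber acts as a sensor, one cable can replace a dense array of seismometers, which is what makes the approach cheap and rugged enough to deploy on glaciers and volcanoes.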

Webb Space Telescope Launch and the Realm of More Data

Linking to my long-time interest in distant and voluminous data gathering, starting with the Hubble telescope. Plan to reach out to them regarding use of data analysis. Now we are about to start a new, long-term approach. I met a few people through this blog over the years with related interests. There are worthwhile connections. Be aware.

WHAT TO EXPECT FROM NASA’S JAMES WEBB SPACE TELESCOPE LAUNCH

An anxiety-ridden launch that’s been decades in the making

By Loren Grush@lorengrush  Dec 23, 2021, 9:00am EST

On Christmas Day, NASA is gifting astronomers one of the greatest presents it can give by launching the most powerful space telescope ever created. Called the James Webb Space Telescope, or JWST, the space observatory is meant to be the successor to NASA’s Hubble Space Telescope already in orbit around Earth. And it promises to completely transform the way we study the cosmos.

Sporting the biggest mirror of any space-bound telescope ever launched, JWST is tasked with collecting infrared light from some of the most distant stars and galaxies in the Universe. With this capability, the telescope will be able to peer far back in time, imaging some of the earliest objects to have formed just after the Big Bang. On top of that, it will unravel the mysteries of supermassive black holes, distant alien worlds, stellar explosions, dark matter, and more.


NASA has worked for nearly three decades to craft this telescope and get it to the launchpad. Now, the telescope is finally set to launch on top of a European Ariane 5 rocket out of Europe’s primary launch site in Kourou, French Guiana in South America, on Saturday, December 25th. But once the telescope is in space, there’s still a long way to go. Because JWST is so massive, it must fly to space folded up. Once in space, it will undergo a complex unfurling process that will take up to two weeks to complete. And this reverse origami must go exactly right for the telescope to function properly.

All the while, JWST will be traveling to an extra cold spot located 1 million miles from Earth, where the spacecraft will live out its life, collecting as much infrared light as it can. It’s an extremely complicated launch and mission, with many opportunities for things to go wrong along the way. But if everything goes right, the world’s astronomers will have an unbelievably powerful tool at their disposal for the next five to 10 years.  ... .'

New Rechargeable Lithium-Ion Batteries

Batteries have become essential everywhere, powering IoT, clothing, and beyond.

Engineers Produce 140-Meter Flexible Rechargeable Battery

By MIT News

December 22, 2021

Researchers have developed a rechargeable lithium-ion battery in the form of an ultra-long fiber that could be woven into fabrics. The battery could enable a wide variety of wearable electronic devices, and might even be used to make 3D-printed batteries in virtually any shape.

The researchers envision new possibilities for self-powered communications, sensing, and computational devices that could be worn like ordinary clothing, as well as devices whose batteries could also double as structural parts.

In a proof of concept, the team behind the new battery technology has produced a flexible fiber battery 140 meters long to demonstrate that the material can be manufactured to arbitrarily long lengths. The work is described in "Thermally Drawn Rechargeable Battery Fiber Enables Pervasive Power," published in the journal Materials Today.

The system embeds the lithium and other materials inside the fiber, with a protective outside coating, thus making the version stable and waterproof.

From MIT News

View Full Article  

Wednesday, December 22, 2021

Data for Better Business Decision?

Ultimately this is the 'thing': how do we do it effectively? Some starting thoughts.


How To Use Data For Smarter Business Decisions

Big data technology is of the utmost importance for any company trying to meet its growth targets in 2022.

BY Sean Mallon

Big Data Technology Has Become a Nontrivial Element of Modern Business

If you intend to start resting your case with investing in data, analytics and more insightful business forecasts, stop. Instead, shift your focus toward prioritizing the business investment categories that would bring you the biggest bang for the buck in terms of both revenue and bottom line.

Most of your competitors have probably been relying on data to run their businesses for a while now. They use data to automate their processes by turning some of their operational and transactional data into alerts that help them make better business decisions in the quest for income. While this is an intelligent thing to do, that's where most of these efforts to use data in the process of running a business come to an end. The so-called insights-driven business transformation is the next level of making the most out of data. This is the ability to morph enterprise data into insights and then use these insights to spark actions that directly impact the outcome of a business. The process then loops over and over again, in a continuous stream of learning and improving. This is how customer-centric companies operate, and it has become the top priority for many CIOs and business analysts. Almost 70% of CIOs say their company has changed or is currently changing its management culture to make quantitative decision making one of its highest priorities.  ... '

Quantum Computing Error Correction

Error correction in context  is key.

Milestone in Quantum Computing With Error Correction

By SciTechDaily, December 20, 2021

Scientists at Dutch quantum computing research institute QuTech have integrated high-fidelity operations on encoded quantum data with a scalable framework for repeated data stabilization, a key milestone in the development of quantum error correction.

The resulting logical quantum bit (qubit) consists of seven physical qubits (superconducting transmons).

QuTech's Jorge Marques said, "We do three types of logical-qubit operations: initializing the logical qubit in any state, transforming it with gates, and measuring it. We show that all operations can be done directly on encoded information. For each type, we observe higher performance for fault-tolerant variants over non-fault-tolerant variants."

From SciTechDaily    View Full Article
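The underlying idea of a logical qubit is to spread one qubit's worth of information across several physical qubits so that errors can be detected and undone. A classical analogue, the three-bit repetition code, shows the flavor in a few lines of Python (real quantum codes such as the seven-qubit code here must also handle phase errors and measure parities without directly reading the encoded data):

```python
# Classical analogue of error correction: the 3-bit repetition code.
# One logical bit lives in three physical bits; a single bit-flip is
# corrected by majority vote. Quantum codes generalize this idea.
import random

def encode(bit):
    return [bit, bit, bit]

def corrupt(bits, p=0.1):
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    return int(sum(bits) >= 2)          # majority vote

trials = 100_000
failures = sum(decode(corrupt(encode(1))) != 1 for _ in range(trials))
print(f"logical error rate: {failures / trials:.4f}")  # ~0.028 vs 0.1 raw
```

Encoding only helps while the physical operations are good enough, which is why the higher performance QuTech observed for fault-tolerant variants matters: below that quality threshold, adding qubits makes things worse, not better.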

Tuesday, December 21, 2021

Converting Laws to Programs

How doable and useful is such smart coding? How easy is it to include elements like intent? What other considerations must it be compliant with? Consideration of interacting smart contracts?

Converting Laws to Programs, By Esther Shein in CACM

Communications of the ACM, January 2022, Vol. 65 No. 1, Pages 15-16   10.1145/3495564

Sometimes the intricacies of tax laws are mind-boggling, even to lawyers. Sarah Lawsky, a law professor at Northwestern University School of Law, and Jonathan Protzenko, a principal researcher at Microsoft Research, were working to translate Section 121 of the U.S. Tax Code, which stipulates how much a taxpayer can deduct from their income taxes from the profit of the sale of a home, into programmable code.

They found themselves stumped, because while the law stipulates a profit on the first $250,000 of a home sale is not to be taxed, "there's like nine layers of exceptions," including whether a person served in the military or is married, or if a spouse is deceased, Protzenko says.

U.S. law is "insanely complicated, so when you have that amount of insanity there's no way a human can confidently claim, `Give me your situation and I'll give you the right answer'," he says. "You need code to capture and precisely express what's supposed to happen, because English is too fuzzy and irregular."

Protzenko and Lawsky spent hours debating a fine point, "and she said, `Gosh, I thought I knew this text well'," he recalls. "She teaches it to her students every year, but transcribing law into code requires you to think about the most minute details, and there was a fair amount of head scratching to make sure we were 100% correct about what the law means."

Thankfully they were, and the code was embedded into Catala, a programming language developed by Protzenko's graduate student Denis Merigoux, who is working at the National Institute for Research in Digital Science and Technology (INRIA) in Paris, France.

It is not often lawyers and programmers find themselves working together, but Catala was designed to capture and execute legal algorithms and to be understood by lawyers and programmers alike in a language "that lets you follow the very specific legal train of thought," Protzenko says.

In highly regulated industries, critical laws are translated precisely into code that reflects their intent. This is especially true when it comes to tax software and software that verifies Health Insurance Portability and Accountability Act (HIPAA) compliance. Yet, tax software "does not formalize statutes in a meaningful way," according to Lawsky.

Tax forms created by the government essentially take tax laws and put them into algorithms—and they are not a direct translation of the law, she says.

For example, in a prepublication article for the Ohio State Technology Law Journal, Lawsky wrote that software programs such as TurboTax encode tax forms, which are not law. They are prepared by the government, which collects information and turns portions of the law into an algorithm for taxpayers to apply.

"The difficult part of the coding, and the judgment calls, are almost entirely performed by the government, not by those who code tax preparation software," Lawsky wrote.

"And because forms turn law into algorithms, the forms themselves—not the instructions … may contain judgments about the law [and] sometimes law that is unclear," she says. "The forms themselves abstract away from the law."

You would think something as numerical as income tax law would be similar to mathematical logic, but it is not, Protzenko says, because it is not written with the precision and clarity that would "make it amenable to a very mathematical reading of it."

For example, the law does not mention that a number may need to be rounded into whole cents. "The law won't tell you what you're supposed to do with rounding numbers and that can lead to ambiguity and a lack of specification of what's supposed to happen," he says.
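A flavor of what encoding law forces you to decide, in a deliberately simplified Python sketch: the $250,000 cap comes from Section 121 as described above, but the eligibility test and the rounding policy are hypothetical simplifications (the real statute has the "nine layers of exceptions" Protzenko mentions):

```python
# Deliberately simplified sketch of a Section 121-style exclusion.
# Turning law into code forces explicit answers to questions the text
# leaves open -- e.g., the rounding rule, which the statute is silent on.
from decimal import Decimal, ROUND_HALF_UP

EXCLUSION_CAP = Decimal("250000")       # Section 121 cap for a single filer

def taxable_gain(sale_profit: str, eligible: bool) -> Decimal:
    """Hypothetical, simplified rule: exclude up to the cap if eligible."""
    profit = Decimal(sale_profit)
    excluded = min(profit, EXCLUSION_CAP) if eligible else Decimal("0")
    gain = profit - excluded
    # A rounding policy must be chosen explicitly here.
    return gain.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(taxable_gain("300000.555", eligible=True))   # 50000.56
print(taxable_gain("300000.555", eligible=False))  # 300000.56
```

Catala's contribution is letting code like this mirror the statute's default-and-exception structure directly, so a lawyer can check each clause against the legal text rather than against a programmer's restructuring of it.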

Healthcare law is also very complex. Faisal Khan, senior legal counsel at healthcare law firm Nixon Gwilt Law in Vienna, VA, says, "Software for HIPAA compliance must incorporate algorithms that target and hit on all the top-level statutory requirements and implementing regulations."

To make that happen, Khan says, "There must be a team providing compliance-related input, as many of the regulations essentially function as guidelines for companies to adhere to."

That means a process or security check that may be compliant for a small company may not automatically be compliant for a large company or health system, Khan says. Moreover, software data should be verified by compliance specialists because enforcement professionals at the U.S. Department of Health and Human Services are not only going to review documents, but also will scrutinize how key individuals and stakeholders are following HIPAA compliance processes and tweaking those processes as necessary as things change, according to Khan.

Thus, while software is a key solution to reducing costs and standardizing practices, "There needs to be a human element to support implementing an algorithm-based solution on the ground," Khan says. "What works from a logic perspective may not be the best solution based upon company operations and existing processes."  .... ' 

Matterport for Retailers

Have started to look at this and related capabilities.   And note the mention of digital twins.

Matterport Powers New Experiences for Retailers and Consumers

Retailers turn to digital twins to reach consumers virtually while providing offline, shoppable experiences

December 21, 2021 09:15 ET | Source: Matterport Inc

SUNNYVALE, Calif., Dec. 21, 2021 (GLOBE NEWSWIRE) -- Matterport, Inc. (“Matterport”) (Nasdaq: MTTR), the leading spatial data company driving the digital transformation of the built world, is powering new experiences for retailers and their customers. Using Capture Services On-Demand, Matterport Pro2 cameras, or the Matterport Smartphone app, retail customers are creating virtual showrooms, curating shoppable digital experiences with e-commerce integration, and making store operations more efficient.

“Our retail customers use Matterport technology in a variety of different ways, whether that’s creating a virtual showroom where consumers can shop for holiday gifts or using digital twins to remotely manage store design and operations,” said Conway Chen, Vice President of Business Development of Matterport. “Even for consumers who may be locked down in their own country due to the pandemic, they can still visit their favorite store virtually and see merchandise presented in a real space, as if they were walking through an actual showroom. Our technology is also allowing influencers and designers to merchandise their products directly to their followers. Matterport technology is improving the shopping experience for both retailers and consumers.” 

Harrods uses Matterport Capture Services to create virtual showroom for consumers   ...

Superdeterminism? Free will involved?

On YouTube: https://www.youtube.com/watch?v=ytyjgIyegDI&list=WL&index=20   Brought to my attention; see also other related references below.  Technical:

By Sabine Hossenfelder


This is a video I have promised you almost two years ago: How does superdeterminism make sense of quantum mechanics? It's taken me a long time to finish this because I have tried to understand why people dislike the idea that everything is predetermined so much. I hope that in this video I have addressed the biggest misconceptions. I genuinely think that discarding superdeterminism unthinkingly is the major reason that research in the foundations of physics is stuck.

If you want to know more about superdeterminism, these two papers (and references therein) may give you a good starting point:

https://arxiv.org/abs/1912.06462

https://arxiv.org/abs/2010.01324

0:00 Intro

0:24 What is superdeterminism?

2:28 What's with free will?

8:13 How does superdeterminism work?

13:51 Why would it destroy science?

15:43 What is it good for?

19:25 Sponsor message  .... ' 


DHL Doubles Robotics

The robotics crunch continues.

DHL Doubles Robots as Humans Alone Can't Handle Holiday Crunch

By Bloomberg, December 20, 2021

DHL's supply-chain unit has doubled the number of robots it uses in the U.S. to about 1,500 ahead of the holidays, in addition to hiring 15,000 seasonal workers.

The move has enabled the parcel delivery company to keep up with orders even as bottlenecks and labor costs grow.

DHL's Oscar de Bok said, "The supply-chain disruption that we're seeing at the moment is not a one-time thing. Because of the growth of e-commerce, supply chains are now organized differently because you get major hops and jumps at the end of the supply chain, because that's the end-consumer. All the stores and the wholesalers and distributors that used to be in between are now less, and that's why you get more disruptions in supply chains."

From Bloomberg

View Full Article - May Require Paid Subscription 

Monday, December 20, 2021

Monetizing your Personal Data

Did considerable exploratory work in this space, including consideration of the risks involved.

Monetizing Your Personal Data

By Keith Kirkpatrick

Communications of the ACM, January 2022, Vol. 65 No. 1, Pages 17-19   10.1145/3495563

During the initial wave of commercialization of the Internet in the mid-to-late 1990s, companies began collecting personal information from visitors to their Websites. The value proposition laid out by Internet companies seemed simple: allow companies to track and capture user behavioral and demographic data, in exchange for free access to content, as well as a more personalized and tailored experience that was based on an individual's browsing and shopping habits.

However, few users or market observers could have projected the evolution of the market for data, which has become far more complex and valuable than previously imagined. In fact, large companies such as Google, Facebook, Amazon, and Alibaba, among others, have generated massive profits by leveraging the data collected, not only using it to improve the personalization and usability of their own sites, but by reselling that data to advertisers, to the tune of billions of dollars per year. March 2021 data from eMarketer indicated the Internet advertising market generated $378.2 billion in 2020, and projected that figure will rise to nearly $646 billion by 2024.

"Everything you do creates data that's being bought and sold," says George Stella, chief revenue officer of BigToken (bigtoken.com), a data broker that enables consumers to collect revenue from the use of their personal data. "So, the ad tech industry has collected a ton of information from people without their permission over the last 20-plus years, and made billions and billions of dollars off of it."

A key barrier to empowering people to generate revenue from their data is awareness. "Less than 200 or 300 million people out of 7.1 billion people globally are even aware that their data is being used or sold, and that they can actually benefit from these sites," says Sagar Shah, client partner with artificial intelligence (AI) technology firm Fractal (www.fractal.ai).

While the Internet advertising market is massive, putting specific monetary value on each individual user's personal data is highly variable, not only due to people's different demographic profiles, but also to the type of data and its relative level of abundance or scarcity. For example, data on demographics that are in limited supply (such as data on Middle Eastern male consumers) is more valuable than demographic data on white millennial women. Similarly, the browsing data of individuals seeking to purchase a Tesla or Ferrari automobile within the next month would be valued more highly by data brokers and advertisers than the data of someone browsing for the best deals on a used Chrysler minivan.

Regardless of the type of data, personal data has value on both the legitimate advertising market and the black market, where stolen records can be sold to various parties. Data broker Invisibly (www.invisibly.com) provides a listing of various types of data available for sale on the dark web, ranging from a Social Security number (valued at just $0.53) to a complete healthcare record ($250). There also is significant value attached to personal information that is collected, bought, and sold through legitimate operations, such as data brokers and Internet advertising firms.

Left out of this equation are the end users generating that data who, for the most part, do not share in any of that revenue. Enter companies such as the aforementioned BigToken, Invisibly, and Killi (killi.io), each of which serves as a middleman or broker between consumers and the companies that collect data. The goal is to create a user ownership model in which consumers retain more control over their data, who is permitted to capture it, and who can profit from it.

"There's a whole industry built around the unscrupulous gathering of customer data to optimize sales," says Rick Hoskins, founder of Filter King, a seller of HVAC filters via its eponymous online site. "We take a lot of care not to source customer data unethically. As a business owner, allowing normal people to monetize their data would take a massive weight off my shoulders. It would cut the knees out from under this sketchy shadow industry stealing people's information for profit. Not only would it give us, online marketers, access to more data, it would let us access it ethically."  .... ' 

Full article in ACM

What is Practical AI Understanding?

Looking forward to reading this; Quanta magazine is usually good and technically mid-level in its delivery. Yes, understanding in context is the most important thing, so precise calibration to the need at hand is important.

What Does It Mean for AI to Understand?

By Quanta Magazine, December 20, 2021

Even simple chatbots, such as Joseph Weizenbaum's 1960s ersatz psychotherapist Eliza, have fooled people into believing they were conversing with an understanding being, even when they knew that their conversation partner was a machine.

Remember IBM's Watson, the AI Jeopardy! champion? A 2010 promotion proclaimed, "Watson understands natural language with all its ambiguity and complexity." However, as we saw when Watson subsequently failed spectacularly in its quest to "revolutionize medicine with artificial intelligence," a veneer of linguistic facility is not the same as actually comprehending human language.

Natural language understanding has long been a major goal of AI research. At first, researchers tried to manually program everything a machine would need to make sense of news stories, fiction or anything else humans might write. This approach, as Watson showed, was futile — it's impossible to write down all the unwritten facts, rules and assumptions required for understanding text. More recently, a new paradigm has been established: Instead of building in explicit knowledge, we let machines learn to understand language on their own, simply by ingesting vast amounts of written text and learning to predict words. The result is what researchers call a language model. When based on large neural networks, like OpenAI's GPT-3, such models can generate uncannily humanlike prose (and poetry!) and seemingly perform sophisticated linguistic reasoning.
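
The "learn to predict words" idea can be made concrete with a toy. Below is a minimal sketch in Python, assuming nothing about GPT-3's internals: a bigram model that learns next-word prediction purely from counts. Real language models replace the count table with a neural network holding billions of parameters, but the training signal, predicting the next word from raw text, is the same.

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    # Count how often each word follows each other word.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def predict(prev_word):
        # Return the most likely next word, or None if the word is unseen.
        following = bigrams.get(prev_word)
        return following.most_common(1)[0][0] if following else None

    print(predict("the"))   # -> 'cat' ("cat" follows "the" twice, "mat" once)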

But has GPT-3 — trained on text from thousands of websites, books and encyclopedias — transcended Watson's veneer? Does it really understand the language it generates and ostensibly reasons about? This is a topic of stark disagreement in the AI research community. Such discussions used to be the purview of philosophers, but in the past decade AI has burst out of its academic bubble into the real world, and its lack of understanding of that world can have real and sometimes devastating consequences. In one study, IBM's Watson was found to propose "multiple examples of unsafe and incorrect treatment recommendations." Another study showed that Google's machine translation system made significant errors when used to translate medical instructions for non-English-speaking patients. ... '  (full article at link below)

From Quanta Magazine

View Full Article

Strange Robotics

ACM NEWS

This Robot Looks Like a Pancake, Jumps Like a Maggot

By The New York Times, December 14, 2021

The ability to jump can help a terrestrial robot traverse new spaces and navigate rough terrain.

Credit: R. Chen et al./Nature Communications

If a pancake could dream, it might long for legs so it could jump off your breakfast plate in pursuit of a better, unchewed life.

But legs, it turns out, are not necessary for something as flat as a flapjack to hop around. A group of scientists has designed a tortilla-shaped robot that can jump several times per second and higher than seven times its body height of half a centimeter. They report that the robot, which is the size of a squished tennis ball and weighs about the same as a paper clip, nimbly performs these feats without any semblance of feet. Their research was published on Tuesday in the journal Nature Communications.

Shuguang Li, a roboticist at Harvard who was not involved with the research, called the new robot "a clever idea" and "an important contribution to the soft robotics field."

Many terrestrial robots, meaning ones at home on the ground rather than in air or water, move by rolling or walking. But the ability to jump can help a terrestrial robot traverse new spaces and navigate rough terrain; sometimes it's more efficient for a robot to jump over an obstacle than to go around it, Rui Chen, a researcher at Chongqing University in China and an author of the paper, wrote in an email.

From The New York Times

Sunday, December 19, 2021

Malware Developers Turn to 'Exotic' Programming Languages to Thwart Researchers

Not sure I understand this; it could cut either way. Do more experts, who know advanced methods and patterns, bring more insight to finding malware? Or is it better to have a larger number of trainees doing the work, who may be likelier to find subtle security flaws?

 Malware Developers Turn to 'Exotic' Programming Languages to Thwart Researchers

ZDNet, Charlie Osborne, July 27, 2021

Cybersecurity service provider BlackBerry's Research & Intelligence team has found that malware developers are increasingly employing "exotic" coding languages to foil analysis. A report published by the team cited an "escalation" in the use of Go (Golang), D (DLang), Nim, and Rust to "try to evade detection by the security community, or address specific pain-points in their development process." Malware authors are experimenting with first-stage droppers and loaders written in these languages to evade detection on a target endpoint; once the malware has bypassed existing security controls that can identify more typical forms of malicious code, they are used for decoding, loading, and deploying malware. The researchers said cybercriminals’ use of exotic programming languages could impede reverse engineering, circumvent signature-based detection tools, and enhance cross-compatibility over target systems..... ' 

Discovering effective Drug Therapy

 Chemical compound discovery and application for drug therapy. 

Scientists Can Efficiently Screen Billions of Chemical Compounds to Find Effective Drug Therapies

USC Dornsife College of Letters, Arts, and Sciences

Darrin S. Joy, December 15, 2021

An international team of researchers led by the University of Southern California Dornsife College of Letters, Arts, and Sciences has devised a method of identifying effective drugs among billions of chemical compounds, more quickly and cost-efficiently than current methods. The V-SYNTHES (Virtual Synthon Hierarchical Enumeration Screening) method works directly with synthons, the virtual chemical building blocks of the REAL (readily available for synthesis) Space library, to identify the best molecules to match up with specific protein targets. V-SYNTHES was able to mine synthon libraries to identify drug-like molecules that could selectively target cannabinoid receptors more than 5,000 times faster than standard algorithms. ... ' 
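
The article gives no implementation details, but the hierarchical strategy can be sketched: score cheap partial combinations first, then enumerate complete molecules only for the winners. In the Python toy below, the synthon names and the scoring function are made-up stand-ins for real building blocks and docking scores; it illustrates only the search strategy, not V-SYNTHES itself.

    from itertools import product
    import random

    random.seed(0)

    # Hypothetical synthon libraries; the real REAL Space has billions of
    # combinations, which is what makes full enumeration infeasible.
    synthons_a = [f"A{i}" for i in range(100)]
    synthons_b = [f"B{i}" for i in range(100)]
    score_table = {s: random.random() for s in synthons_a + synthons_b}

    def dock_score(fragments):
        # Stand-in for docking a (partial) molecule; lower is better.
        return sum(score_table[f] for f in fragments) / len(fragments)

    # Stage 1: score scaffold + first synthon only (100 evaluations).
    top_a = sorted(synthons_a, key=lambda a: dock_score([a]))[:10]

    # Stage 2: enumerate full molecules only for those winners
    # (10 x 100 = 1,000 evaluations instead of 100 x 100 = 10,000).
    best = min(product(top_a, synthons_b), key=lambda m: dock_score(list(m)))
    print("best candidate:", best)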

 

Forest Health Analysis

Back to my forestry analysis days; much better sensing is available now.

Real-Time, Interactive Monitoring of Forest Health

Technical University of Munich (Germany)

December 10, 2021

A data analysis and visualization tool developed by researchers at Germany's Technical University of Munich (TUM) uses satellite images to track the health of European forests. The interactive Forest Condition Monitor (FCM) also allows users to view and download data for specific countries and time ranges. Using remote sensing data, the open-access interactive platform can color-code the greenness of European forests based on deviations from long-term norms, helping to identify hotspots for forest die-back and decline. Said TUM's Anja Rammig, "The FCM data, complemented by additional ground-based studies and monitoring campaigns, could help to identify causes for variations in the greenness of tree canopies and thus to gain a better understanding of the eco-physiology of trees under stress in natural surroundings."
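
As a rough illustration of how a "deviation from long-term norms" signal can be computed (the FCM's actual pipeline is not described in this summary), the Python sketch below standardizes one pixel's current vegetation index against its own seasonal history; the NDVI values and the flagging threshold are invented.

    import numpy as np

    # Twenty hypothetical years of June NDVI for one forest pixel.
    history = np.array([0.71, 0.69, 0.73, 0.70, 0.68, 0.72, 0.74, 0.69,
                        0.70, 0.71, 0.67, 0.73, 0.72, 0.70, 0.69, 0.71,
                        0.68, 0.72, 0.70, 0.71])
    current = 0.58   # this June's observation

    # Standardized deviation from the long-term norm for the same season.
    z = (current - history.mean()) / history.std(ddof=1)
    print(f"greenness anomaly z = {z:.2f}")
    if z < -1.5:
        print("flag pixel as potential die-back hotspot")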

Tiny Robotics

Very tiny robotics,  collaborative?

A new micro aerial robot based on dielectric elastomer actuators

by Ingrid Fadelli , Tech Xplore

A 0.16 g microscale robot that is powered by a muscle-like soft actuator. Credit: Ren et al.

Micro-sized robots could have countless valuable applications, for instance assisting humans during search-and-rescue missions, conducting precise surgical procedures, and performing agricultural interventions. Researchers at Massachusetts Institute of Technology (MIT) have recently created a tiny, flying robot based on a class of artificial muscles known as dielectric elastomer actuators (DEAs).

This new robot, presented in a paper published in Wiley's Advanced Materials journal, significantly outperformed many DEA-based micro-systems developed in the past. Most notably, the robot can operate at low voltages and has high endurance despite its miniature size.

"Our group has a long-term vision of creating a swarm of insect-like robots that can perform complex tasks such as assisted pollination and collective search-and-rescue," Kevin Chen, one of the researchers who carried out the study, told Tech Xplore. "Since three years ago, we have been working on developing aerial robots that are driven by muscle-like soft actuators." ... ' 

Saturday, December 18, 2021

Shrinking AI, Expanding Capability

The computational needs of AI soar as models and devices expand, while Moore's Law helps less than it used to.

Shrinking Artificial Intelligence   By Chris Edwards

Communications of the ACM, January 2022, Vol. 65 No. 1, Pages 12-14   10.1145/3495562

The computational demand made by artificial intelligence (AI) has soared since the introduction of deep learning more than 15 years ago. Successive experiments have demonstrated the larger the deep neural network (DNN), the more it can do. In turn, developers have seized on the availability of multiprocessor hardware to build models now incorporating billions of trainable parameters.

The growth in DNN capacity now outpaces Moore's Law, at a time when relying on silicon scaling for cost reductions is less assured than it used to be. According to data from chipmaker AMD, cost per wafer for successive nodes has increased at a faster pace in recent generations, offsetting the savings made from being able to pack transistors together more densely (see Figure 1). "We are not getting a free lunch from Moore's Law anymore," says Yakun Sophia Shao, assistant professor in the Electrical Engineering and Computer Sciences department of the University of California, Berkeley.

Though cloud servers can support huge DNN models, the rapid growth in size causes a problem for edge computers and embedded devices. Smart speakers and similar products have demonstrated inferencing can be offloaded to cloud servers and still seem responsive, but consumers have become increasingly concerned over having the contents of their conversations transferred across the Internet to operators' databases. For self-driving vehicles and other robots, the round-trip delay incurred by moving raw data makes real-time control practically impossible.

Specialized accelerators can improve the ability of low-power processors to support complex models, making it possible to run image-recognition models in smartphones. Yet a major focus of R&D is to try to find ways to make the core models far smaller and more energy efficient than their server-based counterparts. The work began with the development of DNN architectures such as ResNet and Mobilenet. The designers of Mobilenet recognized the filters used in the convolutional layers common to many image-recognition DNNs require many redundant applications of the multiply-add operations that form the backbone of these algorithms. The Mobilenet creators showed that by splitting these filters into smaller two-dimensional convolutions, they could cut the number of calculations required by more than 80%.
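
That claim can be checked with simple arithmetic. The sketch below, using an illustrative layer shape rather than one from the article, counts multiply-adds for a standard 3x3 convolution versus a MobileNet-style depthwise-separable pair.

    # Illustrative layer shape (not from the article).
    H = W = 112                 # output feature-map height/width
    Cin, Cout, K = 64, 128, 3   # channels in/out, kernel size

    standard  = H * W * Cout * Cin * K * K   # full 3x3 convolution
    depthwise = H * W * Cin * K * K          # 3x3, one filter per channel
    pointwise = H * W * Cout * Cin           # 1x1 channel mixing
    separable = depthwise + pointwise

    print(f"standard : {standard:,} multiply-adds")
    print(f"separable: {separable:,} multiply-adds")
    print(f"saving   : {1 - separable / standard:.0%}")

With these assumed dimensions the separable form needs roughly 88% fewer multiply-adds, consistent with the figure above.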

A further optimization is layer-fusing, in which successive operations funnel data through the weight calculations and activation operations of more than one layer. Though this does not reduce the number of calculations, it helps avoid repeatedly loading values from main memory; instead, they can sit temporarily in local registers or caches, which can provide a big boost to energy efficiency.
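
A toy picture of the same idea, assuming a made-up two-step pipeline of a weight multiply followed by bias and ReLU: the fused version finishes each tile while it is still in fast storage, so a full-size intermediate array never exists. NumPy has no explicit scratchpad, so this only mimics the access pattern.

    import numpy as np

    x = np.random.rand(1024, 1024).astype(np.float32)
    w, b = np.float32(0.5), np.float32(-0.1)
    out = np.empty_like(x)

    # Unfused version (materializes a full intermediate array):
    #   t = x * w                     # written to and re-read from memory
    #   out = np.maximum(t + b, 0)

    # Fused version: each tile is multiplied, biased, and activated while
    # still "hot", so only a tile-sized temporary ever exists.
    TILE = 128
    for i in range(0, x.shape[0], TILE):
        for j in range(0, x.shape[1], TILE):
            t = x[i:i+TILE, j:j+TILE] * w                    # "layer 1"
            out[i:i+TILE, j:j+TILE] = np.maximum(t + b, 0)   # fused "layer 2"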

More than a decade ago, research presented at the 2010 International Symposium on Computer Architecture by a team from Stanford University showed the logic circuits that perform computations use far less energy compared to what is needed for transfers in and out of main memory. With its reliance on large numbers of parameters and data samples, deep learning has made the effect of memory far more apparent than with many earlier algorithms.

Accesses to caches and local scratchpads are less costly in terms of energy and latency than those made to main memory, but making best use of these local memories is difficult. Gemmini, a benchmarking system developed by Shao and colleagues, shows even the decision to split execution across parallel cores affects hardware design choices. On one test of ResNet-50, Shao notes convolutional layers "benefit massively from a larger scratchpad," but in situations where eight or more cores are working in parallel on the same layer, simulations showed larger level-two cache as more effective.

Reducing the precision of the calculations that determine each neuron's contribution to the output both cuts the required memory bandwidth and energy for computation. Most edge-AI processors now use many 8-bit integer units in parallel, rather than focusing on accelerating the 32-bit floating-point operations used during training. More than 10 8-bit multipliers can fit into the space taken up by a single 32-bit floating-point unit.
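
A minimal sketch of what that precision reduction looks like for a weight tensor, using symmetric per-tensor quantization; production toolchains add calibration data, per-channel scales, and zero points.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.standard_normal(1000).astype(np.float32)   # "trained" weights

    scale = np.abs(w).max() / 127.0                    # map max |w| onto 127
    w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

    w_back = w_int8.astype(np.float32) * scale         # dequantize to inspect
    print("max abs error:", float(np.abs(w - w_back).max()))   # about scale/2
    print("bytes:", w.nbytes, "float32 vs", w_int8.nbytes, "int8")  # 4x smaller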

.... ' 

P vs NP: Complexity, The Problem today

This was a big deal when I was in grad school; we ran many tests to scope how easily problems could be solved in evolving contexts. I had the impression this was still unsolved. Below is a good scoping of current-day views, including a good intro video:

 Fifty Years of P vs. NP and the Possibility of the Impossible   By Lance Fortnow

Communications of the ACM, January 2022, Vol. 65 No. 1, Pages 76-85   10.1145/3460351

On May 4, 1971, computer scientist/mathematician Steve Cook introduced the P vs. NP problem to the world in his paper, "The Complexity of Theorem Proving Procedures." More than 50 years later, the world is still trying to solve it. In fact, I addressed the subject 12 years ago in a Communications article, "The Status of the P versus NP Problem."13

The P vs. NP problem, and the theory behind it, has not changed dramatically since that 2009 article, but the world of computing most certainly has. The growth of cloud computing has helped to empower social networks, smartphones, the gig economy, fintech, spatial computing, online education, and, perhaps most importantly, the rise of data science and machine learning. In 2009, the top 10 companies by market cap included a single Big Tech company: Microsoft. As of September 2020, the first seven are Apple, Microsoft, Amazon, Alphabet (Google), Alibaba, Facebook, and Tencent.38 The number of computer science (CS) graduates in the U.S. more than tripled8 and does not come close to meeting demand.

Rather than simply revise or update the 2009 survey, I have chosen to view advances in computing, optimization, and machine learning through a P vs. NP lens. I look at how these advances bring us closer to a world in which P = NP, the limitations still presented by P vs. NP, and the new opportunities of study which have been created. In particular, I look at how we are heading toward a world I call "Optiland," where we can almost miraculously gain many of the advantages of P = NP while avoiding some of the disadvantages, such as breaking cryptography.

As an open mathematical problem, P vs. NP remains one of the most important; it is listed on the Clay Mathematical Institute's Millennium Problems21 (the organization offers a million-dollar bounty for the solution). I close the article by describing some new theoretical computer science results that, while not getting us closer to solving the P vs. NP question, show us that thinking about P vs. NP still drives much of the important research in the area

The P vs. NP Problem

Are there 300 Facebook users who are all friends with each other? How would you go about answering that question? Let's assume you work at Facebook. You have access to the entire Facebook graph and can see which users are friends. You now need to write an algorithm to find that large clique of friends. You could try all groups of 300, but there are far too many to search them all. You could try something smarter, perhaps starting with small groups and merging them into bigger groups, but nothing you do seems to work. In fact, nobody knows of a significantly faster solution than to try all the groups, but neither do we know that no such solution exists.

This is basically the P vs. NP question. NP represents problems that have solutions you can check efficiently. If I tell you which 300 people might form a clique, you can check relatively quickly that the 44,850 pairs of users are all friends. Clique is an NP problem. P represents problems where you can find those solutions efficiently. We don't know whether the clique problem is in P. Perhaps surprisingly, Clique has a property called NP-completeness—that is, we can solve the Clique problem efficiently if and only if P = NP. Many other problems have this property, including 3-Coloring (can a map be colored using only three colors so that no two neighboring countries have the same color?), Traveling Salesman (find the shortest route through a list of cities, visiting every city and returning to the starting place), and tens to hundreds of thousands of others.
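
The "easy to check" half of that asymmetry fits in a few lines of Python. The sketch below verifies a claimed clique on a made-up toy graph by testing every pair, which for 300 members is 300 * 299 / 2 = 44,850 lookups; nothing in it helps with the hard part, finding the clique in the first place.

    from itertools import combinations

    def is_clique(members, friends):
        # friends: set of frozenset({u, v}) friendship pairs.
        return all(frozenset((u, v)) in friends
                   for u, v in combinations(members, 2))

    # Toy graph: a triangle a-b-c plus an outsider d.
    friends = {frozenset(p)
               for p in [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]}
    print(is_clique(["a", "b", "c"], friends))   # True
    print(is_clique(["a", "b", "d"], friends))   # False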

Formally, P stands for "polynomial time," the class of problems that one can solve in time bounded by a fixed polynomial in the length of the input. NP stands for "nondeterministic polynomial time," where one can use a nondeterministic machine that can magically choose the best answer. For the purposes of this survey, it is best to think of P and NP simply as efficiently computable and efficiently checkable.

For those who want a longer informal discussion on the importance of the P vs. NP problem, see the 2009 survey13 or the popular science book based on that survey.14 For a more technical introduction, the 1979 book by Michael Garey and David Johnson16 has held up surprisingly well and remains an invaluable reference for those who need to understand which problems are NP-complete.

Why Talk About It Now?

On that Tuesday afternoon in 1971, when Cook presented his paper to ACM Symposium on the Theory of Computing attendees at the Stouffer's Somerset Inn in Shaker Heights, OH, he proved that Satisfiability is NP-complete and Tautology is NP-hard.10 As Cook wrote: "The theorems suggest that Tautology is a good candidate for an interesting set not in [P], and I feel it is worth spending considerable effort trying to prove this conjecture. Such a proof would represent a major breakthrough in complexity theory."

Dating a mathematical concept is almost always a challenge, and there are many other possible times where we could start the P vs. NP clock. The basic notions of algorithms and proofs date back to at least the ancient Greeks, but as far as we know they never considered a general problem such as P vs. NP. The basics of efficient computation and nondeterminism were developed in the 1960s. The P vs. NP question was formulated earlier than that; we just didn't know it.  .... ( more at link including intro video and additional resources) ..... ' 


Friday, December 17, 2021

Farmers Markets

Have followed farmers markets for some time; is reinvention needed?

Do farmers markets need to be reinvented for the digital age? In RetailWire; a good discussion.

Small and medium-sized independent farms have discovered online selling in recent years with the arrival of numerous farm-to-door delivery apps, possibly threatening the popularity of farmers markets. 

Many farmers markets were already struggling due to over-saturation prior to the pandemic. A March 2019 article from NPR noted that the number of farmers markets exploded from 2,000 in 1994 to more than 8,600 in 2019. Crowds were heading to the bigger markets for variety and one-stop shopping, forcing scores of smaller ones to fold.

Newer competition has been coming as well from subscription-driven community-supported agriculture (CSA) programs and home delivery options from Amazon.com, Instacart and Blue Apron.

With the pandemic, the temporary closing of farmers markets and restaurants forced farms to pivot online to capitalize on the resurgence in home cooking. In many cases, the pandemic accelerated the use of local, direct-to-consumer food systems, such as Barn2Door, Farm to People, Our Harvest, Harvie and WhatsGood, that were already gaining traction. A number of farmers markets set up their own online shops.

Going online can help farms tap directly into the broader growth in online grocery in addition to reaching customers who can’t frequent farmers markets. Online, farms can offer a wider variety of products versus their farmers market stall while avoiding waking up well before dawn and spending the day in inclement weather.   .... '

Tiny ML, Less Memory in IoT

Always interested in the integration of AI and secure IoT: smarter, better devices.

ACM TECHNEWS

Tiny ML Design Alleviates Bottleneck in Memory Usage on IoT Devices

By MIT News, December 16, 2021

Massachusetts Institute of Technology (MIT) researchers have come up with a machine learning (ML) method to reduce the amount of memory required for Internet of Things (IoT) devices.

The researchers boosted the efficiency of TinyML software by analyzing memory use on microcontrollers running convolutional neural networks; they applied a new inference technique and neural architecture to address bottlenecks induced by imbalanced memory utilization, reducing peak memory usage four- to eight-fold.
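
The summary does not spell out the inference technique; per the MCUNetV2 paper, a key idea is patch-based inference, running the early, activation-heavy layers one small patch at a time so the full feature map never has to sit in RAM at once. The Python sketch below mimics only that memory pattern, with a stand-in elementwise stage (real convolutions would also need overlapping patch borders).

    import numpy as np

    def stage1(x):
        # Stand-in for an early, activation-heavy layer (elementwise here).
        return np.maximum(x * 0.5 - 0.1, 0).astype(np.float32)

    img = np.random.rand(224, 224).astype(np.float32)

    # Whole-image execution: full input and full output coexist in RAM.
    out_full = stage1(img)

    # Patch-by-patch execution: only one small patch is live at a time,
    # cutting the peak intermediate footprint roughly by the patch count.
    out_patch = np.empty_like(img)
    P = 56
    for i in range(0, 224, P):
        for j in range(0, 224, P):
            out_patch[i:i+P, j:j+P] = stage1(img[i:i+P, j:j+P])

    assert np.allclose(out_full, out_patch)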

When deployed on the next-generation MCUNetV2 tinyML vision system, the method was more accurate than other ML techniques running on microcontrollers.

"Without [graphics processing units] or any specialized hardware, our technique is so tiny it can run on these small cheap IoT devices and perform real-world applications like these visual wake words, face mask detection, and person detection," said MIT's Song Han. "This opens the door for a brand-new way of doing tiny AI and mobile vision."

From MIT News