
Wednesday, August 31, 2022

Robot Dogs Learning Tough Terrain

Been following this for some time, along with its potential uses.

Robot Dog Learns to Walk Tough Terrain in 20 Minutes

New Scientist, Alex Wilkins,  August 26, 2022

Researchers at the University of California, Berkeley (UC Berkeley) developed a machine learning algorithm that enabled a robot dog to learn to navigate difficult terrain in only 20 minutes. The Q-learning algorithm does not need a model of the target terrain. As a result, said UC Berkeley's Sergey Levine, "We don't need to understand how the physics of an environment actually works, we just put the robot into an environment and turn it on." The algorithm teaches the robot by rewarding it for each successful action until reaching its ultimate goal. The researchers demonstrated that the robot was able to walk on terrains it had not previously encountered, including grass, a layer of bark, a memory foam mattress, and a hiking trail, after about 20 minutes of training on each. ... ' 
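Since the piece names Q-learning, here is a minimal tabular sketch in Python of the reward-driven loop described: reward successful actions, update action values, no physics model of the terrain required. The toy state/action spaces and placeholder environment are stand-ins of my own, not the Berkeley system, which learns directly from the robot's sensors.

import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 16, 4          # toy stand-ins, not the robot's real spaces
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1
goal = n_states - 1

def step(state, action):
    # Placeholder environment: progress toward the goal when the action "fits".
    nxt = min(state + 1, goal) if action == state % n_actions else max(state - 1, 0)
    return nxt, (1.0 if nxt == goal else 0.0)   # reward only successful progress

for _ in range(5000):
    s = int(rng.integers(n_states))
    a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
    s2, r = step(s, a)
    # Model-free Q-learning update: only observed transitions and rewards needed.
    Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])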

3D Printable Bioelectronic Inks

Researchers Design Inks for 3D-Printable Wearable Bioelectronics, By Texas A&M University Engineering,   August 25, 2022

A new class of biomaterial inks developed by researchers at Texas A&M University exhibits the same characteristics as highly conductive human tissue like skin, making the inks useful for three-dimensionally (3D) printed wearable bioelectronics.

The ink uses molybdenum disulfide (MoS2), a two-dimensional (2D) nanomaterial that can be combined with modified gelatin to create a flexible hydrogel.

The researchers developed a customizable, multi-head 3D bioprinter to 3D-print the ink. The 3D-printed hydrogel ink is electrically conductive and can be used to make complex 3D circuits.

The researchers said it will allow for bioelectronics that can be customized for an individual patient's needs.

Said Texas A&M's Kaivalya Deo, "These 3D-printed devices are extremely elastomeric and can be compressed, bent, or twisted without breaking. In addition, these devices are electronically active, enabling them to monitor dynamic human motion and paving the way for continuous motion monitoring."


COPPA and more Online

More on COPPA updates and protecting children online.

ACM NEWS

Protecting Children's Privacy Online   By Gregory Goth, Commissioned by CACM Staff, August 30, 2022

Digital games and educational apps for children can be a boon. They keep youngsters engaged in interactive play and learning, and can give parents a break.

Unfortunately, though, a large percentage of those games' characters and features are designed not to altruistically enlighten children, but to make them spend more time on the platform–and to get their parents to spend more money on extra features.

"We assume adults are better at recognizing persuasion pressure and are hopefully less magically engaged with their parasocial relationships with characters," said Jenny Radesky, M.D.,  principal investigator of the Radesky Lab at the University of Michigan Medical School. "Kids' relationships with Elmo or Daniel Tiger or Strawberry Shortcake are very important to them, and they are more likely to follow those characters' instructions."

In her lab's most recent research on children's mobile apps, Radesky found concerning evidence that game developers are putting their own interests ahead of their young audience's in designing and creating their products: only 20% of 133 mobile apps played by 160 children aged 3 to 5 had no manipulative design features intended to better monetize the child's experience.

The manipulative features Radesky and her colleagues found included parasocial relationship pressure, fabricated time pressure, navigation constraints, and the use of "attractive lures" to encourage longer game play or more in-app purchases. These features are usually tied to data collection mechanisms that exploit a child's inherent trust.

The study, published in JAMA Network Open, seemed to confirm concerning results published elsewhere in recent months:

An analysis of evident privacy policies in products in the Google and Apple app stores by fraud, privacy, and compliance data analytics firm Pixalate, found 11% of child-directed apps in the Google Play store, and 21% of those in the Apple store, had potential access to users' personal information but no detectable privacy policy; almost 250,000 had no discernible country of origin, a nightmare for enforcement agencies.

An examination by Human Rights Watch of how well (or poorly) educational technology deployed for remote schooling during the Covid-19 pandemic protected children's privacy found that 145 of 169 educational applications "appeared to engage in data practices that put children's rights at risk, contributed to undermining them, or actively infringed on these rights."

Radesky said that while it is evident across all this research that children's privacy and priorities are given short shrift by app and game designers, it is also an encouraging sign that children's needs are now being given more widespread attention. What is not so evident is some sort of consensus about how best to address these shortcomings.

Numerous existing laws such as the U.S. federal government's Children's Online Privacy Protection Act (COPPA), in place since 1998 and updated in 2013, and the European Union's General Data Protection Regulation (GDPR), in place since 2018, offer some level of privacy protection. However, the increasing complexity of the digital ecosystem has revealed loopholes in some of these policies that allow app developers to skirt the boundaries–and sometimes, to cross the line–of what's ethical, if not outright illegal.  .... '

For example, COPPA's primary goal is to place parents in control of the information online games and services collect from their children under age 13. According to the Federal Trade Commission (FTC), it applies without question to developers whose products collect, use, or disclose personal information from those children, or on whose behalf such information is collected or maintained (such as when personal information is collected by an ad network to serve targeted advertising).  .... ' 

LastPass Hacked

Reasons still unclear.

LastPass, Password Manager with Millions of Users, Is Hacked

The Wall Street Journal

By Alyssa Lukpat, August 26, 2022

On Aug. 25, online password manager LastPass reported the theft of some of its source code and proprietary information, but said there is no evidence customer information from its more than 33 million users or encrypted password vaults were accessed. LastPass' Karim Toubba said a developer account had been breached, allowing an unauthorized party to access the company's development environment. The unusual activity was detected two weeks ago, prompting an investigation. Toubba said the company is working with a cybersecurity and forensics firm and has rolled out additional security measures. LastPass stores encrypted login information that users can access online with a master password, but the company cannot see customers' data. ... '

Tuesday, August 30, 2022

Towards Helpful Robots: Grounding Language in Robotic Affordances

In the Google AI Blog: what can we get done with commands, and how? How will language models help?

Towards Helpful Robots: Grounding Language in Robotic Affordances

Tuesday, August 16, 2022 ... Posted by Brian Ichter and Karol Hausman, Research Scientists, Google Research, Brain Team

Over the last several years, we have seen significant progress in applying machine learning to robotics. However, robotic systems today are capable of executing only very short, hard-coded commands, such as “Pick up an apple,” because they tend to perform best with clear tasks and rewards. They struggle with learning to perform long-horizon tasks and reasoning about abstract goals, such as a user prompt like “I just worked out, can you get me a healthy snack?”

Meanwhile, recent progress in training language models (LMs) has led to systems that can perform a wide range of language understanding and generation tasks with impressive results. However, these language models are inherently not grounded in the physical world due to the nature of their training process: a language model generally does not interact with its environment nor observe the outcome of its responses. This can result in it generating instructions that may be illogical, impractical or unsafe for a robot to complete in a physical context. For example, when prompted with “I spilled my drink, can you help?” the language model GPT-3 responds with “You could try using a vacuum cleaner,” a suggestion that may be unsafe or impossible for the robot to execute. When asking the FLAN language model the same question, it apologizes for the spill with "I'm sorry, I didn't mean to spill it,” which is not a very useful response. Therefore, we asked ourselves, is there an effective way to combine advanced language models with robot learning algorithms to leverage the benefits of both?

In “Do As I Can, Not As I Say: Grounding Language in Robotic Affordances”, we present a novel approach, developed in partnership with Everyday Robots, that leverages advanced language model knowledge to enable a physical agent, such as a robot, to follow high-level textual instructions for physically-grounded tasks, while grounding the language model in tasks that are feasible within a specific real-world context. We evaluate our method, which we call PaLM-SayCan, by placing robots in a real kitchen setting and giving them tasks expressed in natural language. We observe highly interpretable results for temporally-extended complex and abstract tasks, like “I just worked out, please bring me a snack and a drink to recover.” Specifically, we demonstrate that grounding the language model in the real world nearly halves errors over non-grounded baselines. We are also excited to release a robot simulation setup where the research community can test this approach.

With PaLM-SayCan, the robot acts as the language model’s “hands and eyes,” while the language model supplies high-level semantic knowledge about the task.

A Dialog Between User and Robot, Facilitated by the Language Model

Our approach uses the knowledge contained in language models (Say) to determine and score actions that are useful towards high-level instructions. It also uses an affordance function (Can) that enables real-world grounding and determines which actions are possible to execute in a given environment. Using the PaLM language model, we call this PaLM-SayCan.

Our approach selects skills based on what the language model scores as useful to the high level instruction and what the affordance model scores as possible.

Our system can be seen as a dialog between the user and robot, facilitated by the language model. The user starts by giving an instruction that the language model turns into a sequence of steps for the robot to execute. This sequence is filtered using the robot’s skillset to determine the most feasible plan given its current state and environment. The model determines the probability of a specific skill successfully making progress toward completing the instruction by multiplying two probabilities: (1) task-grounding (i.e., a skill language description) and (2) world-grounding (i.e., skill feasibility in the current state).
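As a rough illustration of that multiplication, a short Python sketch of the scoring step; the skill names and probabilities below are invented for the example, not taken from the paper.

from typing import Dict

def select_skill(lm_scores: Dict[str, float], affordances: Dict[str, float]) -> str:
    # SayCan-style selection: task-grounding (language model score) times
    # world-grounding (affordance/value function score), then take the best skill.
    combined = {skill: lm_scores[skill] * affordances[skill] for skill in lm_scores}
    return max(combined, key=combined.get)

# Invented numbers: the LM favors the apple, but the robot cannot reach it.
lm = {"pick up the apple": 0.55, "pick up the sponge": 0.30, "go to the counter": 0.15}
can = {"pick up the apple": 0.10, "pick up the sponge": 0.90, "go to the counter": 0.95}
print(select_skill(lm, can))   # -> "pick up the sponge"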

There are additional benefits of our approach in terms of its safety and interpretability. First, by allowing the LM to score different options rather than generate the most likely output, we effectively constrain the LM to only output one of the pre-selected responses. In addition, the user can easily understand the decision making process by looking at the separate language and affordance scores, rather than a single output.

PaLM-SayCan is also interpretable: at each step, we can see the top options it considers based on their language score (blue), affordance score (red), and combined score (green).

Training Policies and Value Functions

Each skill in the agent’s skillset is defined as a policy with a short language description (e.g., “pick up the can”), represented as embeddings, and an affordance function that indicates the probability of completing the skill from the robot’s current state. To learn the affordance functions, we use sparse reward functions set to 1.0 for a successful execution, and 0.0 otherwise.

We use image-based behavioral cloning (BC) to train the language-conditioned policies and temporal-difference-based (TD) reinforcement learning (RL) to train the value functions. To train the policies, we collected data from 68,000 demos performed by 10 robots over 11 months and added 12,000 successful episodes, filtered from a set of autonomous episodes of learned policies. We then learned the language conditioned value functions using MT-Opt in the Everyday Robots simulator. The simulator complements our real robot fleet with a simulated version of the skills and environment, which is transformed using RetinaGAN to reduce the simulation-to-real gap. We bootstrapped simulation policies’ performance by using demonstrations to provide initial successes, and then continuously improved RL performance with online data collection in simulation.   ... ' 
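A minimal sketch of the training signal just described, under a deliberately crude tabular simplification; the actual system trains image-based policies with behavioral cloning and value functions with MT-Opt, which this does not attempt to reproduce.

def sparse_reward(succeeded: bool) -> float:
    # As described: 1.0 for a successful execution, 0.0 otherwise.
    return 1.0 if succeeded else 0.0

def td_update(value: dict, s, s_next, succeeded: bool, alpha=0.1, gamma=0.99) -> None:
    # One temporal-difference step toward reward + discounted next-state value.
    target = sparse_reward(succeeded) + gamma * value.get(s_next, 0.0)
    value[s] = value.get(s, 0.0) + alpha * (target - value.get(s, 0.0))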

Should the Cryptocurrency Crash Scare Retailers?

Was brought to my attention. Implications regarding use cases and trust.

Should the cryptocurrency crash scare retailers?     by Tom Ryan

Nearly 75 percent of retailers plan to accept either cryptocurrency or stablecoin payments within the next two years, according to Deloitte’s “Merchants Getting Ready For Crypto” study.

The survey of 2,000 U.S. retail executives was taken in the first two weeks of December 2021, just before valuations on digital currencies collapsed.

According to Barron’s, Bitcoin, the dominant token, continues to trade at around one-third of its November 2021 all-time high, with the market capitalization of the overall crypto space also tumbling.

Deloitte’s study, done in collaboration with PayPal, found retailers bullish on the digital asset’s potential:

Eighty-five percent anticipated that digital currency payments will be ubiquitous in their respective industries within five years, with 54 percent having invested more than $1 million towards enabling digital currency payments.

Eighty-seven percent agreed that organizations accepting digital currencies have a competitive advantage. Three ways value is expected to be derived: improved customer experience, cited by 48 percent; increased customer base, 46 percent; and being perceived as cutting edge, 40 percent.

Eighty-six percent see a significant benefit to their finance and cash management from accepting digital currency payments. Value is seen in enabling immediate access to funds, cited by 40 percent; taking advantage of blockchain-based innovations in decentralized digital finance, 39 percent; and allowing in-house management of the revenue cycle/treasury/finance department, 39 percent.

Survey participants saw the top barriers to adoption to be security of the payment platforms, cited by 43 percent; followed by the changing regulatory landscape, 37 percent; and the instability of the digital currency market, 36 percent.  ... ' 

Post-Quantum Cryptography Scheme Is Cracked?

Generally true? Is this easily fixable, say by adding more digits? This kind of work, which challenges new methods, is key.

‘Post-Quantum’ Cryptography Scheme Is Cracked on a Laptop

Two researchers have broken an encryption protocol that many saw as a promising defense against the power of quantum computing.

By Jordana Cepelewicz, Senior Writer,    QuantaMagazine

If today’s cryptography protocols were to fail, it would be impossible to secure online connections — to send confidential messages, make secure financial transactions, or authenticate data. Anyone could access anything; anyone could pretend to be anyone. The digital economy would collapse.

When (or if) a fully functional quantum computer becomes available, that’s precisely what could happen. As a result, in 2017 the U.S. government’s National Institute of Standards and Technology (NIST) launched an international competition to find the best ways to achieve “post-quantum” cryptography.

Last month, the agency selected its first group of winners: four protocols that, with some revision, will be deployed as a quantum shield. It also announced four additional candidates still under consideration.


Then on July 30, a pair of researchers revealed that they had broken one of those candidates in an hour on a laptop. (Since then, others have made the attack even faster, breaking the protocol in a matter of minutes.) “An attack that’s so dramatic and powerful … was quite a shock,” said Steven Galbraith, a mathematician and computer scientist at the University of Auckland in New Zealand. Not only was the mathematics underlying the attack surprising, but it reduced the (much-needed) diversity of post-quantum cryptography — eliminating an encryption protocol that worked very differently from the vast majority of schemes in the NIST competition.

“It’s a bit of a bummer,” said Christopher Peikert, a cryptographer at the University of Michigan.

The results have left the post-quantum cryptography community both shaken and encouraged. Shaken, because this attack (and another from a previous round of the competition) suddenly turned what looked like a digital steel door into wet newspaper. “It came out of the blue,” said Dustin Moody, one of the mathematicians leading the NIST standardization effort. But if a cryptographic scheme is going to get broken, it’s best if it happens well before it’s being used in the wild. “There’s many emotions that go through you,” said David Jao, a mathematician at the University of Waterloo in Canada who, along with IBM researcher Luca De Feo, proposed the protocol in 2011. Certainly surprise and disappointment are among them. “But also,” Jao added, “at least it got broken now.”  .... ' 

Kroger to Reinvent Shopping Experience with NVIDIA Omniverse

Want to see this ...

Kroger Reinvents the Shopping Experience with NVIDIA AI   in Retailwire

Kroger and NVIDIA embarked on a strategic collaboration to reimagine the shopping experience using AI applications and services. Discover the progress made by adopting AI and NVIDIA Omniverse.


Deploying Decentralized, Privacy-Preserving Proximity Tracing

Considerable and interesting; below is an intro. Very relevant, with much more at the link.

Deploying Decentralized, Privacy-Preserving Proximity Tracing

By Carmela Troncoso, Dan Bogdanov, Edouard Bugnion, Sylvain Chatel, Cas Cremers, Seda Gürses, Jean-Pierre Hubaux, Dennis Jackson, James R. Larus, Wouter Lueks, Rui Oliveira, Mathias Payer, Bart Preneel, Apostolos Pyrgelis, Marcel Salathé, Theresa Stadler, Michael Veale

Communications of the ACM, September 2022, Vol. 65 No. 9, Pages 48-57, DOI: 10.1145/3524107

Contact tracing is a time-proven technique for breaking infection chains in epidemics. Public health officials interview those who come in contact with an infectious agent, such as a virus, to identify exposed, potentially infected people. These contacts are notified that they are at risk and should take efforts to avoid infecting others—for example, by going into quarantine, taking a test, wearing a mask continuously, or taking other precautionary measures.

In March 2020, as the first wave of the COVID-19 pandemic was peaking, traditional manual contact tracing efforts in many countries were overwhelmed by the sheer volume of cases; by the rapid speed at which SARS-CoV-2 spread; and by the large fraction of asymptomatic, yet infectious, individuals.

Many people quickly and independently proposed using ubiquitous smartphones to implement digital contact tracing (DCT). In this new approach, an app on a user's phone could record contacts (encounters with other people) of sufficient time duration. If a physically close contact was diagnosed as infected, the app could inform the phone's potentially infected user. The envisioned technology would complement manual contact tracing by notifying people faster; reducing the burden on trained contract tracers; increasing scalability; and finding anonymous contacts, such as those in public spaces like shops and transportation, who would be otherwise unreachable through traditional systems.  ... '

AI-Created Lenses Let Camera Ignore Some Objects

 A kind of sensory focus, but is this another privacy issue?

AI-Created Lenses Let Camera Ignore Some Objects

New Scientist, Matthew Sparkes, August 23, 2022

University of California, Los Angeles researchers developed a deep-learning artificial intelligence (AI) model to design three-dimensionally (3D) printed plastic camera lenses that capture images of certain objects, while ignoring others in the same frame. The researchers trained the model using thousands of images of numbers, designated either as target objects to appear in images or objects to ignore. The model was told when images that were supposed to reach the camera's sensor did and did not pass through a trio of lenses, and when images that were not supposed to reach the sensor did. The AI used the data to improve its lens design. The completed lenses use complex patterns printed into the plastic to diffract away light relating to objects that are not designated to appear in the final image. Unwanted objects are not captured digitally, so they do not need to be edited out of the image.

Monday, August 29, 2022

Amazon Callisto Heading to the Moon

 Following the specifics of this. 

NASA Mission Carrying Voice Assistant Tech to the Moon 

Space.com,   Brett Tingley, August 26, 2022

The U.S. National Aeronautics and Space Administration (NASA)'s Artemis 1 mission scheduled to launch today (but delayed) will carry an Alexa-enabled voice assistant called Callisto into lunar orbit. Designed by engineers at Lockheed Martin, Cisco, and Amazon, Callisto aims to enhance future spaceflights with real-time data, augmented connectivity, and mission-specific feedback. Lockheed Martin said Callisto will show "how voice technology, AI [artificial intelligence], and portable tablet-based videoconferencing can help improve efficiency and situational awareness for those on board the spacecraft," as well as providing "access to real-time mission information and a virtual connection to people and information back on Earth." Amazon said Callisto will link to mission controllers using NASA's Deep Space Network, and will feature Local Voice Control, which "allows Alexa to process voice commands locally, rather than sending information to the cloud."

Kenya Building a Tech Hub

Key is seriously learning math and science.

Kenya's tech hub: Meeting the DIY coders and gurus of the future  in the BBC

On a balmy morning in Nairobi a group of children are building robots using motors and wires, while in an adjacent room a child is learning how to use software to spell their name on a computer.

This hive of tech activity is taking place at the headquarters of the Stem Impact Centre, a two-story bungalow in the centre of the Kenyan capital.

Established in September 2020, the centre supports schools by providing their students with the space to learn coding and robotics and take a DIY approach to learning technology.

Image: A child learning at the Stem Impact Centre in Nairobi, Kenya. Alex Magu wants to encourage more home-grown tech entrepreneurs by getting children interested at an early age.

The centre is the brainchild of Alex Magu, who founded it driven by a passion to "democratise computer science" in Kenya.   He believes giving every child access to tech-based resources is vital for the development of Kenya.

And it seems that the Kenyan government agrees with him.   In April, it announced it would implement a new technology curriculum for primary and secondary schools that will teach coding and tech skills.

Kenya has long been known as one of Africa's biggest tech hubs and is often dubbed the "Silicon Savannah" as many global tech giants have set up here, including Amazon and Google.  .... '

A Trap for Efficient Light Use

And thus using light more efficiently. How well?

Creating a perfect trap for light

by Hebrew University of Jerusalem and TU Wien, in TechXplore

Whether in photosynthesis or in a photovoltaic system: If you want to use light efficiently, you have to absorb it as completely as possible. However, this is difficult if the absorption is to take place in a thin layer of material that normally lets a large part of the light pass through.

Now, research teams from TU Wien and from The Hebrew University of Jerusalem (HU) have found a surprising trick that allows a beam of light to be completely absorbed even in the thinnest of layers: They built a "light trap" around the thin layer using mirrors and lenses, in which the light beam is steered in a circle and then superimposed on itself—exactly in such a way that the beam of light blocks itself and can no longer leave the system. Thus, the light has no choice but to be absorbed by the thin layer—there is no other way out.

This absorption-amplification method, which has now been presented in the scientific journal Science, is the result of a fruitful collaboration between the two teams: the approach was suggested by Prof. Ori Katz from The Hebrew University of Jerusalem and conceptualized with Prof. Stefan Rotter from TU Wien; the experiment was carried out by the lab team in Jerusalem, and the theoretical calculations came from the team in Vienna.

"Absorbing light is easy when it hits a solid object," shared Prof. Stefan Rotter from the Institute of Theoretical Physics at TU Wien. "A thick black wool jumper can easily absorb light. But in many technical applications, you only have a thin layer of material available and you want the light to be absorbed exactly in this layer."

There have already been attempts to improve the absorption of materials: For example, the material can be placed between two mirrors. The light is reflected back and forth between the two mirrors, passing through the material each time and thus having a greater chance of being absorbed. However, for this purpose, the mirrors must not be perfect—one of them must be partially transparent, otherwise the light cannot penetrate the area between the two mirrors at all. But this also means that whenever the light hits this partially transparent mirror, some of the light is lost.

To prevent this, it is possible to use the wave properties of light in a sophisticated way. "In our approach, we are able to cancel all back-reflections by wave interference", noted HU's Prof. Ori Katz. Helmut Hörner, from TU Wien, who dedicated his thesis to this topic, explained, "in our method, too, the light first falls on a partially transparent mirror. If you simply send a laser beam onto this mirror, it is split into two parts: The larger part is reflected, a smaller part penetrates the mirror."  .... '

Rise of Virtual Assistants

 Indeed, robots and more.

The Rise of Virtual Assistants: How Machines & Algorithms Increase the Potential for Consumer Contact

By Piergiorgio Vittori  in Datanami

In this hyper-competitive, consumer-focused marketplace, new technologies are increasingly allowing data analysis and information reprocessing to foster “virtual relationships.” This is an exciting new world of information capture and consideration, one which empowers strong companies and leaders to optimize fundamental corporate performance and improve strategic business decisions.

Of course, personal choices and preferences will always inhabit crucial roles in core consumer behaviors. But enhancing conversations via artificial intelligence (AI) mechanisms — such as chatbots and virtual assistants — represents a farsighted way for businesses and organizations to engage audiences and partners across myriad digital arenas.

These new technologies are both powerful and dynamic — beginning with “touch”-based interfaces and evolving on to interfaces powered by “voice.” They’re all efforts further aided by robust graphic interfaces that allow for simpler and faster ways to generate feedback and capture information from the widest range of smart devices.

This is particularly true with virtual assistants (VA), which developers and end-users now define as a sort of “digital human.” VAs today have never been more effective or elegantly designed. Visually realistic, they’re also powerful enough to comprehend data, answer consumer questions and even achieve quasi-emotional “human” connections. Through subtle— and often not so subtle — uses of body language, VAs today can even blink, nod and, yes, wink; in other words, they possess the beginnings of humanity and the ability to communicate like truly sentient beings.

The creation of virtual assistants and chatbots is a multi-billion-dollar industry.

The role of voice in VAs is crucial – voice is what facilitates language, voice conveys emotion, voice facilitates subtlety and allows for emphasis. Our voices generate compassion, empathy and most crucially trust – and for VAs to truly impact the marketplace, consumers must trust in their efforts, abilities and results. Trust not only results in a better user experience, it creates loyalty and encourages repeat use – core actions which allow AI-powered devices to iterate, improve and evolve.

By deploying essential conversational elements, VAs — and the organizations that embrace them — are further embedding the core tenets of communication into digital interactions of all sorts: Verbal, textual, vocal and gestural. The result is a highly mutual experience with long-term potential to help data-driven ecosystems truly thrive — all aided by algorithms capable of exploiting the mechanisms of Deep Learning (DL).  ... ' 


Interpretable Machine Learning and Diagnosis

Well covered in ACM pieces of late, useful takes. 

ACM PRACTICE

Interpretable Machine Learning: Moving from Mythos to Diagnostics

By Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar

Communications of the ACM, August 2022, Vol. 65 No. 8, Pages 43-50, DOI: 10.1145/3546036

The emergence of machine learning as a society-changing technology in the past decade has triggered concerns about people's inability to understand the reasoning of increasingly complex models. The field of interpretable machine learning (IML) grew out of these concerns, with the goal of empowering various stakeholders to tackle use cases, such as building trust in models, performing model debugging, and generally informing real human decision-making.7,10,17

Yet despite the flurry of IML methodological development over the past several years, a stark disconnect characterizes the current overall approach. As shown in Figure 1, IML researchers develop methods that typically optimize for diverse but narrow technical objectives, yet their claimed use cases for consumers remain broad and often underspecified. Echoing similar critiques about the field,17 it has thus remained difficult to evaluate these claims sufficiently and to translate methodological advances into widespread practical impact. .... '

Sunday, August 28, 2022

Tufte Courses

Notable: I have always liked Tufte's work, and attended a few of his courses in the big enterprise.

ANALYZING/PRESENTING    DATA/INFORMATION:

AN ONLINE VIDEO COURSE TAUGHT BY EDWARD TUFTE

EDWARD TUFTE COURSE REVIEWS + REGISTRATION INFORMATION


"Best single-day class ever." TAOSECURITY       "The da Vinci of data." THE NEW YORK TIMES

"In university halls and conference centers, Tufte's appeal crackles. Fans spend the day looking at art and information through Tufte's eyes, as he walks them through images and analysis of his books. In 4 books and popular auditorium gigs, he teaches by visual example. Next to a bad example of a graph, he positions a sublimely clear treatment, often using the same data. Tufte's work is relevant to anyone who needs to write or present information clearly, from business executives to students. About 10 years ago, The New York Times crowned Tufte the "da Vinci of data." A more fitting title might be the "Galileo of graphics." Where da Vinci is remembered as an inventor of new technologies, Galileo put right our understanding of the solar system by positioning the sun at its center. Tufte, who owns a handful of nearly 400-year-old first editions by Galileo considers the early scientist a master of analytical design." BLOOMBERG

"The Information Sage: Meet Edward Tufte, the graphics guru to the power elite who is revolutionizing how we see data." Edward Tufte's many goverment data presentations in Washington, DC, including his presidential appointment to the Recovery Independent Advisory Panel and work on recovery.org."" THE WASHINGTON MONTHLY

"One visionary day. Few speak as eloquently as Edward Tufte, whose theories of information design not only illuminate, they inspire. In a full-day seminar, Tufte, author of the classic The Visual Display of Quantitative Information, uses maps, graphs, charts, and tables to communicate what prose alone cannot. For information designers Tufte's work is a model of clarity and craftsmanship. Given that the heart of his enterprise is statistics (of which he's a professor at Yale), one might worry about "lognormal distributions" and "trimetric projections." This would be a mistake. Tufte keeps jargon to a minimum. His insights lead to new levels of understanding both for creators and viewers of visual display. What makes Tufte most persuasive are his works themselves: His books and his seminar embody his belief that "good design is clear thinking made visible." WIRED

"Ivy League Rock and Roll: Edward Tufte, the world's most renowned visualization expert, holds legendary information design courses. I recently was among the hundreds that flock to each of his live performances. What an experience! Tufte's seminars are legendary. Anyone who deals with data and visualization on a professional level knows his books. Today in Washington's Crystal Forum many participants come from government agencies and military institutions. There are also several students whose majors range from information technology and graphic design to economics, biology and medicine. Some, balancing notebooks on their laps, will take detailed notes so that they can fully absorb Tufte's messages later at home. He gently takes an awe-striking original from Galileo Galilei or a centuries-old copy of Euclid's scripts and proceeds to carry it down the aisle. Later, one of his assistants will also walk through the auditorium with one of these precious books in hand." NICOLAS BISSANTZ  .... ' 

Visibility Can Ease the Pain of Long Chassis Return Times

Made me think through the process again.

How Visibility Can Ease the Pain of Long Chassis Return Times

August 26, 2022

By William Sandoval, SCB Contributor, in SupplyChainBrain

After a year of steady increases in freight prices, the U.S. trucking industry is experiencing a cooldown in demand, with linehaul rates declining significantly from their highs at the start of 2022. Trucking capacity has loosened considerably, thanks to new space entering the system, and a plateauing of consumer demand across major geographic markets.

The rise in trucking capacity also has to do with an improvement in efficiency across different nodes in the end-to-end supply chain, be it at ports, intermodal yards or warehouses. Transportation networks work in partnership, meaning that any tangible improvement in efficiency within one logistics segment has ripple effects on the efficiency of other segments.

As throughput across ports, intermodal yards and warehouses improved this year, the long truck queues outside their gates started dwindling, reducing idling times. This enabled drivers to keep their trucks moving for a longer duration within their allotted hours of service (HOS), directly increasing capacity availability without injecting fresh capacity into the system.

That said, the trucking industry is still far from solving some of its deepest challenges since the pandemic, such as the shortage of intermodal chassis in circulation. This bottleneck continues to tighten, threatening to set off a vicious cycle of delays throughout the logistics pipeline.

The industry isn’t suffering from an actual physical shortage of chassis. Instead, the current situation is a reflection of logjams that have persisted for a while, caused by a failure to optimize chassis usage. Chassis turnover days are a lynchpin metric that determines the health of the trucking economy, with higher numbers indicating a fall in efficiency.

TRAC Intermodal, the largest marine chassis provider in the U.S., reports a threefold increase in wait times for truckers to return chassis, compared with the pre-pandemic normal. This has an enormous impact on chassis availability. For fleets, increasing capital investment in procuring new chassis will also not be enough, considering the holdups in the system that will only continue to accumulate in the absence of serious optimization.

The headwinds to movement come from the railroad segment as well. The U.S. rail industry is seeing massive congestion across intermodal hubs such as Chicago and Joliet, with trains backed up for miles around their destinations. With the peak season approaching, shippers eager to front-load their inventories will cause an even greater surge in demand for capacity. 

This would add more burden to an already precarious situation. One common reason for delays in chassis returns is the truckers themselves. Stuck in long queues outside intermodal hubs and warehouses, they prefer to “drop and hook” their chassis, reducing the hassle of waiting for the containers to be off-loaded.

But with warehouses swimming in excess containers and struggling with historically low space availability, containers sit longer atop chassis, rendering the chassis non-operational for that duration. As warehouses continued to reel under space and labor shortage, container-bound chassis keep piling outside their gates, drastically increasing chassis turnover times. 

While fleets focus on leveraging drop-and-hook as a way to maximize their driver hours, it often comes at the expense of fleet utilization. With the industry being cyclical, the burden of delayed chassis inevitably comes back to hurt the fortunes of fleet businesses, as they scramble to find an empty chassis to haul the freight they signed up for.   .... ' 

Electronic Brain Stimulation and Memory

New look at brain stimulation.

Electrical Brain Stimulation Boosts Memory in Seniors, Study Finds

By Adrianna Nine on August 23, 2022 in ExtremeTech

Memory lapse remains a major concern for those approaching their golden years. As many as 40 percent of US adults over the age of 65 have some type of age-associated memory impairment, with approximately 160,000 of them receiving dementia diagnoses each year. While long-term it would be ideal to find ways to prevent the initial onset of such memory impairments, scientists are working to help seniors mediate their memory obstacles in the meantime. A new study suggests electrical currents to the brain might be a way to do just that.

Scientists at Boston University recently used a double-blind study to test the effects of non-invasive electrical brain stimulation on older adults’ memories. Having seen the brain circuits and networks involved in memory capacity in past research, the team assembled a cap dotted with electrodes that could deliver electrical currents to the wearer’s brain. These currents would focus on one of two areas of the brain: the dorsolateral prefrontal cortex (DLPFC), which is essential to long-term memory, and the inferior parietal lobule (IPL), which is a major component of working memory. .... ' 

The Economics of YouTube

Have now followed former university professor Adam Ragusea for a few months on YouTube, mostly about food and cooking topics. All very nicely and informatively done, and a good intro to the YouTube economy. He has moved his family to a YouTube-based economy and discusses the experience to date in a recent post:

https://www.youtube.com/watch?v=G6pVD9Bya3E

Ask Adam: How does YouTube money work (or not)?     (PODCAST E23)


Saturday, August 27, 2022

On Solving the AI Common Sense Problem

Not yet; when? What will it take? Some thoughts here, but not enough.

The Common Sense in Context Problem

By TechTalks, August 9, 2022

Ronald Jay Brachman is director of the Jacobs Technion-Cornell Institute at Cornell Tech and co-author of the book, Machines Like Us.

In recent years, deep learning has taken great strides in some of the most challenging areas of artificial intelligence (AI); however, some problems remain unsolved. Deep-learning systems are poor at handling novel situations, they require enormous amounts of data to train, and they sometimes make weird mistakes. Some scientists believe these problems will be solved by creating larger neural networks trained on bigger datasets. Others think that what the field of AI needs is a little bit of human "common sense."

In an interview, Brachman discusses what common sense is and is not, why machines do not have it, and how "knowledge representation" can steer the AI community in the right direction.


Brain Activity Turned into Images

 AI, Images, MRI

‘Mind-Reading’ Technology Can Turn Brain Activity Into Images, By Adrianna Nine, August 26, 2022, in ExtremeTech

Researchers at Radboud University in the Netherlands have developed technology that can “read minds” by turning a person’s neurological activity into stunningly accurate pictures.

The system, devised by neurologists, AI researchers, and cognitive scientists at Radboud University, combines AI with medical imaging techniques. It begins with a more sophisticated version of the magnetic resonance imaging (MRI) scanner called a functional magnetic resonance imaging (fMRI) scanner. While a conventional MRI machine facilitates imaging of a person’s anatomy to diagnose trauma or disease, an fMRI machine detects tiny changes in metabolic function. This includes neuron activity and the minuscule changes in blood flow within the brain.  ... ' 

Military Grade Cybersecurity Needed in Business

Good thoughts regards key issues. 

ACM NEWS

Raising the Ramparts, By David Geer, Commissioned by CACM Staff,   August 11, 2022

The global military cybersecurity market will grow from US$25,692.4 million in 2021 to US$43,675.2 million by 2031, says Visiongain Research, Inc., a U.K. market intelligence firm.

That growth is no surprise, with commonplace nation-state attacks on critical infrastructure and government data assets. The U.S. federal government and its agencies, with the aid of the Cybersecurity & Infrastructure Security Agency (CISA), are ramping up cyber defenses to combat disabling ransomware and complex attacks. They are using approved security products that the government and the military vet specifically for these purposes.

However, government organizations are not the only ones in jeopardy.

Nation-states target private enterprises, too, with support from their military and insidious Advanced Persistent Threat (APT) groups. Facing the same threats that government agencies do, companies need military-grade cybersecurity.

Military-grade cybersecurity proceeds from a Military Specification (MIL-SPEC) purchasing process, with rigorous testing to ensure cybersecurity components are the most secure, resilient product the military can get, says Peter Hay, Lead for Instruction at SimSpace Corporation, a military-grade cybersecurity risk management platform. The military uses extensive mission-based training to ensure its human cybersecurity talent adheres to MIL-SPEC security requirements, too.

MIL-SPEC cybersecurity products are a necessity, as high-profile cases of military-level attacks demonstrate. The Indian APT group ModifiedElephant stealthily attacked dissidents for 10 years without detection. The group used military-grade remote access trojans (RATs), keyloggers, and other attack tools, according to SC Media, a publication of the CyberRisk Alliance, an organization that, according to its Website, was "formed to help cybersecurity professionals face the challenges and obstacles that threaten the success and prosperity of their organizations."

The APT group Shadow Brokers stole the EternalBlue military-grade exploit from the U.S. National Security Agency (NSA) in 2017. It released the exploit to criminal hackers globally via subscription-based access to data dumps, according to The New York Times. Cybercriminals have since used EternalBlue successfully in many attacks.

According to Tom Van de Wiele, a principal of WithSecure, an endpoint detection and response company in Finland, the 2010 Stuxnet attack was the most profound military-level cyberattack on record. Stuxnet used intelligence gathering, local spies bridging air-gapped networks using USB thumb drives, and zero-day exploits to gain access and persist long enough to disrupt Iranian uranium enrichment infrastructure, he says.

With an increase in nation-state data breaches, cybersecurity vendors serving the military are offering comparable products and services to the private sector to maintain the balance of power against nation-state attacks.

For example, CrowdStrike provides its cloud-based endpoint and identity product Falcon to the U.S. Government with FedRAMP authorization, according to a CrowdStrike media release. Falcon also is available to private enterprises. ... 


Friday, August 26, 2022

5G Networks Are Worryingly Hackable

Oops, this is the first time I have seen this.

5G Networks Are Worryingly Hackable

IEEE Spectrum

Edd Gent, August 24, 2022

German security researchers determined 5G networks can be hacked, having breached and hijacked live networks in a series of "red teaming" exercises. Poorly configured cloud technology made the exploits possible, they said, and Karsten Nohl at Germany's Security Research Labs cited the failure to implement basic cloud security. He suggested telecommunications companies may be taking shortcuts that could prevent 5G networks' "containers" from functioning properly. The emergence of 5G has escalated demand for virtualization, especially for radio access networks that link end-user devices to the network core. Nohl said 5G networks respond to the greater complexity with more automated network management, which makes exploitation easier.   ... ' 

Data Platform for Chatbot Development

Just reviewing this; some good thoughts, on data prep in particular.

A Data Platform for Chatbot Development

Alex Woodie

One of the most compelling use cases for AI at the moment is developing chatbots and conversational agents. While the AI part of the equation works reasonably well, getting the training data organized to build and train accurate chatbots has emerged as the bottleneck for wider adoption. That’s what drove the folks at Dashbot to develop a data platform specifically for chatbot creation and optimization.

Recent advances in natural language processing (NLP) and transfer learning have helped to lower the technical bar to building chatbots and conversational agents. Instead of creating a whole NLP system from scratch, users can borrow a pre-trained deep learning model and customize just a few layers. When you combine this democratization of NLP tech with the workplace disruptions of COVID, we have a situation where chatbots appear to have sprung up everywhere almost overnight.

Andrew Hong also saw this sudden surge in chatbot creation and usage while working at a venture capital firm a few years ago. With the chatbot market expanding at a 24% CAGR (according to one forecast), it’s a potentially lucrative place for a technology investor, and Hong wanted to be in on it.

“I was looking to invest in this space. Everybody was investing in chatbots,” Hong told Datanami recently. “But then it kind of occurred to me there’s actually a data problem here. That’s when I poked deeper and saw this problem.”

The problem (as you may have guessed) is that conversational data is a mess. According to Hong, organizations are devoting extensive data science and data engineering resources to prepare large amounts of raw chat transcripts and other conversational data so it can be used to train chatbots and agents.

The problem boils down to this: Without a lot of manual work to prep, organize, and analyze massive amounts of text data used for training, the chatbots and agents don’t work very well. Keeping the bots running efficiently also requires ongoing optimization, which Hong’s company, Dashbot, helps to automate.

“A lot of this is literally hieroglyphics,” Hong said of call transcripts, emails, and other text that’s used to train chatbots. “Raw conversational data is undecipherable. It’s like a giant file with billions of lines of just words. You really can’t even ask it a question.”

While a good chatbot seems to work effortlessly, there’s a lot of work going on behind the scenes to get there. For starters, raw text files that serve as the training data must be cleansed, prepped, and labeled. Sentences must be strung together, and questions and answers in a conversation grouped. As part of this process, the data is typically extracted from a data lake and loaded into a repository where it can be queried and analyzed, such as a relational database.

Next, there’s data science work involved. On the first pass, a machine learning algorithm might help to identify clusters in the text files. That might be followed by topic modeling to narrow down the topics that people are discussing. Sentiment analysis may be performed to help identify the topics that are associated with the highest frustration of users.

Finally, the training data is segmented by intents. Once an intent is associated with a particular piece of training data, then it can be used by an NLP system to train a chatbot to answer a particular question. A chatbot may be programmed to recognize and respond to 100 or more individual intents, and its performance on each of these varies with the quality of the training data.
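To make those steps concrete, a hedged sketch of the cleanse-cluster-label flow using scikit-learn; the utterances, cluster count, and intent names are toy assumptions of mine, and Dashbot's actual pipeline is certainly more involved.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

transcripts = [
    "where is my order",             # toy utterances, not real training data
    "track my package please",
    "cancel my subscription",
    "i want to cancel my plan",
]

# Vectorize the cleaned utterances and cluster them into candidate intents;
# a human (or a topic model) then names each cluster, e.g. "track_order".
X = TfidfVectorizer(stop_words="english").fit_transform(transcripts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Emit intent-labeled examples in the JSON-like shape NLP vendors expect.
training = [{"text": t, "intent": f"intent_{c}"} for t, c in zip(transcripts, labels)]
print(training)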

Dashbot was founded in 2016 to automate as many of these steps as possible, and to help make the data preparation as turnkey as possible before handing the training data over to NLP chatbot vendors like Amazon Lex, IBM Watson, and Google Cloud Dialogflow.

“I think a tool like this needs to exist beyond chatbots,” said Hong, who joined Dashbot as its CEO in 2020. “How do you turn unstructured data into something usable? I think this ETL pipeline we built is going to help do that.”

Chatbot Data Prep

Instead of requiring data engineers and data scientists to spend days working with huge numbers of text files, Hong developed Dashbot's offering, dubbed Conversational Data Cloud, to automate many of the steps required to turn raw text into the refined JSON documents that the major NLP vendors expect.

“A lot of enterprises have call center transcripts just piling up in their Amazon data lakes. We can tap into that, transform that in a few seconds,” Hong said. “We can integrate with any conversational channel. It can be your call centers, chat bots, voice agents. You can even upload raw conversational files sitting on a data lake.”

The Dashbot product is broken up into three parts, including a data playground used for ETL and data cleansing; a reporting module, where the user can run analytics on the data; and an optimization layer.

The data prep occurs in the data playground, Hong said, while the analytics layer is useful for asking questions of the data that can help illuminate problems, such as: “In the last seven days how many people have called in and asked about this new product line that we just launched and how many people are frustrated by it?”  ... ' 


NASA's Space for Agriculture

In progress.

NASA's Space for Agriculture Teleconference

https://www.youtube.com/watch?v=YxdY20NuhBE

I am attending; it is being recorded and will be available on YouTube.

I have some ideas in this space; contact us.

Robot Dogs for Space Force

The obvious application of such robotics; this will likely increase.

U.S. Space Force Tests Robot Dogs to Patrol Cape Canaveral

Space.com

Brett Tingley, August 8, 2022

The U.S. Space Force held a demonstration of dog-like quadruped unmanned ground vehicles (Q-UGVs) for patrols at Cape Canaveral. The demo involved at least two Vision 60 Q-UGVs from Ghost Robotics, and the U.S. Department of Defense said the Space Launch Delta 45 unit responsible for all space launch operations from Kennedy Space Center and Cape Canaveral will use the robot dogs for "damage assessments and patrol." The robots are capable of autonomous, human-controlled, and voice-controlled operation. They also can function as miniaturized communications nodes, carrying antennas to extend networks outside existing infrastructure or in locations lacking infrastructure.  ... ' 


Thinking Causal AI

Good thoughts on the topic: 

Use Causal AI to Go Beyond Correlation-Based Prediction

Gartner, by Leinar Ramos, August 10, 2022. Intro below.

This is a short introduction to a research note we published recently on Causal AI, which is accessible here: Innovation Insight: Causal AI.

Correlation is not causation

“Correlation is not causation” is often mentioned, but rarely given the importance it deserves in AI. Correlations are how we see variables moving together in the data, but these relationships are not always causal.

We can only say that A causes B when an intervention that changes A would also change B as a result (whilst keeping everything else constant). For example, forcing a rooster to crow won’t make the sun rise, even if the two events are correlated.

In other words, correlations are the data we see, whereas causal relationships are the underlying cause-and-effect relationships that generate this data (see image below). Crucially, the data we typically work with exists in a complex web of correlations that obscure the causal relationships we care about.

Image: the distinction between correlations, which are the relationships we directly observe in the data, and causation, which is the underlying set of cause-and-effect relationships that generate the data.

Despite their notable success, statistical models, including those in advanced deep learning (DL) systems, use surface-level correlations to make predictions. The current DL paradigm doesn’t drive models to uncover underlying cause-and-effect relationships but simply to maximize predictive accuracy.

Now, it is worth asking: What is the problem of using correlations for prediction? After all, in order to predict, we just need enough predictive power in the data, regardless of whether it comes from causal relationships or statistical correlations. For instance, hearing a rooster crow is useful to predict sunrises.

The core problem lies with the brittleness of the predictions. For correlation-based predictions to remain valid, the process that generated the data needs to remain the same (e.g., the roosters need to keep crowing before sunrise).
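The rooster example is easy to simulate; a small Python sketch, with illustrative numbers only, showing a strong observed correlation that evaporates under intervention.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Observational world: dawn causes both the crow and the sunrise.
dawn = rng.random(n) < 0.5
crow = dawn & (rng.random(n) < 0.95)
sunrise = dawn

print(np.corrcoef(crow, sunrise)[0, 1])   # strong correlation
print(sunrise[crow].mean())               # observational P(sunrise | crow): ~1.0

# Intervention do(crow = True): force every rooster to crow. Sunrise still
# depends only on dawn, so under the intervention P(sunrise) stays ~0.5;
# the correlation-based prediction fails.
print(sunrise.mean())                     # ~0.5, not the ~1.0 the correlation implied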

There are two fundamental challenges with this correlation-based approach:

Problem #1: We want to intervene in the world

Prediction is rarely the end goal. We often want to intervene in the world to achieve a specific outcome. Anytime we ask a question of the form “How much can we change Y by doing X?”, we are asking a causal question about a potential intervention. An example would be: “What would happen to customer churn if we increased a loyalty incentive?”

And the problem with correlation-based predictive models, like Deep Learning, is that our actions are likely to change the data-generation process and therefore the statistical correlations we see in the data, rendering correlation-based predictions useless to estimate the effect of interventions. 

For instance, when we use a churn model (prediction) to decide whether or not to give a customer a loyalty incentive (intervention), the incentive affects the data that generated the prediction (we hope the incentive makes the customer stay). In this case, causality really matters, and we can’t simply use correlations to answer questions on what would happen if we took an action (we need to run controlled experiments or use causal techniques to estimate the effects)  .... ' 

AI Voice Jammer

Another security angle.

 Voice Jammer Stops Anyone from Recording Your Speech

New Scientist, Matthew Sparkes, July 29, 2022

Michigan State University's Qiben Yan and colleagues have developed an artificial intelligence voice jammer that can prevent anyone from recording the speech of a single target person. The Neural Enhanced Cancellation (NEC) tool exploits a quirk found in most microphones: ultrasonic sounds injected at set distances above and below the microphone's recording frequencies are demodulated into the audible band. NEC uses this to play inverse speech in the ultrasonic range, outside human hearing, cancelling the target's audible voice in the recording. The tool effectively blocked voices when tested on a range of Apple, Xiaomi, and Samsung smartphones from up to 3.6 meters (nearly 12 feet) away.  .... '
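
The cancellation principle itself is ordinary destructive interference. A conceptual sketch, assuming an ideal channel; NEC's real system predicts the target's speech with a neural model and delivers the inverse ultrasonically, none of which is modeled here.

import numpy as np

fs = 16_000                          # sample rate (Hz)
t = np.arange(fs) / fs               # one second of audio
speech = 0.6 * np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)

anti_signal = -speech                # phase-inverted copy of the speech
recorded = speech + anti_signal      # what the microphone would sum to

print(f"original RMS: {np.sqrt(np.mean(speech**2)):.4f}")
print(f"jammed RMS:   {np.sqrt(np.mean(recorded**2)):.4f}")  # ~0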

Thursday, August 25, 2022

Wearable Sensor to Analyze Human Sweat

 More sensory input into healthcare. 

Wearable Sensor Detects Even More Compounds in Human Sweat

By Caltech News,   August 23, 2022

A wearable sensor developed by researchers at the California Institute of Technology can detect amino acids and certain vitamins in small amounts of human sweat.

The technology features molecularly imprinted polymers that act as reusable antibodies, overcoming the challenges associated with previous sweat sensors that use antibodies (which can be used just once) to detect compounds at low concentrations.

For a sensor to detect the amino acid glutamine, for instance, the polymer would be prepared with glutamine molecules, leaving holes shaped like glutamine when the molecules are removed through a chemical process.

An electrical signal is generated when sweat contacts the inner layer of the sensor, and the signal weakens as glutamine molecules are plugged into the holes in the polymer, helping to determine how much glutamine is in the wearer's sweat.

The use of microfluidics also means the sensor can operate with a minuscule amount of sweat. ...

Thanks to microfluidics and the use of a different type of drug, the sensor now needs less sweat, and the current needed to generate the sweat can be very small. .... '

Caltech Article   
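
The readout principle described above reduces to a calibration problem: the measured current drops as target molecules occupy the imprinted sites, and a calibration curve maps that drop back to concentration. A hypothetical illustration; the concentrations and currents below are invented for the sketch, not Caltech's data.

import numpy as np

# Invented calibration: current (microamps) measured at known
# glutamine concentrations (micromolar).
cal_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
cal_current = np.array([10.0, 8.9, 7.8, 5.7, 1.5])

# Fit a line current = a * conc + b, then invert it for an unknown sample.
a, b = np.polyfit(cal_conc, cal_current, 1)

def concentration_from_current(i_measured: float) -> float:
    """Invert the linear calibration to estimate concentration."""
    return (i_measured - b) / a

print(f"{concentration_from_current(7.0):.1f} uM")  # sample reading of 7 uA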

Is the Tesla Data Hoard Secure?

 Secure data?

The Radical Scope of Tesla’s Data Hoard
Logs and records of its customers’ journeys fill out petabytes—and court case dockets
By Mark Harris in IEEE Spectrum

You won’t see a single Tesla cruising the glamorous beachfront in Beidaihe, China, this summer. Officials banned Elon Musk’s popular electric cars from the resort for two months while it hosts the Communist Party’s annual retreat, presumably fearing what their built-in cameras might capture and feed back to the United States.

Back in Florida, Tesla recently faced a negligence lawsuit after two young men died in a fiery car crash while driving a Model S belonging to a father of one of the accident victims. As part of its defense, the company submitted a historical speed analysis showing that the car had been driven with a daily top speed averaging over 90 miles per hour (145 kilometers per hour) in the months before the crash. This information was quietly captured by the car and uploaded to Tesla’s servers. (A jury later found Tesla just 1 percent negligent in the case.)

Meanwhile, every recent-model Tesla reportedly records a breadcrumb GPS trail of every trip it makes—and shares it with the company. While this data is supposedly anonymized, experts are skeptical.

Alongside its advances in electric propulsion, Tesla’s innovations in data collection, analysis, and usage are transforming the automotive industry, and society itself, in ways that appear genuinely revolutionary.  ... '

Ordinary computers can beat Google’s Quantum Computer

 Algorithm vs Computer?  

Ordinary computers can beat Google’s quantum computer after all

Superfast algorithm put crimp in 2019 claim that Google’s machine had achieved “quantum supremacy”

2 Aug 2022, 5:05 PM, by Adrian Cho

If the quantum computing era dawned 3 years ago, its rising sun may have ducked behind a cloud. In 2019, Google researchers claimed they had passed a milestone known as quantum supremacy when their quantum computer Sycamore performed in 200 seconds an abstruse calculation they said would tie up a supercomputer for 10,000 years. Now, scientists in China have done the computation in a few hours with ordinary processors. A supercomputer, they say, could beat Sycamore outright.

“I think they’re right that if they had access to a big enough supercomputer, they could have simulated the … task in a matter of seconds,” says Scott Aaronson, a computer scientist at the University of Texas, Austin. The advance takes a bit of the shine off Google’s claim, says Greg Kuperberg, a mathematician at the University of California, Davis. “Getting to 300 feet from the summit is less exciting than getting to the summit.”

Still, the promise of quantum computing remains undimmed, Kuperberg and others say. And Sergio Boixo, principal scientist for Google Quantum AI, said in an email the Google team knew its edge might not hold for very long. “In our 2019 paper, we said that classical algorithms would improve,” he said. But, “we don’t think this classical approach can keep up with quantum circuits in 2022 and beyond.”

The “problem” Sycamore solved was designed to be hard for a conventional computer but as easy as possible for a quantum computer, which manipulates qubits that can be set to 0, 1, or—thanks to quantum mechanics—any combination of 0 and 1 at the same time. Together, Sycamore’s 53 qubits, tiny resonating electrical circuits made of superconducting metal, can encode any number from 0 to 2^53 (roughly 9 quadrillion)—or even all of them at once.

Starting with all the qubits set to 0, Google researchers applied to single qubits and pairs a random but fixed set of logical operations, or gates, over 20 cycles, then read out the qubits. Crudely speaking, quantum waves representing all possible outputs sloshed among the qubits, and the gates created interference that reinforced some outputs and canceled others. So some should have appeared with greater probability than others. Over millions of trials, a spiky output pattern emerged.

The Google researchers argued that simulating those interference effects would overwhelm even Summit, a supercomputer at Oak Ridge National Laboratory, which has 9,216 central processing units and 27,648 faster graphics processing units (GPUs). Researchers with IBM, which developed Summit, quickly countered that if they exploited every bit of hard drive available to the computer, it could handle the computation in a few days. Now, Pan Zhang, a statistical physicist at the Institute of Theoretical Physics at the Chinese Academy of Sciences, and colleagues have shown how to beat Sycamore in a paper in press at Physical Review Letters ..... '
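
Some back-of-the-envelope arithmetic shows why a brute-force statevector simulation of Sycamore is out of reach, and why cleverer approaches like Zhang's tensor-network method matter. This is pure arithmetic, no simulation performed.

n_qubits = 53
amplitudes = 2 ** n_qubits                 # ~9.0e15 complex amplitudes
bytes_per_amplitude = 16                   # complex128: two 8-byte floats
memory_pib = amplitudes * bytes_per_amplitude / 1024**5

print(f"amplitudes:      {amplitudes:.2e}")
print(f"memory required: {memory_pib:.0f} PiB")   # ~128 PiB of RAM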

Wednesday, August 24, 2022

Wireless Tech Measures Soil Moisture at Multiple Depths

Useful sensors for planting, especially early planting.

Wireless Tech Measures Soil Moisture at Multiple Depths

NC State University News

Matt Shipman, August 17, 2022

Scientists at North Carolina State University (NC State) developed the wireless Contactless Moisture Estimation (CoMEt) system to measure soil moisture at multiple depths in real time. NC State's Usman Mahmood Khan said, "If we know how far the signal has traveled, and we measure how a wireless signal's wavelength has changed, we can determine the phase shift of the signal. This, in turn, allows us to estimate the amount of water in the soil." An above-ground wireless device transmits radio waves into the soil, receives the signals reflected back, and measures the phase shift. The researchers said CoMEt could help inform irrigation practices to improve crop yield and lower agricultural water usage. .... ' 
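
The phase-shift principle Khan describes can be sketched in a few lines: wetter soil has higher permittivity, which shortens the wavelength and increases the phase accumulated over a fixed path. The sketch below uses the well-known Topp empirical relation between volumetric water content and soil permittivity as a stand-in; the frequency, depth, and relation are my assumptions, not CoMEt's published model.

import numpy as np

C = 3.0e8          # speed of light (m/s)
FREQ = 2.4e9       # probe frequency (Hz), assumed
DEPTH = 0.30       # one-way path length in soil (m), assumed

def permittivity(theta: float) -> float:
    """Topp equation: relative permittivity from volumetric water content."""
    return 3.03 + 9.3 * theta + 146.0 * theta**2 - 76.7 * theta**3

def phase_shift(theta: float) -> float:
    """Phase (radians) accumulated over DEPTH at FREQ in moist soil."""
    wavelength = C / (FREQ * np.sqrt(permittivity(theta)))
    return 2 * np.pi * DEPTH / wavelength

for theta in (0.05, 0.15, 0.30):   # dry, moist, wet
    print(f"moisture {theta:.2f}: phase {phase_shift(theta):7.1f} rad")

Inverting that relationship, measuring the phase tells you the permittivity, and hence the moisture, at that depth.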

Reshaping into 3D?

Saw this kind of thing proposed for packaging with an origami template. Could it work?

 Your Next Wooden Chair Could Arrive Flat, Then Dry into a 3D Shape

American Chemical Society

August 23, 2022

Researchers at Israel's Hebrew University of Jerusalem have developed a process in which flat wooden shapes produced by three-dimensional (3D) printers can be programmed to transform into complex 3D shapes. The researchers used a water-based “ink” composed of wood-waste microparticles and plant-based binders in the printers; they found the pathway of the ink, print speed, and stacking of printed layers determined the final shape of the printed piece as its moisture evaporated, and that these factors can be controlled to produce different shapes. Said Eran Sharon, one of the project’s principal investigators, “We hope to show that under some conditions we can make these elements responsive—to humidity, for example—when we want to change the shape of an object again.”  .... '

US Inventors must be Human

Previously covered here. An appeal to the Supreme Court likely follows. Is this important enough?

Inventors Must Be Human, Federal Court Rules in Blow to AI

By Bloomberg Law, August 10, 2022

Computer scientist Stephen Thaler was dealt a blow in his battle for artificial intelligence machines to be recognized as inventors on patents, after the United States' top patent court found that inventors must be humans.

The term "individual" in the Patent Act refers only to humans, meaning an AI doesn't count as an inventor on a patentable invention, the U.S. Court of Appeals for the Federal Circuit ruled Friday (August 5).   The decision lines up with courts in the European Union, the United Kingdom, and Australia that have refused to accept Thaler's argument. His only currently existing win is from a South African court that said an AI can be a patent inventor.

Unless the U.S. Supreme Court steps in, the Federal Circuit is typically the final authority on U.S. patent matters. Thaler plans to appeal to the U.S. Supreme Court, his attorney said.  ...

From Bloomberg Law

Metaverse Interview

Strong believer that some of the most important aspects of the Metaverse are not new. Here, an interview.

Matthew Ball on the metaverse: We've never seen a shift this enormous     In Protocol

The leading metaverse theorist shares his thoughts on the sudden rise of the concept, its utility for the enterprise and what we still get wrong about the metaverse.

“Read Matthew Ball.”

Talk to anyone in AR, VR or immersive entertainment about the metaverse, and they’ll sooner or later drop his name. Ball’s work has been hailed by Mark Zuckerberg, Tim Sweeney and Reed Hastings, just to name a few of his better-known fans, and his work has been called a must-read for anyone who wants to know about the next big thing.

The former Amazon Video executive-turned-VC began writing essays about the metaverse in early 2019. He has since become the leading theorist for the next version of the internet. His book, “The Metaverse And How It Will Revolutionize Everything,” is coming out this month, and Ball sat down with Protocol this week for a chat about all things metaverse..... 

When you published your first essay about the metaverse in early 2019, it was still a pretty obscure concept. Two years later, Facebook changed its name to Meta, and everyone was talking about the metaverse. Did this pace of change surprise you?

Yes and no. I have never experienced a buzzword become as dominant as rapidly as the metaverse did. Seven of the 11 largest companies on earth have either renamed themselves, made the largest acquisitions in Big Tech history, reorganized, or prepped their largest and most significant product launches in decades around this field. I think that's unprecedented. We've never seen a shift this enormous.

But the overall transition of investments and corporate strategy doesn't surprise me. When I started writing the piece based on my experiences in Fortnite and Roblox in 2018, you could feel that transformation happening. We know that in 2018, [Facebook gaming executive] Jason Rubin wrote an internal memo saying that the metaverse was theirs to lose. We know that in 2015, [Facebook] looked at buying Unity.  .... ' 

A Neuromorphic Chip for AI on the Edge

Chips for AI

 A Neuromorphic Chip for AI on the Edge

UC San Diego News Center

By Ioana Patringenaru, August 17, 2022

An international team of researchers created the NeuRRAM neuromorphic chip to compute directly in memory and run artificial intelligence (AI) applications with twice the energy efficiency of platforms for general-purpose AI computing. The chip moves AI closer to running on edge devices, untethered from the cloud; it also produces results as accurate as conventional digital chips, and supports many neural network models and architectures. "The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility," said former University of California, San Diego researcher Weier Wan. ... '

Starlink Hacked

 Another look at this recent development. 

Researcher Hacks Starlink Terminal to Warn SpaceX of Dangerous Flaws

Lennert Wouters has apparently made the details of his hacking tool open source.

By Passant Rabie

A researcher from Belgium created a $25 hacking tool that could glitch Starlink’s internet terminals, and he is reportedly going to make this tool available for others to copy. Lennert Wouters, a security researcher at KU Leuven, demonstrated how he was able to hack into Elon Musk’s satellite dishes at the Black Hat Security Conference being held this week in Las Vegas, Wired reported.

During his presentation at the conference on Wednesday, Wouters went through the hardware vulnerabilities that allowed him to access the Starlink satellite terminal and create his own custom code. “The widespread availability of Starlink User Terminals (UT) exposes them to hardware hackers and opens the door for an attacker to freely explore the network,” Wouters wrote in the description of Wednesday’s briefing.

SpaceX has launched a total of 3,009 satellites to low Earth orbit, building out a megaconstellation designed to beam down connectivity to even the most distant parts of the world. Starlink customers get a 19-inch wide Dishy McFlatface (a clever name bestowed upon the company’s satellite dish) to install on their homes, or even carry with them on the road. 

In order to hack the Starlink dish, Wouters created a modchip, or a custom circuit board that can be attached to the satellite dish, according to Wired. The modchip was put together using off-the-shelf parts that cost about $25 in total, and Wouters has reportedly made the details of the modchip available for download on Github. The small device can be used to access McFlatface’s software, launching an attack that causes a glitch and opens up previously locked parts of the Starlink system. “Our attack results in an unfixable compromise of the Starlink [user terminal] and allows us to execute arbitrary code,” Wouters wrote. “The ability to obtain root access on the Starlink [user terminal] is a prerequisite to freely explore the Starlink network.”  ... ' 

Tuesday, August 23, 2022

Disruption Examined

Via Irving Wladawsky-Berger

A collection of his observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.... 

Disruptive Forces Necessitate Bold Decisions

In January of 2021 I attended Predictions 21, an online event organized by Forrester Research.  “Faced with the pandemic, firms did things that once seemed impossible - sometimes overnight,” said Forrester last year, adding that “2021 will be the year that every company - not just the 15% of firms that were already digitally savvy - doubles down on technology-fueled experiences, operations, products, and ecosystems.”

Earlier this year I attended Predictions 22, and was particularly curious to see how things had changed in the intervening year. “Disruptive Forces Necessitate Bold Decisions,” was the overriding message in this year’s event guide.  “The old ways of working no longer work. The future is up for grabs. Leading firms will use the crucibles of 2020 and 2021 to forge a path to an agile, creative, and resilient tomorrow.”

Let me summarize some of Forrester’s key predictions in three areas: technology, customer experience, and industry trends.  ... ' 


Anti-Reflective Coating Allows Wi-Fi Through Walls

Could be useful in homes, offices.

 'Anti-Reflective' Coating Allows Wi-Fi Through Walls

In  TechRadar,  Steve McCaskill,  August 18, 2022

Scientists at Austria's Vienna University of Technology (TU Wien) and France's University of Rennes have enabled Wi-Fi signals to pass through walls more effectively. The method calculates an invisible anti-reflective structure matched to a given wall, which TU Wien's Stefan Rotter likened to "the anti-reflective coating on your pair of glasses." The researchers transmitted microwaves through a labyrinth of obstacles, then calculated a matching anti-reflective structure that almost completely removed the signals' reflection. "We were able to show that this information can be used to calculate a corresponding compensating structure for any medium that scatters waves in a complex way, so that the combination of both media allows waves to pass through completely," explained TU Wien's Michael Horodynski.  ....
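
Rotter's glasses analogy has a textbook one-dimensional special case: a quarter-wave matching layer with refractive index sqrt(n1*n2) cancels the reflection between two media. The real work computes a compensating structure for a full 3-D scattering labyrinth; the indices below are assumptions for illustration only.

import numpy as np

n_air, n_wall = 1.0, 2.5            # assumed indices at Wi-Fi frequencies

# Reflectance of the bare air/wall interface at normal incidence.
r_bare = ((n_air - n_wall) / (n_air + n_wall)) ** 2

# Quarter-wave matching layer: index sqrt(n1*n2), thickness lambda/4.
n_layer = np.sqrt(n_air * n_wall)
n_effective = n_layer**2 / n_wall   # index seen looking through the layer
r_coated = ((n_air - n_effective) / (n_air + n_effective)) ** 2

print(f"bare wall reflectance: {r_bare:.3f}")    # ~18% of power reflected
print(f"quarter-wave matched:  {r_coated:.3e}")  # ~0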

Stories, Dice, and Rocks That Think:

Just reading this, very interesting; will review further as I progress. By a correspondent I have often mentioned here.


Stories, Dice, and Rocks That Think: How Humans Learned to See the Future--and Shape It  ...  
 by Byron Reese 

". . . Byron Reese gets to the heart of what makes humans different from all others." —Midwest Book Review

What makes the human mind so unique? And how did we get this way? From the Amazon description:

This fascinating tale explores the three leaps in our history that made us what we are—and will change how you think about our future.

Look around. Clearly, we humans are radically different from the other creatures on this planet. But why? Where are the Bronze Age beavers? The Iron Age iguanas? In Stories, Dice, and Rocks That Think, Byron Reese argues that we owe our special status to our ability to imagine the future and recall the past, escaping the perpetual present that all other living creatures are trapped in. 

Envisioning human history as the development of a societal superorganism he names Agora, Reese shows us how this escape enabled us to share knowledge on an unprecedented scale, and predict—and eventually master—the future.

Thoughtful, witty, and compulsively readable, Reese unravels our history as an intelligent species in three acts: 

Act I: Ancient humans undergo “the awakening,” developing the cognitive ability to mentally time-travel using language

Act II: In 17th century France, the mathematical framework known as 'probability theory' is born—a science for seeing into the future that we used to build the modern world

Act III: Beginning with the invention of the computer chip, humanity creates machines to gaze into the future with even more precision, overcoming the limits of our brains

A fresh new look at the history and destiny of humanity, readers will come away from Stories, Dice, and Rocks that Think with a new understanding of what they are—not just another animal, but a creature with a mastery of time itself.  ... ' 

Monday, August 22, 2022

More on Car Security Issues

Schneier points to other vehicle encryption issues; here just a snippet, more at the link.

Software developer cracks Hyundai car security with Google search   in The Register

Top tip: Your RSA private key should not be copied from a public code tutorial

Thomas Claburn Wed 17 Aug 2022 // 20:19 UTC...

A developer says it was possible to run their own software on the car infotainment hardware after discovering the vehicle's manufacturer had secured its system using keys that were not only publicly known but had been lifted from programming examples.

An unidentified developer posting under the name "greenluigi1" wanted to modify the in-vehicle infotainment (IVI) system in his 2021 Hyundai Ioniq SEL.

To do so, he would have to figure out how to connect to the device and bypass its security.

After trying to figure out how to customize firmware updates for the IVI's D-Audio2 system, made by the car company's mobility platform subsidiary Hyundai Mobis, and have them accepted by the IVI, the unidentified car hacker found an unexpected way – through Google.

The IVI accepts firmware updates in the form of password-protected ZIP archives. He downloaded an update ZIP from Hyundai's website and was able to bypass the simple password protection on the archive to access its contents, which included encrypted firmware images for various parts of the IVI.

The goal then became creating his own firmware images and encrypting them within a ZIP in a way that the car would accept, install, and run, thus allowing control of the hardware from the hacker's own supplied code.

As luck would have it, "greenluigi1" found on Mobis's website a Linux setup script that created a suitable ZIP file for performing a system update.   .... '
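
The Register's subhead points at the general failure mode: once a private signing key has been copied from public example code, anyone holding it can forge "valid" images. A generic sketch of that class of bug using the pyca cryptography package; this is an illustration, not Hyundai's actual update code, and the key here is freshly generated as a stand-in for the leaked one.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for a key pair whose PRIVATE half was published in a tutorial.
leaked_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
device_trusted_pubkey = leaked_private_key.public_key()

attacker_firmware = b"arbitrary code the attacker wants the device to run"
signature = leaked_private_key.sign(
    attacker_firmware,
    padding.PKCS1v15(),
    hashes.SHA256(),
)

# The device's verification happily accepts the forged image, because the
# signature really was made with the (leaked) trusted key.
device_trusted_pubkey.verify(signature, attacker_firmware,
                             padding.PKCS1v15(), hashes.SHA256())
print("forged firmware accepted")  # verify() raises an exception on failure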

Google Tests AI Robotics in a Kitchen

Look forward to seeing this. The kitchen has many tasks that could be automated, but they are usually not well positioned for broad application and integration. Note here a number of hints at Google's investment in the topic. Ready to test.

Google Is Testing Its Latest AI-Powered Robot In a Kitchen  in ExtremeTech   By Adrianna Nine on August 17, 2022

Many robots are created to conduct highly-controlled jobs like frying potatoes, watering plants, or collecting litter. But a truly life-changing robot is one that can adapt to changing—and sometimes hectic—circumstances (ideally without losing its cool). That’s the line of thinking Google is following as it meshes its language-handling AI with a handy robot assistant.

Google’s Pathways Language Model (PaLM) is a relatively new 540-billion parameter network built to complete a variety of complex language-based tasks. It’s said to be intelligent enough to describe how it solved a math problem and annoy you by explaining your own jokes. Rather than focusing on one area of “expertise” and starting fresh every time it learns a new skill, PaLM can “stack” previously-learned knowledge to devise solutions to novel problems, similar to how humans assess new situations. This is important if a robot is meant to help humans in their jobs or day-to-day personal lives.

It just so happens that Google’s parent company, Alphabet, has been working on a new robotics firm called Everyday Robots. As its name suggests, the firm’s goal is to build robots that learn on their own and take care of “time-consuming, everyday tasks.” Combined with PaLM, Everyday Robots’ SayCan robot becomes the PaLM-SayCan, a bot capable of assessing its own capabilities, its environment, and the task at hand, then breaking that task into smaller sub-tasks to achieve the desired goal. ... ' 

Competition Makes Big Datasets Winners

For a number of reasons, big datasets are better. I have used ImageNet, a good example and very useful, as well as Mechanical Turk.

Competition Makes Big Datasets the Winners   By Chris Edwards

Communications of the ACM, September 2022, Vol. 65 No. 9, Pages 11-13   10.1145/3546955

If there is one dataset that has become practically synonymous with deep learning, it is ImageNet. So much so that dataset creators routinely tout their offerings as "the ImageNet of …" for everything from chunks of software source code, as in IBM's Project CodeNet, to MusicNet, the University of Washington's collection of labelled music recordings.

The main aim of the team at Stanford University that created ImageNet was scale. The researchers recognized the tendency of machine learning models at that time to overfit relatively small training datasets, limiting their ability to handle real-world inputs well. Crowdsourcing the job by recruiting casual workers from Amazon's Mechanical Turk website delivered a much larger dataset. At its launch at the 2009 Conference on Computer Vision and Pattern Recognition (CVPR), ImageNet contained more than three million categorized and labeled images, which rapidly expanded to almost 15 million.

The huge number of labeled images proved fundamental to the success of the AlexNet model based on deep neural networks (DNNs) developed by a team led by Geoffrey Hinton, professor of computer science at the University of Toronto, that in 2012 won the third annual competition built around a subset of the ImageNet dataset, easily surpassing the results from the traditional artificial intelligence (AI) models. Since then, the development of increasingly accurate DNNs and large-scale datasets have gone hand in hand.

Teams around the world have collected and released to the academic world or the wider public thousands of datasets designed for use in both developing and assessing AI models. The Machine Learning Repository at the University of California at Irvine, for example, hosts more than 600 different datasets that range from abalone descriptions to wine quality. Google's Dataset Search indexes some 25 million open datasets developed for general scientific use, and not just machine learning. However, few of the datasets released to the wild achieve widespread use.

Bernard Koch, a graduate student at the University of California, Los Angeles, teamed with Emily Denton, a senior research scientist at Google, and two other researchers from the University of California. In work presented at the Conference on Neural Information Processing Systems (NeurIPS) last year, the team found a long tail of rarely used sources headed by a very small group of highly popular datasets. To work out how much certain datasets predominated, they analyzed five years of submissions to the Papers With Code website, which collates academic papers on machine learning and their source data and software. Just eight datasets, including ImageNet, each appeared more than 500 times in the collected papers. Most datasets were cited in fewer than 10 papers.

Much of the focus on the most popular datasets revolves around competitions, which have contributed to machine learning's rapid advancement, Koch says. "You make it easy for everybody to understand how far we've advanced on a problem."

Groups release datasets in concert with competitions in the hope that the pairing will lead to more attention on their field. An example is the Open Catalyst Project (OCP), a joint endeavor between Carnegie Mellon University and Facebook AI Research that is trying to use machine learning to speed up the process of identifying materials that can work as chemical catalysts. It can take days to simulate their behavior, even using approximations derived from quantum mechanics formulas. AI models have been shown to be much faster, but work is needed to improve their accuracy.

Using simulation results for a variety of elements and alloys, the OCP team built a dataset they used to underpin a competition that debuted at NeurIPS 2021. Microsoft Asia won this round with a model that borrows techniques from the Transformers used in NLP research, rather than the graphical neural networks (GNNs) that had been the favored approach for AI models in this area.

"One of the reasons that I am so excited about this area right now is precisely that machine learning model improvements are necessary," says Zachary Ulissi, a professor of chemical engineering at CMU who sees the competition format as one that can help drive this innovation. "I really hope to see more developments both in new types of models, maybe even outside GNNs and transformers, and incorporating known physics into these models." ... '

Surgery Robot on ISS

Clearly needed as distances and times in space grow. 

A Surgery Robot Will Board the ISS in 2024

By Adrianna Nine on August 4, 2022   in ExtremeTech

After nearly 20 years of development, a small remote-controlled surgery robot is preparing to join the most exclusive medical arena currently known: the International Space Station (ISS).

In partnership with robotics company Virtual Incision, engineers at the University of Nebraska-Lincoln have devised a narrow robot that helps medical professionals conduct surgical procedures from afar. MIRA, short for “miniaturized in vivo robotic assistant,” can be controlled remotely and even perform surgery autonomously. And thanks to a $100,000 grant from NASA, it could be proving its chops in space in as little as two years.

At first glance, the two-pound robot almost looks like a small kitchen gadget. Its base rod has a few basic switches and eventually gives way to a claw-like apparatus, which performs the actual surgery. Between each arm of the claw exists a camera, which helps to guide the robot throughout the procedure. But no one will be manning the camera when MIRA’s aboard the ISS. Instead, MIRA will work autonomously to complete simulation exercises, such as cutting taut rubber bands or pushing metal rings along a wire, which the University of Nebraska-Lincoln says imitates surgical activity.  .... ' 

Bionic Hand Arms Race

Quite interesting developments are at hand. Good overview of the space, linking to more.

THE BIONIC-HAND ARMS RACE  in IEEE Spectrum

The prosthetics industry is too focused on high-tech limbs that are complicated, costly, and often impractical.

In Jules Verne's 1865 novel From the Earth to the Moon, members of the fictitious Baltimore Gun Club, all disabled Civil War veterans, restlessly search for a new enemy to conquer. They had spent the war innovating new, deadlier weaponry. By the war’s end, with “not quite one arm between four persons, and exactly two legs between six,” these self-taught amputee-weaponsmiths decide to repurpose their skills toward a new projectile: a rocket ship.

The story of the Baltimore Gun Club propelling themselves to the moon is about the extraordinary masculine power of the veteran, who doesn’t simply “overcome” his disability; he derives power and ambition from it. Their “crutches, wooden legs, artificial arms, steel hooks, caoutchouc [rubber] jaws, silver craniums [and] platinum noses” don’t play leading roles in their personalities—they are merely tools on their bodies. These piecemeal men are unlikely crusaders of invention with an even more unlikely mission. And yet who better to design the next great leap in technology than men remade by technology themselves?

As Verne understood, the U.S. Civil War (during which 60,000 amputations were performed) inaugurated the modern prosthetics era in the United States, thanks to federal funding and a wave of design patents filed by entrepreneurial prosthetists. The two World Wars solidified the for-profit prosthetics industry in both the United States and Western Europe, and the ongoing War on Terror helped catapult it into a US $6 billion industry across the globe. This recent investment is not, however, a result of a disproportionately large number of amputations in military conflict: Around 1,500 U.S. soldiers and 300 British soldiers lost limbs in Iraq and Afghanistan. Limb loss in the general population dwarfs those figures. In the United States alone, more than 2 million people live with limb loss, with 185,000 people receiving amputations every year. A much smaller subset—between 1,500 and 4,500 children each year—are born with limb differences or absences, myself included.

Today, the people who design prostheses tend to be well-intentioned engineers rather than amputees themselves. The fleshy stumps of the world act as repositories for these designers’ dreams of a high-tech, superhuman future. I know this because throughout my life I have been fitted with some of the most cutting-edge prosthetic devices on the market. After being born missing my left forearm, I was one of the first cohorts of infants in the United States to be fitted with a myoelectric prosthetic hand, an electronic device controlled by the wearer’s muscles tensing against sensors inside the prosthetic socket. Since then, I have donned a variety of prosthetic hands, each of them striving toward perfect fidelity of the human hand—sometimes at a cost of aesthetics, sometimes a cost of functionality, but always designed to mimic and replace what was missing.

In my lifetime, myoelectric hands have evolved from clawlike constructs to multigrip, programmable, anatomically accurate facsimiles of the human hand, most costing tens of thousands of dollars. Reporters can’t get enough of these sophisticated, multigrasping “bionic” hands with lifelike silicone skins and organic movements, the unspoken promise being that disability will soon vanish and any lost limb or organ will be replaced with an equally capable replica. Prosthetic-hand innovation is treated like a high-stakes competition to see what is technologically possible. Tyler Hayes, CEO of the prosthetics startup Atom Limbs, put it this way in a WeFunder video that helped raise $7.2 million from investors: “Every moonshot in history has started with a fair amount of crazy in it, from electricity to space travel, and Atom Limbs is no different.”

We are caught in a bionic-hand arms race. But are we making real progress? It’s time to ask who prostheses are really for, and what we hope they will actually accomplish. Each new multigrasping bionic hand tends to be more sophisticated but also more expensive than the last and less likely to be covered (even in part) by insurance. And as recent research concludes, much simpler and far less expensive prosthetic devices can perform many tasks equally well, and the fancy bionic hands, despite all of their electronic options, are rarely used for grasping.  ... '