/* ---- Google Analytics Code Below */

Wednesday, September 28, 2022

Chula Chatbot Career Coach

Interesting, especially regards to how much a coach it can be. 

Chula Chatbot Serves As an Education and Career Coach

By Chulalongkorn University, September 26, 2022

A chatbot developed by lecturers at Chulalongkorn University is designed to serve as an education and career coach for learners and students.

The EmpowerMe chatbot "is like having a student counselor or personal coach who can offer advice on how to develop career skills in the digital age so that students can find the job for which they are well suited," says Professor Jintavee Khlaisang. It will also suggest lessons to help a user develop the types of skills needed in a particular career. It assesses a learner's performance and motivates them with rewards in the form of medals.  

The application received a gold medal award from the 2021 Seoul International Invention Fair. It has been patented for user-interface design.

From Chulalongkorn University    (Korea) 

View Full Article   

Neural Networks Predict Forces in Jammed Granular Solids

 Unexpected application for a neural network.

Neural Networks Predict Forces in Jammed Granular Solids  By Göttingen University (Germany), September 8, 2022

A team of researchers from Germany's Göttingen University and Belgium's Ghent University used machine learning and computer simulations to create a tool for predicting force chains within granular solids.

The researchers showed that graph neural networks can be trained in a supervised manner to anticipate the position of force chains that manifest while deforming a granular system, provided an undeformed static structure.

Said Göttingen's Peter Sollich, "The efficiency of this new method is surprisingly high for different scenarios with varying system size, particle density, and composition of different particles types. This means it will be useful in understanding force chains for many types of granular matter and systems."

From Göttingen University (Germany)

View Full Article   

On Weak IOT Security

 Considerable piece.  IOT has long been a susceptible ecosystems. 

Security in the billions: Toward a multinational strategy to better secure the IoT ecosystem  From AtlanticCouncil

By Patrick Mitchell, Liv Rowley, and Justin Sherman with Nima Agah, Gabrielle Young, and Tianjiu Zuo

Executive summary

The explosion of Internet of Things (IoT) devices and services worldwide has contributed to an explosion in data processing and interconnectivity. Simultaneously, this interconnection and resulting interdependence have amplified a range of cybersecurity risks to individuals’ data, company networks, critical infrastructure, and the internet ecosystem writ large. Governments, companies, and civil society have proposed and implemented a range of IoT cybersecurity initiatives to meet this challenge, ranging from introducing voluntary standards and best practices to mandating the use of cybersecurity certifications and labels. However, issues like fragmentation among and between approaches, complex certification schemes, and placing the burden on buyers have left much to be desired in bolstering IoT cybersecurity. Ugly knock-on effects to states, the private sector, and users bring risks to individual privacy, physical safety, other parts of the internet ecosystem, and broader economic and national security.

In light of this systemic risk, this report offers a multinational strategy to enhance the security of the IoT ecosystem. It provides a framework for a clearer understanding of the IoT security landscape and its needs—one that focuses on the entire IoT product lifecycle, looks to reduce fragmentation between policy approaches, and seeks to better situate technical and process guidance into cybersecurity policy. Principally, it analyzes and uses as case studies the United States, United Kingdom (UK), Australia, and Singapore, due to combinations of their IoT security maturity, overall cybersecurity capacity, and general influence on the global IoT and internet security conversation. It additionally examines three industry verticals, smart homes, networking and telecommunications, and consumer healthcare, which cover different products and serve as a useful proxy for understanding the broader IoT market because of their market size, their consumer reach, and their varying levels of security maturity.  ... '

(Much more ....See also Schneier  on this report, with much additional comment) 

AI, GPS Technology Could Save Lives in Wildfires, Floods, Disaster

 Watching Hurricane activity in South Florida, getting further appreciation of linking many camera and sensor resources to make disaster, supply chain decisions. 

AI, GPS Technology Could Save Lives in Wildfires, Floods, Disaster

The Wall Street Journal

Jim Carlton, August 30, 2022

Artificial intelligence and global positioning system (GPS) technology could help save lives in wildfires, floods, and other natural disasters by getting emergency alerts out faster. Last month, the U.S. Department of Homeland Security completed a proof-of-concept test on a satellite-based alert sent to automobiles' GPS navigation systems. Meanwhile, Pascal Schuback from the nonprofit Cascadia Region Earthquake Workgroup said he has been working with companies and organizations on a service for Amazon's Alexa virtual assistant to process emergency alerts. Schuback envisions a scenario in which Alexa could instruct a garage to open and a self-driving car to back out following an imminent tremor alert. Recent applications designed to help people keep in touch on disaster alerts include the Federal Emergency Management Agency app, which transmits real-time emergency alerts from the National Weather Service and California fire monitor Watch Duty. ... 

Erasure Key to Practical Quantum Computing

Tips towards Quantum improvement.

Erasure Key to Practical Quantum Computing

Researchers at Princeton University, Yale University, and the University of Wisconsin-Madison have developed a method for error correction in a quantum computer's calculations.   The researchers focused on physical causes of error; in their proposed system, the most common source of error eliminates damaged data instead of corrupting it.

They studied the electrons in ytterbium qubits, and when errors cause the electrons to fall to the ground state from their excited state, they visibly scatter light.  This means that when shining a light on ytterbium qubits, only the faulty ones light up and can be written off as errors.

Said Princeton's Jeff Thompson, "These erasure errors are vastly easier to correct because you know where they are." ... 

Said Princeton's Jeff Thompson, “We see this project as laying out a kind of architecture that could be applied in many different ways. We are already seeing a lot of interest in finding adaptations for this work.”  ... 

From Princeton University School of Engineering and Applied Science

View Full Article  

Security by Labeling

Security by Labeling

By Andreas Kuehn

Communications of the ACM, September 2022, Vol. 65 No. 9, Pages 23-25  10.1145/3548762

Empowering consumers to make risk-informed purchasing decisions when buying Internet-of-Things (IoT) devices or using digital services is a principal thrust to advance consumer cybersecurity. Simple yet effective labels convey relevant cybersecurity information to buyers at the point of sale and encourage IoT vendors to up their cybersecurity game as they now can recoup their security investments from risk-aware buyers. These dynamics benefit consumers and the industry alike, resulting in better, more resilient cybersecurity for all.

Consumers are insufficiently aware of risks emanating from IoT and are ill-equipped to manage them. For all the much-heralded benefits of consumer IoT to come true, the industry must ensure all the smart home appliances, connected thermostats, and digital services are secure and can be trusted. The industry has for long been criticized for not paying sufficient attention to the cybersecurity of its products. Concerns over security were pushed aside, yielding precedence to shorter time-to-market and higher corporate profits. Less time for testing translates into insecure products in residential homes.

The full cost of insecurity is on display when consumers, industry, and governments must respond to and clean up after cyber incidents. The toll of consumer cybercrime alone adds up to more than 100 billion USD per year globally.4 The industry, with support from government, must find ways to put IoT security front and center and make the necessary up-front investments that enhance consumer cyber-security and lower cost to everyone.

Lack of Information Drives Cyber Insecurity

Consumer cybersecurity is suffering from information asymmetry, the skewed appraisal of the quality of a property that Nobel Laureate economist George Akerlof described in his seminal writing "The Market for Lemons: Quality Uncertainty and the Market Mechanism."1 In the secondhand car market, Akerlof observed, buyers of used cars could not tell good cars from bad ones and thus differentiated the product on price alone, rather than including the quality of the preowned vehicles in their purchase decision-making. Sellers had no incentive to sell higher-quality cars since they could not find buyers willing to pay a higher price. Thus, the information asymmetry between the seller, who knows the quality of the car, and the buyer, who cannot assess the quality of the car, led to a market of lemons, a degraded market of subquality cars, which frequently break down and are in constant need of expensive repairs.

The challenges on the way to consumer IoT cybersecurity labeling are considerable but not insurmountable.

The consumer IoT marketplace faces a similar conundrum. Buyers cannot discern a secure Internet-connected camera from its insecure, cheaper alternative. With no market demand, IoT manufacturers have no incentive to invest in cybersecurity. All that is left is to compete on price, further incenting the reduction of security to save on cost and hindering the much-needed consumer adoption of secure Internet-connected devices and services. Adding transparency by means of a recognized, trusted cybersecurity label can break this vicious cycle, empower buyers to make risk-informed purchases, and allow vendors to reap the rewards of their cybersecurity investments by marketing to security-aware customers. In fact, research shows that a sizable portion of consumers is willing to pay a 30% markup for secure IoT products. ... '

Amazon Introduces Encrypted Communication Service AWS Wickr

 Had not heard of this offering.

Amazon Introduces Encrypted Communication Service AWS Wickr   by  Renato Losio

A year after the acquisition of the company Wickr, Amazon recently announced the preview of the collaboration suite AWS Wickr. Built on a proprietary encryption protocol, the new managed service provides enterprises and government agencies with security and administrative controls to meet security and compliance requirements.

Wikcr uses a multilayered AES-256 end-to-end encryption and key handling protocols to allow users securely sharing mission-critical information. Every call, message, and file in AWS Wickr is encrypted with a new random encryption key and messages. According to the cloud provider, encryption keys are accessible only within Wickr applications and not disclosed to Wickr servers.

Among the suggested use cases, the new service can help secure sensitive communications and enable out-of-band communications for disaster recovery and incident response, facilitate data governance and enable internal and external collaboration through federation. After the acquisition of the company that builds end-to-end encryption-based collaboration solutions for public sector and enterprise customers, Amazon integrated Wickr as an AWS service and developed new features including a new SDK and updated crypto protocols.

Even if AWS claims they cannot access the communications, the choice of a proprietary protocol raised some concerns in the community. Christophe Tafani-Dereeper, cloud security researcher & advocate at Datadog, comments:

"AWS Wickr encrypts every message, call, and file with a proprietary, 256-bit end-to-end encryption protocol" This awfully reads "we're rolling our own encryption". ... ' 

Tuesday, September 27, 2022

Economic Development Administration Awards Georgia Tech

Impressed what I saw in long past visits  ... 

Economic Development Administration Awards Georgia Tech $65 Million for AI Manufacturing Project

Largest grant ever awarded to a Georgia Tech-led coalition of partners to drive Build Back Better initiatives

The Georgia Institute of Technology has been awarded a $65 million grant from the U.S. Department of Commerce’s Economic Development Administration (EDA) to support a statewide initiative that combines artificial intelligence and manufacturing innovations with transformational workforce and outreach programs. The grant will increase job and wage opportunities in distressed and rural communities, as well as among historically underrepresented and underserved groups. ... '

Pre Bunking against Misinformation

Mostly,  always consider the source

Fighting Against Misinformation

By Associated Press, August 30, 2022

The researchers said short online videos that teach basic critical thinking skills can make people better able to resist misinformation online.

Researchers at the U.K.'s Cambridge University and University of Bristol, the University of Western Australia, and Google have found that "pre-bunking" is a simple and promising method to prevent misinformation.

Pre-bunking involves using fictional examples to teach people how misinformation works and how to become resistant to it.

The researchers created short online videos akin to public service announcements, highlighting certain misinformation techniques, like emotionally charged language, personal attacks, and false comparisons of unrelated items.

After giving participants a series of claims, the researchers found that those who watched the videos were better able to distinguish between false and accurate information.

Google now will issue a series of pre-bunking videos on YouTube, Facebook, and TikTok in Eastern Europe on scapegoating, which has been seen in misinformation campaigns about Ukrainian refugees. 

   APNews   

Monday, September 26, 2022

AI Directed Security Issues

 Also discussed in Schneier, with further comment, makes me think of 'dueling AI' scenarios.

You can’t solve AI security problems with more AI  From Simonwillison.net

One of the most common proposed solutions to prompt injection attacks (where an AI language model backed system is subverted by a user injecting malicious input—“ignore previous instructions and do this instead”) is to apply more AI to the problem.

I wrote about how I don’t know how to solve prompt injection the other day. I still don’t know how to solve it, but I’m very confident that adding more AI is not the right way to go.

These AI-driven proposals include:

Run a first pass classification of the incoming user text to see if it looks like it includes an injection attack. If it does, reject it.

Before delivering the output, run a classification to see if it looks like the output itself has been subverted. If yes, return an error instead.

Continue with single AI execution, but modify the prompt you generate to mitigate attacks. For example, append the hard-coded instruction at the end rather than the beginning, in an attempt to override the “ignore previous instructions and...” syntax.

Each of these solutions sound promising on the surface. It’s easy to come up with an example scenario where they work as intended.

But it’s often also easy to come up with a counter-attack that subverts that new layer of protection! ... ' 

On NeuroSymbolic AI

New to me, worth understanding in useful contexts ...

Neurosymbolic AI, By Don Monroe

Communications of the ACM, October 2022, Vol. 65 No. 10, Pages 11-13   10.1145/3554918

The ongoing revolution in artificial intelligence (AI)—in image recognition, natural language processing and translation, and much more—has been driven by neural networks, specifically many-layer versions known as deep learning. These systems have well-known weaknesses, but their capability continues to grow, even as they demand ever more data and energy. At the same time, other critical applications need much more than just powerful pattern recognition, and deep learning does not provide the sorts of performance guarantees that are customary in computer science.

To address these issues, some researchers favor combining neural networks with older tools for artificial intelligence. In particular, neurosymbolic AI incorporates the long-studied symbolic representation of objects and their relationships. A combination could be assembled in many different ways, but so far, no single vision is dominant.

The complementary capabilities of such systems are frequently likened to psychologist Daniel Kahneman's human "System 1" which, like neural networks, makes rapid, heuristic decisions, and the more rigorous and methodical "System 2." "The field is growing really quickly, and there's a lot of excitement," said Swarat Chaudhuri of the University of Texas at Austin. Even though "Neural networks are going to become ubiquitous, even more than they are today," he said, "not all of computer science is going to become replaced by deep learning."

A Long History

In the early years of artificial intelligence, researchers had high hopes for symbolic rules, such as simple if-then rules and higher-order logical statements. Although some experts, such as Doug Lenat at Cycorp, still hold hopes for this strategy to impart common sense to AI, the collection of rules needed is widely regarded as Unpractically large. "If you try to encode all human knowledge manually, we know that's not possible. That has been tried and failed," said Asim Munawar, a program director of neurosymbolic AI at IBM.

Neural networks also fell short of their aspirations in the 1980s and '90s, and artificial intelligence entered a long "winter" of reduced interest and funding. This situation changed a decade ago, however, largely due to the availability of enormous datasets for training, and massive computer power. Recent architectural innovations, notably attention and transformers, have driven further advances, such as the uncannily plausible text generation by OpenAI's large language model, GPT-3.

Deep learning does surprisingly well at generalizing, for reasons that are only partly understood. Despite impressive successes on average, however, these systems still make some odd errors when presented with novel examples that do not fit patterns they infer from the training data. Errors also can be created using maliciously altered data, sometimes in ways essentially imperceptible to people.

In addition, racial, gender, and other biases in the training data can be unintentionally enshrined by neural networks. Thus, for ethical and safety reasons, users often expect an explanation of how the networks came to a conclusion in medical, financial, legal, and military applications.

In spite of widespread concerns, these problems are not actually "limitations of deep learning systems," said Yann LeCun, chief AI scientist and a vice president at Meta, of the widely used supervised learning paradigm. LeCun, who shared the 2018 ACM A.M. Turing Award with fellow deep learning pioneers Geoffrey Hinton and Yoshua Bengio, believes that if users adopt "self-supervised learning, things that are not trained for a given task but are trained generically, a lot of those problems will essentially disappear." (LeCun regards explainability as a "non-issue.")  ... ' 

Kroger Seeking to Improve Customer Pickup

Semi automation of customer pickup interaction and satisfaction.

Will electronic carts help Kroger fulfill curbside pickup orders more quickly?  by Matthew Stern in Retailwire

Kroger is introducing a temperature-controlled electronic cart to speed up its curbside fulfillment.

The cart, called the BrightDrop Trace Grocery cart, was originally piloted in Kroger stores in Lexington and Versailles, KY, and is marked for a broader rollout since the pilot yielded a noticeable improvement in both customer and associate experience, according to the Detroit News.

The cart is fitted with nine secure, temperature controlled drawers in which employees can stock grocery orders before wheeling the cart to the curb. It is mechanized to allow an employee to easily pull 350 pounds of groceries at a comfortable walking pace. The next wave of the rollout will be limited, but the companies expect wide-scale availability of the cart by 2024.

Curbside pickup experienced an unprecedented spike in adoption at the beginning of the novel coronavirus pandemic as government regulations limited in-store shopping and concerns about potentially contracting COVID-19 kept shoppers out of stores. ,,,  ' 

Make the Mask a Way to Detect Disease

Had seen something similar presented,  sanitizing more complex masks was mentioned as an issue.  ,

Smart Mask Could Be Early Warning System   By South China Morning Post (Hong Kong), September 22, 2022

A team of Chinese scientists have developed a wearable bioelectric mask that can detect respiratory diseases in the air, including Covid-19 and influenza, and report results in 10 minutes.

Once connected to a wireless network, the mask can transmit real-time data to a user's mobile device, including detection alerts, according to a study published in the peer-reviewed journal Matter on Monday.

The mask is intended to be used as an early warning system to prevent future outbreaks of respiratory infectious diseases, researchers said.

These diseases are spread through the air by droplets or aerosols. But direct detection of viruses in the air can be difficult as the concentrations can be extremely low.

From South China Morning Post (Hong Kong)

View Full Article   

X-rays, AI and 3D printing Bring lost Van Gogh Artwork to life

Worked previously with UCL

X-rays, AI and 3D printing bring lost Van Gogh Artwork to life

by University College London

Using X-rays, artificial intelligence and 3D printing, two UCL researchers reproduced a "lost" work of art by renowned Dutch painter Vincent Van Gogh, 135 years after he painted over it.

Ph.D. researchers Anthony Bourached (UCL Queen Square Institute of Neurology) and George Cann (UCL Space and Climate Physics), working with artist Jesper Eriksson, used cutting edge technology to recreate a long-concealed Van Gogh painting.

It's the latest in their "NeoMasters" series of recreations, a project they've been working on since 2019 to bring lost works of art to life.

They developed a process to recreate lost works that uses X-ray imaging to see through every layer of paint, AI to extrapolate the artist's style, and 3D printing to fabricate the final piece.

This newest effort, dubbed "The Two Wrestlers," depicts two shirtless wrestlers grappling in front of an abstract background. It recreates a painting originally by Van Gogh who covered over the two figures when he reused the canvas for an unrelated painting of flowers.

Bourached, who is researching Machine Learning and Behavioral Neuroscience at UCL, says that "how much it is like the original painting is impossible to tell at this point because the information doesn't exist. I think it's very convincing—by far the best guess we can get with current technology."

The obscured image was first discovered in 2012 when art experts at the University of Antwerp investigated whether the work "Still Life with Meadow Flowers and Roses" was an authentic Van Gogh. The researchers examining the artwork used X-rays to peer through the layers of paint and discovered two ghostly figures that had been painted over.

The covered-over wrestlers displayed brush strokes and pigments that were consistent with Van Gogh, and the subject matter was also a common theme at the Antwerp Art Academy where Van Gogh was studying in 1886, authenticating the work. ... '

Sunday, September 25, 2022

AI Teaching Cursive Handwriting

Unusual application for these times, but I agree it can have useful side effects.

Applied AI Teaches Handwriting    By Esther Shein

Communications of the ACM, October 2022, Vol. 65 No. 10, Pages 19-20   10.1145/3554919

Researchers from Germany's Karlsruhe Institute of Technology (KIT) and pen-maker Stabilo are collaborating on an artificial intelligence (AI)-based pen to teach schoolchildren what is becoming a lost art in an increasingly digital world: handwriting.

The joint project—Kaligo-based Intelligent Handwriting Teacher (KIHT)—is funded by the German Federal Ministry of Education and Research.

German children are taught to write by redrawing the shape of letters, which requires them to think about writing, explains Tanja Harbaum, a researcher at KIT who is involved with the project. "We want them to be able to write without having to think about writing. That's what we as adults do."

The eyes of unskilled writers are not able to keep up with writing, and "that's really a problem because if you force a child to redraw shapes, they won't be able to practice fluent writing at the same time," according to Harbaum.

While teaching shapes should be the first step, children are "painting the letters, not writing them," says Peter Kämpf, head of special product development at Stabilo. "Painting means that pen movement is slow and deliberate, with close hand-eye coordination. Therefore, the next step must focus on the dynamics of writing," which is the wrist movement, he says, and not focus on shapes "until the writing movement has developed to the point where it is an overlearned motion that does not depend on optical control."

This not only speeds up the writing process but also frees up cognitive capacity, he says.

Styli have been shown to enhance the ability to write. "Writing with the finger is more suitable for performing large, but not very accurate motions, while writing with the stylus leads to a higher precision and more isotropic motion performance," according to a 2015 study published in the National Library of Medicine.

Regardless of whether a stylus or an old-fashioned pen or pencil is used, however, studies have found there is a significant connection between handwriting, cognitive development, and the ability to retain information.

In 2015, Finland became one of the first countries to phase out handwriting instruction altogether, to keep pace with technological progress. (Although U.S. schools are not required to teach cursive writing, schools in some states continue to do so.) Some researchers do not agree with Finland's decision.

University of Washington professor Virginia Berninger told the online news site Qcostarica that writing with a pen not only helps develop fingers, but also thinking skills, because the brain works harder to write.

Other studies support the fact that handwriting is a complex task that requires more brainpower to process a word than just reading or typing it. Handwriting is both physical and mental, and the brain has to apply motor skills and thought processing when applying pen to paper to create words. 

Even adults can benefit from continuing to use their handwriting skills. A 2021 study by Johns Hopkins University published in the peer-reviewed journal Psychological Science posits practicing handwriting "refines fine-tuned motor skills and creates a perceptual-motor experience that appears to help adults learn generalized literacy-related skills surprisingly faster and significantly better than if they tried to learn the same material by typing on a keyboard or watching videos."

"Our results clearly show that handwriting compared with nonmotor practice produces faster learning and greater generalization to untrained tasks than previously reported," researchers Robert Wiley and Brenda Rapp told Psychology Today. "Furthermore, only handwriting practice leads to the learning of both motor and amodal symbolic letter representations."  ..... ' 

Ask Dumb Questions

Ask enough questions and you will notuce they are getting smarter.

When AI Asks Dumb Questions, It Gets Smart Fast   In Science, September 22, 2022

New research suggests patiently correcting artificial intelligence (AI) when it asks dumb questions may be key to helping the technology learn.

Stanford University scientists trained a machine learning AI to identify gaps in its knowledge, as well as to formulate often-stupid questions about images that strangers would answer.

When people responded, the system received feedback prompting it to adjust its inner mechanisms to behave similarly in the future; the researchers also "rewarded" the AI for writing smart questions to which humans responded.

The AI absorbed lessons in language and social norms over time, refining its ability to compose sensible and easily answerable queries.

The researchers said the system's accuracy at answering questions similar to those it had asked improved 118% over eight months and across more than 200,000 questions ...

Many AI systems become smarter by relying on a brute-force method called machine learning: they find patterns in data to, say, figure out what a chair looks like after analyzing thousands of pictures of furniture. ... 

From Science

Cognixion One, a Brain Interface with AR

Brought to my attention.    Updates to Cognixion.   More at the link. Let me know of your experience and I will report it here ....

CXN ONE

The World's First Brain Computer Interface with Augmented Reality

Wearable Speech™ Generating Device   ©2020 Cognixion, Patents Pending

Say Hello to Cognixion ONE​  ...  (link above) 

For too long, the assistive technology industry has relied on repurposed consumer electronics, often years behind the cutting edge of what is possible.

At Cognixion, we believe that every individual deserves a solution as unique as they are – and that it’s possible to build one tool for communication, access, and everything else life brings their way.

Enter Cognixion ONE. No more dangling wires. No more PC monitor blocking your view of the person you’re speaking with. No more leaving your voice and your control at home, or in the classroom; Cognixion ONE is a wearable window to the world, offering both speech and an integrated AI assistant for home automation control and other enrichment. 

Whether you use Cognixion ONE’s AR environment via head pointing, Brain-Computer Interface, or switch, you’ll see how it’s built from the ground up to reduce the lag between intention and outcome.

Think it. Say it. Do it.

Cognixion ONE - The world's first wearable brain computer Interface with AR | Product Hunt

Extending Abilities to the Most Vulnerable

Designed by neurologists and biosignal engineers in concert with Speech-Language Pathologists and a large group of users and professionals, Cognixion ONE comes out of the gate with integrated home automation AI and multiple context-aware predictive keyboards. We’re prepared to meet the needs of anyone with complex communication disorders including CP, ALS, and hundreds of other conditions exhaustively researched and tested by our team.

Almost anyone who has been reliant on switch or eye control access is an immediate candidate for Cognixion ONE, as well as many other conditions for which there was previously no good access method such as Locked-In Syndrome or hyperkinetic conditions causing too much movement for reliable eye tracking. Our durable design and optional accessory straps mean you can use Cognixion ONE with confidence all throughout your daily life.

Your Window to the World

Cognixion ONE is an entirely wireless, mobile solution that doesn’t just give you freedom to move – it gives you freedom to move the world. 

Direct integration with a popular AI assistant (tba) provides access to home automation, music, games, and more; you don’t need to own a hub, because it is a hub. 

Our speech generating software also displays text on the mirrored lens, so you can be understood even in loud environments – or disable sound and use text only for private conversations. 

A 4G cellular connection ensures you always have a direct line to your AI assistant and our other cloud services, all of which are completely secure and compliant with all international privacy laws.  ... '

Optimizing Fluid Mixing with Machine Learning

 Interaction between RL and Markov decision method is interesting 

Optimizing Fluid Mixing with Machine Learning

Tokyo University of Science (Japan)

August 29, 2022

Researchers in Japan have proposed a machine learning-based approach for optimizing fluid mixing for laminar flows. The researchers used reinforcement learning (RL), in which intelligent agents perform actions in an environment to maximize the cumulative reward. The team addressed RL's inefficiencies in dealing with systems involving high-dimensional state spaces by describing fluid motion using only a single parameter. Researchers used the Markov decision process to formulate the RL algorithm, and the Tokyo University of Science's Masanobu Inubushi said the program "identified an effective flow control, which culminated in an exponentially fast mixing without any prior knowledge." The RL method also enabled effective transfer learning of the trained "mixer," significantly reducing its time and training cost.

Saturday, September 24, 2022

Robotics Rolling in China

 Stats on Global and Chinese Robot use: 

China's Factories Accelerate Robotics Push as Workforce Shrinks

The Wall Street Journal

Jason Douglas, September 18, 2022

A report from the International Federation of Robotics (IFR) revealed more than 243,000 industrial robots were shipped to China last year, up 45% from 2020. China, the top market worldwide for robot manufacturers, accounted for slightly less than half of all heavy-duty industrial-robot installations in 2021, which was nearly double the number installed in the Americas and Europe last year combined. However, the U.S., Japan, Germany, and South Korea still have more robots on production lines than China. The rapid growth of automation in China can be attributed to a rush to catch up to those countries, as well as adapting to a decline in its workforce as its population shrinks and more younger workers opt for service jobs over manufacturing. ... ' 

Building a Brain Atlas

New effort moves forward: 

NIH’s BRAIN Initiative puts $500 million into creating most detailed ever human brain atlas

Neuroscientists will build on census of mouse brain as massive program moves into new phase

22 SEP 202210:00 AM BYJOCELYN KAISER

The BRAIN Initiative, the 9-year-old, multibillion-dollar U.S. neuroscience effort, today announced its most ambitious challenge yet: compiling the world’s most comprehensive map of cells in the human brain. Scientists say the BRAIN Initiative Cell Atlas Network (BICAN), funded with $500 million over 5 years, will help them understand how the human brain works and how diseases affect it. BICAN “will transform the way we do neuroscience research for generations to come,” says BRAIN Initiative Director John Ngai of the National Institutes of Health (NIH).

BRAIN, or Brain Research Through Advancing Innovative Neurotechnologies, was launched by then-President Barack Obama in 2013. It began with a focus on tools, then developed a program called the BRAIN Initiative Cell Census Network, resulting in a raft of papers in 2021. The studies combined data on the genetic features, shapes, locations, and electrical activity of millions of cells to identify more than 100 cell types across the primary motor cortex—which coordinates movement—in mice, marmosets, and humans. Hundreds of researchers involved in the network are now completing a cell census for the rest of the mouse brain. It is expected to become a widely used, free resource for the neuroscience community.

Now, BICAN will characterize and map neural and nonneuronal cells across the entire human brain, which has 200 billion cells and is 1000 times larger than a mouse brain. “It’s using similar approaches but scaling up,” says Hongkui Zeng, director of the Allen Institute for Brain Science, which won one-third of the BICAN funding. Zeng says the results of the effort will serve as a reference—a kind of Human Genome Project for neuroscience.

Other groups will add data from human brains across a range of ancestries and ages, including fetal development. “We will try to cover the breadth of human development and aging,” says Joseph Ecker of the Salk Institute for Biological Studies, which leads BICAN studies of epigenetics, the study of heritable changes that are passed on without changes to the DNA. Ngai expects BICAN to study several hundred human brains overall, although investigators are just starting to work out details. “The sampling and coverage is going to be a big, big topic of discussion,” Ngai says. ... 


Denoising Images

 A New Method for Denoising Images

Gwangju Institute of Science and Technology (South Korea)

September 13, 2022

Researchers from South Korea's Gwangju Institute of Science and Technology (GIST), Vietnam's VinAI Research, and Canada's University of Waterloo have developed a reference-free image denoising method. The new approach uses a post-correction network and a self-supervised machine learning framework to improve the quality of path-traced visuals. The model tripled the quality of rendered images relative to input images by preserving finer details, and could be trained on the fly to output final images in just 12 seconds. Said GIST's Bochang Moon, "Our approach is the first that does not rely on pre-training with an external dataset. This, in effect, will shorten the production time and improve the quality of offline rendering-based content such as animation and movies."  ... '

Future of Electric Planes

Good overview of efforts underway.

Electric planes take off   in Strategy-Business

The potential for short-haul electric flight is energizing aviation’s newest startups.

by Raymond Colitt

Two airports in Spain illustrate both the past and future of commercial aviation. In the country’s east, more than 100 jet aircraft, including giant A380s, glisten like a mirage under a scorching Iberian sun at Teruel Airport, a parking lot for technology past. Only a few of these gas guzzlers are likely to fly again. Around 250 miles to the south, the ATLAS Flight Test Centre in Villacarrillo is providing a runway for a new breed of much smaller aircraft: electric vertical takeoff and landing planes, or eVTOLs.

by Nochane Rousseau, Mario Longpré, and Paul Barbagallo

With more countries and companies agreeing to reduce emissions, the future for the bulky jets we fly in today and the companies that operate them is changing. “The biggest challenge to commercial aviation is the commitment that’s been made to net-zero carbon emissions by 2050,” Tony Douglas, group CEO of Etihad Aviation Group, told the Global Aerospace Summit in Abu Dhabi in May. Flying accounts for only 2.5% of CO2 emissions. But those emissions are created by the relatively small proportion of humans who fly each year, and the industry is poised to expand. It will be very difficult to reduce flying-related emissions without grounding airplanes. “I imagine everybody in this room understands that the physics of powered flight render the achievement of that objective [net zero] extremely difficult anytime soon,” Douglas added.

Not surprisingly, engineers and scientists around the globe are in a race to crack the nut of zero-emissions flying. The challenge of replicating the electric vehicle (EV) revolution in the air is that, simply put, defying gravity requires more energy. Moving a heavy battery along a flat road in a car is easier than lifting it into the air on a plane or helicopter. (Clean airplane fuels, which are liquids or gases derived from sustainable sources, are still very expensive and, according to a PwC report, won’t be widely available or cost-effective for more than a decade. But they are still likely to be the fuel of the future for long-haul flying.) .... ' 

Friday, September 23, 2022

Will DART Save us?

  Closely following this, will be shown live on Monday. Earth will inevitably be hit by a serious space rock again, so its good to be ready now.     Lets learn as much as we can.

NASA’s DART Mission Aims to Save the World. Robotic probe sent to crash into asteroid in test of planetary defense By NED POTTER   in Spectrum IEEE

Armageddon ruined everything. Armageddon—the 1998 movie, not the mythical battlefield—told the story of an asteroid headed straight for Earth, and a bunch of swaggering roughnecks sent in space shuttles to blow it up with a nuclear weapon.

“Armageddon is big and noisy and stupid and shameless, and it's going to be huge at the box office,” wrote Jay Carr of the Boston Globe.

Carr was right—the film was the year’s second biggest hit (after Titanic)—and ever since, scientists have had to explain, patiently, that cluttering space with radioactive debris may not be the best way to protect ourselves. NASA is now trying a slightly less dramatic approach with a robotic mission called DART—short for Double Asteroid Redirection Test. On Monday at 7:14 p.m. EDT, if all goes well, the little spacecraft will crash into an asteroid called Dimorphos, about 11 million kilometers from Earth. Dimorphos is about 160 meters across, and orbits a 780-meter asteroid, 65803 Didymos. NASA TV plans to cover it live.

DART’s end will be violent, but not blockbuster-movie-violent. Music won’t swell and girlfriends back on Earth won’t swoon. Mission managers hope the spacecraft, with a mass of about 600 kg, hitting at 22,000 km per hour, will nudge the asteroid slightly in its orbit, just enough to prove that it’s technologically possible in case a future asteroid has Earth in its crosshairs.

“Maybe once a century or so, there'll be an asteroid sizeable enough that we'd like to certainly know, ahead of time, if it was going to impact,” says Lindley Johnson, who has the title of Planetary Defense Officer at NASA.

“If you just take a hair off the orbital velocity, you’ve changed the orbit of the asteroid so that what would have been impact three or four years down the road is now a complete miss.”

So take that, Hollywood! If DART succeeds, it will show there are better fuels to protect Earth than testosterone.

The risk of a comet or asteroid that wipes out civilization is really very small, but large enough that policymakers take it seriously. NASA, ordered by the U.S. Congress in 2005 to scan the inner solar system for hazards, has found nearly 900 so-called NEOs—Near-Earth Objects—at least a kilometer across, more than 95 percent of all in that size range that probably exist. It has plotted their orbits far into the future, and none of them stand more than a fraction of a percent chance of hitting Earth in this millennium.

An infographic showing the orientation of Didymos,  Dimorphos, DART, and LICIACube.The DART spacecraft should crash into the asteroid Dimorphos and slow it in its orbit around the larger asteroid Didymos. The LICIA Cube cubesat will fly in formation to image the impact.JOHNS HOPKINS APL/NASA

But there are smaller NEOs, perhaps 140 meters or more in diameter, too small to end civilization but large enough to cause mass destruction if they hit a populated area. There may be 25,000 that come within 50 million km of Earth’s orbit, and NASA estimates telescopes have only found about 40 percent of them. That’s why scientists want to expand the search for them and have good ways to deal with them if necessary. DART is the first test.

NASA takes pains to say this is a low-risk mission. Didymos and Dimorphos never cross Earth’s orbit, and computer simulations show that no matter where or how hard DART hits, it cannot possibly divert either one enough to put Earth in danger. Scientists want to see if DART can alter Dimorphos’ speed by perhaps a few centimeters per second. ... '


NVIDIA AI Generate Objects and Characters for Virtual Worlds

NVIDIA always impressive:

NVIDIA's new AI model quickly generates objects and characters for virtual worlds  in Engadget

GET3D could make it easier for developers to make games and VR experiences.

3D objects created by NVIDIA's GET3D AI model, including cars, chairs, animals, motorbikes and human characters.

NVIDIA is looking to take the sting out of creating virtual 3D worlds with a new artificial intelligence model. GET3D can generate characters, buildings, vehicles and other types of 3D objects, NVIDIA says. The model should be able to whip up shapes quickly too. The company notes that GET3D can generate around 20 objects per second using a single GPU.

Researchers trained the model using synthetic 2D images of 3D shapes taken from multiple angles. NVIDIA says it took just two days to feed around 1 million images into GET3D using A100 Tensor Core GPUs.

The model can create objects with "high-fidelity textures and complex geometric details," NVIDIA's Isha Salian wrote in a blog post. The shapes GET3D makes "are in the form of a triangle mesh, like a papier-mâché model, covered with a textured material," Salian added.

Users should be able to swiftly import the objects into game engines, 3D modelers and film renderers for editing, as GET3D will create them in compatible formats. That means it could be much easier for developers to create dense virtual worlds for games and the metaverse. NVIDIA cited robotics and architecture as other use cases.

The company said that, based on a training dataset of car images, GET3D was able to generate sedans, trucks, race cars and vans. It can also churn out foxes, rhinos, horses and bears after being trained on animal images. As you might expect, NVIDIA notes that the larger and more diverse the training set that's fed into GET3D, "the more varied and detailed the output."  ... ' 


European automobile industry is going quantum

Automobile Industry takes a number of leaps

The European automobile industry is going quantum  By Tristan Greene    in TheNextWeb

Spooky action at a distance goes vroom

September 19, 2022 - 7:40 pm

It’s a bold new world for automobile makers. After a century of development and fine-tuning, the combustion engine is going the way of the dodos as Europe shifts to clean energy.

But there’s more to the future of cars than just electric motors. The onset of fully autonomous vehicles may lie just beyond the technological horizon and the promise of a million-mile battery draws ever closer. In order to navigate the road to these technologies, European automakers are partnering with quantum computing companies at an increasing pace.

The European automobile industry has a long, rich history of technological innovation. From its onset with the Nesselsdorfer Wagenbau in 1898 to the masterpiece that is the 2023 McLaren Artura, Europe’s place at the cutting edge of the industry has never been questioned. With that in mind, let’s contemplate the future.

Greetings, humanoids

The next steps for the industry involve taking a quantum leap forward. Despite the fact that quantum computing and other quantum-based technologies are still in their infancy, there are myriad ways in which they can aid the automotive industry.

Right up front, the low hanging fruit is autonomous driving. Despite the early hype, researchers and automobile makers have yet to crack the self-driving car nut. For every step companies such as BMW, Tesla, and Waymo take forward, it seems like hundreds of edge cases pop up that the AI is unable to deal with.

We’re probably still a long way off from building a quantum computer that can fit into a car where, presumably, it would act as its brain. But quantum speedup — the ability for quantum processors to perform calculations and/or run algorithms that a classical system couldn’t do in a useful amount of time — could offer advances in several foundational areas for autonomous vehicle systems.

Scientists at Terra Quantum AG, recently partnered with Volkswagen to find novel methods for using hybrid quantum neural networks to improve image recognition. This particular experiment demonstrated the potential for quantum technologies to improve the quality assurance process drastically.

Essentially, the researchers used quantum-powered AI to increase the accuracy of its image detection abilities in order to improve the quality of the car manufacturing process. The techniques they’re working to develop could easily spill over into other industries, but they could also be used to give self-driving cars better “eyes” by increasing the speed and accuracy at which neural networks can process images.

Pasqal, a Paris-based quantum startup also partnered up with BMW in another quantum-based endeavor. Together with the German-owned automobile maker, the company hopes to find new, lighter, more durable materials to build cars out of. The team hopes to eventually reach the point where the design process is fast, accurate, and involves zero-prototyping in order to ensure a clean energy approach to every facet of the car-making process.

BMW and Volkswagen are early adopters out in front of the impending quantum computing hardware explosion, but you can be sure that every other major automobile maker also has a plan to get in on the action — experts predict the quantum technologies market will hit nearly $500B by 2030. And the shift towards autonomous vehicles (and away from ownership) will require an entirely different view on supply and logistics, something the quantum industry is heavily invested in improving.  ... ' 

Training Neural Nets on Small Devices

 Good direction.

We Can Train Big Neural Networks on Small Devices

IEEE Spectrum

Matthew Hutson, September 20, 2022

A new training method expands small devices' capabilities to train large neural networks, while potentially helping to protect privacy. The University of California, Berkeley's Shishir Patil and colleagues integrated offloading and rematerialization techniques using suboptimal heuristics to reduce memory requirements for training via the private optimal energy training (POET) system. Users feed POET a device's technical details and data on the architecture of a neural network they want to train, specifying memory and time budgets; the system generates a training process that minimizes energy usage. Defining the problem as a mixed integer linear programming challenge was critical to POET's effectiveness. Testing showed the system could slash memory usage by about 80% without significantly increasing energy consumption.  ...

Thinking Industrial Policy

 Brought to my attention for comment,  somewhat dense, but scannable:

FIRST: A Note on "Industrial Policy”…

Stephen S. Cohen & J. Bradford DeLong

September, 2022

The not-quite-surprise passage of the CHIPS Act and the surprise passage of the IRA have brought the idea that the United States should consciously pursue  “industrial policy” back to the front burner of politics, and of political economy.

In some ways, however, “industrial policy” is a poisonous term in the discourse of U.S. politics. In the 1980s Democratic economic-policy stalwart Charles Schultze engaged in a full-throated campaign against the idea that the U.S. could run a successful “industrial policy”—“picking winners” in the rhetorical dismissive. Follower nations like Japan attempting to catch up to the U.S. that had succeeded in insulating economic bureaucracies from interest-group rent-seeking might be able to, he argued. But the United States would, to the extent that it embarked on industrial policy, further entrench dissipative rent-seeking interests. Allowing “industrial policy” into the rhetorical room would provide them with yet another set of plausible excuses that the legislators they influence could use for channeling resources in directions that had neither a valid social-welfare nor a valid economic-growth rationale for government protection and assistance.

So let us not call it “industrial policy”. Let us call it “pragmatism” instead. For it is a fact that, from Hamilton through Eisenhower and a bit longer, the American government’s attitude toward the use of public power and public funds for economic development was highly, highly pragmatic—and successful.

Looking back at the economic history of the United States, there is a pattern by which again and again the U.S. economy has been redesigned. The shifts of the economy toward new growth directions have sometimes been the emergent outcomes of innumerable individual actions guided by local price signals. But at other times, and perhaps at more times, they have not. New directions have, instead, been the results of purposeful decisions, taken by government backed by powerful and often broad political forces, guided by their vision of how the economy ought to change. And once the public sector and its allies have launched a new economic space, it has then been expanded and transformed in unimaginable ways by entrepreneurial activity and energy surging into those new directions.

CPG Brands and Direct to Consumer

 After Covid influence involved?

Should CPG brands cast their DTC (Direct to Consumer) businesses in a supporting role to stores?

Sep 22, 2022    by Melissa Minkow  in Retailwire

It appears there’s been a significant tide turn for consumer packaged goods (CPG) brands in the direct to consumer (DTC) selling versus retailer distribution debate as marked by a consensus among presenters this week at Groceryshop in Las Vegas.

A year ago, the repeated objective mentioned among CPG executives focused on shifting consumers towards buying from their DTC sites in order to capture the most first-party data possible. This year, regardless of the session, the universal takeaway from the CPG success stories was to instead leverage DTC sites in a way that complements their presence on grocery store shelves and websites.

Charisse Hughes, chief brand and advanced analytics officer of The Kellogg Company, said on day one of the conference that DTC is useful for unearthing consumer insights and for experimenting.

Ms. Hughes said, however, that while DTC is where Kellogg’s innovates and tests, it does present challenges with respect to the last mile. Kellogg’s has determined that DTC sites are best for “swag and new products” so that there’s a differentiated value to the owned channel.

A similar strategy was echoed on day two by Francesca Hahn, VP of digital commerce at Mondelez, who highlighted the engaging ways shoppers can interact with the Oreo and Sour Patch Kids sites.  ... ' 

Using AI to Improve Agricultural Yields

Impressive outlines, Podcast: 

Big Data in Agriculture

August 30, 2022 / The farm-to-fork cooperative uses artificial intelligence to improve agricultural yields.

You might have seen Land O’Lakes’ dairy products on store shelves without giving much thought to how they got there, but that’s something CTO Teddy Bekele thinks about every day. While the farmers and agricultural retailers of Land O’Lakes work to produce the cooperative’s products, starting from the seeds used to grow animal feed, Teddy Bekele is focused on supporting agriculture’s “fourth revolution” — one that’s embracing technologies like artificial intelligence. On this episode of the Me, Myself, and AI podcast, Teddy explains how Land O’Lakes uses predictive analytics and AI to help farmers and other agricultural producers be more productive and make better decisions about the business of farming.

Teddy Bekele, Land O’Lakes

Teddy Bekele is the CTO of Land O’Lakes, leading the organization’s digital transformation by leveraging existing and emerging technologies to discover, implement, and deliver solutions and ecosystems. Previously, Bekele served as vice president of ag technology for WinField United. Bekele holds an MBA from Indiana University and a bachelor of science degree in mechanical engineering from North Carolina State University. His community leadership includes serving as chair of the Minnesota Broadband Task Force and the Federal Task Force on Precision Ag Connectivity, and as a board member for Stella Health, Genesys Works Twin Cities, and the Minnesota Technology Association.

AI for Leaders on LinkedIn

If you’re enjoying the Me, Myself, and AI podcast, continue the conversation with us on LinkedIn. Join the AI for Leaders group today.

Read more about our show and follow along with the series at https://sloanreview.mit.edu/aipodcast.

Subscribe to Me, Myself, and AI on Apple Podcasts, Spotify, or Google Podcasts.

Give your feedback in this two-question survey.

Transcript

Sam Ransbotham: You may have used the phrase “bet the farm,” but if you don’t work in agriculture, you might not fully appreciate what that means. On today’s episode, find out how technology can support successful farm production.

Teddy Bekele: I’m Teddy Bekele from Land O’Lakes, and you’re listening to Me, Myself, and AI.

Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of analytics at Boston College. I’m also the AI and business strategy guest editor at MIT Sloan Management Review.

Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG, and I colead BCG’s AI practice in North America. Together, MIT SMR and BCG have been researching and publishing on AI for six years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build and to deploy and scale AI capabilities and really transform the way organizations operate.

Sam Ransbotham: Shervin and I are talking today with Teddy Bekele, chief technology officer at Land O’Lakes. Teddy, thanks for joining us. Welcome.

Teddy Bekele: Thank you for having me. I’m very excited to be here.

Sam Ransbotham: I think we first met back in 2018, when you were at WinField United and we did a webinar together about data and analytics. Now you’re at the parent company, Land O’Lakes. Can you tell us about your current role?  .... 


Robots Built from Magnetic Fluid

A robot made from magnetic fluid can be made smaller, thinner, or directed to break up with special magnets, which could be useful for delivering drugs into the body

PHYSICS 16 September 2022  in New Scientist

By Karmela Padavic-Callaghan

Researchers Bring Underwater Messaging App to Smartphones

Quite Unusual Direction here, but could empower such teams.

 Researchers Bring Underwater Messaging App to Smartphones

Allen School News (University of Washington)

Kristin Osborne,  August 29, 2022

The AquaApp mobile interface developed by University of Washington (UW) researchers facilitates underwater messaging using acoustic signals. "With AquaApp, we demonstrate underwater messaging using the speaker and microphone widely available on smartphones and watches," said UW's Tuochao Chen. AquaApp users can choose among 240 preset messages corresponding to hand signals employed by divers, with the 20 most-used signals displayed for easy access; they also can screen messages through categories such as directional indicators, environmental factors, and equipment status. AquaApp features an algorithm that optimizes the bitrate and acoustic frequencies of each transmission based on certain parameters in real time, while a networking protocol shares access to an underwater network.  ... ' 

Thursday, September 22, 2022

Microfluidic Lab on a Chip

New to me.  

ACM NEWS

Bring the Laboratory With You

By R. Colin Johnson

For decades, laboratory procedures have been a popular target for automation; sequencing the human genome, for instance, would not have been feasible without it. Now the scale of automation is being reduced to individual laboratories on a chip by virtue of micron-level manipulation of fluid droplets (microfluidics)—not for complete genome sequencing (yet), but for the myriad of simpler medical procedures today performed by human technicians in full-sized laboratories worldwide.

The main contribution of the lab on a chip, so far, has been the development of medical point-of-care devices that can diagnose specific maladies in minutes, rather than requiring the capture of a blood (or other bodily fluid) sample and its transportation to a lab for analysis.

"Point-of-care diagnostic devices have proved incredibly useful in the last 20 years, in particular delivering much-needed rapid HIV and tuberculosis diagnosis to the developing world [where traditional labs are often not available]," said Maïwenn Kersaudy-Kerh, a professor in Heriot-Watt University's School of Engineering and Physical Sciences and in Scotland's Institute of Biological Chemistry, Biophysics, and Bioengineering, as well as an Honorary Lecturer in the Deanery of Biomedical Sciences of Scotland's University of Edinburgh.

A microfluidic lab on a chip consists of pipe-like micron-sized channels and reservoirs to hold droplet samples—usually blood or other body fluids—to be processed by mixing them with reagents and other chemicals needed to identify a malady. Such a lab on a chip also requires closely integraed electronics to control the processing steps.

A subset of micro-electromechanical systems (MEMS), these microfluidic implementations perform precisely the same steps as in a conventional lab, but using individual droplets of samples instead of test tubes. As a result, they use less reagents and less energy, cost less, and are faster in execution than traditional lab procedures, providing in-house results in minutes, rather than overnight (at best).

Just last month, hepatitis-C diagnoses were added to the list of applications provided by labs on chips. It was demonstrated with a working lab-on-a-chip prototype developed at Florida Atlantic University's (FAU) College of Engineering and Computer Science. According to FAU associate professor Waseem Asghar, about 80% of the 354 million people infected with Hepatitis-C worldwide will develop cancer, cirrhosis, or complete liver failure if not treated. Unfortunately, only about 20% have been identified in developed countries, and only 1% in developing countries.

Asghar said the new "disposable microfluidic lab-on-a-chip device is fully automated and offers reliable diagnoses—by changing a dye color from orange to green—in about 45 minutes and costs only about $2."

Many more medical diagnostic labs on a chip are under development. Many of those are destined not only for point-of-care and in-the-field usage, but for over-the-counter pharmacy shelves as well. This burgeoning development effort has attracted green-minded researchers to analyze the environmental impact of these one-use devices. In hospitals and clinics, for instance, the risk of infection from the body fluids inside require the devices to be incinerated after use. Since many use polymers and other fossil-derived materials, their incineration means a larger carbon footprint for the medical institutions (as burning them releases CO2). If even they are used in the home, the devices will end up in landfills, where cyanide and other toxic chemicals they contain will be released into the environment, according to Kersaudy-Kerh

"The medical waste issue is problematic, with the production of massive amount of plastics, as well as toxic chemical by-products. Now is the time to develop sustainable solutions. Researchers, manufacturers, and practitioners need to come together to design novel solutions using safer, less-polluting, and more easily degradable materials, form-factor reduction and recycling solutions," said Kersaudy-Kerh. "In our lab, we are reviewing existing and possible solutions to make diagnostic development more sustainable, in the hope to inspire and empower researchers to find alternative solutions, as well as reviewing their current practices."

One viable solution is to replace the polymers used in labs on chips with paper-based devices. Off-the-shelf pregnancy tests, for instance, already use paper-based innards within a plastic shell ... '

Linguistics and the Development of NLP

Home/Opinion/Interviews/Linguistics and the Development of NLP/Full Text

ACM OPINION

Linguistics and the Development of NLP   By The Gradient

September 9, 2022

Christopher Manning is the director of the Stanford University AI Lab and an associate director of the Stanford Human-Centered Artificial Intelligence Institute.

In this podcast, Chris Manning, an ACM Fellow, AAAI Fellow, and past president of ACL, discusses a number of topics, including tree recursive neural networks, GloVe, neural machine translation, computational linguistic approaches to parsing, and his current work, which is focused on applying deep learning to natural language processing.

Full article

Amazon Builds a Visual Conversation

 Conversation flow with a No-code environment

Amazon Is Adding Visual Conversation Builder for Amazon Lex

SEP 19, 2022   in Infoq.com

by  Daniel Dominguez

Amazon is introducing the Visual Conversation Builder for Amazon Lex, a drag and drop interface to visualize and build conversation flows in a no-code environment. The Visual Conversation Builder greatly simplifies bot design.

Amazon Lex is a fully managed artificial intelligence service with advanced natural language models to design, build, test, and deploy conversational interfaces in applications. Amazon Lex provides high-quality speech recognition and language understanding capabilities. The visual builder, allows to build and manage complex conversations with dynamic paths. By adding conditions directly to Lex bot, and managing the conversation path dynamically based on user input and business knowledge, all within a no-code environment.

According to Amazon, no machine learning expertise is necessary to use Amazon Lex. Developers can declaratively specify the conversation flow and Amazon Lex will take care of the speech recognition and natural language understanding functionality. Developers provide some sample utterances in plain English and the different parameters that they would like to collect from their user with the corresponding prompts. The language model gets built automatically.

To create a bot, you will first define the actions performed by the bot. These actions are the intents that need to be fulfilled by the bot. For each intent, you will add sample utterances and slots. Utterances are phrases that invoke the intent. Slots are input data required to fulfill the intent. Lastly, you will provide the business logic necessary to execute the action. An Amazon Lex bot can be created both via Console and REST APIs.

Chatbots are majorly incorporated into messaging applications, websites, mobile applications and other digital devices for interacting as digital assistants through text or text-to-speech functionalities. They offer various benefits, such as improved efficiency of business operations, customer engagement, branding and advertisement, data privacy and compliance, payment processing and automatic lead generation and qualification. Other visual chatbot builder alternatives are IBM Watson Chatbot Builder, Microsoft Bot Framework, Meta Bots for Workplace, among others.   ... '

A Look at AI

From the Gartner Blog:

When AI is Really AGF (Artificial Gut Feel)     By Anthony J. Bradley | September 20, 2022

Human Interviewer: “Do you prefer dogs or cats?”

Randy the Robot, “Yes, I’m very familiar with their pixel patterns.”

In the words of former U.S. Secretary of Defense, Donald Rumsfeld, “There are known knowns, known unknowns and unknown unknowns.” He positioned unknown-unknowns as the most challenging situation. For AI, it is an impossible situation. AI operates best in the known-known situation. In other words, it is best to know exactly what you are looking to find. 

Known-Knowns is Where AI Accuracy is Most Accurate

AI is taking the field of radiology by storm. In 2018, Stanford created the AI CheXNeXt algorithm trained with over 100,000 chest X-rays to identify 14 pathologies. For 10 of the diseases CheXNeXt performed on par with radiologists. On one it outperformed radiologists. With three maladies the radiologists outperformed CheXNeXt. This success is only possible because we know what normal and abnormal chest X-rays look like for these pathologies. Actually, the known-known scenario applies to a large number of AI computer vision detection scenarios from recognizing defects on a manufacturing line to identifying products in a shoppers grocery cart and even diagnosing illnesses by analyzing facial features. Computer vision is one of the fastest growing sectors of AI because we know what we are looking for and deep learning is getting better and better at image recognition.   ... (more at link) 

Wednesday, September 21, 2022

Tumor Size Tracking

Seems novel application.

Engineers Develop Wearable to Monitor Tumor Size

Stanford News

Andrew Myers, September 16, 2022

Stanford University engineers have designed a wearable device that can measure the size of tumors. The researchers say the battery-powered Flexible Autonomous Sensor measuring Tumors (FAST) device can monitor cancer drug effectiveness with great accuracy. FAST uses a stretchable and flexible sensor embedded with gold circuitry and linked to a small electronic backpack to measure strain on the polymer membrane. The device can wirelessly transmit this data to a smartphone application in real time, and the backpack allows potential therapies connected to tumor size regression to be quickly ruled out or accelerated for further investigation.... ' 

The Book: Stories, Dice and Rocks that Think, by Byron Reese

 Completed reading ....  Recommended!  An excellent book that looks at a number of leaps of key technical history that have led us to today.   Should be read by everyone who seeks to understand the possibility and limits of our AI future.   A very nice positioning of how we got here, and the inherent challenges that are left. 


Stories, Dice, and Rocks That Think: How Humans Learned to See the Future--and Shape It  ...  
  by Byron Reese 

". . . Byron Reese gets to the heart of what makes humans different from all others." —Midwest Book Review

What makes the human mind so unique? And how did we get this way?   Amazon Description:

This fascinating tale explores the three leaps in our history that made us what we are—and will change how you think about our future.

Look around. Clearly, we humans are radically different from the other creatures on this planet. But why? Where are the Bronze Age beavers? The Iron Age iguanas? In Stories, Dice, and Rocks That Think, Byron Reese argues that we owe our special status to our ability to imagine the future and recall the past, escaping the perpetual present that all other living creatures are trapped in. 

Envisioning human history as the development of a societal superorganism he names Agora, Reese shows us how this escape enabled us to share knowledge on an unprecedented scale, and predict—and eventually master—the future.

Thoughtful, witty, and compulsively readable, Reese unravels our history as an intelligent species in three acts: 

Act I: Ancient humans undergo “the awakening,” developing the cognitive ability to mentally time-travel using language

Act II: In 17th century France, the mathematical framework known as 'probability theory' is born—a science for seeing into the future that we used to build the modern world

Act III: Beginning with the invention of the computer chip, humanity creates machines to gaze into the future with even more precision, overcoming the limits of our brains

A fresh new look at the history and destiny of humanity, readers will come away from Stories, Dice, and Rocks that Think with a new understanding of what they are—not just another animal, but a creature with a mastery of time itself.  ... ' 

In Amazon.

>   See also his previous book, which I also enjoyed:   "The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity"    By Byron Reese 

Pitching Your Startup to a robot.

 Found this case study of how you might pitch a new startup to a GPT-3 Based robot, called Pitchexpert.

I pitched my ridiculous startup idea to a robot VC,    By Luke Dormehl  in DigitalTrends

September 1, 2022 6:30AM

Aqua Drone. HighTides. Oh Water Drone Company. H2 Air. Drone Like A Fish. Whatever I called it, it was going to be big. Huge. Well, probably.

It was the pitch for my new startup, a company that promised to deliver one of the world’s most popular resources in the most high-tech way imaginable: an on-demand drone delivery service for bottled water. In my mind I was already picking out my Gulfstream private jet, bumping fists with Apple’s Tim Cook, and staging hostile takeovers of Twitter. I just needed to convince a panel of venture capitalists that I (and they) were onto a good thing.

There were three VCs in total. The good news was that at least one of them already loved my idea. But he still had a few pointers. I should, they suggested, focus on a niche market, whether that be athletes, office workers, or music festival attendees. They also wanted me to do a better job articulating why people should buy water from a drone rather than just picking it up at the store. Fair enough, I suppose. Not everyone can quite envision the widespread appeal of bottled water from the sky.  ... ' 

(Read the whole thing for results)  .... 

Low Code for Data Science

Low Code for Data Science

3 Reasons Why You Need Low-code Platforms For Data Science Solutions  in TowardsDataScience

Low-code ML applications help address the challenges of model maintenance, time-to-market, and talent shortage

Organizations across industries are turning to data and analytics to solve business challenges. A survey by New Vantage Partners found that 91 percent of enterprises have invested in AI. However, the same study found that just 26 percent of these firms have AI in widespread production.

Organizations are struggling to solve business challenges with AI. They find that building machine learning (ML) applications takes time and requires expensive maintenance and talent that’s in short supply. Leaders say that over 70% of data science projects report minimal or zero business impact.

Here’s how low-code ML platforms can help tackle these challenges.

What is low-code, and why this craze now?

Low-code is a software development approach that leverages a visual user interface to create applications instead of traditional hand-coding. For decades, developers built applications by writing thousands of lines of code from scratch, often round-the-clock.

Building software solutions using low-code falls somewhere in the continuum between programming from scratch and buying off-the-shelf. It brings the best of both worlds by balancing flexibility and time-to-market.

A low-code development platform (LCDP) is considered quicker to build, economical to maintain, and developer-friendly because of its visual approach.

Low-code tools empower enterprises by democratizing software development. Today, anyone with a business interest and basic technology skills can build an app using low-code technology. According to Gartner, by 2024, more than 65 percent of all app development will be on low code. Globally, the low-code market is projected to reach $187 billion by 2030.   ... ' 

Metaverse Standards Forum Established

Will this be sufficient?

The Metaverse Needs Standards, Too The big players have founded a “forum”—but will it make the place come to life any sooner?   By Michael Koziol 

When Meta (formerly Facebook) announced in October 2021 that it would be developing metaverse technologies, it prompted a flurry of speculation and attendant announcements from other companies. Beyond that, it triggered an avalanche of confusion around what exactly the metaverse is supposed to be.

Nearly a year later, the concrete details of the metaverse are as opaque as ever. The Metaverse Standards Forum, which launched on 21 June 2022, isn’t trying to wrangle those details—not directly. But the forum sees an opportunity to get everyone to sit down at the same (probably virtual) table and hash out the basic technologies needed. With a more solid foundation, the forum believes, the metaverse can better develop and evolve.

Now, the forum has announced that after two months of hashing priorities, it has a list of initial priority topics that will steer metaverse standards development in its domain working groups. The topics include both straightforward technical problems like augmented and virtual reality standards, as well as concerns around privacy, ethics, and user safety.

“I like the theory that there’s only one metaverse, and you go between different experiences within the metaverse. Because we need an analogy to the Web.”

—Neil Trevett, Metaverse Standards Forum

To be clear: The metaverse does not exist yet, and probably won’t for some years to come. But there’s enough industry interest in beginning the process toward building it—whatever it may ultimately be. So Neil Trevett, the chair of the Metaverse Standards Forum, says now is the time to start standardizing. “I think what we’re seeing, much to everyone’s surprise—including our own—is the level of interest in standards for the metaverse. I think there is a thirst, or hunger, for them.”

The Metaverse Standards Forum is being organized by the Khronos Group, a software consortium developing royalty-free standards around technologies like virtual reality, augmented reality, and vision processing. The forum began with just 35 founding members, but its roster has in two months already ballooned to 1,500.

Standardizing the Standards

The MSF isn’t a standards body, Trevett says, so much as it’s a liaison to improve coordination and trust between the big commercial metaverse interests to date—including Google, Meta, Microsoft, and others using the technologies required for the creation, care, and maintenance of virtual worlds—as well as the standards organizations that will define those technologies. ... '

Tuesday, September 20, 2022

Disentangling Quantum Facts

Overview of interest: Opinion

Disentangling the Facts From the Hype of Quantum Computing IEEE Quantum Week is a chance to celebrate progress and acknowledge the challenges    JAMES S. CLARKE  19 SEP 2022

This is a guest post in recognition of IEEE Quantum Week 2022. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Few fields invite as much unbridled hype as quantum computing. Most people’s understanding of quantum physics extends to the fact that it is unpredictable, powerful, and almost existentially strange. A few years ago, I provided IEEE Spectrum an update on the state of quantum computing and looked at both the positive and negative claims across the industry. And just as back in 2019, I remain enthusiastically optimistic today. Even though the hype is real and has outpaced the actual results, much has been accomplished over the past few years.

First, let’s address the hype.

Over the past five years, there has been undeniable hype around quantum computing—hype around approaches, timelines, applications, and more. As far back as 2017, vendors were claiming the commercialization of the technology was just a couple of years away—like the announcement of a 5,000-qubit system by 2020 (which didn’t happen). There was even what I’d call antihype, with some questioning if quantum computers would materialize at all (I hope they end up being wrong).

More recently, companies have shifted their timelines from a few years to a decade, but they continue to release road maps showing commercially viable systems as early as 2029. And these hype-fueled expectations are becoming institutionalized: The Department of Homeland Security even released a road map to protect against the threats of quantum computing, in an effort to help institutions transition to new security systems. This creates an “adopt or you’ll fall behind” mentality for both quantum-computing applications and postquantum cryptography security.

Market research firm Gartner (of the “Hype Cycle” fame) believes quantum computing may have already reached peak hype, or phase two of its five-phase growth model. This means the industry is about to enter a phase called “the trough of disillusionment." According to McKinsey & Company, “fault tolerant quantum computing is expected between 2025 and 2030 based on announced hardware roadmaps for gate-based quantum computing players.” I believe this is not entirely realistic, as we still have a long journey to achieve quantum practicality—the point at which quantum computers can do something unique to change our lives.

In my opinion, quantum practicality is likely still 10 to 15 years away. However, progress toward that goal is not just steady; it’s accelerating. That’s the same thing we saw with Moore’s Law and semiconductor evolution: The more we discover, the faster we go. Semiconductor technology has taken decades to progress to its current state, accelerating at each turn. We expect similar advancement with quantum computing.

In fact, we are discovering that what we have learned while engineering transistors at Intel is also helping to speed our quantum-computing development work today. For example, when developing silicon spin qubits, we’re able to leverage existing transistor-manufacturing infrastructure to ensure quality and to speed up fabrication. We’ve started the mass production of qubits on a 300-millimeter silicon wafer in a high-volume fab facility, which allows us to fit an array of more than 10,000 quantum dots on a single wafer. We’re also leveraging our experience with semiconductors to create a cryogenic quantum control chip, called Horse Ridge, which is helping to solve the interconnect challenges associated with quantum computing by eliminating much of the cabling that today crowds the dilution refrigerator. And our experience with testing semiconductors has led to the development of the cryoprober, which enables our team to get testing results from quantum devices in hours instead of the days or weeks it used to take.

Others are likely benefiting from their own prior research and experience, as well. For example, Quantinuum’s recent research showed the entanglement of logical qubits in a fault-tolerant circuit using real-time quantum error correction. While still primitive, it’s an example of the type of progress needed in this critical field. For its part, Google has a new open-source library called Cirq for programming quantum computers. Along with similar libraries from IBM, Intel, and others, Cirq is helping drive development of improved quantum algorithms. And, as a final example, IBM’s 127-qubit processor, called Quantum Eagle, shows steady progress toward upping the qubit count. .... '  (moreat link) 

Law and Technology

Worth a read, NFT and more.

These Are Not the Apes You Are Looking For

By Andres Guadamuz

Communications of the ACM, September 2022, Vol. 65 No. 9, Pages 20-22    10.1145/3548761

Imagine you want to stream some music. On today's Web, you would sign up for a service such as Spotify or Apple Music. These platforms have obtained copyright licenses from record companies and artists, and they offer you that music for a monthly subscription. The music streaming services are centralized intermediaries. They exist to connect musicians and fans, and in exchange they take a substantial cut of the money.

But a growing number of technology enthusiasts have a different vision, which they call Web3. To them, it "represents the next phase of the Internet and, perhaps, of organizing society.'a One of the pillars of the Web3 vision is tokenization: using representing ownership of different assets using cryptographic tokens that can be exchanged on a blockchain or other decentralized system. Only the person who knows the private key associated with a token can use or transfer it. A token can be used to represent anything, from frequent-flyer miles to hotel reservations. By transferring a token from user to user, it can record who owns an associated asset.

In a Web3 world, your music experience would be mediated not by Spotify but by tokens. Instead of signing up for a music service, you would buy a token directly from the artist. The token would represent your right to listen to the music. The token's cryptography would be tied directly into the digital rights management protecting the music, so that only token owners would be able to listen. In other words, the token living on a decentralized blockchain would let you and the artist automatically cut out the middlemen like Spotify, and maybe even record companies.

One of the sectors receiving particularly intense Web3 interest and investment is the creative industries. In this area, the tokenization push is being driven by non-fungible tokens (NFTs), cryptographic tokens that represent a unique asset. One banana is pretty much like any other banana, but a Picasso portrait and an Ai Weiwei sculpture are radically different. The tokens representing them are not interchangeable, or fungible, hence the name.

The most famous NFT project is the Bored Ape Yacht Club, whose collection of "Bored Ape" NFTs have been selling for hundreds of thousands of dollars. Each of the 9,999 Bored Ape NFTs consists of a token on the Ethereum blockchain linked to a JPEG cartoon drawing of an ape. The JPEGs were procedurally generated with different combinations of traits, including jackets, hats, and facial expressions. While they all resemble each other, each individual Bored Ape is unique, a bit like the different cards in a trading-card set

There is currently a push to move the economy in the direction of a wider use of tokens, and this is being driven mostly by a combination of Silicon Valley venture capitalists and crypto-currency holders and investors. If the funders, developers, and artists pushing NFTs and Web3 get their way, the media landscape will look very different from what it looks like now.

This might sound like a great idea, but only until you start looking in detail at how it would actually work. As soon as you do, there are serious problems at every practical level.  .... ' 

Walmart Tries out AR with Machine Learning for Clothing Trial

Experimented with this and related cosmetics approaches.  

Walmart Rolls Out AR to Try on Clothing Virtually

By Fast Company, September 19, 2022

Walmart has launched a new augmented reality feature that allows shoppers to virtually try on clothes.

The Be Your Own Model tool uses machine learning technology originally developed for topographic mapping so customers can view how apparel will fit on their bodies, along with shadows, colors, and simulated fabric draping.

Be Your Own Model runs on Walmart's iOS application, with shoppers able to upload photos to model the apparel on images of their bodies.

Walmart's Cheryl Ainoa said the process is energy efficient so customers will not run down their phones' batteries.

From Fast Company

View Full Article   

AI New Role in Science

 Note Microsoft's AI4Science.

Supercomputer Emulator: AI's New Role in Science

By IEEE Spectrum

August 29, 2022

Artificial intelligence (AI) has become an indispensable tool in many scientists' lives, such that its use by researchers now has its own moniker—AI4Science—used by conferences and laboratories. Last month, Microsoft announced its own AI4Science initiative.

In an interview, Chris Bishop, its director, discusses the evolution of the scientific method, explains Microsoft's new AI4Science initiative, training an emulator, the four paradigms of scientific discovery, and more. "We see new paradigm emerging," Bishop said. "You can trace its origins back many decades, but it's a different way of using machine learning in the natural sciences." ...

We see a very exciting opportunity over the next decade at the intersection of machine learning and the natural sciences—chemistry, physics, biology, astronomy, and so on." -Chris Bishop

From IEEE Spectrum

View Full Article