
Sunday, July 31, 2022

Using Wearable Tech to Detect COVID-19 before Onset

 Using Wearable Tech to Detect COVID-19 before Onset of Symptoms

By McMaster University (Canada), July 27, 2022

Pairing wrist-worn health devices with machine learning, researchers in Canada and Europe were able to detect COVID-19 prior to the onset of symptoms. More than 1,100 participants wore a fertility tracker that monitors respiration, heart rate, heart-rate variability, skin temperature, and blood flow during sleep. The tracker was synchronized to a mobile application that recorded activity that might affect the central nervous system, as well as potential COVID-19 symptoms.

More than 100 participants tested positive for the virus, and the tracker detected changes in all physiological markers during infection.

From McMaster University (Canada)

View Full Article  
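The detection step the study describes, spotting infection-driven shifts in markers such as heart rate and skin temperature, amounts to per-wearer baselining. Below is a minimal sketch assuming simple standard-deviation thresholds; the researchers' actual model is not described in the article, so the threshold, markers, and numbers are illustrative only.

```python
# Hypothetical sketch: flag deviations from a wearer's own physiological baseline.
from statistics import mean, stdev

def baseline(readings):
    """Per-marker mean and standard deviation from healthy-night readings."""
    return mean(readings), stdev(readings)

def is_anomalous(value, mu, sigma, threshold=2.0):
    """Flag a nightly reading more than `threshold` standard deviations from baseline."""
    return abs(value - mu) > threshold * sigma

# Example: resting heart rate (bpm) over ten healthy nights, then one elevated night.
healthy_nights = [58, 57, 59, 58, 60, 57, 58, 59, 58, 57]
mu, sigma = baseline(healthy_nights)
flagged = is_anomalous(72, mu, sigma)  # an elevated night is flagged
```

A real system would combine several markers (respiration, heart-rate variability, skin temperature) before raising an alert, but the per-user baseline is the core idea.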

Cell Computations Discovered

 Interesting direction, cell computations.

A Secret Language of Cells? Cell Computations Uncovered

By King Abdullah University of Science and Technology, July 28, 2022

The researchers' model centers on how glial cells cooperate with neurons to fuel the brain and participate in computations.

Scientists at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia and  EPFL's Blue Brain Project in Switzerland have published a theory describing a secret cell language for sharing information about the outside world.

The researchers used a computational model to confirm that metabolic pathways could plausibly encode details about neuromodulators that boost energy consumption. Neuromodulators regulate the exchange of information in the brain, and the researchers' astrocytic energy metabolism model focused on how astrocytes collaborate with neurons to drive the brain and engage in computation.

"The teams' simulations of neuromodulator-stimulated glucose metabolism in an astrocyte suggest that metabolic pathways could be capable of more information processing than we previously realized," says Pierre Magistretti, director of the KAUST Smart Health Initiative.

From King Abdullah University of Science and Technology

View Full Article          

Turing Test

Mostly agreed: it is largely about deception rather than intelligence, and it rarely provides evidence of real, reproducible intelligence in a broad context.

ACM OPINION

It's Time for AI to Retire the Turing Test

By Fortune, June 15, 2022

What to make of the strange case of Blake Lemoine? The Google AI engineer made headlines this past week when he claimed one of the company's chatbots had become "sentient." Not only does Google say Lemoine's claims are untrue, but almost every AI expert agrees that the chatbot, which Google calls LaMDA, is not sentient in the way Lemoine says it is.

Is Lemoine the inevitable result of the field's persistent fetishization of the Turing Test as a benchmark? In many actual versions of the Turing Test, humans often simply don't try that hard to stump the machine. And, in many cases, people are eager to deceive themselves into thinking the bots are real.

From Fortune  

View Full Article (May Require Paid Registration)

Coffee Tech Overview

A space I worked in for years; a brief intro from the BBC.

The tech helping to bring you your morning coffee, By Luana Ferreira

Business reporter, Brazil

For an estimated one billion people around the world, drinking coffee is a daily routine.

Yet what many coffee lovers might not know is that they are often drinking a brew made, at least in part, from Brazilian beans.

"Brazilian beans have popular characteristics, and are known for their body and sweetness," says Christiano Borges, boss of the country's largest grower, Ipanema Coffees.

"Therefore, many coffee blends in the world use our coffee as a base."

Brazil is far and away the world's largest grower of coffee beans. It accounts for more than one third of all global supplies, or 37% in 2020, to be exact. In second place is Vietnam with 17% of supplies.

Some 70% of Brazil's coffee plants are the highly-priced arabica species, used in fresh coffee. The remaining 30% are robusta, which is used primarily for instant coffee.  ... 

Saturday, July 30, 2022

Meta Goes Unsupervised for Try at Human AI

 Looking to see a good example. 

Meta’s AI Takes an Unsupervised Step Forward

In the quest for human-level intelligent AI, Meta is betting on self-supervised learning, By Eliza Strickland

Meta’s chief AI scientist, Yann LeCun, doesn’t lose sight of his far-off goal, even when talking about concrete steps in the here and now. “We want to build intelligent machines that learn like animals and humans,” LeCun tells IEEE Spectrum in an interview.

Today’s concrete step is a series of papers from Meta, the company formerly known as Facebook, on a type of self-supervised learning (SSL) for AI systems. SSL stands in contrast to supervised learning, in which an AI system learns from a labeled data set (the labels serve as the teacher who provides the correct answers when the AI system checks its work). LeCun has often spoken about his strong belief that SSL is a necessary prerequisite for AI systems that can build “world models” and can therefore begin to gain humanlike faculties such as reason, common sense, and the ability to transfer skills and knowledge from one context to another. The new papers show how a self-supervised system called a masked auto-encoder (MAE) learned to reconstruct images, video, and even audio from very patchy and incomplete data. While MAEs are not a new idea, Meta has extended the work to new domains.

By figuring out how to predict missing data, either in a static image or a video or audio sequence, the MAE system must be constructing a world model, LeCun says. “If it can predict what’s going to happen in a video, it has to understand that the world is three-dimensional, that some objects are inanimate and don’t move by themselves, that other objects are animate and harder to predict, all the way up to predicting complex behavior from animate persons,” he says. And once an AI system has an accurate world model, it can use that model to plan actions.  .... '
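The masked auto-encoder setup described above can be illustrated with a toy version of its training objective: hide most of the input, then score a reconstruction only on the hidden parts. This is a sketch of the objective, not Meta's architecture; the mean-value "predictor" below is a deliberately weak stand-in for a real encoder-decoder.

```python
# Minimal sketch of the MAE-style objective: mask most patches, reconstruct, score.
import random

def mask_patches(patches, mask_ratio=0.75, seed=0):
    """Randomly hide `mask_ratio` of the patches, as MAE-style pretraining does."""
    rng = random.Random(seed)
    hidden = set(rng.sample(range(len(patches)), int(len(patches) * mask_ratio)))
    visible = {i: p for i, p in enumerate(patches) if i not in hidden}
    return visible, hidden

def reconstruction_loss(patches, predictions, hidden):
    """Mean squared error, computed only on the masked patches (as in MAE)."""
    errs = [(patches[i] - predictions[i]) ** 2 for i in hidden]
    return sum(errs) / len(errs)

patches = [0.1, 0.4, 0.3, 0.9, 0.5, 0.2, 0.8, 0.6]   # toy 1-D "image"
visible, hidden = mask_patches(patches)
# A real model would predict the hidden patches from the visible ones; here we
# simply guess the mean of the visible patches.
guess = sum(visible.values()) / len(visible)
loss = reconstruction_loss(patches, {i: guess for i in hidden}, hidden)
```

The same recipe transfers to video and audio by masking patches in time as well as space, which is the extension the Meta papers describe.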

Drones for Stadium Security

Examples for upcoming soccer World Cup

World Cup to Use Drones to Help Protect Stadiums

By BBC News, July 26, 2022

A drone interceptor captures a fixed wing drone in mid-air.

This year's World Cup soccer tournament in Qatar will employ autonomous radar-guided interceptor drones from U.S. aerospace company Fortem Technologies for stadium security.

Fortem said its agreement with Qatar's interior ministry reflects anxieties about the threat of potential drone attacks, and its DroneHunter system can safely bring down drones in built-up locations, reducing the risks of injury.  Fortem's Timothy Bean said targets are identified using a "series of very small radars that are distributed throughout the venue, creating a complete picture of the airspace straight up into the air."

The DroneHunters shoot nets to catch target drones, which may then be carried to another site, while nets connected to parachutes can force larger drones to the ground.

Bean said the interceptors carry out their mission "a mile or so away from the venue."

BBC Full Article  

Neurotechnology and the Law: Implants

A considerable issue as neurotech advances. Some good examples below and at the link, including implants for non-medical reasons.

Neurotechnology and the Law   By Esther Shein

Communications of the ACM, August 2022, Vol. 65 No. 8, Pages 16-18  10.1145/3542816

As brain implants become more commonplace and may eventually be used for non-medical purposes, some experts believe they must be regulated.

Regulations should be considered "a natural next step," says Rajesh P. N. Rao, a professor at the University of Washington in Seattle with a background in computer science, engineering, and computational neuroscience, who earned his Ph.D. in artificial intelligence (AI)/computer vision, and used a postdoctoral scholarship to train in neuroscience.

Eventually, there will be two-way communication between doctors and the devices, with AI as an intermediary, Rao says. "In the future, that kind of device embedded with AI can look at what's happening in other parts of the brain to treat depression or epilepsy and stopping seizures and bridging an injured area of the brain or shaping the brain to be less depressed."

Efforts are under way to further the use of these devices. For example, BrainGate is a U.S.-based multi-institutional research effort to develop and test novel neurotechnology aimed at restoring communication, mobility, and independence. It is geared at people who still have cognitive function, but have lost bodily connection due to paralysis, limb loss, or neurodegenerative disease. BrainGate's partner institutions include Brown, Emory, and Stanford universities, as well as the University of California at Davis, Massachusetts General Hospital, and the U.S. Department of Veterans Affairs.

Tesla CEO Elon Musk is working on a robotically implanted brain-computer interface (BCI) system through his company Neuralink, which aims to allow the brain to communicate with a computer. Neuralink is designing what it claims is the first neural implant that would let a user control a computer or mobile device. The approach is to insert micron-scale threads that contain electrodes into the areas of the brain that control movement. Each thread is connected to Neuralink's implant, the Link.

Rao says he is not aware of any brain implants currently being used for augmentative purposes in humans to facilitate better athletic performance or for enhanced gaming skills, but the potential exists. Achieving such improvements will necessitate "much more nuanced regulations," because once that happens, "one has to think about what this device is doing, since it is being used for enhancing the capabilities of people."

Non-invasive devices already are being used to deliver electricity to the brain to improve sports performance, Rao says.  .... ' 

Friday, July 29, 2022

Selling Zero Days?

Too weird, wouldn't it be directly caught doing this?

0-days sold by Austrian firm used to hack Windows users, Microsoft says

Windows and Adobe Reader exploits said to target orgs in Europe and Central America.

DAN GOODIN  in ArsTechnica

Microsoft said on Wednesday that an Austria-based company named DSIRF used multiple Windows and Adobe Reader zero-days to hack organizations located in Europe and Central America.

Multiple news outlets have published articles like this one, which cited marketing materials and other evidence linking DSIRF to Subzero, a malicious toolset for “automated exfiltration of sensitive/private data” and “tailored access operations [including] identification, tracking and infiltration of threats.”

Members of the Microsoft Threat Intelligence Center, or MSTIC, said they have found Subzero malware infections spread through a variety of methods, including the exploitation of what at the time were Windows and Adobe Reader zero-days, meaning the attackers knew of the vulnerabilities before Microsoft and Adobe did. Targets of the attacks observed to date include law firms, banks, and strategic consultancies in countries such as Austria, the UK, and Panama, although those aren’t necessarily the countries in which the DSIRF customers who paid for the attack resided.

“MSTIC has found multiple links between DSIRF and the exploits and malware used in these attacks,” Microsoft researchers wrote. “These include command-and-control infrastructure used by the malware directly linking to DSIRF, a DSIRF-associated GitHub account being used in one attack, a code signing certificate issued to DSIRF being used to sign an exploit, and other open source news reports attributing Subzero to DSIRF.”   ....(more) ...

Multi Channel Digital Merchant

Brought to my attention: 

Martech and Digital Experience Management: The New Frontier, by John Schneider

Martech is experiencing unprecedented growth. In just the last decade, it grew 5,233%. Over the next few years, global spending on digital transformation is expected to reach $2.8 trillion. What’s driving this surge in digital strategy investment?

The majority of C-suite leaders say an improved customer experience is a top factor driving their digital transformation. More specifically, marketers’ focus has shifted from basic content management to identifying, creating and publishing personalized content at scale.

The new marketing paradigm requires real-time, 1:1 personalization, changing the messaging, content and channel at any moment to serve any individual based on their needs, while predicting future outcomes. To offer this level of personalization, marketers are investing in connected digital experience platforms, with integrated solutions for deploying immersive content experiences at scale, representing the new digital frontier.

Rise of AI and the New Frontier of Experience Management

Today, you must develop a holistic, technology-enabled content strategy in order to succeed. Consumers expect you to anticipate their needs and offer relevant suggestions before their initial contact. Fortunately, the majority of marketers understand this, with 77% acknowledging that real-time personalization is crucial to their company’s success, according to Adobe. And it certainly doesn’t hurt that personalization delivers 5x to 8x the ROI on marketing spend.

To achieve the level of personalization consumers demand, it’s imperative that marketers stay ahead of the digital evolution. Powerful martech tools, such as customer data platforms, AI-based personalization and digital asset management (DAM), help you make “next best action” decisions that future-proof your customer experience strategy.

Before we dive into the new customer experiences and marketing capabilities these platforms provide, let’s take a quick trip back in time to discover how we got here.

1990s: Modern internet formed, websites grow

Websites are built in a painstaking manner by internal IT departments over a period of years. Updates are rare and sites offer limited information and experience, mostly brochureware.

2000s: Websites go from nice-to-have to essential marketing channel

Content Management Systems (CMS) like Interwoven, Vignette and WordPress emerge with WYSIWYG capabilities, enabling marketers to create and edit content without the help of IT. But changing the experience still requires developer intervention.

2010s: Basis for the Digital Experience Platform (DXP) emerges

The focus begins to shift from managing content to managing experiences. Platforms like Adobe and Sitecore emerge, enabling marketers to build webpages using composable libraries of marketing assets, thanks to integrated DAM and content systems. Still, scale is limited by the output of any single worker and their ability to manually create content and set personalization rules for combining different assets together, based on assumptions about different personas.

Also, commerce and content remain disconnected, managed through disparate experience management and commerce solutions. This leaves customers with a disjointed, clunky experience and hinders marketers from using content to support their entire funnel.

Today: Holistic martech solutions power connected customer experiences

The marriage of omnichannel content and commerce is realized through DXPs like Episerver (now Optimizely), Sitecore and Adobe that seamlessly integrate the content and commerce sides of the funnel. Moreover, AI-powered CDPs, and DAMs allow marketers to deliver individualized customer experiences at scale. Finally, marketing goals and technological capabilities work in lockstep to deliver the future of experience management.  ....

Artificial Creativity

A favorite topic; I like any method that provides a good set of useful solutions.

Artificial Creativity?    By O'Reilly, July 28, 2022

Image: An artist's palette of oil paints, held by a robot.

Works of art really have two sources: the idea itself and the technique required to instantiate that idea.

There's a puzzling disconnect in the many articles being written about DALL-E 2, Imagen, and other increasingly powerful tools for generating images from textual descriptions. It is common to read articles that talk about AI having creativity, but that is not the case at all.

As with the discussion of sentience, authors are being misled by a very human will to believe. And in being misled, they are missing out on what is important.

From O'Reilly

 View Full Article       

Considering the Uncanny Valley

Often considered; we did so in building early interactions with consumers.

Crossing the Uncanny Valley,    By Logan Kugler

Communications of the ACM, August 2022, Vol. 65 No. 8, Pages 14-15    10.1145/3542817

In 1970, robotics expert Masahiro Mori first described the effect of the "uncanny valley," a concept that has had a massive impact on the field of robotics. The uncanny valley, or UV, effect, describes the positive and negative responses that human beings exhibit when they see human-like objects, specifically robots.

The UV effect theorizes that our empathy towards a robot increases the more it looks and moves like a human. However, at some point, the robot or avatar becomes too lifelike, while still being unfamiliar. This confuses the brain's visual processing systems. As a result, our sentiment about the robot plummets deeply into negative emotional territory.

Yet where the uncanny valley really has an impact is on how humans engage with robots in modern times, an impact that has been proven to change how we see human-like automatons.

In a 2016 research paper in Cognition, Maya Mathur and David Reichling discussed their study of human reactions to robot faces and digitally composed faces. What they found was that the uncanny valley existed across these reactions. They even found that the uncanny valley effect influenced whether or not humans found the robots and digital avatars trustworthy.

"How the uncanny valley has already impacted the design and direction of robots is clear; it has slowed progress," says Karl MacDorman, a professor of human-computer interaction at Indiana University–Purdue University Indianapolis (IUPUI). "The uncanny valley has operated as a kind of dogma to keep robot designers from exploring a high degree of human likeness in human–robot interaction."

To MacDorman and others, the uncanny valley must be dealt with in order to accelerate the adoption of robots in social settings.

More Human, More Problems

For clues as to why, researchers Christine Looser and Thalia Wheatley, then both of Dartmouth College, in 2010 evaluated human responses to a range of simulated faces. The faces ranged in realism from fully human-like to fully doll-like. The researchers found participants stopped viewing a face as doll-like and considered it human when it was 65% or more human-like.

Companies that develop robots now consider findings like this and take active steps to stop the UV effect from impacting how the market receives their technology. One way they do that is by sidestepping the uncanny valley entirely, says Alex Diel, a researcher at Cardiff University's School of Psychology who studies the uncanny valley effect.

"Many companies avoid the uncanny valley altogether by using a mechanical, rather than a human-like, appearance and motion," says Diel. That means companies intentionally remove human-like features, like realistic faces or eyes, from robots—or engineer their movements to be clearly non-human.

One example of this approach is the Tesla Bot, a concept robot unveiled by the electric car manufacturer. While humanoid, the robot has been designed without a face, which makes certain the human brain's facial processing systems will not see it as a deviant version of a human face, says Diel.

Another way companies mitigate the effect of the uncanny valley is by designing robots to be cartoon-like, which helps them appear humanlike and appealing, without becoming too realistic. Diel points to Pepper, a congenial-looking robot manufactured by SoftBank Robotics, as a product that takes this route.

"Cuteness can't be overrated," says Sarah Weigelt, a neuropsychologist researching neural bases of visual perception at the Department of Rehabilitation Sciences of TU Dortmund University in Germany. "If something is cute, you do not fear it and want to interact with it."

If companies can't make a robot cute, they'll often make obvious that it's not a human in some other way. Some companies do this by changing skin tones to non-human colors, or leaving mechanical parts of a robot's body intentionally and clearly exposed, Weigelt says. This averts any confusion that this strange object could be human, sidestepping the UV effect.

While companies work hard to avoid falling into the valley, sometimes they try to pass through the valley and climb out the other side by making robots indistinguishable from humans. However, this presents its own set of problems, says MacDorman.   .... ' 

Thursday, July 28, 2022

Google AR Glasses Update

 Will this open the door to new augmented reality (AR) experiences?

Google Starts Real-World Testing for Augmented Reality Glasses That May Employ North Focals Tech

ERIC HAL SCHWARTZ on July 21, 2022 at 7:00 am in Voicebot.ai

Google will start real-world testing of augmented reality glasses for translation and navigation next month. Several dozen testers will walk around wearing prototype AR glasses with inset lens displays, microphones, and cameras for limited activities like translating text and providing audio directions. The description suggests the device extends and improves on the technology Google acquired in 2020 from North, the makers of Focals by North smart glasses.

GOOGLE AR

Google has spent years developing AR devices, code-named Iris, according to some rumors. Recent reports suggest a 2024 release for the device, which may hide its tech enough to look like regular prescription glasses, a goal North was working toward before getting acquired. The upcoming tests are circumscribed to things like translating menus and displaying directions. For privacy, the prototypes won’t even allow for photos or video recordings despite the available hardware.

The version Google releases will likely be more ambitious in employing video and photographic services. The microphone will presumably connect wearers to Google Assistant, though the operating system is still unknown. Google’s vice president of Labs Clay Bavor is still running the AR and VR division, following his Google Cardboard and Daydream VR platform development and is said to be working with Google Assistant creator Scott Huffman on the project too. That Google is moving to tests beyond controlled environments is certainly an encouraging sign for those hoping to see a device that will permanently wipe out bad memories of Google Glass.   .... ' 

Watching Humans to Learn

Robots can be taught by watching humans, carefully. I have dealt with lots of these.

 Robots Learn Household Tasks by Watching Humans

Carnegie Mellon University School of Computer Science

Aaron Aupperle, July 20, 2022

Carnegie Mellon University's Shikhar Bahl, Deepak Pathak, and Abhinav Gupta developed the In-the-Wild Human Imitating Robot Learning (WHIRL) algorithm to teach robots to perform tasks by observing people. WHIRL enables robots to gain knowledge from human-interaction videos and apply that data to new tasks, making them well-suited to learning household chores. The researchers outfitted a robot with a camera and the algorithm, and it learned to complete more than 20 tasks in natural environments. In each case, the robot watched a human execute the task once, then practiced and learned to complete the task by itself. "Instead of waiting for robots to be programmed or trained to successfully complete different tasks before deploying them into people's homes, this technology allows us to deploy the robots and have them learn how to complete tasks, all the while adapting to their environments and improving solely by watching," Pathak explained.  ... ' 
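The watch-once-then-practice loop described for WHIRL can be caricatured as seeding a policy from a single demonstration and then refining it by trial and error in the robot's own environment. The sketch below is a loose, hypothetical illustration of that loop; the real algorithm learns from video and is far richer than this one-dimensional hill climb.

```python
# Toy "watch once, then practice" loop: start from the demonstrated action and
# hill-climb on a task score. Purely illustrative, not the WHIRL algorithm.

def practice(demo_action, evaluate, steps=50, lr=0.5):
    """Refine a demonstrated action by local search on the task score."""
    action = demo_action
    for _ in range(steps):
        for candidate in (action - lr, action + lr):
            if evaluate(candidate) > evaluate(action):
                action = candidate
    return action

# Toy task: the robot saw a drawer opened to position 10.0, but in its own
# environment the optimum is 12.0; practice closes the gap.
score = lambda a: -abs(a - 12.0)
learned = practice(10.0, score)
```

The point the researchers emphasize survives even in this caricature: the demonstration only seeds the behavior, and the robot improves by acting in its own environment.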

Increasing Retail Sales via Product Discovery

 Seems obvious, what are the most useful details?   Google has been on this for some time. 

Increasing Sales via AI Product Discovery

July 28, 2022, Supply ChainBrain

Roland Goassage, SCB Contributor

The pandemic has had long-lasting effects on the supply chain. Product demand is up, e-commerce shopping has increased, and more consumers are returning items purchased online. To cope with these trends, many retailers are over-ordering and accelerating their order and re-order lead times. 

In 2021, PBS found that a pair of shoes took 80 days to get from Asia to retailers in North America — double the time it took pre-pandemic. Around the same time, a Utah-based retailer reported inventory levels of just 55% compared to normal, all due to freight delays.

One key to helping retailers mitigate supply chain issues today is a product-discovery platform driven by artificial intelligence. Through the use of features made possible by AI and machine learning, retailers can maintain a high-quality user experience by delivering quick, personalized and relevant search and recommendations results. In the process, customers can still purchase products that meet their needs and are in stock.

In Deloitte’s 2022 Retail Industry Outlook, 80% of executives surveyed said consumers will prioritize stock availability over brand loyalty, driving home the role that product discovery can play in reaching sales goals. If the exact product isn’t available, appropriate items must be provided as alternatives to maintain the sale.

The search bar is where the customer journey begins. Whether they complete the purchase online, or use an e-commerce site to research a product before purchasing in-store, 40% of consumers start their search online.

A search experience that isn’t up to a customer’s expectations is the quickest and easiest way to lose a sale. A recent report from Google found that approximately $300 billion in sales are lost in the U.S. each year as a result of bad online search experiences.

The impact on the relationship with the customer can extend well beyond one bad visit. The Google report found that 85% of online shoppers view a brand differently after experiencing difficulties with product search.

The revenue lost from poor search experiences, and the potential revenue available from good ones, makes investing in a product discovery application a priority for retailers. The right search technology can help retailers provide up-to-date search results to consumers looking to purchase a product online or research their in-store purchasing options.

Also in the Google report, 90% of consumers said that an easy-to-use search function is essential when shopping on a retail site. AI-powered product discovery platforms can understand customer intent, making search and recommendation results that are hyper-personalized to each individual shopper. ..... ' 
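The in-stock fallback the article describes, serving relevant alternatives when the exact product is unavailable, can be sketched as ranking in-stock items by how many of the shopper's requested attributes they match. The catalog fields and scoring below are illustrative assumptions, not any vendor's API.

```python
# Hypothetical product-discovery fallback: rank in-stock items by attribute match.

def similarity(query, product):
    """Fraction of requested attributes the product matches."""
    matched = sum(1 for k, v in query.items() if product.get(k) == v)
    return matched / len(query)

def discover(query, catalog, limit=3):
    """Return in-stock products, best attribute match first."""
    in_stock = [p for p in catalog if p["in_stock"]]
    return sorted(in_stock, key=lambda p: similarity(query, p), reverse=True)[:limit]

catalog = [
    {"name": "Trail Runner X", "brand": "Acme", "color": "blue", "in_stock": False},
    {"name": "Trail Runner Y", "brand": "Acme", "color": "blue", "in_stock": True},
    {"name": "Road Flyer",     "brand": "Bolt", "color": "red",  "in_stock": True},
]
results = discover({"brand": "Acme", "color": "blue"}, catalog)
```

Production systems replace the attribute match with learned embeddings and per-shopper personalization signals, but the shape of the decision, never surface an out-of-stock item when a close in-stock substitute exists, is the same.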

Most Proteins Mapped with DeepMind

 New Digital Biology Advances with DeepMind

DeepMind found the structure of nearly every protein known to science

They’ll all be freely available

By Nicole Wetsman  Jul 28, 2022,

DeepMind is releasing a free expanded database with its predictions of the structure of nearly every protein known to science, the company, a subsidiary of Google parent Alphabet, announced today.

DeepMind transformed science in 2020 with its AlphaFold AI software, which produces highly accurate predictions of the structures of proteins — information that can help scientists understand how they work, which can help treat diseases and develop medications. It first started publicly releasing AlphaFold’s predictions last summer through a database built in collaboration with the European Molecular Biology Laboratory (EMBL). That initial set included 98 percent of all human proteins.

Now, the database is expanding to over 200 million structures, “covering almost every organism on Earth that has had its genome sequenced,” DeepMind said in a statement.

“You can think of it as covering the entire protein universe,” Demis Hassabis, CEO of DeepMind, said during a press briefing. “We’re at the beginning of a new era now in digital biology.”  ... '

Fraunhofer Works Audio Experience

 Emphasizing personal experience, working with Mercedes.

Fraunhofer IDMT’s audio software enables an individual listening experience

News / July 07, 2022  by Fraunhofer

Everyone hears differently – this applies inside a vehicle too. That is why Fraunhofer IDMT in Oldenburg has developed a technological concept for fast and individual sound adaptation, which has been integrated into the multimedia system of vehicles from the Mercedes-Benz Group AG. At the heart of the development is an algorithm that easily adjusts the sound of music according to passengers’ wishes. Once set, the sound is optimally adapted to listening preferences.

Fraunhofer IDMT in Oldenburg has developed a technological concept for fast and individual sound adjustment, which has been integrated into vehicles’ multimedia systems.

Important for listening comfort in a vehicle is the quality of the media system’s speech and music reproduction. Pleasant sound and better acoustic intelligibility not only contribute to comfort but can also increase safety, as the driver’s eyes stay focused on the road instead of dwelling on complex audio sub-menus.

»Studies show that people want to perceive sound individually, irrespective of age or hearing ability – inside a vehicle too. Since it’s much more difficult or even impossible to achieve personal feel-good sound with conventional equalisers, we’ve developed new algorithms and an intuitive interface,« explains Dr Jan Rennies-Hochmuth, Group Leader Personalised Hearing Systems at Fraunhofer IDMT in Oldenburg.

Simple instead of complex

Fraunhofer IDMT’s newly developed audio software allows individual sound settings without needing to get to grips with complex sub-menus or parameters. Instead, the user adjusts the sound of individual instruments once only and according to his or her listening preferences by means of an audio example. The result of this personalisation process then serves as the basis for the automatic sound adjustment of all future audio playbacks in the vehicle. Once done, readjusting the sound parameters is no longer necessary. The outcome is a better listening experience at any playback volume.
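The "adjust once, apply to all future playback" pattern described above can be sketched as storing the listener's per-band gains from a single adjustment session and applying them to every subsequent track. The band names and gain values below are illustrative assumptions; Fraunhofer's actual algorithm is not public in this article.

```python
# Hypothetical one-time sound personalisation profile, applied to every playback.

PROFILE = {"bass": 3.0, "mids": 0.0, "treble": -2.0}   # dB offsets, chosen once

def apply_profile(band_levels, profile):
    """Shift each band's level (dB) by the stored listener preference."""
    return {band: level + profile.get(band, 0.0) for band, level in band_levels.items()}

track = {"bass": -6.0, "mids": -3.0, "treble": -4.0}
personalised = apply_profile(track, PROFILE)   # reused for all future playback
```

The usability claim in the article maps onto this directly: the user never touches an equaliser again, because the stored profile is applied automatically at any playback volume.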

In addition to infotainment systems, the technology, which Fraunhofer also calls »YourSound«, enables users of other audio devices, such as smartphones or televisions, to adjust the audio playback to their own acoustic preferences in a fun way – without any knowledge of sound levels or frequencies.  ... ' 

Accessibility Challenges

 Working at the edge of  accessibility is very instructive.

For Blind Internet Users, the Fix Can Be Worse Than the Flaws

The New York Times   

Amanda Morris, July 18, 2022

Many people with disabilities say automated accessibility tools have made websites harder to navigate. In some cases, these tools have hidden widgets or incorrectly coded labels for images and buttons. Over 400 businesses using automated accessibility tools or overlays from companies like AudioEye, accessiBe, and UserWay were sued last year over accessibility issues. Further, an open letter from more than 700 accessibility advocates and Web developers has called on organizations to cease their use of accessibility tools given that the "overlays themselves may have accessibility problems" and that many who are blind or low vision already use screen readers or other software to navigate sites. Although the companies offering such tools say their products will improve over time, blind and low vision people argue that it is not fair for them to have to wait when everyday tasks increasingly require them to visit websites.   ..


From Cloud to Edge

 Considering a major rework of the smart home.

How The Migration From Cloud to Edge Powers Tomorrow’s Smart Homes

Carsten Gregersen in Datanami

The edge is here to stay. The next generation of data processing is growing from strength to strength – reaching $274 billion in value by 2025 – with major benefits including better data management and security, lower connectivity costs, and reliable, uninterrupted connection. Edge computing, in my mind, is the logical successor to cloud computing, and it’s especially exciting to consider the implications for the Internet of Things (IoT).

As devices more often process data at the edge rather than in the cloud, experts believe the evolution will pave the way for increased artificial intelligence (AI) and analytics. The one-two punch of smarter devices with faster edge connections looks set to change our interactions with devices entirely. Consider, for example, door locks with instant facial recognition or smart induction stoves that automatically change cooking temperature. Increasingly, more autonomous devices will be able to make decisions on behalf of users.

Let’s look at how the great cloud to edge migration powers the smart homes of tomorrow.

Smarter Devices, Smarter Homes

The concept of smart homes has always captured the popular imagination. However, it might not be the stuff of science fiction for much longer. Devices like Amazon’s Alexa and Google’s Home have grown rapidly in just a few short years and consumers are becoming more accustomed to smart devices within the modern home.

Smart home, or home automation, is the practice of automatically controlling appliances and devices, programming them to handle essential functions in place of routine human interaction. The connected sensors and devices are operated via IoT-supported platforms, which provide connectivity and control from anywhere in the world.

By 2025, IDC estimates that there will be more than 55 billion connected devices, with 75% of them connected to an IoT platform. And more of these devices are finding their way into the home. However, with more devices comes more data. IDC estimates that connected IoT devices will soon generate up to 73 zettabytes of data. Today’s centralized cloud networks may become overloaded with traffic due to such a spike in data. Therefore, edge computing’s distributed IT architecture may help to combat the impending data rush by transferring information to the network periphery.

In addition to storage benefits, smarter devices are also driving this migration from cloud to edge. Consider that devices are more often loaded with AI-optimized chipsets. These chips are smaller, more economical and less power-consuming. As a result, they enable devices to handle far more processes internally rather than externally, reducing the need to offload unnecessary processes to the cloud. ... ' 
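The shift the article describes, processing locally and sending less to the cloud, can be sketched as a toy edge node that forwards only summaries and anomalies over its uplink. The class, thresholds, and message shapes below are illustrative assumptions, not any vendor's API.

```python
from statistics import mean

class EdgeNode:
    """Toy edge node: processes readings locally and forwards only
    summaries and anomalies, instead of streaming raw data to the cloud."""
    def __init__(self, threshold):
        self.threshold = threshold   # deviation that counts as an anomaly
        self.buffer = []
        self.uplink = []             # stands in for messages sent to the cloud

    def ingest(self, reading):
        self.buffer.append(reading)
        baseline = mean(self.buffer)
        if abs(reading - baseline) > self.threshold:
            self.uplink.append(("anomaly", reading))

    def flush_summary(self):
        """Periodically send one compact summary instead of every reading."""
        summary = ("summary", min(self.buffer), mean(self.buffer), max(self.buffer))
        self.uplink.append(summary)
        self.buffer.clear()
        return summary
```

Feeding five readings through such a node produces two uplink messages rather than five, which is the bandwidth and storage win the article attributes to edge processing.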

Dead Spiders for Robot Grippers.

 A clever biomimicry example: the necrobotic use of dead forms.

Necrobotics: Dead Spiders Reincarnated as Robot Grippers

That microhydraulic gripper you’ve always wanted, thanks to an ex-spider

By Evan Ackerman

Bugs have long taunted roboticists with how utterly incredible they are. Astonishingly mobile, amazingly efficient, super robust, and in some cases, literally dirt cheap. But making a robot that’s an insect equivalent is extremely hard—so hard that it’s frequently easier to just hijack living insects themselves and put them to work for us. You know what’s even easier than that, though?

Hijacking and repurposing dead bugs. Welcome to necrobotics.

Spiders are basically hydraulic (or pneumatic) grippers. Living spiders control their limbs by adjusting blood pressure on a limb-by-limb basis through an internal valve system. Higher pressure extends the limb, acting against an antagonistic flexor muscle that curls the limb when the blood pressure within is reduced. This, incidentally, is why spider legs all curl up when the spider shuffles off the mortal coil: There’s a lack of blood pressure to balance the force of the flexors.

This means that actuating all eight limbs of a spider that has joined the choir invisible is relatively straightforward. Simply stab it in the middle of that valve system, inject some air, and poof, all of the legs inflate and straighten. ... ' 


Sonar and Facial Expressions

Another kind of  privacy invasion?

Cornell University researchers have developed EarIO, a wearable earphone device (earable) that can reconstruct the wearer's face using sonar.

The earable sends facial movements to a smartphone.

A speaker on either side of the earphone transmits acoustic signals to the sides of the face, and a microphone detects the echoes, which change due to facial movements as wearers talk, smile, or raise their eyebrows.

A deep learning algorithm processes the echo data and translates it back into facial expressions. The earable can communicate with a smartphone via a wireless Bluetooth connection, maintaining the user’s privacy. ... 

The image produced by an earable. The device performs as well as camera-based face tracking technology, but uses less power and offers more privacy. ... 
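The summary does not publish EarIO's signal processing, but a standard first step in any sonar pipeline is estimating echo delay by cross-correlating the transmitted signal with the microphone recording. A minimal sketch of that step, with the chirp parameters invented for illustration:

```python
import numpy as np

def echo_delay(tx, rx):
    """Estimate echo delay in samples as the lag that maximizes the
    cross-correlation between transmitted and received signals."""
    corr = np.correlate(rx, tx, mode="full")
    return int(np.argmax(corr)) - (len(tx) - 1)

# Simulated measurement: a short chirp, received 40 samples later at
# half amplitude (sample rate and sweep parameters are invented).
rate = 48_000
t = np.arange(256) / rate
tx = np.sin(2 * np.pi * (2_000 + 40_000 * t) * t)
rx = np.concatenate([np.zeros(40), 0.5 * tx, np.zeros(64)])
```

Changes in the pattern of such delays and echo strengths as the skin moves are the kind of raw feature a deep learning model could then map to facial expressions.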

From Cornell University Chronicle     

View Full Article

Wednesday, July 27, 2022

Defending Your Enterprise

Modeling your Defense with Chaos.

Defending the Enterprise

By David Geer, July 26, 2022 

Cyberattacks bring turbulence and disruption, leaving unpredictable and chaotic system failures in their wake. Organizations using cybersecurity chaos experiments simulate cyber events to uncover deficits in their cyber-resilience. Test results lead developers and engineers to repair or rearchitect applications and supporting infrastructure so they maintain security and continuity under stress.

In the context of cybersecurity, chaos is any security failure that can happen, says Kelly Shortridge, senior principal product technologist, Fastly, an edge cloud platform provider. "Security chaos testing is the practice of continual experimentation to verify that systems operate as we think to improve their resilience to attack," explains Shortridge.

"Cybersecurity chaos engineering is resiliency testing adopted to combat the ever-changing threat landscape. Chaos engineering applies new threats to systems to see what happens to an ecosystem if components fail," says Doug Saylors, partner and co-lead, cybersecurity, for ISG, a global technology research and advisory firm. "Running a penetration test or attack simulation with a zero-day exploit is the most common method of chaos testing in cybersecurity," notes Saylors.

Web applications, distributed systems, and network infrastructure, including infrastructure-as-code, break under the pressure of attacks that bring chaos. "Systems have varying response characteristics depending on the type of attack. Applications stop working or provide erroneous outputs that have downstream effects. Network and infrastructure components see significant performance spikes, which affect users negatively through increased latency or limited access to critical applications," says Saylors.

Criminal hackers are willing to cause chaos to get to the underlying data, says Jenn Bergstrom, senior technical director for Parsons X; they hope you are not monitoring closely enough to quickly notice the signs, such as packet loss. Parsons X is a group within Parsons, a digitally-enabled solutions provider. "Packet loss may happen because they send a jumbo packet with an embedded command that will affect your database. So, the chaos is more of a side effect of what they are doing," says Bergstrom.

"It's important to see how your system behaves" under such circumstances, "so you see those minor differences between standard behavior and the unexpected," says Bergstrom.

Specific attacks create lots of chaos. "The main attack space for security chaos is ransomware, even though its goal is to make money," says Shortridge. However, ransomware causes more security failures than just encrypting critical data. It locks enterprises out of systems and machines, and leads to downtime until an organization pays the ransom or restores from backups. 

Ransomware attacks are resource-intensive, requiring network bandwidth, CPU cycles, and hard drive operations to encrypt many files quickly, completing an attack before the organization has time to stop it. Ransomware can take down entire networks. It can encrypt all backups before proceeding to production data to ensure the organization cannot restore it. Any services that count on that data come to a halt. Any software with dependencies on those services suffers, and those operations cease or falter. Cybersecurity chaos experiments must evolve to meet the challenges of modern ransomware. ... ' 
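The practice Shortridge describes, continual experimentation to verify systems operate as expected, has a common shape: state a steady-state hypothesis, inject a fault, and observe whether the hypothesis survives. A minimal sketch of that loop (the toy service and helper names are my own assumptions, not a real chaos framework):

```python
import random

def run_experiment(system, steady_state, inject_fault, rollback):
    """Minimal shape of a security chaos experiment: verify the steady
    state, inject a fault, then check whether the hypothesis still holds."""
    assert steady_state(system), "system unhealthy before the experiment"
    inject_fault(system)
    try:
        return steady_state(system)      # True -> resilient to this fault
    finally:
        rollback(system)

# Toy system under test: a service with redundant backends.
service = {"backends": ["a", "b", "c"], "down": set()}
healthy = lambda s: len(s["backends"]) - len(s["down"]) >= 1
kill_one = lambda s: s["down"].add(random.choice(s["backends"]))
restore = lambda s: s["down"].clear()
result = run_experiment(service, healthy, kill_one, restore)
```

A failed experiment (the steady state not holding) is the valuable outcome: it points developers at what to repair or rearchitect before a real attacker finds it.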

Bluetooth Signals Used to Identify, Track Smartphones

Tracking Smartphones

Bluetooth Signals Can Be Used to Identify, Track Smartphones

UC San Diego News Center

Ioana Patringenaru, June 8, 2022

Engineers at the University of California, San Diego (UCSD) have demonstrated an exploit that taps Bluetooth beacon signals emitted by smartphones to track individuals. The researchers showed the signals bear a unique fingerprint, which UCSD's Nishant Bhaskar said poses a serious threat "as it is a frequent and constant wireless signal emitted from all our personal mobile devices." The fingerprint stems from manufacturing flaws in hardware that are unique to each device, which generate novel Bluetooth distortions that attackers could use to bypass anti-tracking measures. Experiments validated the feasibility of using the exploit in real-world settings, although the researchers noted it requires attackers to possess significant expertise. ... 
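The UCSD attack rests on per-device hardware imperfections, such as carrier frequency offset (CFO) and I/Q imbalance. As a toy illustration of the idea only, not the researchers' method, one can simulate devices with fixed oscillator offsets and attribute captured packets to the nearest enrolled fingerprint (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each transmitter's oscillator has a fixed, manufacturing-specific
# carrier frequency offset (CFO); values in parts-per-million are invented.
DEVICE_CFOS = {"phone_a": 3.1, "phone_b": -7.4, "phone_c": 12.9}

def observe_cfo(device, noise_ppm=0.5):
    """One noisy CFO estimate from a captured BLE packet."""
    return DEVICE_CFOS[device] + rng.normal(0, noise_ppm)

def identify(measured):
    """Attribute a packet to the enrolled device with the closest CFO."""
    return min(DEVICE_CFOS, key=lambda d: abs(DEVICE_CFOS[d] - measured))
```

Because the offset is a property of the silicon rather than of any identifier the software broadcasts, randomizing MAC addresses does not change it, which is why such fingerprints can defeat anti-tracking measures.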

Tuesday, July 26, 2022

JW Space Telescope Reshaping Astronomy

 A good non-technical piece; intro below.

Two Weeks In, the Webb Space Telescope Is Reshaping Astronomy  in QuantaMag

In the days after the mega-telescope started delivering data, astronomers reported exciting new discoveries about galaxies, stars, exoplanets and even Jupiter. ... 

Coordinating over Slack, Pascale, an astrophysicist at the University of California, Berkeley, and 14 collaborators divvied up tasks. The image showed thousands of galaxies in a pinprick-size portion of the sky, some magnified as their light bent around a central cluster of galaxies. The team set to work scrutinizing the image, hoping to publish the very first JWST science paper. “We worked nonstop,” said Pascale. “It was like an escape room.”

Three days later, just minutes before the daily deadline on arxiv.org, the server where scientists can upload early versions of papers, the team submitted their research. They missed out on being first by 13 seconds, “which was pretty funny,” said Pascale.


The victors, Guillaume Mahler at Durham University in the United Kingdom and colleagues, analyzed that same first JWST image. “There was just a sheer pleasure of being able to take this amazing data and publish it,” Mahler said. “If we can do it fast, why should we wait?”

The “healthy competition,” as Mahler calls it, highlights the enormous volume of science that is already coming from JWST, days after scientists started receiving data from the long-awaited, infrared-sensing mega-telescope.  ...  

Leap Seconds Confuse Computers

 Unknown to me ....

Leap seconds cause chaos for computers — so Meta wants to get rid of them

Facebook’s parent company wants new ways to calculate time

By James Vincent  Jul 26, 2022  in TheVerge

Since 1972 there have been 27 leap seconds: additional seconds added to the world’s common clock — Coordinated Universal Time or UTC — to account for changes in the Earth’s rate of rotation. Historically, our concept of time is defined as a fraction of the length of the solar day, but as the Earth’s rate of spin is somewhat irregular (slowing and speeding based on various factors) it means solar time and universal time tend to drift apart. So, in order to compensate, we add leap seconds. And this really confuses computers. .... '  
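Two facts behind the article can be sketched in code: looking up the cumulative TAI - UTC offset from the leap-second table, and "leap smearing", the workaround large providers use to avoid ever showing 23:59:60. The function names are my own; the table entries are the actual three most recent leap seconds.

```python
import bisect
from datetime import datetime, timezone

# The three most recent (real) leap seconds: (effective UTC date,
# cumulative TAI - UTC in seconds). 27 have been added since 1972.
LEAP_TABLE = [
    (datetime(2012, 7, 1, tzinfo=timezone.utc), 35),
    (datetime(2015, 7, 1, tzinfo=timezone.utc), 36),
    (datetime(2017, 1, 1, tzinfo=timezone.utc), 37),
]

def tai_minus_utc(t):
    """Cumulative leap-second offset (TAI - UTC) in effect at UTC time t."""
    dates = [d for d, _ in LEAP_TABLE]
    i = bisect.bisect_right(dates, t) - 1
    return LEAP_TABLE[i][1] if i >= 0 else None

def smeared(seconds_into_window, window=86_400):
    """Leap smearing: absorb the extra second gradually over a window,
    so no clock ever has to display 23:59:60."""
    return seconds_into_window / window
```

Smearing avoids the discontinuity that confuses software, at the cost of every clock in the window being slightly wrong relative to strict UTC, which is part of why Meta would prefer to drop leap seconds altogether.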

Genome Editing for Insect Research

 Designing molecular networks in insects using computer simulation!

Scientists Expand Entomological Research Using Genome Editing Algorithm

Hiroshima University (Japan)

July 22, 2022

A team of scientists from Japan's Hiroshima University (HU), the Tokyo University of Agriculture and Technology, and the RIKEN Center for Integrative Medical Sciences has developed a workflow that broadens entomological research through genome editing. The Fanflow4Insects workflow annotates the functional information of insect genes using transcribed sequence information, as well as genome and protein sequences. The team used it to generate functional annotation pipelines for the silkworm and the Japanese stick insect. "Using Fanflow4Insects, we are going to annotate insects that produce useful substances," said HU's Hidemasa Bono. "The ultimate goal of this study is to make it possible to design molecular networks in insects using computer simulation."

Deep Learning from Molten Salts

Interesting results from local university. 

Deep Learning Method Worth Its Salt

UC (University of Cincinnati) News

Michael Miller, July 22, 2022

A multi-institutional team of researchers led by the University of Cincinnati's Yu Shi has developed a novel technique for modeling the thermodynamic properties of molten salts via deep learning artificial intelligence. Shi said the researchers trained a neural network on data produced by quantum simulations, which they used to estimate the free energy of molten sodium chloride. The research, according to Shi, offers a reliable way of studying the conversion of dissolved gas to vapor in molten salts, helping to understand how impurities and solutes affect corrosion. He added that the method also could help scientists analyze the emission of potentially toxic gas into the atmosphere, which will be useful for fourth-generation molten salt nuclear reactors.  ...
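The Cincinnati work trains neural networks on quantum-simulation data to estimate free energies. As a hedged, minimal stand-in for that regression idea (not their model, data, or numbers), here is a tiny random-feature network fitted to a synthetic curve:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for quantum-simulation training data: a smooth synthetic
# "free energy" curve over one structural descriptor (numbers invented).
x = np.linspace(-1, 1, 200)[:, None]
y = 0.5 * x**3 - x

# Minimal neural-network regression: random tanh hidden features with
# output weights fitted by least squares.
W1 = rng.normal(0, 1, (1, 16))
b1 = rng.normal(0, 1, 16)
h = np.tanh(x @ W1 + b1)
W2, *_ = np.linalg.lstsq(h, y, rcond=None)
mse = float(np.mean((h @ W2 - y) ** 2))
```

The appeal of the approach in the article is the same as in this toy: once trained on expensive quantum calculations, the network gives cheap estimates at new conditions.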

Chip Processing and Classification

 Impressive speed for detecting and classifying images

Chip Processes, Classifies Nearly Two Billion Images per Second

Penn Engineering Today

Melissa Pappas, June 1, 2022

University of Pennsylvania (Penn) engineers have designed a 9.3-square-millimeter chip that can detect and classify images in less than a nanosecond. The chip directly processes light received from objects of interest using an optical deep neural network. "Our chip processes information through what we call 'computation-by-propagation,' meaning that unlike clock-based systems, computations occur as light propagates through the chip," explained Penn's Firooz Aflatouni. "We are also skipping the step of converting optical signals to electrical signals because our chip can read and process optical signals directly, and both of these changes make our chip a significantly faster technology." Penn's Farshid Ashtiani said direct processing of optical signals makes a large memory unit unnecessary. ... 
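"Computation-by-propagation" can be illustrated, very loosely, with a linear toy: if each region the light traverses applies a fixed transform, the whole cascade is equivalent to a single operator applied in one pass, with no clocked storage between stages. (The real chip is an optical deep neural network; the random matrices below are purely illustrative.)

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear stand-in for an optical network: each region the light
# traverses applies a fixed transform to the propagating signal.
layers = [rng.normal(0, 1, (8, 8)) for _ in range(3)]

def propagate(signal):
    for L in layers:
        signal = L @ signal          # no clocked storage between stages
    return signal

# With nothing clocked or stored, the cascade collapses to one operator
# applied as the signal passes through.
combined = layers[2] @ layers[1] @ layers[0]
x = rng.normal(0, 1, 8)
```

The absence of clocked intermediate steps and of optical-to-electrical conversion is what lets the physical chip classify at sub-nanosecond timescales.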

Synthetic Data in Machine Learning: What, Why, How?

Podcast of interest,  via Vincent Granville:  

Synthetic Data in Machine Learning: What, Why, How?

Published on July 25, 2022

In this episode, Nicolai Baldin (CEO) and Simon Swan (Machine Learning Lead) of Synthesized are welcoming the founder of Data Science Central and MLTechniques.com Vincent Granville to discuss synthetic data generation, share secrets about Machine Learning on synthetic data, key challenges with synthetic data, and using generative models to solve issues related to fairness and bias.

Contents

0:00 – Introductions

3:24 – How did you become interested in synthetic data?

5:36 – How does the corporate world interact with synthetic data?

8:31 – Problems that synthetic data can help solve

18:55 – Synthetic datasets used by corporations

27:55 – What is driving the interest to synthetic data?

31:21 – How would you define what synthetic data actually is?

38:43 – Creating and sharing high quality synthetic data

41:58 – What criteria should be used to measure synthetic data?

46:02 – Challenges in scaling from standalone tables to databases

49:38 – Data coverage concept and its applications

51:30 – Using synthetic data to help solve biases

57:13 – Fire round

1:00:53 – Conclusions

View podcast here.    .... ' 
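For a flavor of the podcast's subject, one crude but common baseline for synthetic tabular data is to fit a Gaussian to the real table and sample rows that preserve the columns' means and correlations. All data below is invented; production tools like the one discussed use far richer generative models.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_sampler(real):
    """Fit a multivariate Gaussian to a numeric table and return a sampler:
    a crude baseline generator preserving column means and correlations."""
    mu = real.mean(axis=0)
    cov = np.cov(real, rowvar=False)
    return lambda n: rng.multivariate_normal(mu, cov, size=n)

# "Real" table: two correlated columns (invented stand-ins for age, income).
age = rng.normal(40, 10, 5000)
income = 1000 * age + rng.normal(0, 5000, 5000)
real = np.column_stack([age, income])
synthetic = fit_sampler(real)(5000)
```

The synthetic rows reproduce the statistical structure without containing any original record, which is the privacy argument for synthetic data that the episode explores.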

Monday, July 25, 2022

Gartner on Hype Cycle for Blockchain and Web3 and NFT

 Spaces I have been asked about of late. Click through to see: 

Gartner Hype Cycle for Blockchain and Web3, 2022

By Avivah Litan | July 22, 2022

We just published our Hype Cycle for Blockchain and Web3, 2022. Crypto and token prices crashed in 1H22, but coin prices should not be conflated with technology value. Consumer apps like NFT games and commerce are driving innovation as enterprises gradually begin to realize business value. A tipping point in adoption will soon be reached, as risks are managed proactively.  ... '  

Could AI Replace Therapists?

 Likely, especially for gathering and arranging pertinent information. 

Could AI Replace Therapists?

By Psychology Today,  July 13, 2022

Therapy is most often based on eliciting a self-healing process from the patient. The role of therapists is to stabilize patients until they heal themselves.

It is very likely that in the near future, artificial intelligence (AI) could be programmed to serve as an effective mental cast that can permit self-healing to occur. In fact, most parts required for such AI therapy are already under development ...

We are close to developing AI to a sufficient degree that would permit it to provide effective psychological therapy.... 

From Psychology Today

View Full Article  

Detecting, interpreting Irony

 I remember some very early work where we considered the detection of irony in consumer reactions, aka 'opinion mining', which often includes irony and is often linked to humor.

Irony machine: Why are AI researchers teaching computers to recognize irony?

by Charles Barbour, The Conversation   in TechExplore

What was your first reaction when you heard about Blake Lemoine, the Google engineer who announced last month the AI program he was working on had developed consciousness?

If, like me, you're instinctively suspicious, it might have been something like: Is this guy serious? Does he honestly believe what he is saying? Or is this an elaborate hoax?  Put the answers to those questions to one side. Focus instead on the questions themselves. Is it not true that even to ask them is to presuppose something crucial about Blake Lemoine: specifically, he is conscious?

In other words, we can all imagine Blake Lemoine being deceptive.  And we can do so because we assume there is a difference between his inward convictions—what he genuinely believes—and his outward expressions: what he claims to believe.

Isn't that difference the mark of consciousness? Would we ever assume the same about a computer?

Consciousness: 'The hard problem'

It is not for nothing philosophers have taken to calling consciousness "the hard problem." It is notoriously difficult to define.

But for the moment, let's say a conscious being is one capable of having a thought and not divulging it.

This means consciousness would be the prerequisite for irony, or saying one thing while meaning the opposite. I know you are being ironic when I realize your words don't correspond with your thoughts.

That most of us have this capacity—and most of us routinely convey our unspoken meanings in this manner—is something that, I think, should surprise us more often than it does.

It seems almost distinctly human.  Animals can certainly be funny—but not deliberately so.   What about machines? Can they deceive? Can they keep secrets? Can they be ironic?

AI and irony

It is a truth universally acknowledged (among academics at least) that any research question you might cook up with the letters "AI" in it is already being studied somewhere by an army of obscenely well-resourced computational scientists—often, if not always, funded by the U.S. military.

This is certainly the case with the question of AI and irony, which has recently attracted a significant amount of research interest.

Of course, given that irony involves saying one thing while meaning the opposite, creating a machine that can detect it, let alone generate it, is no simple task.

But if we could create such a machine, it would have a multitude of practical applications, some more sinister than others.  In the age of online reviews, for example, retailers have become very keen on so-called "opinion mining" and "sentiment analysis," which uses AI to map not merely the content, but the mood of reviewer's comments.

Knowing whether your product is being praised or becoming the butt of the joke is valuable information.   Or consider content moderation on social media. If we want to limit online abuse while protecting freedom of speech, would it not be helpful to know when someone is serious and when they are joking?

Or what if someone tweets that they have just joined their local terrorist cell or they're packing a bomb in their suitcase and heading for the airport? (Don't ever tweet that, by the way.) Imagine if we could determine instantly whether they are serious, or whether they are just "being ironic."  ....' 
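One reason irony detection is tractable at all is that irony often flags itself through a polarity clash: positive sentiment words applied to a stereotypically negative situation. A deliberately naive sketch of that single cue (the word lists are invented, and real systems use learned models rather than lexicons):

```python
# Tiny invented lexicons; a real system would learn these from data.
POSITIVE = {"love", "great", "wonderful", "fantastic", "perfect"}
NEGATIVE_SITUATIONS = {"stuck", "traffic", "monday", "rain", "delayed", "broken"}

def polarity_clash(text):
    """Toy irony cue: positive sentiment words applied to a stereotypically
    negative situation, as in "I just love being stuck in traffic"."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & POSITIVE) and bool(words & NEGATIVE_SITUATIONS)
```

The gap between this heuristic and genuine understanding is exactly the article's point: detecting a surface clash is easy, while knowing what the speaker actually believes is not.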

How Cornerstone AI is making data ‘right’ for healthcare industry

Healthcare Data AI

How Cornerstone AI is making data ‘right’ for healthcare industry

Shubham Sharma (@shubham_719) in Venturebeat

July 18, 2022 11:30 AM


AI has the potential to transform healthcare. Be it predicting the risk of terminal diseases or developing novel drugs, companies are leveraging data-driven algorithms to improve the quality of patient care in every way possible. The use cases are only expected to grow from here, but there are also certain hurdles along the way. Case in point: The lack of high-quality datasets.

Health organizations cumulatively generate about 300 petabytes of data every single day. This information is stored across systems yet not effectively used due to poor preparation. Basically, data teams, which tend to create manual rules for data cleanup, are struggling to keep up with the growing volumes of information. They spend most of their time, almost 80%, on getting the data ready – making it accurate, connected and standardized – rather than actually exploring and analyzing it for potential, life-saving AI applications. 


Cornerstone AI’s comprehensive solution

To solve this, San Francisco-based Cornerstone AI has launched a solution that automatically characterizes, harmonizes, and cleans healthcare data in a fraction of the time taken by traditional methods. The company also announced it has raised $5 million in seed funding.

According to Cornerstone, the algorithm of its platform uses a combination of custom Python and R code to scan each table and data point — inferring their structure and validity — and then organizes the tables for analysis while removing and correcting all notable errors.  ....'
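Cornerstone's platform is proprietary, but as a toy version of what "characterizing" a single table column might involve, here is a sketch that infers a column's dominant type and separates clean values from flagged errors. The function name, missing-value tokens, and logic are all my own assumptions.

```python
def characterize(values, missing_tokens=("", "NA", None)):
    """Infer a column's dominant type and separate clean values from
    errors: a toy version of automated table characterization."""
    def kind(v):
        if v in missing_tokens:
            return "missing"
        for name, cast in (("int", int), ("float", float)):
            try:
                cast(v)
                return name
            except (TypeError, ValueError):
                pass
        return "str"

    kinds = [kind(v) for v in values]
    present = [k for k in kinds if k != "missing"]
    dominant = max(set(present), key=present.count)
    clean = [v for v, k in zip(values, kinds) if k == dominant]
    flagged = [v for v, k in zip(values, kinds) if k not in (dominant, "missing")]
    return dominant, clean, flagged
```

Run over every column of every table, this kind of inference is what replaces the manual cleanup rules that consume so much of data teams' time.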

Sunday, July 24, 2022

NFTs and Blockchain for Engineering

Upcoming community pieces that look to be of interest; join in. Below is just an intro; click through for registration and more detail. If I can, I plan to attend.

Does the engineering world need to care about NFTs and blockchain?  in Venturebeat. 


If the first question out of people’s mouths about either blockchain or NFTs is “What exactly are they?” the second question inevitably is “Is this something that I actually need to care about?”

If you’re an artist who makes a living selling art, the answer might be yes. But if you’re in the engineering world, the potential benefits are far less clear.

If you’re an artist, you can see where this might be useful: You can put your art up for sale, and a collector can purchase an NFT that says that they are the official owner of that piece of artwork. (In theory, anyway – more on this in a bit.)

So, how might this apply to the engineering software space?

‘I did that’

Unlike the individuals in the art world, people in the engineering world don’t generally create 2D files and 3D files just for the purpose of artistic expression and then try to sell them. They’re creating these assets because they intend to do something with those files, like designing and manufacturing an actual real-world object.

So, strike one: not much utility to be found for NFTs and blockchain on that particular front. But maybe there’s some other application in the engineering space, perhaps around intellectual property and documentation of the product development process?

Picture a manufacturer that is designing an innovative new bicycle. One engineer is in charge of the bike frame. As they go through the development process, each time they make a new CAD file, they check it into the blockchain so that their work on this new product is documented on the blockchain. Years down the line, if they need to prove their work on this product for some reason, a permanent, publicly available record is there for all to see, and the engineer can say “I did that.”
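The check-in scenario above can be sketched as a simple hash chain, where each entry commits to the file's digest and to the previous entry, so history cannot be silently rewritten. This is a toy ledger, not a real blockchain (no consensus, no distribution), and the record fields are my own assumptions.

```python
import hashlib
import json

def add_block(chain, filename, content):
    """Append a CAD check-in: each entry commits to the file's digest
    and to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "file": filename,
        "digest": hashlib.sha256(content).hexdigest(),
        "prev": prev,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return record

def verify(chain):
    """Recompute every link; any tampering breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("file", "digest", "prev")}
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True
```

Note that an ordinary PLM system's audit trail provides much the same guarantee for this use case, which is part of the article's skepticism.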

It sounds like a nifty use case. But alas, here is where the “theoretical” benefits of blockchain quickly run into some buzzsaws.

Not so fast…

For starters, while there are a small handful of companies making technology that does this type of thing, they are few and far between. In terms of the innovation-adoption curve, this field is really still in its infancy – which is surprising since the underlying tech has been around for almost 15 years.

That’s not to say that there aren’t plenty of enthusiastic voices out there around the potential of blockchain and NFTs in the engineering world – but not all of them have saintly motives. If someone has purchased a boatload of Bitcoin or NFTs for purely speculative reasons, they’re likely to champion anything having to do with blockchain and its potential because it indirectly benefits the investments they’ve already made.

Then there’s the matter of blockchain’s environmental impact. Even the newer, more evolved blockchain protocols like Ethereum still consume gargantuan amounts of energy [subscription required] as they record and validate transactions across a distributed and decentralized ledger. With the “proof of stake” mechanism for validating entries, this energy usage is less of a problem, although it has other issues. The bottom line, however, is that, on a fast-warming planet [subscription required] staring down a climate emergency, blockchain is a hard technology to embrace unless transactions can be made radically more efficient.

All of this is to say nothing of the fact that much of what blockchain could enable the engineering space to do is already possible to do – and much more easily accomplished – via existing methods. Want to show proof of prior work on a product? Any time you check your CAD file into some kind of CAD or PLM system, there’s an audit trail of who accessed, created or modified the file. Need an indisputable patent? There’s a patent office that decides those types of things.

While it might be nice to get an “official” NFT saying that the work in a CAD file is officially yours, that NFT doesn’t mean you own it in any legal sense. It’s just a digital signature that means something in “the world of blockchain” but doesn’t necessarily mean something legally. ....  '

Quantum Computer Cools Itself by Performing Calculations

Still a small-scale experiment; can it be scaled up?

 Quantum Computer Cools Itself by Performing Calculations

New Scientist, Karmela Padavic-Callaghan, July 1, 2022

A small diamond-based quantum computer developed by researchers at Germany's University of Stuttgart performs a sequence of mathematical operations to cool itself. The computer comprises three qubits in a diamond that is missing two carbon atoms, one replaced by a nitrogen atom and the other left as a vacancy. The qubits were hit with microwaves, which altered the spin of either the nucleus of the nitrogen atom or the nuclei of the two carbon atoms near the vacancy. Such manipulations of the qubits act as logic gates, with a sequence of gates used to change the computer's energy and cool it. The researchers said this algorithmic cooling was very close to the theoretical limit of maximum cooling efficiency.  ... ' 
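The Stuttgart scheme is specific to NV centers in diamond, but the textbook idea of algorithmic cooling can be simulated classically: a reversible permutation of a three-qubit product state concentrates polarization onto one qubit, boosting its bias from ε to ε(3 − ε²)/2. A generic sketch of that compression step, not the researchers' algorithm:

```python
import numpy as np

def bias(p, qubit):
    """Polarization P(0) - P(1) of one qubit in an 8-state distribution."""
    bits = (np.arange(8) >> (2 - qubit)) & 1
    return p[bits == 0].sum() - p[bits == 1].sum()

eps = 0.1                                  # initial polarization of each qubit
p1 = np.array([(1 + eps) / 2, (1 - eps) / 2])
p = np.kron(np.kron(p1, p1), p1)           # product state of three qubits

# One reversible compression step: swapping |011> and |100> moves the
# larger population into a qubit-0 "0" state, cooling qubit 0.
p[[3, 4]] = p[[4, 3]]
```

Because the step is a permutation of basis states, it is reversible (a valid quantum logic gate), yet it still lowers one qubit's effective temperature at the expense of the others.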

Should a Black Box be Transparent?

Was once met with exactly this dilemma. 

When Should a Black Box Be Transparent?   An opinion piece with further comments. .... 

By George V. Neville-Neil

Communications of the ACM, August 2022, Vol. 65 No. 8, Pages 23-24    10.1145/3544550

Dear KV (Kode Vicious),

We have been working with a third-party vendor that supplies a critical component of one of our systems. Because of supply-chain issues, they are trying to "upgrade" us to a newer version of this component, and they say it is a drop-in replacement for the old one. They keep saying this component should be seen as a black box, but in our testing, we found many differences between the original and the updated part. These are not just simple bugs but significant technology changes that underlie the system. It would be nice to treat this component as a drop-in replacement and not worry about this, but what I have seen thus far does not inspire confidence. I do see their point that the API is the same, but I somehow do not think this is sufficient. When is a component truly drop-in and when should I be more paranoid?

Dropped In and Out

Dear Dropped,

Your letter brings up two thoughts: One about recent events and one about the eternal question, "When should a black box be transparent?" While we all know the pandemic has caused incredible amounts of death and destruction to the planet, and the past two years have brought unprecedented attention on the formerly very boring area of supply chains, the sun comes up and the world still spins—which is to say the world has not ended, yet. Supply-chain issues are both real and the world's latest excuse for everything. It is as if children were telling their teachers, "The supply chain ate my homework."

At this point, KV is quite skeptical when a vendor's first excuse is supply-chain issues. Of course, that skepticism will not help unless you have a second supplier for whatever you are buying, which you can use to bludgeon your errant vendor.

Another eternal question, "When is a replacement not a replacement?" is one that will plague us in technology forever. The number of people who believe they can treat whatever they are providing as an opaque box with a fixed API is, unfortunately, legion. This belief comes from the physical world, in which a box is a box, and a brick is a brick, and why would you care if your brick is made from a different material anyway?

Here you see the problem: The metaphor breaks down in the physical world as quickly as it would in the realm of software and hardware. Two bricks may both be red, and therefore present an identical look and feel to the external user, but if they are made of different materials, then they have different qualities—for example, in strength, but let's also consider something less obvious, such as their weight. The number of bricks that can be stacked on top of each other to build a wall depends on their weight, as well as their strength. If you use heavy but weak bricks, well, you can imagine how this goes, and if you cannot, try it—just do not tell your health-insurance plan KV suggested this. And let's say you do not build the wall out of weak and heavy bricks, but years later you replace some damaged bricks with newer, heavier, and weaker bricks. The key here is you would not want to stand near that wall.

A topic KV keeps coming back to is the malleability of software. I keep returning to this because it is this malleability that often results in the catastrophic failures of software and systems engineering. You mentioned you saw timing problems with the new component. I can imagine few situations more treacherous than a change in the timing of a critical component. Timing bugs are already some of the most difficult to track down and fix, and if the timing is off in a critical component, that is likely to affect the system, so good luck debugging that. Those who wish to stand on the "API as a contract" quicksand are welcome to do so, but I am not willing to throw them a rope.  ... ' 

Can Computers Be Mathematicians?

Would make sense; hypotheses could be automatically spun off into tests as needed. 

Can Computers Be Mathematicians?    By The Joy of Why  in ACM Opinion, July 5, 2022

"I think that one of the things that's happening in this collaboration is that computer scientists are beginning to learn more about the nature of what modern mathematics actually looks like." -Kevin Buzzard

For the last few years, researchers and amateurs all over the world have worked together to translate the essential axioms of mathematics into a programming language called Lean. Armed with this knowledge, theorem-proving programs that understand Lean have begun helping some of the world's greatest mathematicians verify their work.

In an interview, Kevin Buzzard, professor of pure mathematics at Imperial College London, talks about the effort to "teach" math to Lean—and how projects like this one could shape the future of mathematics.
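For readers who have not seen Lean, a formalized statement looks something like this toy example (hypothetical, for illustration only; not drawn from the article):

```lean
-- A toy example of the kind of statement one formalizes in Lean:
-- addition of natural numbers is commutative.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

A proof assistant checks that the term on the right really proves the statement on the left; at mathlib scale, thousands of such lemmas chain together to verify serious mathematics.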

From The Joy of Why   

View Full Article

Saturday, July 23, 2022

Forestry Management Efforts at Purdue

More on forestry and LiDAR, and Purdue's work there, integrated with AI. Considerable detail.

Integration Leads to Leap in Tech for Forest Inventory, Management

Purdue University News,  Elizabeth K. Gardner, June 6, 2022

Purdue University researchers located, tallied, and measured more than 1,000 trees in just hours by combining aerial and ground-based mobile mapping sensors and systems. The project, part of Purdue's Digital Forestry Initiative, employs manned aircraft, unmanned aerial drones, and backpack-mounted systems that integrate cameras with light detection and ranging (LiDAR) units, along with sensors including global navigation satellite systems and inertial navigation systems. The researchers also created a machine learning algorithm to analyze the data. Purdue's Songlin Fei called the scheme "a groundbreaking development on our path to using technology for a quick, accurate inventory of the global forest ecosystem, which will improve our ability to prevent forest fires, detect disease, perform accurate carbon counting, and make informed forest management decisions."  ... 

Intensive Drone Use in War

ACM TECHNEWS

In Ukraine War, a Race to Acquire Smarter, Deadlier Drones

By Associated Press, July 19, 2022  in  ACM

Ukrainian servicemen correcting artillery fire by drone at the front line near Kharkiv, Ukraine, on Saturday, July 2, 2022.   ... 

Never in the history of warfare have drones been used as intensively as in Ukraine, where they often play an outsized role in who lives and dies.

Drones are in high demand among both Ukrainians and Russians as the war continues, with both sides seeing their supply depleted, using crowdfunding to replenish their losses, and rushing to build or buy advanced drones that are more resistant to radio interference and GPS jamming.

Hundreds of drones have been shipped to Ukraine by U.S. and other Western allies, including Switchblade 600s that can fly at 70 mph, track targets using artificial intelligence, and carry tank-piercing warheads.

However, these "kamikaze" drones can fly only for about 40 minutes.

In May, the U.S. acquired 121 advanced military drones for Ukraine. Called Phoenix Ghosts, the drones can fly for six hours, destroy armored vehicles, and feature infrared cameras.

Thorsten Chmielus of the German drone company Aaronia said, "Everybody now wants drones, special drones, unjammable and whatsoever," but warns that "everybody will have millions of drones that can't be defeated."

From Associated Press

View Full Article    

Top Phone Security Threats

Useful overview, security an increasing concern.    

Here are the top phone security threats in 2022 and how to avoid them. As reported in ZDNet.

Your handset is always at risk of being exploited. Here's what to look out for.

Written by Charlie Osborne, Contributing Writer on July 23, 2022

Our mobile devices are now the keys to our communication, finances, and social lives -- and because of this, they are lucrative targets for cybercriminals. 

Whether or not you use a Google Android or Apple iOS smartphone, threat actors are constantly evolving their tactics to break into them. 

This includes everything from basic spam and malicious links sent over social media to malware capable of spying on you, compromising your banking apps, or deploying ransomware on your device. 

The top threats to Android and iOS smartphone security in 2022:  ... '

Radio Waves for the Detection of Hardware Tampering

Protection opportunity. 

 Radio Waves for the Detection of Hardware Tampering

Ruhr-Universität Bochum (Germany),  June 7, 2022

Scientists at Germany's Ruhr-Universität Bochum (RUB), the Max Planck Institute for Security and Privacy, and information technology company PHYSEC have developed a technique that uses radio waves to monitor hardware for tampering. The radio waves can be used to detect the slightest changes in ambient conditions via a system with a sender and a receiver antenna. The transmitter emits a special signal that is reflected by walls and computer components; these reflected signals have a unique signature when they reach the receiver, which even the smallest of tampering can disrupt. RUB's Johannes Tobisch said the antennas should be placed "as close as possible to the components that require a high degree of protection," because the source of tampering is easier to identify when it is closer to the receiving antenna.... 
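The core idea—compare a live channel response against an enrolled baseline signature and flag deviations—can be sketched in a few lines. This is a minimal illustration, not the researchers' actual algorithm; the sample vectors, the cosine-similarity measure, and the threshold are all assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length signal-response vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def tamper_detected(baseline, current, threshold=0.99):
    """Flag tampering when the current channel response drifts away
    from the signature enrolled at setup time."""
    return cosine_similarity(baseline, current) < threshold

# Invented channel-response samples for illustration.
baseline  = [1.0, 0.8, 0.3, 0.9, 0.2]
unchanged = [1.0, 0.8, 0.3, 0.9, 0.2]
tampered  = [1.0, 0.1, 0.9, 0.2, 0.8]
print(tamper_detected(baseline, unchanged))  # False
print(tamper_detected(baseline, tampered))   # True
```

In practice the signature would be a much longer frequency-domain response, and the threshold would be calibrated against normal environmental drift (temperature, vibration) rather than picked by hand.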

Friday, July 22, 2022

Amazon Goes Healthcare

Natural next step?

Is Amazon on the verge of reinventing American healthcare?

Jul 22, 2022

Amazon.com yesterday said that it has reached a deal to acquire One Medical, a “technology-powered national primary care organization” that combines in-person, digital and telehealth services to care for patients.

The $3.9 billion all-cash deal, if approved by One Medical’s shareholders and federal regulators, makes clear that Amazon is serious about being a major disruptive force in the consumer healthcare market. One Medical CEO Amir Dan Rubin will remain in that position should Amazon take control.

“We think health care is high on the list of experiences that need reinvention,” said Neil Lindsay, SVP of Amazon Health Services. “Booking an appointment, waiting weeks or even months to be seen, taking time off work, driving to a clinic, finding a parking spot, waiting in the waiting room then the exam room for what is too often a rushed few minutes with a doctor, then making another trip to a pharmacy — we see lots of opportunity to both improve the quality of the experience and give people back valuable time in their days.”

Amazon has been inexorably pushing into healthcare going back to 2018 with its acquisition of Pillpack, an online pharmacy that delivers pre-sorted doses of prescribed medicines in envelopes to customers.

Two years later brought the launch of Amazon Pharmacy, which offers free unlimited two-day deliveries of prescriptions to Prime members and discounts on medications not covered by member’s insurance. ... ' 

by George Anderson in  Retailwire   .... '

Drone Superhighway?

New to me.  A likely future?   A new kind of infrastructure.

U.K. Project Will Create 165-Mile Drone Superhighway,   By New Atlas, July 21, 2022 

Skyway is expected to enable time-sensitive transportation of goods by automated drones.

The U.K. government has approved plans for a 165-mile drone superhighway that will connect towns and cities across England. Skyway, part of a broader next-generation aviation initiative, will initially be used to survey infrastructure such as roads and ports.

Centered on the town of Reading, Skyway will connect to Oxford, Cambridge, Milton Keynes, and other cities. The U.K. firm Altitude Angel is spearheading the initiative, which, like its 2020 Arrow Drone Zone project, could allow flights from any drone operator meeting specific criteria.

Skyway, which is expected to open within two years, will prevent mid-air collisions using detect-and-avoid technology from Altitude Angel. It also will allow autonomous aircraft to fly beyond the line of sight.

From New Atlas   

View Full Article

Evolution of Cybercrime

Thoughtful looks at the direction of cybercrime. With link to HP study ....

Understanding the Evolution of Cybercrime to Predict its Future  By Kevin Townsend  in Securityweek  on July 21, 2022

An analysis of the evolution of cybercrime from its beginnings in the 1990s to its billion-dollar presence today has one overriding theme: the development of cybercrime as a business closely mimics the evolution of legitimate business, and will continue to evolve to improve its own ROI.

In the early days, hacking was more about personal prestige and kudos than about making money – but the dotcom boom made people realize there's money to be made on the internet. This first phase of cybercrime loosely fits the period from 1990 to 2006.

From this simple realization, HP Wolf Security's study of The Evolution of Cybercrime (PDF report) shows an underground business that follows and mimics the overground business ecosystem – digital transformation included. "Digital transformation has supercharged both sides of the attack-defense divide – shown, for instance, by the increasing popularity of ‘as a service’ offerings," said Alex Holland, senior malware analyst and author of the report. "This has democratized malicious activity to the point where complex attacks requiring high levels of knowledge and resources – once the preserve of advanced persistent threat (APT) groups – are now far more accessible to a wider group of threat actors."   ...  '

Model Structure and States

Interesting research, technical, with potential linkages to AI. 

Technical Perspective: Model Structure Takes Guesswork Out of State Estimation  By Sayan Mitra

Communications of the ACM, February 2022, Vol. 65 No. 2, Page 110    10.1145/3505268

Communication can often be exchanged with computation in control systems. A car's computer needing to know the speed can either get the data from the speed sensor over the vehicle's communication network (bus); or it can calculate the speed from the initial speed, the history of throttle commands, using the laws of physics driving the car. In a fully deterministic world with powerful enough computers, communication may be redundant. In the real world, the degree of uncertainty in the physics can say something about the level of communication necessary. Quantifying this communication need can help principled design and allocation of network bandwidth and other resources in vehicles and other control systems.

Uncertainty or lack of information is usually measured by entropy of some flavor. Claude Shannon developed a definition of entropy in the context of engineering telephone networks. That definition uses probability distributions, not coincidentally, capturing noise in telephone channels. In contrast, topological entropy, used in studying the evolution of worst-case uncertainty in safety-critical systems, does not use probabilities at all. Instead, it measures the rate of growth of uncertainty in a system's state with time. Topological entropy of a stable system like a pendulum will be smaller than that of an unstable system like an inverted pendulum.

Why do we care about topological entropy? First, as entropy describes the rate of growth of state uncertainty (without new measurements), it should also somehow relate to the rate of measurements necessary to accurately estimate the state. In the speed sensor-estimator example, the entropy of the system would give the minimal channel capacity necessary for connecting the two, so the computer can construct accurate speed estimates with worst-case error bounds. These lower bounds hold across all algorithms and codes, and therefore, can take the guesswork out of communication network design. As more devices feed into shared networks, entropy bounds can guide allocation of bandwidth to different processes.  .... ' 
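To make the pendulum comparison concrete: for a discrete-time linear system, topological entropy works out to the sum of log2 of the magnitudes of the unstable eigenvalues of the system matrix, and this is also the minimum measurement bit rate needed for bounded state estimation. A minimal sketch (not from the article; the example eigenvalues are invented):

```python
import math

def topological_entropy(eigenvalues):
    """Entropy growth rate (bits per step) of a linear system x(t+1) = A x(t):
    the sum of log2|lambda| over eigenvalues of A with |lambda| > 1."""
    return sum(math.log2(abs(lam)) for lam in eigenvalues if abs(lam) > 1)

# Stable system (an ordinary pendulum, discretized): uncertainty shrinks,
# so the entropy is zero and no steady measurement stream is required.
print(topological_entropy([0.9, 0.5]))   # 0
# Unstable system (inverted-pendulum-like): uncertainty doubles each step,
# so the estimator needs at least one fresh bit of measurement per step.
print(topological_entropy([2.0, 0.5]))   # 1.0
```

This is the sense in which entropy bounds "take the guesswork out" of network design: any channel feeding the estimator must carry at least this many bits per step, regardless of the estimation algorithm.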

Affiliation Influences Our Fear of Data Collection

 Sciam talks fear of data collection.

Political Affiliation Influences Our Fear of Data Collection, By Scientific American, June 7, 2022

Instead of trusting or distrusting government surveillance based on which party is in power, Americans need to demand transparency into how the government as a whole is gaining access to and using their data.

Over the past year, the Federal Bureau of Investigation (FBI) made more than three million warrantless queries on the data of U.S. residents collected by both the government and private companies. A shrinking share of Americans support such warrantless government surveillance, yet we have not effectively advocated against the growing surveillance of our personal data.

That is because we are not taking a principled view on government surveillance as a whole. Instead, we are starting to see viewpoints devolve into ostracization and hatred of the "other." Our research suggests that Americans' fears about government surveillance change based on who is in power and what we fear that political party may do with our data.

From Scientific American   

View Full Article    

On Matterport

 Interesting view

Matterport:  3D for Architecture, Engineering & Construction   from Matterport

Whether you work in architecture, engineering, or construction, you'll be able to streamline documentation, 3D scan as-builts, and collaborate with ease. With Matterport, you can also reduce costs and help save the most precious commodity — your time.

Our 3D data platform is the most powerful, accurate, and quickest way to document a building or property. With a compatible camera, you can capture anything from small to large spaces — both inside and outside — with the highest level of detail.

3D scanning that fits your workflow

Our platform automatically stitches all your data together and lets you export your data into other platforms easily, saving you significant amounts of manual work and time.

Create 3D walkthroughs and guide anyone immediately to a virtual site tour.

Communicate key milestones quickly and effectively by eliminating travel time and by sharing and annotating in the model to get sign-offs.

Perform remote inspections, measure while offsite, and reduce site visits by capturing all data the first time.

Learn how Swinerton used Matterport to reduce client travel time by 100 percent and eliminate four weeks of potential project delays.

How 3D scan to BIM can reduce design and construction costs

Incorporating Matterport into your BIM (Building Information Modelling) process can help reduce virtual design and construction costs and help you win more bids.

Export your point cloud and easily share and render it within Revit® and other BIM tools.

Generate OBJ files and point clouds for as-builts and construction documentation.

Scan tight areas 10–15 times faster than with a typical lidar scanner.

Overlay your point cloud onto your BIM model to conduct verification.

Take measurements in difficult to access areas such as pipes, trusses, and ceiling beams.

Find out how we helped Perkins&Will work efficiently and remotely.

Get the accuracy you need from a single source

Matterport provides incredible accuracy and easy measurement in the platform.

Capture spaces that are accurate within 1% using a Matterport Pro2 camera and 0.1% with Leica BLK360.

Replace thousands of photos by capturing all imagery and data at once, and save time by eliminating the need to document, arrange, and label photos.

Get reflected ceiling plan images and schematic floor plans.

Eliminate registration markers or manual alignment.

Learn how the University of Wolverhampton created online campus tours for prospective students and improved building reconstruction.

Building Information Modeling

Matterport BIM files enable fast, easy, precision 3D as-builts for any space. Architects, designers and building engineers can now accurately capture the current state of any building and its contents in CAD and design software more quickly, easily, and cost-effectively than ever before.

Thursday, July 21, 2022

The Blog is Acting Strange

 Have been noting a number of odd responses and blocking posts on this blog.  Never seen that before.   Don't know if this will continue.    Let me know of your experiences.      Franz  

Webb Space Telescope Has Profound Data Challenges

Useful example to consider.

The Webb Space Telescope's Profound Data Challenges

By IEEE Spectrum, July 15, 2022

The Lagrange points are equilibrium locations where competing gravitational tugs on an object net out to zero. The James Webb Space Telescope is one of several craft currently occupying L2. The James Webb Space Telescope (JWST)'s data-collecting operations present a number of challenges, including dependence on a reliable communications subsystem.

The spacecraft sends up to 57 gigabytes (GB) of data per day back to Earth on a 25.9-gigahertz channel at up to 28 megabits per second. Data recorded by the JWST's scientific instruments is stored in the spacecraft's 68-GB solid-state drive, which can collect data for about 24 hours before reaching its limit.

Only after the spacecraft receives confirmation of a data file's receipt will it delete its onboard copy of the data to clear space.   The telescope will stay connected to Earth through the Deep Space Network, sharing limited antenna time with other deep-space missions.
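A quick back-of-the-envelope check on those figures shows why shared antenna time matters: draining a full day's 57 GB at 28 megabits per second takes roughly four and a half hours. (Illustrative arithmetic only, not flight software:)

```python
def downlink_hours(data_gb, rate_mbps):
    """Hours of antenna time needed to send data_gb gigabytes
    at a sustained rate of rate_mbps megabits per second."""
    bits = data_gb * 1e9 * 8          # gigabytes -> bits
    seconds = bits / (rate_mbps * 1e6)
    return seconds / 3600

# 57 GB/day at 28 Mbps -> about 4.5 hours of Deep Space Network time per day.
print(round(downlink_hours(57, 28), 1))  # 4.5
```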

From IEEE Spectrum

View Full Article      

Car Charging Robot

I have been looking at the supply chain aspects of EV Charging


Lawrence Ulrich,   IEEE Spectrum

The vision of Ziggy, a mobile EV charger designed to tool around parking lots like a plug-wielding valet, raises an important question: Do robots take tips?

EV owners may be happy to toss Ziggy a fiver if it can hold an open parking spot and deliver electrons to fill up the car’s battery, with no worries over fixed chargers being occupied by other cars. As importantly, site operators could lease turnkey robo-chargers without the pricey hassles of charger installation, or setting aside precious real estate for chargers that are often underutilized.

The flexible, scalable robots are the brainchild of Ziggy’s inventor, Caradoc Ehrenhalt; he’s a big David Bowie fan and founder and CEO of EV Safe Charge. The Los Angeles company plans to bring its four-wheeled robots to parking facilities, hotels, shopping and entertainment centers, fleet operators, and residential properties. Early adopters, slated for late 2023 or early 2024, include The William Vale hotel in Williamsburg, Brooklyn; Opera Plaza in San Francisco; and a Holiday Inn Express in Redwood City, Calif.

Ehrenhalt says Ziggy can help quell consumer worries over charger availability that some surveys show to be the leading barrier to EV adoption. The black-and-white robot is built around a beefy lithium-ion battery, with GPS, camera vision, sensors, speakers and microphone. From its home base, Ziggy slurps up a 50-kilowatt-hour charge from the grid, batteries, or solar power. Users summon the robot via a mobile app or in-vehicle infotainment system. Ziggy then motors over to hold a reserved parking space where owners plug in and charge at up to a 19.2-kW pace, roughly the peak of current Level 2 onboard capabilities. (The company plans to develop DC fast-charging capability at roughly 50 kilowatts, well below the 150-to-350-kW peaks of the most powerful ultrafast chargers, but still a useful jolt).
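As a rough sanity check on those specs (illustrative arithmetic only, not the company's figures): emptying a 50-kWh reservoir at the 19.2-kW Level 2 peak would take about two and a half hours per car.

```python
def charge_hours(energy_kwh, power_kw):
    """Hours to transfer energy_kwh of energy at a steady power_kw draw."""
    return energy_kwh / power_kw

# 50 kWh delivered at the 19.2-kW Level 2 peak: ~2.6 hours per full drain.
print(round(charge_hours(50, 19.2), 1))  # 2.6
```

That duty cycle suggests one robot serves only a handful of cars per day at Level 2, which is presumably why the company is pursuing the ~50-kW DC option.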

The first Ziggys will be remotely piloted by humans, using a camera feed and wireless communications. Brakes and redundant ultrasonic and lidar sensors ensure Ziggy doesn’t ding cars or bump into pedestrians, and the robot can announce its presence with audible or visual warnings. Like a superpowered Roomba, the roughly 450-kilogram robot can turn on a dime or squeeze through tricky spots. And Ziggy could solve a vexing issue for EV owners who don’t plan to drive away after a battery fill-up: The need to move their car within minutes after charging is completed, to avoid additional fees and the burning anger of waiting users.  ...    '

BCI Startup Implants First Device in U.S. Patient

Do I recall this having been done elsewhere?  Checking.  Note the many kinds of brain tasks mentioned. Considerable capabilities implied. 

BCI Startup Implants First Device in U.S. Patient    By Bloomberg, July 20, 2022

Brain-computer interface (BCI) startup Synchron implanted a wire-electrode combination in the brain of a U.S. patient with amyotrophic lateral sclerosis, attempting to enable the patient to perform thought-powered Web surfing, email, and texting.

A catheter is used to insert the stentrode implant into the brain through the jugular vein.

As the catheter is removed, the stentrode's mesh opens and fuses with the outer edges of a blood vessel in the brain’s motor cortex; the surgeon then wires the implant to a computing device in the patient's chest.   The stentrode interprets signals detected by electrodes in the implant when neurons fire in the brain, which the chest device amplifies and transmits to a computer or smartphone via Bluetooth.  ... 

Full Article in Bloomberg 

Hacker Gold Rush: Business Email Compromise Better than Ransomware?

From Wired via ACM

Ransomware attacks, including those of the massively disruptive and dangerous variety, have proved difficult to combat comprehensively. Hospitals, government agencies, schools, and even critical infrastructure companies continue to face debilitating attacks and large ransom demands from hackers. But as governments around the world and law enforcement in the United States have grown serious about cracking down on ransomware and have started to make some progress, researchers are trying to stay a step ahead of attackers and anticipate where ransomware gangs may turn next if their main hustle becomes impractical.

At the RSA security conference in San Francisco on Monday, longtime digital scams researcher Crane Hassold will present findings that warn it would be logical for ransomware actors to eventually convert their operations to business email compromise (BEC) attacks as ransomware becomes less profitable or carries a higher risk for attackers. In the US, the Federal Bureau of Investigation has repeatedly found that total money stolen in BEC scams far exceeds that pilfered in ransomware attacks—though ransomware attacks can be more visible and cause more disruption and associated losses. 

In business email compromise, attackers infiltrate a legitimate corporate email account and use the access to send phony invoices or initiate contract payments that trick businesses into wiring money to criminals when they think they're just paying their bills.  ... '

From Wired  

View Full Article

Wednesday, July 20, 2022

What AI Still Can't Do

Thoughtful Piece

What AI Still Doesn't Know How to Do, By The Wall Street Journal, July 18, 2022

A few weeks ago a Google engineer got a lot of attention for a dramatic claim: He said that the company's LaMDA system, an example of what's known in artificial intelligence as a large language model, had become a sentient, intelligent being.

Large language models like LaMDA or San Francisco-based Open AI's rival GPT-3 are remarkably good at generating coherent, convincing writing and conversations—convincing enough to fool the engineer. But they use a relatively simple technique to do it: The models see the first part of a text that someone has written and then try to predict which words are likely to come next. If a powerful computer does this billions of times with billions of texts generated by millions of people, the system can eventually produce a grammatical and plausible continuation to a new prompt or a question.
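The "predict which words come next" technique scales down to a toy you can run: a bigram counter that picks the most frequent continuation seen in training. This is a deliberately crude stand-in for what LaMDA or GPT-3 do with billions of parameters, offered only to make the mechanism concrete:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which -- a tiny stand-in for the
    next-token prediction that large language models do at scale."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequently observed continuation, or None."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # cat
```

The real systems condition on long contexts rather than a single word, and learn smooth probability estimates rather than raw counts, but the objective is the same: continue the text plausibly.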

It's natural to ask whether large language models like LaMDA (short for Language Model Dialogue Application) or GPT-3 are really smart—or just double-talk artists in the tradition of the great old comedian Prof. Irwin Corey, "The World's Greatest Authority." (Look up Corey's routines of mock erudition to get the idea.) But I think that's the wrong question. These models are neither truly intelligent agents nor deceptively dumb. Intelligence and agency are the wrong categories for understanding them.

Instead, these AI systems are what we might call cultural technologies, like writing, print, libraries, internet search engines or even language itself. They are new techniques for passing on information from one group of people to another. Asking whether GPT-3 or LaMDA is intelligent or knows about the world is like asking whether the University of California's library is intelligent or whether a Google search "knows" the answer to your questions. But cultural technologies can be extremely powerful—for good or ill.

From The Wall Street Journal

View Full Article