
Tuesday, November 30, 2021

Cisco on Zero Trust Security

Future of Zero Trust

Security

An Open Security Ecosystem with Shared Signals is the Future of Zero Trust

By Nancy Cam-Winget, Cisco

Zero Trust, as the name implies, is the strategy by which organizations trust nothing implicitly and verify everything continuously. This industry north star is driving different architectures, frameworks, and solutions to reduce an organization’s risk and improve its security posture. Beyond the need to enforce strong authentication and authorization to establish trust of an endpoint, how can we verify continuously? Often, the zero-trust approach today uses strong authentication and tools that evaluate the security of the user and device at the point of access, but what happens when the security posture of the user and device changes after the initial access request is granted?

With many vendors offering impressive cybersecurity capabilities, there is a wealth of information that can be shared. Unfortunately, this information is fragmented and lacks standardization, and thus interoperability. Getting all these best-in-class vendors to talk to each other is an expensive and time-consuming task, leaving organizations with disparate signal silos and a serious lack of visibility and control across their environment.

This is the problem the OpenID Foundation’s Shared Signals and Events working group is poised to address. For the unfamiliar, the OpenID Foundation is a non-profit organization that promotes open, interoperable standards with OpenID at its core, most notably the standardization of a simple identity layer on top of OAuth 2.0: OpenID Connect. The Shared Signals and Events working group lives within the OpenID Foundation and comprises industry leaders and innovators working to promote more open communication between systems. Shared Signals and Events standards like CAEP and RISC have the goal of enabling federated systems with well-defined mechanisms for sharing security events, state changes and other signals. This communication in turn simplifies interoperability and allows organizations to get closer to the Zero Trust ideal of continuously evaluating and enforcing security.

In its first ratified standard, the Shared Signals and Events working group created an open standard through which multiple services can communicate by publishing or subscribing to relevant event streams. The standard drastically simplifies communication of security context between applications.  For example, a cloud application might subscribe to events from an endpoint detection and response solution to quickly remove access from infected systems. Alternatively, an IAM solution might publish a change of user context used by a SIEM tool to start an investigation.  The example below demonstrates how a device or an application performing an HTTPS service request (step 1) can trigger an update to a change in state at a policy server (step 2).  Further, a policy service can determine whether that change in state needs to be broadcast to other subscribers (step 3).  A subscriber to that event can process the information and determine whether a remediation response (step 4) is needed.  ... '
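To make that flow a bit more concrete, here is a minimal sketch of what a shared-signals exchange might look like in code. The event payload follows the general shape of a Security Event Token (a JSON claims set), but the event URI, subject fields, and remediation rule are illustrative assumptions, not the ratified specification.

```python
# Illustrative sketch of a shared-signals exchange (not the ratified SSE spec).
# The event URI, subject format, and remediation rule below are assumptions.
import json
import time

def build_device_compliance_event(device_id: str, new_status: str) -> dict:
    """Build a CAEP-style event payload (JSON claims set) for a device posture change."""
    return {
        "iss": "https://policy.example.com/",          # hypothetical transmitter
        "iat": int(time.time()),
        "jti": "evt-0001",
        "events": {
            "https://schemas.example.com/event/device-compliance-change": {
                "subject": {"subject_type": "device", "device_id": device_id},
                "current_status": new_status,           # e.g. "not-compliant"
            }
        },
    }

def subscriber_handle(event: dict) -> str:
    """Step 4: a subscriber decides whether remediation is needed."""
    for claims in event.get("events", {}).values():
        if claims.get("current_status") == "not-compliant":
            return f"revoke-access:{claims['subject']['device_id']}"
    return "no-action"

# Steps 2-3: a posture change is published to subscribers; step 4: each one reacts.
evt = build_device_compliance_event("laptop-42", "not-compliant")
print(json.dumps(evt, indent=2))
print(subscriber_handle(evt))       # -> revoke-access:laptop-42
```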

Monday, November 29, 2021

3-D Printed Living Ink can Release Drugs

 New possibilities for more direct, efficient delivery methods.  Continue to be impressed by the new capabilities introduced by 3D printing. 

3D-Printed 'Living Ink' Full of Microbes Can Release Drugs

New Scientist, Carissa Wong, November 23, 2021

A “living ink” made entirely from bacterial cells can be used in a three-dimensional (3D) printer to create structures that discharge drugs or absorb toxins. Researchers at the Massachusetts Institute of Technology genetically engineered the printable gel from proteins known as curli nanofibers, which are generated by E. coli cells; the nanofibers possess one of two oppositely charged modules attached to them, which crosslink. Filtering the bacteria through a nylon membrane concentrates the crosslinked fibers, making the gel printable. "The beauty of the work lies in the ability to genetically program the functional response of the printed living material," says André Studart at the Swiss Federal Institute of Technology in Zürich (ETH Zürich).

AWS Does a RoboRunner

Will be interesting to see what such services will include.

At a keynote during its Amazon Web Services (AWS) re:Invent 2021 conference today, Amazon launched AWS IoT RoboRunner, a new robotics service designed to make it easier for enterprises to build and deploy apps that enable fleets of robots to work together. Alongside IoT RoboRunner, Amazon announced the AWS Robotics Startup Accelerator, an incubator program in collaboration with nonprofit MassRobotics to tackle challenges in automation, robotics, and industrial internet of things (IoT) technologies.

The adoption of robotics — and automation more broadly — in enterprises has accelerated as the pandemic prompts digital transformations. A recent report from Automation World found that the bulk of companies that embraced robotics in the past year did so to decrease labor costs, increase capacity, and navigate a lack of available workers. The same survey found that 44.9% of companies now consider the robots in their assembly and manufacturing facilities to be an integral part of daily operations.


Amazon — a heavy investor in robotics itself — hasn’t been shy about its intent to capture a larger part of a robotics software market that is anticipated to be worth over $7.52 billion by 2022. In 2018, the company unveiled AWS RoboMaker, a product to assist developers with deploying robotics applications with AI and machine learning capabilities. And Amazon earlier this year rolled out SageMaker Reinforcement Learning Kubeflow Components, a toolkit supporting the RoboMaker service for orchestrating robotics workflows.

IoT RoboRunner

IoT RoboRunner, currently in preview, builds on the technology already in use at Amazon warehouses for robotics management. It allows AWS customers to connect robots and existing automation software to orchestrate work across operations, combining data from each type of robot in a fleet and standardizing data types like facility, location, and robotic task data in a central repository.

The goal of IoT RoboRunner is to simplify the process of building management apps for fleets of robots, according to Amazon. As enterprises increasingly rely on robotics to automate their operations, they’re choosing different types of robots, making it more difficult to organize their robots efficiently. Each robot vendor and work management system has its own, often incompatible control software, data format, and data repository. And when a new robot is added to a fleet, programming is required to connect the control software to work management systems and program the logic for management apps.

Developers can use IoT RoboRunner to access the data required to build robotics management apps and leverage prebuilt software libraries to create apps for tasks like work allocation. Beyond this, IoT RoboRunner can be used to deliver metrics and KPIs via APIs to administrative dashboards.
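This is not the AWS API; the sketch below is only a hedged, hypothetical illustration of the kind of normalization such a central repository performs when combining data from different robot vendors into standard facility, location, and task records. All names and vendor payloads are invented.

```python
# Hypothetical sketch of standardizing heterogeneous robot-vendor data into a
# central repository of facility, location, and task records. Names here are
# illustrative assumptions; this is not the AWS IoT RoboRunner API.
from dataclasses import dataclass

@dataclass
class RoboticTask:
    facility: str
    robot_id: str
    location: tuple        # (x, y) in facility coordinates, meters
    task_type: str
    status: str

def normalize_vendor_a(msg: dict) -> RoboticTask:
    """Vendor A reports positions in millimeters under its own field names."""
    return RoboticTask(
        facility=msg["site"],
        robot_id=msg["agv_id"],
        location=(msg["pos_mm"][0] / 1000.0, msg["pos_mm"][1] / 1000.0),
        task_type=msg["job"],
        status=msg["state"],
    )

def normalize_vendor_b(msg: dict) -> RoboticTask:
    """Vendor B already reports meters but uses a different schema."""
    return RoboticTask(
        facility=msg["facility"],
        robot_id=msg["robot"],
        location=(msg["x_m"], msg["y_m"]),
        task_type=msg["task"],
        status=msg["status"],
    )

repository = [
    normalize_vendor_a({"site": "DC-7", "agv_id": "a-12", "pos_mm": [1500, 300],
                        "job": "pick", "state": "active"}),
    normalize_vendor_b({"facility": "DC-7", "robot": "b-03", "x_m": 4.2, "y_m": 0.9,
                        "task": "move", "status": "idle"}),
]
# A work-allocation app can now query one schema regardless of vendor.
print([t.robot_id for t in repository if t.status == "active"])
```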

IoT RoboRunner competes with robotics management systems from Freedom Robotics, Exotec, and others. But Amazon makes the case that IoT RoboRunner’s integration with AWS — including services like SageMaker, Greengrass, and SiteWise — gives it an advantage over rivals on the market.

“Using AWS IoT RoboRunner, robotics developers no longer need to manage robots in silos and can more effectively automate tasks across a facility with centralized control,” Amazon wrote in a blog post. “As we look to the future, we see more companies adding more robots of more types. Harnessing the power of all those robots is complex, but we are dedicated to helping enterprises get the full value of their automation by making it easier to optimize robots through a single system view.”  ..... ' 

Detecting Sarcasm

 Computers detecting sarcasm: an important component of modern communication.

How Computers Can Finally Detect Sarcasm: Ramya Akula and the tech that lets sentiment analysis spot mocking words. 01 Jul 2021. Podcast and transcript.

Hi and welcome to Fixing the Future, IEEE Spectrum’s podcast series on the technologies that can set us on the right path toward sustainability, meaningful work, and a healthy economy for all. Fixing the Future is sponsored by COMSOL, makers of COMSOL Multiphysics simulation software. I’m Steven Cherry.

Leonard: Hey, Penny. How’s work?

Penny: Great! I hope I’m a waitress at the Cheesecake Factory for my whole life!

Sheldon: Was that sarcasm?

Penny: No.

Sheldon: Was that sarcasm?

Penny: Yes.

Steven Cherry: That’s Leonard, Penny, and Sheldon from season two of The Big Bang Theory. Fans of the show know there’s some question of whether Sheldon understands sarcasm. In some episodes he does, and in others he’s just learning it. But there’s no question that computers don’t understand sarcasm, or didn’t until some researchers at the University of Central Florida started them on a path to learning it. Software engineers have been working on various flavors of sentiment analysis for quite some time. Back in 2005, I wrote an article in Spectrum about call centers automatically scanning conversations for anger, either by the caller or the service operator. That was one of the early use cases behind messages like "this call may be monitored for quality assurance purposes." Since then, software has been getting better and better at detecting joy, fear, sadness, confidence and now, finally, sarcasm. My guest today, Ramya Akula, is a PhD student and a graduate research assistant at the University of Central Florida’s Complex Adaptive Systems Lab. She has at least 11 publications to her name, including the most recent, "Interpretable Multi-Headed Self-Attention Architecture for Sarcasm Detection in Social Media," published in March in the journal Entropy with her advisor, Ivan Garibay. Ramya, welcome to the podcast.   ... '
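For readers curious what a model in this family looks like in code, below is a minimal, hedged sketch of a multi-head self-attention text classifier in PyTorch. It is a generic illustration of the architecture family, not the authors' published implementation; the vocabulary size, dimensions, and pooling choices are assumptions.

```python
# Minimal sketch of a multi-head self-attention classifier for sarcasm detection.
# Generic illustration only, not the published Akula & Garibay model; all
# hyperparameters below are assumptions.
import torch
import torch.nn as nn

class SarcasmClassifier(nn.Module):
    def __init__(self, vocab_size=20000, d_model=128, num_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, 2)   # sarcastic vs. not sarcastic

    def forward(self, token_ids):
        x = self.embed(token_ids)                  # (batch, seq, d_model)
        attended, weights = self.attn(x, x, x)     # self-attention over tokens
        pooled = attended.mean(dim=1)              # average-pool the sequence
        return self.classifier(pooled), weights    # weights aid interpretability

model = SarcasmClassifier()
tokens = torch.randint(0, 20000, (1, 12))          # one 12-token post
logits, attn_weights = model(tokens)
print(logits.shape, attn_weights.shape)            # (1, 2) and (1, 12, 12)
```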

Sunday, November 28, 2021

Exotic Material for Superconductors

 New materials for quantum computing  are a big deal. 

Exotic New Material Could Be Two Superconductors in One – With Serious Quantum Computing Applications

Topics: Materials Science, MIT, Quantum Computing, Quantum Materials, Superconductor

By Elizabeth A. Thomson, MIT Materials Research Laboratory, November 21, 2021

The work has potential applications in quantum computing and introduces a new way to plumb the secrets of superconductivity.

MIT physicists and colleagues have demonstrated an exotic form of superconductivity in a new material the team synthesized only about a year ago. Although predicted in the 1960s, until now this type of superconductivity has proven difficult to stabilize. Further, the scientists found that the same material can potentially be manipulated to exhibit yet another, equally exotic form of superconductivity.

The work was reported in the November 3, 2021, issue of the journal Nature.

The demonstration of finite momentum superconductivity in a layered crystal known as a natural superlattice means that the material can be tweaked to create different patterns of superconductivity within the same sample. And that, in turn, could have implications for quantum computing and more.

The material is also expected to become an important tool for plumbing the secrets of unconventional superconductors. This may be useful for new quantum technologies. Designing such technologies is challenging, partly because the materials they are composed of can be difficult to study. The new material could simplify such research because, among other things, it is relatively easy to make.

Three Different Patterns of Superconductivity

Diagram illustrating three different patterns of superconductivity realized in a new material synthesized at MIT. Credit: Image courtesy of the Checkelsky lab

“An important theme of our research is that new physics comes from new materials,” says Joseph Checkelsky, lead principal investigator of the work and the Mitsui Career Development Associate Professor of Physics. “Our initial report last year was of this new material. This new work reports the new physics.”

Checkelsky’s co-authors on the current paper include lead author Aravind Devarakonda PhD ’21, who is now at Columbia University. The work was a central part of Devarakonda’s thesis. Co-authors are Takehito Suzuki, a former research scientist at MIT now at Toho University in Japan; Shiang Fang, a postdoc in the MIT Department of Physics; Junbo Zhu, an MIT graduate student in physics; David Graf of the National High Magnetic Field Laboratory; Markus Kriener of the RIKEN Center for Emergent Matter Science in Japan; Liang Fu, an MIT associate professor of physics; and Efthimios Kaxiras of Harvard University.  ... 

Reference: “Signatures of bosonic Landau levels in a finite-momentum superconductor” by A. Devarakonda, T. Suzuki, S. Fang, J. Zhu, D. Graf, M. Kriener, L. Fu, E. Kaxiras and J. G. Checkelsky, 3 November 2021, Nature.

DOI: 10.1038/s41586-021-03915-3

Blog: AI in Libraries

Like to be involved.

Sunday, November 28, 2021

New directions in AI: formation of an IFLA Special Interest Group on Artificial Intelligence

There is an online meeting on New directions in AI: formation of an IFLA Special Interest Group on Artificial Intelligence on 6 December 2021 at 4pm UTC (UK time); 5pm CET; 11am US EST. This exploratory meeting will: "give an overview of the current state of AI in libraries; discuss the goals and objectives; gather 25 signatories who intend to actively participate in the activities of the SIG for a petition to be submitted to the Professional Council; propose a satellite meeting and main session at IFLA WLIC 2022 in Dublin, Ireland." "Artificial intelligence applications are increasingly a part of the library space: in chatbots, embedded in library systems, used for automated indexing and classification, and integral to robots. The IT Section is sponsoring the formation of a Special Interest Group in AI (AI SIG). ... If the SIG is approved we will also hold the first business meeting to nominate a Convenor and seek volunteers to serve in roles including Secretary and Communications Coordinator. Registration at 

Posted by Sheila Webber at 16:07   ...... ' 

Predicting Jet Engine Stability

Passing this along to my former colleagues at GE, who may have comments; I am sure they already know of this simulation research. 

Tool Can Detect Precursor of Engine-Destroying Combustion Instability

By Tokyo University of Science (Japan), November 23, 2021

Combustion engines, like those in aircraft, are at risk of fatal damage by a phenomenon called "combustion oscillations," where pressure fluctuations inside the engine become large.

Researchers at Japan's Tokyo University of Science (TUS) and the Japan Aerospace Exploration Agency (JAXA) have designed a tool for detecting a precursor of thermoacoustic combustion oscillation, which damages combustion engines.

The researchers performed combustion experiments with varying fuel flow rates in a staged multisector combustor from JAXA, and fed the resulting data to a support vector machine (SVM) algorithm.

The SVM classified the combustion state as stable, transitional, or oscillating; the transitional state's pressure fluctuations are crucial to forecasting future combustion oscillations.

TUS' Hiroshi Gotoda said, "The methodology combining dynamical systems theory and machine learning can be useful for detecting predictive combustion oscillations in multisector combustors, such as those in aircraft engines."  Full article.

From Tokyo University of Science (Japan)
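As a rough, hypothetical illustration of the classification step described above, the sketch below trains a support vector machine on synthetic pressure-fluctuation features and labels windows as stable, transitional, or oscillating. The features and data are invented for illustration; the actual study derives its features from dynamical systems theory applied to real combustor measurements.

```python
# Hedged illustration of classifying combustion states with an SVM.
# The features (RMS amplitude, dominant-frequency power) and the synthetic data
# are assumptions for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synth(n, rms, peak, label):
    """Generate n synthetic feature vectors around (rms, peak) with a class label."""
    X = np.column_stack([rng.normal(rms, 0.1, n), rng.normal(peak, 0.1, n)])
    return X, np.full(n, label)

X_stable, y_stable = synth(200, rms=0.2, peak=0.1, label=0)   # stable
X_trans,  y_trans  = synth(200, rms=0.5, peak=0.5, label=1)   # transitional
X_osc,    y_osc    = synth(200, rms=1.0, peak=0.9, label=2)   # oscillating

X = np.vstack([X_stable, X_trans, X_osc])
y = np.concatenate([y_stable, y_trans, y_osc])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
# Flagging the transitional class (label 1) early serves as the oscillation precursor.
```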

Saturday, November 27, 2021

Testing Clearview Face Recognition

Would expect such methods to ultimately be well calibrated and put to common use.

Clearview AI Does Well in Another Round of Facial Recognition Accuracy Tests

By The New York Times, November 24, 2021

After Clearview AI scraped billions of photos from the public web — from websites including Instagram, Venmo and LinkedIn — to create a facial recognition tool for law enforcement authorities, many concerns were raised about the company and its norm-breaking tool. Beyond the privacy implications and legality of what Clearview AI had done, there were questions about whether the tool worked as advertised: Could the company actually find one particular person's face out of a database of billions?

Clearview AI's app was in the hands of law enforcement agencies for years before its accuracy was tested by an impartial third party. Now, after two rounds of federal testing in the last month, the accuracy of the tool is no longer a prime concern.

In results announced on Monday, Clearview, which is based in New York, placed among the top 10 out of nearly 100 facial recognition vendors in a federal test intended to reveal which tools are best at finding the right face while looking through photos of millions of people. Clearview performed less well in another version of the test, which simulates using facial recognition for providing access to buildings, such as verifying that someone is an employee.

From The New York Times

View Full Article  

AI Helping Data Management

Not unexpected; we often used analytic methods to provide predictive patterns to generate useful data streams.  But it makes much sense to explore this direction to manage assets.

AI will soon oversee its own data management, in VentureBeat

AI thrives on data. The more data it can access, and the more accurate and contextual that data is, the better the results will be.

The problem is that the data volumes currently being generated by the global digital footprint are so vast that it would take literally millions, if not billions, of data scientists to crunch it all — and it still would not happen fast enough to make a meaningful impact on AI-driven processes.

AI helping AI

This is why many organizations are turning to AI to help scrub the data that is needed by AI to function properly.

According to Dell’s 2021 Global Data Protection Index, the average enterprise is now managing ten times more data compared to five years ago, with the global load skyrocketing from “just” 1.45 petabytes in 2016 to 14.6 petabytes today. With data being generated in the datacenter, the cloud, the edge, and on connected devices around the world, we can expect this upward trend to continue well into the future.

In this environment, any organization that isn’t leveraging data to its full potential is literally throwing money out the window. So going forward, the question is not whether to integrate AI into data management solutions, but how.

AI brings unique capabilities to each step of the data management process, not just by virtue of its capability to sift through massive volumes looking for salient bits and bytes, but by the way it can adapt to changing environments and shifting data flows. For instance, according to David Mariani, founder of and chief technology officer at AtScale, just in the area of data preparation, AI can automate key functions like matching, tagging, joining, and annotating. From there, it is adept at checking data quality and improving integrity before scanning volumes to identify trends and patterns that otherwise would go unnoticed. All of this is particularly useful when the data is unstructured.
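As a small, hedged example of the kind of preparation work being automated, the sketch below runs basic quality checks and naive duplicate matching over a toy table with pandas; the rules and threshold are illustrative assumptions, not any vendor's product.

```python
# Toy illustration of automated data-preparation checks: quality profiling and
# naive duplicate matching. The rules and threshold are illustrative assumptions.
import pandas as pd
from difflib import SequenceMatcher

df = pd.DataFrame({
    "customer": ["Acme Corp", "ACME Corporation", "Globex", None],
    "revenue": [120.0, 120.0, 95.5, 40.0],
})

# Quality check: report missing values per column.
print(df.isna().sum())

# Naive matching: flag pairs of names that look like the same entity.
names = df["customer"].dropna().tolist()
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        score = SequenceMatcher(None, names[i].lower(), names[j].lower()).ratio()
        if score > 0.6:                       # assumed similarity threshold
            print("possible duplicate:", names[i], "<->", names[j], round(score, 2))
```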

One of the most data-intensive industries is health care, with medical research generating a good share of the load. Small wonder, then, that clinical research organizations (CROs) are at the forefront of AI-driven data management, according to Anju Life Sciences Software. For one thing, it’s important that data sets are not overlooked or simply discarded, since doing so can throw off the results of extremely important research.

Machine learning is already proving its worth in optimizing data collection and management, often preserving the validity of data sets that would normally be rejected due to collection errors or faulty documentation. This, in turn, produces greater insight into the results of trial efforts and drives greater ROI for the entire process.  .... ' 

How to Find Hidden Spy Cameras with a Smartphone


By Help Net Security, November 24, 2021

How it works. 

During the scan process, the time-of-flight sensor emits laser pulses and captures the light reflected off an object and its surroundings. Hidden cameras embedded in objects reflect incoming laser pulses at a higher intensity than their surroundings, a result of lens-sensor retro-reflection.

Scientists at the National University of Singapore and South Korea's Yonsei University developed a smartphone application that can find tiny spy cameras concealed in everyday objects, using smartphones' time-of-flight (ToF) sensor.

The researchers said the Laser-Assisted Photography Detection (LAPD) app spots hidden cameras better than commercial camera detectors, and much better than the human eye.

The app, which works on any smartphone handset equipped with a time-of-flight (ToF) sensor, can only scan a single object at a time, and requires about a minute to scan that object.

The researchers said the app could be made more accurate by taking advantage of the handset’s flashlight and RGB cameras.

From Help Net Security
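A hedged sketch of the core idea: retro-reflection from a hidden lens shows up as an unusually bright spot in the ToF intensity image, so a simple pass over the frame can flag candidate pixels. The frame below is synthetic and the threshold is an assumption; the real LAPD app does considerably more filtering to reject ordinary reflections.

```python
# Illustrative sketch of flagging lens retro-reflection in a ToF intensity frame.
# The synthetic frame and the threshold rule are assumptions; the actual LAPD
# app applies far more filtering than this.
import numpy as np

rng = np.random.default_rng(1)
frame = rng.normal(loc=0.2, scale=0.05, size=(240, 320))   # background return
frame[118:122, 158:162] = 0.95                              # simulated hidden-lens hotspot

mean, std = frame.mean(), frame.std()
candidates = np.argwhere(frame > mean + 6 * std)            # unusually bright pixels
if candidates.size:
    y, x = candidates.mean(axis=0)
    print(f"possible hidden camera near pixel ({int(x)}, {int(y)})")
```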

Friday, November 26, 2021

Can a Free Internet Survive?

What steps, what cost, what limits?

Can a Free Internet Survive?  By Samuel Greengard, Commissioned by CACM Staff, November 23, 2021

In the beginning, Internet pioneers dreamed of creating an open framework for global communication and interaction. It would be a place where free thinking and information could flourish. Over the last half century, despite a few potholes and speedbumps, the Internet has largely lived up to that promise.

However, there's evidence that attitudes and values are shifting. Governments around the world are taking steps to limit access to information, or even shut it down using tactics like site blocking, URL throttling, restricting mobile data, and regulatory and legal threats.

"This is in the face of governments, business and industry, and popular movements responding to perceived threats to dominant institutions and traditional sources of information," observes William H. Dutton, Emeritus Professor at the University of Southern California and co-author of the UNESCO report   Freedom of Connection, Freedom of Expression.

Washington D.C.-based democracy advocacy group Freedom House reported in 2021 that Internet freedom declined for the fifth year in a row in the U.S. and the 11th consecutive year internationally. Officials in at least 20 countries suspended Internet access, and 20 regimes blocked access to social media platforms, the report noted.

Principles for Innovative Engineers

Very useful principles below at the link; we worked with Rosalind Picard long ago ... 

What Every Engineer and Computer Scientist Should Know: The Biggest Contributor to Happiness

By Rosalind Picard     in CACM

Communications of the ACM, December 2021, Vol. 64 No. 12, Pages 40-42    10.1145/3465999

My teams at MIT and our spin-out companies have worked for years to create technology that is both intelligent and able to improve people's lives. Through research drawing from psychiatry, neuroscience, psychology, and affective computing, I have learned some surprising things. In some cases, they are principles we have embedded into technology that interacts with people. Guess what? People like it. After one year of the COVID-19 pandemic, I realize that the principles we learned apply not only to making smart robots or software agents, but also to the people around us. They give us lessons for how to live happier lives, and happier engineers are better at solving creative problems and have more fun.

Researchers have studied what brings happiness in life, and what, at the end of life, people wish they had done. While many factors contribute, do you know the biggest one?

Almost never late in life do people say: "I wish I had invented a smarter or faster device," "I wish I had made more money," "I wish I had given more TED talks," "I wish I had climbed higher in my business," or "I wish I had authored more books." Even this pinnacle of achievement is not uttered: "I wish I had written an article for an ACM magazine." Instead, almost always, people wish that they had done a better job at building meaningful authentic human relationships, and spending time in those relationships.

This finding is a general one, whether studying human happiness or end-of-life reflections. It applies to hard-working, well-educated computer scientists or engineers and also to many kinds of people, different races and cultures, rich and poor, male and female, uneducated or over-educated.

All of the patents, publications, presentations, and personal technical achievements can be amazing: They can literally save lives and bring immense delight, win us world acclaim, fill our shelves with awards, tally up clicks online, and even make our resumes impressively long. However, they all pale in comparison to something that is even more joy-giving: Achieving deeply satisfying, personally-significant human relationships.

How do you engineer great relationships? Here are three helpful principles you can test in your own life and relationships. If you build AI that interacts directly with people, you can build these principles into those interactions too. I learned these principles while trying to engineer more intelligence in machines, specifically computers with skills of social-emotional intelligence. The skills derive from studies of human relationships and they apply not only when the interactions involve two people, but also when one is a computer (including chatbots, software agents, robots, and other things programmed to talk with us). The three principles below can help improve relationships, human or AI.   ..... ....

(Full principles below)

SAS Customer Intelligence Blog

 I have been reviewing SAS's blogs.  They nicely have a 'Customer Intelligence' blog:

Customer Intelligence Blog

Evolving relationships for business growth

Welcome to Customer Intelligence, a blog for anyone who is looking for ways to improve the business of marketing and communicating with customers.

We strive to prompt new thinking in the way you tackle customer-related business issues. And we hope to inspire the use of analytics for everything from multi-level marketing to social media campaigns.  ... ' 


New NVIDIA AI Brain

 NVIDIA pushes on.

NVIDIA's new AI brain for robots is six times more powerful than its predecessor

And it can still fit in the palm of your hand

By M. Moon, @mariella_moon   in Engadget

The chipmaker says Orin is capable of 200 trillion operations per second. It's built on the NVIDIA Ampere architecture GPU, features Arm Cortex-A78AE CPUs and comes with next-gen deep learning and vision accelerators, giving it the ability to run multiple AI applications. Orin will give users access to the company's software and tools, including the NVIDIA Isaac Sim scalable robotics simulation application, which enables photorealistic, physically-accurate virtual environments where developers can test and manage their AI-powered robots. For users in the healthcare industry, there's NVIDIA Clara for AI-powered imaging and genomics. And for autonomous vehicle developers, there's NVIDIA Drive. .... ' 

Interpol Arrests Cybercrime Suspects

Don't know much about Interpol, and how useful it can be regarding security,  but was once involved with data that they had gathered  regarding supply chain practices. Now how might we leverage this data? 

Interpol arrests over 1,000 suspects linked to cyber crime   By Bill Toulas  in Bleepingcomputer

Interpol has coordinated the arrest of 1,003 individuals linked to various cyber-crimes such as romance scams, investment frauds, online money laundering, and illegal online gambling.

This crackdown results from a four-month action codenamed ‘Operation HAEICHI-II,’ which took place in twenty countries between June and September 2021.

These were Angola, Brunei, Cambodia, Colombia, China, India, Indonesia, Ireland, Japan, Korea (Rep. of), Laos, Malaysia, Maldives, Philippines, Romania, Singapore, Slovenia, Spain, Thailand, and Vietnam.  ... '


Wednesday, November 24, 2021

Nike in the Metaverse

 A look at a potential marketing future.


Will fans visit Nike in the metaverse?

Source: Nike, by Tom Ryan

Nike has created a virtual world on the Roblox online gaming platform, NIKELAND, that enables fans of the brand “to connect, create, share experiences and compete.”

Players dress up their avatars in Nike-branded footwear, apparel and backpacks while competing in mini-games, such as “Tag,” “Dodgeball” and “The Floor Is Lava.” A NIKELAND tool kit enables creators to design their own mini-games from interactive sports materials.

The applications take advantage of the accelerometers built into mobile devices to transfer users’ offline movement to online play. Nike writes, “For example, you can move your device and body IRL to pull off cool in-game moves like long jumps or speed runs.”  .... ' 

Should We Trust Computers?

 Good talk given by Prof. Martyn Thomas CBE in the Gresham College lecture series, published Oct 29, 2015: https://www.youtube.com/watch?v=8SZfjvlbpMw

Metaverse Wearable Device

A Meta-Facebook device on every face?

Metaverse wearable devices ‘could be as big as phones,’ Qualcomm CEO says  in Yahoo Finance

By Julie Hyman, Anchor

Wed, November 17, 2021, 2:57 PM

The company whose chips power Facebook’s Oculus virtual-reality headset has already tapped into the metaverse opportunity. Now Qualcomm’s CEO says glasses could eventually be as widespread as smartphones. 

Qualcomm CEO Cristiano Amon knows smartphones well. The company’s main business is providing semiconductors for those devices. As Apple turns toward manufacturing its own chips for its phones, Qualcomm has been working on diversifying its revenue into products and services for the automotive industry, the Internet of Things — and the metaverse. They’re not new entrants into the market. 

“For all of the devices that are being commercially deployed, you have one thing in all of them — which is Snapdragon XR,” Amon told Yahoo Finance Live, referencing Qualcomm’s extended reality version of its main Snapdragon platform. “So we’re very excited about that opportunity, and it could be as big as phones, if you have companion glasses that you carry together with your smartphone.” 

Amon pointed to Facebook’s name change as one marker of success for that company’s XR device: "Even looking at how the company is now called Meta is a reflection of how successful Oculus has been,” he said.

Metaverse products were one aspect of the diversification strategy Amon laid out during Qualcomm’s investor day on Tuesday. He outlined how Qualcomm is working on reducing its reliance on Apple, and is targeting $46 billion in revenue by 2024, including $9 billion from the Internet of Things. ... ' 


Wal-Mart Retail by Drone

Have my doubts about the last 100 yards, but they seem to be serious in their testing. 

Walmart is now delivering diapers and food by drone (if you live close to this Arkansas store)  in TheVerge

Zipline delivered from one store; DroneUp partnership adds three more

By Sean Hollister (@StarFire2258)

Just last week, Walmart made headlines by launching its first commercial US drone delivery service within a 50-mile radius of Pea Ridge, Arkansas — dropping parachute-laden packages from an autonomous Zipline plane to a “hand-selected group of recipients.” Now, it’s already expanding those drone deliveries in another, small way: customers who live in Farmington, Arkansas, can now order small items like cans of tuna, baby supplies, and paper plates starting today. That’s thanks to a partnership with DroneUp, which will also provide drone delivery stations in Rogers, AR and Bentonville, AR “in the coming months.”

While it’s kind of exciting for drone delivery watchers, you should know that it’s also very much baby steps. DroneUp’s traditional quadcopters only deliver within a one-mile radius of a Walmart store — and all three of these stores are within the same region that Zipline already announced it would serve. They do, however, lower their packages down on cables rather than dropping them from the sky.  .... ' 

The AI Cloud Landscape

An intro to the concept that is worth examining for developers.  The intro below covers common cloud usage and includes detail links.

Artificial Intelligence #31 - Understanding the AI cloud landscape

Published on November 23, 2021, by Ajit Jaokar, on LinkedIn

Course Director: Artificial Intelligence: Cloud and Edge Implementations - University of Oxford

To provide some context, when we refer to the cloud, the discussion is often confined to IaaS, PaaS, and SaaS, but to understand the AI cloud landscape, the flow of thought is:

Understanding the Cloud landscape

Understanding Cloud native development

Understanding MLOps and Model Ops

Understanding cloud native AI architectures for AWS, Azure, GCP

We consider the wider meaning of ModelOps as defined by Gartner

ModelOps (or AI model operationalization) is focused primarily on the governance and life cycle management of a wide range of operationalized artificial intelligence (AI) and decision models, including machine learning, knowledge graphs, rules, optimization, linguistic and agent-based models. Core capabilities include continuous integration/continuous delivery (CI/CD) integration, model development environments, champion-challenger testing, model versioning, model store and rollback.  .... ' 
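To ground that definition, here is a small, hedged sketch of two of the listed capabilities, a toy model store with versioning/rollback and a champion-challenger comparison. The structure and metric choices are assumptions, not a reference to any specific platform.

```python
# Toy sketch of two ModelOps capabilities from the definition above: a model
# store with versioning/rollback and a champion-challenger comparison.
# Structure and metric choices are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model_store = {}   # version -> fitted model (a stand-in for a real registry)
model_store["v1-champion"] = LogisticRegression(max_iter=500).fit(X_train, y_train)
model_store["v2-challenger"] = RandomForestClassifier(random_state=0).fit(X_train, y_train)

scores = {name: m.score(X_test, y_test) for name, m in model_store.items()}
print(scores)

# Promote the challenger only if it beats the champion; otherwise keep serving v1.
serving = max(scores, key=scores.get)
print("serving version:", serving)
```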

Continued Work on Autonomous Apple Car

More car effort from Apple.

Apple Accelerates Work on Car Project, Aiming for Fully Autonomous Vehicle

By Mark Gurman, in Bloomberg

November 18, 2021, 12:26 PM EST Updated on November 18, 2021, 12:38 PM EST

 Company looks to refocus project on self-driving capabilities

 New car chief Kevin Lynch pushing for debut as early as 2025 

Apple Inc. is pushing to accelerate development of its electric car and is refocusing the project around full self-driving capabilities, according to people familiar with the matter, aiming to solve a technical challenge that has bedeviled the auto industry. 

For the past several years, Apple’s car team had explored two simultaneous paths: creating a model with limited self-driving capabilities focused on steering and acceleration -- similar to many current cars -- or a version with full self-driving ability that doesn’t require human intervention.


Kevin Lynch, Apple’s vice president of technology. Photographer: Brittany Hosea-Small/AFP/Getty Images

Under the effort’s new leader -- Apple Watch software executive Kevin Lynch -- engineers are now concentrating on the second option. Lynch is pushing for a car with a full self-driving system in the first version, said the people, who asked not to be identified because the deliberations are private.   ... '


Tuesday, November 23, 2021

Governance of AI: Podcast Use and Misuse of AI

 Podcast interview with Miles Brundage, head of Policy Research at OpenAI, responsible for governance of AI. See gradientPub.substack.com. Topics include AI misuse and trustworthy AI. In The Gradient Podcast.

Improving Machine Learning

Integrating knowledge graphs with derived knowledge.  Useful but rarely done.  Our early links with Stanford addressed this.

Improving Machine Learning: How Knowledge Graphs Bring Deeper Meaning to Data

Posted by Kendall Clark on November 22, 2021

Enterprise machine learning deployments are limited by two consequences of outdated data management practices widely used today. The first is the protracted time-to-insight that stems from antiquated data replication approaches. The second is the lack of unified, contextualized data that spans the organization horizontally.

Excessive data replication and the resulting "second-order effects" are creating enormous inefficiencies and waste for data scientists in most organizations. According to IDC, over 60 zettabytes of data were produced last year, and this is forecast to increase at a CAGR of 23 percent until 2025. Worse, the ratio of unique to replicated data is 1:10, which implies that most organizations’ data management methods are based on copying data.

When creating machine learning models, firms usually section off relevant data by replicating them from different sources. Models are typically trained on 20 percent of this data, while the other 80 percent remain for testing. The rigors of data cleansing, feature engineering, and model evaluation can take six months or more, making data stale during this process while delaying time-to-insight and compromising findings.

The second repercussion of traditional, outdated data management approaches is the reduced quality of insights. This effect is not only attributed to building models with stale data, but also to the inadequate relationship awareness, disconnected vertical data silos, poor contextualization, and schema limitations of relational data management techniques.

Properly implementing knowledge graphs in a modern data fabric corrects these data management issues while increasing machine learning’s value. Deploying data virtualization within a knowledge graph empowered data fabric enables data scientists to bring machine learning to their data—instead of the opposite, which wastes time and resources.

Moreover, the inherent flexibility of graph models and their ability to leverage inter-connected relationships make preparing data for machine learning much easier as they provide capabilities like improved feature engineering, root cause analysis, and graph analytics. This functionality is also key to helping knowledge graphs transition to be the dominant data management construct for the next 20 years as data management and AI converge. In short, knowledge graphs will help AI as much as AI will help knowledge graphs.

Data Scientists Need Strategic Data Management

The growing volumes and varieties of data that organizations are dealing with have prolonged machine learning deployments. Varying data formats, schemas, and terminologies across silos or data lakes delay machine learning initiatives requiring this training data. The lack of context and semantic annotations makes it difficult to understand data’s meaning and use for specific models. Even when data is sufficiently contextualized, this information rarely persists, so organizations must start over for subsequent projects. The months of training required when replicating this varied data are made even more difficult by fast-moving data, like information collected by IoT devices, for example. Organizations are forced to deal with this obstacle by replicating fresh data again, restarting this time-consuming process that impairs models’ functionality.

A far better approach is to train models at the data fabric layer instead of replicating data into silos. Organizations can easily create training and testing datasets without moving data. They can even specify, for example, a randomized 20 percent sample of their data with a query that extracts features and delivers a training dataset via this data virtualization approach underpinned by knowledge graphs. This methodology illustrates the connection between data management and machine learning to accelerate time-to-insight with the added benefit of training models on more current data.
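A hedged sketch of the idea of querying training features directly from a knowledge graph rather than copying data into a silo, using rdflib and a tiny in-memory graph. The ontology terms, the toy data, and the 20 percent sampling rule are illustrative assumptions.

```python
# Hedged sketch: extract a training table straight from a knowledge graph with a
# SPARQL query instead of replicating data. The tiny graph, ontology terms, and
# sampling rule below are illustrative assumptions.
import random
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()
for i, (spend, churned) in enumerate([(120.0, 0), (15.0, 1), (87.5, 0), (5.0, 1)]):
    cust = EX[f"customer{i}"]
    g.add((cust, RDF.type, EX.Customer))
    g.add((cust, EX.monthlySpend, Literal(spend)))
    g.add((cust, EX.churned, Literal(churned)))

rows = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?c ?spend ?churned WHERE {
        ?c a ex:Customer ; ex:monthlySpend ?spend ; ex:churned ?churned .
    }""")

dataset = [(float(r.spend), int(r.churned)) for r in rows]
random.seed(0)
train = random.sample(dataset, k=max(1, len(dataset) // 5))   # ~20% training sample
print(train)
```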

Achieving Quality Machine Learning Insights

Knowledge graphs provide a richer, superior foundation for understanding enterprise data compared with relational or other approaches. They offer contextualized understanding and relationship detection between the edges of nodes, which is how graphs store data. This capability is significantly enhanced by semantic graph data models that standardize business-specific terminology as a hierarchical set of vocabularies or taxonomies. Thus, data scientists can innately understand data’s meaning and relation to any use case, such as machine learning. Semantic graph data models also align data at the schema level, provide intelligent inferences about concepts or business categories, and eschew conventional problems with terminology or synonyms while delivering a complete view of enterprise data.  .... '

Preparing for Quantum

 Good piece out of ACM, prepare for the quantum.   Passing this on for review for usefulness.  Thoughts?

Exploring the Promise of Quantum Computing, By Leah Hoffmann

Communications of the ACM, December 2021, Vol. 64 No. 12, Pages 120-ff

10.1145/3490319

We have not yet realized—or, perhaps, even fully understood—the full promise of quantum computing. However, we have gotten a much clearer view of the technology's potential, thanks to the work of ACM Computing Prize recipient Scott Aaronson, who has helped establish many of the theoretical foundations of quantum supremacy and illuminated what quantum computers eventually will be able to do. Here, Aaronson talks about his work in quantum complexity.

Let's start with your first significant result in quantum computing: your work on the collision problem, which you completed in graduate school.

The collision problem is where you have a many-to-one function, and your task is just to find any collision pair, meaning any two inputs that map to the same output. I proved that even a quantum computer needs to access the function many times to solve this problem.

It's a type of problem that shows up in many different settings in cryptography. How did you come to it?

When I entered the field, in the late 1990s, I got very interested in understanding quantum computers by looking at how many queries they have to make to learn some property of a function. This is a subject called query complexity, and it's actually the source of the majority of what we know about quantum algorithms. Because you're only counting the number of accesses to an input, you're not up against the P vs. NP problem. But you are fighting against a quantum computer, which can make a superposition of queries and give a superposition of answers. And sometimes quantum computers can exploit structure in a function in order to learn something with exponentially fewer queries than any classical algorithm would need.

So, what kind of structure does your problem need before a quantum computer can exploit it to get this exponential speed-up?

That's exactly what we've been working on for the past 30 years. In 1995, Peter Shor showed that quantum computers are incredibly good at extracting the period of a periodic function. Others showed that, if you're just searching for a single input that your function maps to a designated output, then quantum computers give only a modest, square-root improvement. The collision problem was interesting precisely because it seemed poised between these two extremes: it had less structure than the period-finding problem, but more structure than the "needle in a haystack" problem.

When my advisor, Umesh Vazirani, told me that the collision problem was his favorite problem in quantum query complexity, I said, "Okay, well, I'll know not to work on that one, because that one's too large." But it kept coming up in other contexts that I cared about. I spent a summer at Caltech and I decided to try to attack it.

I had a colleague, Andris Ambainis, who had invented this amazing lower bound technique—what's now called the Ambainis adversary method—a couple years prior. I didn't know it at the time, but he had actually invented it to try to solve the collision problem, though he was not able to make it work for that. But he could solve some problems that I couldn't solve using this method that I understood really well, called the polynomial method. I started trying to use Ambainis' method to attack the collision problem. I worked on it probably more intensely than I've worked on anything before or since, and what I finally realized was that the polynomial method would work to prove a lower bound for the problem and show that even a quantum computer needs at least N^(1/5) queries to solve it, where N is the number of outputs. Shortly afterward, Yaoyun Shi refined the technique and was able to show, first, that you need N^(1/4) queries, and then that you need N^(1/3).
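To make the query-complexity framing concrete, here is a hedged classical baseline: a birthday-style random search for a collision in a 2-to-1 function, which needs on the order of sqrt(N) queries, compared with the N^(1/3) quantum bound discussed above. The toy function and counting harness are illustrative.

```python
# Classical baseline for the collision problem: query random inputs to a 2-to-1
# function until two of them collide. This birthday-style search needs on the
# order of sqrt(N) queries; the quantum bound discussed above is ~N^(1/3).
# The toy function below is an illustrative stand-in.
import random

N = 10_000                       # number of inputs; the function is 2-to-1
f = lambda x: x // 2             # toy 2-to-1 function (inputs 2k and 2k+1 collide)

def classical_collision_queries(seed=0):
    rng = random.Random(seed)
    seen = {}                    # output -> an input that produced it
    queries = 0
    while True:
        x = rng.randrange(N)
        queries += 1
        y = f(x)
        if y in seen and seen[y] != x:
            return queries       # found a collision pair
        seen[y] = x

trials = [classical_collision_queries(s) for s in range(50)]
print("average queries:", sum(trials) / len(trials), "~ sqrt(N) =", int(N ** 0.5))
```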

You have since gone on to produce groundbreaking results in quantum supremacy.

Around 2008 or 2009, I got interested in just how hard quantum computations can be to simulate classically. Forget whether the quantum computer is doing anything useful; how strong can we make the evidence that a quantum computation is hard to simulate? It turns out—and there were others who came to the same realization around the same time—if that is your goal, you can get enormous leverage by switching attention from problems like factoring, which have a single right answer, to sampling problems, where the goal of your computation is just to output a sample from some probability distribution over strings of N bits.

There are certain probability distributions that a quantum computer can easily sample from.

Not only that, but a pretty rudimentary quantum computer. If a classical computer could efficiently sample the same distribution in polynomial time, then the polynomial hierarchy would collapse to the third level, which we use as a kind of standard yardstick of implausibility.

But if you want to do an experiment to demonstrate quantum supremacy, it's not enough to have a distribution that's hard for a classical computer to sample exactly. Any quantum computer is going to have a huge amount of noise, so the most you could hope for is to sample from some distribution that's close to the ideal one. And how hard is it for a classical computer to approximately sample from the same distribution?

To answer that, we realized you needed a quantum computation where the amplitudes (related to probabilities) for the different possible outputs are what's called "robustly #P-complete," meaning that if you could just approximate most of them, then you could solve any combinatorial counting problem. ..... ' 

Watch Your Algorithms

Watched this closely, family being involved in the space. Prices seemed odd, but were they somehow magical? Always compare algorithm results for reasonableness. In particular, prediction in varying contexts is always suspect. 

What Went Wrong With Zillow? A Real Estate Algorithm Derailed Its Big Bet   By The Wall Street Journal, November 19, 2021

Real estate firm Zillow Group had looked to its digital home-flipping business Zillow Offers to lead its growth in the future, but the company has acknowledged that will not happen because the unit’s underlying algorithm could not reliably predict housing prices.

That failure, the company said, was rooted in the technology's inability to understand the real estate market and predict housing prices, which are shaped by fluctuating factors like aesthetics and regional factors that influence buyers' decisions.

Zillow CEO Richard Barton admitted to shareholders that the algorithm could not accurately predict swings in home prices, and the company is closing Zillow Offers.   ....

From The Wall Street Journal   

Monday, November 22, 2021

Third Wave of Biomaterials

Clear need in our future.  A kind of biomimicry we have known for a long time, now at a much lower level.  Can it be constructed in new ways?  Petrochemicals will still be needed.

The third wave of biomaterials: When innovation meets demand 

November 18, 2021 | Article,  in McKinsey

By Tom Brennan, Michael Chui, Wen Chyan, and Axel Spamann


How corporate sustainability commitments could catalyze the next generation of bio-based chemicals and materials.

Biomaterials have long been a part of our daily lives, from wooden houses to woolen clothes. More recently, biotech advances have brought us sugar-derived first-generation biofuels and high-performance enzymes to power our laundry detergents. Now, we see the emergence of nylon made using genetically engineered microbes instead of petrochemicals, alternative leather from mushroom roots, and cement from bacteria.

These advances in biological science are bolstered by accelerating innovations in computing, automation, and artificial intelligence (AI), resulting in a new wave of innovation known as the Bio Revolution. McKinsey Global Institute research has found that as much as 60 percent of the physical inputs to the global economy today are either biological (wood or animals bred for food) or nonbiological (cement or plastics) but could in principle be produced or substituted using biological means. Over the next ten to 20 years, advances in the use of biology in the production of materials, chemicals, and energy could amount to $200 billion to $300 billion in global market growth.

What will drive this growth? Historically, adoption of bio-based materials has been the result of a unique technical or cost advantage, the latter of which is difficult to gain against highly developed and large-scale incumbent technologies. But this equation is changing because of accelerating corporate commitments to sustainability and the ability of biomaterials to help companies meet their targets. As biological innovation meets downstream demand, a new wave of the Bio Revolution in chemicals and materials is unfolding—with enormous potential impact.

As biological innovation meets downstream demand, a new wave of the Bio Revolution in chemicals and materials is unfolding—with enormous potential impact.

The three waves of innovation in biomaterials

The first wave of biomaterials spanned the millennia before the age of petroleum. Bio-based materials from plants or animals became a fixture of society and still surround us today in the form of wood, paper, leather, textiles, and numerous other derivatives that are used for adhesives, soaps, pigments, and other substances.

The second wave was catalyzed by the birth of biotech and recombinant DNA technology in the 1980s. These developments gave rise to companies like California-based Genencor and the modern industrial enzyme industry, which has led to dramatic improvements in products ranging from laundry detergent to animal feed.

This second wave reached its zenith when further advances in biotechnology collided with high prices for fossil-based chemical feedstock (oil, gas) and dot-com-era excitement in the mid-2000s, driving a boom in cleantech and biotech investments focused on commodity biofuels and biomaterials. Yet high fossil-based feedstock prices proved fleeting while rising prices on the biomaterials side and high volatility for renewable feedstocks such as corn and sugar through the 2000s further diminished any potential cost advantage. With the subsequent rise of fracking and electric vehicles, sustained fossil-based feedstock prices—more than $100/barrel for oil, for example—looked increasingly unrealistic.

As cost superiority to petrochemical production routes became a less attractive investment, many companies in the biomaterials sector went back to the drawing board and pivoted to specialty applications for which bio-based production could yield unique chemistries. Although the second wave ended with some disappointment, it taught critical lessons in techno-economic discipline while illustrating the enormous potential of biotechnology. .....'

Wearable Tech and Work

 Performance and Commuting, with some prediction of performance from machine learning. 

Wearable Tech Confirms Wear-and-Tear of Work Commute, By Dartmouth College, November 22, 2021

Researchers analyzed data collected from commuting workers, close to 95% of whom drove.

A study of commuting and job performance shows how wearable sensing technology can predict individual work quality based on the daily grind of commuting.

"Traveling to and from the office remains an important part of life that affects the quality of work that people produce," says Andrew Campbell, a professor of computer science at Dartmouth College.

"Assessing the Impact of Commuting on Workplace Performance Using Mobile Sensing," published in IEEE Pervasive Computing, analyzed data from activity trackers and smartphones to capture physiological and behavioral patterns during commuting. The study indicates that high performing workers may be more physically fit and stress resilient. Low performers showed higher stress levels in the times before, during, and after commutes.

"We were able to build machine learning models to accurately predict job performance," says lead author Subigya Nepal.

From Dartmouth College

View Full Article  
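A hedged sketch of the general modeling setup mentioned above: summary features from commute sensing feeding a classifier of high versus low performers. The features, data, and model choice are assumptions for illustration, not the Dartmouth study's pipeline.

```python
# Hedged sketch of predicting a high/low performer label from commute-sensing
# summary features. Features, data, and model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Assumed per-worker features: mean commute heart rate, commute step count,
# and a simple stress index around the commute.
heart_rate = rng.normal(85, 10, n)
steps = rng.normal(1200, 300, n)
stress = rng.normal(0.5, 0.15, n)
X = np.column_stack([heart_rate, steps, stress])
# Toy label: lower stress and more activity loosely mark "high performers".
y = ((stress < 0.5) & (steps > 1100)).astype(int)

model = LogisticRegression()
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```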

Healthcare AI Models from Microsoft

Healthcare AI innovation with Zero Trust technology


Posted on October 26, 2021

John Doyle Chief Technology Officer, Microsoft Health & Life Sciences

From research to diagnosis to treatment, AI has the potential to improve outcomes for some treatments by 30 to 40 percent and reduce costs by up to 50 percent. Although healthcare algorithms are predicted to represent a $42.5B market by 2026, fewer than 35 algorithms have been approved by the FDA, and only two of those are classified as truly novel.1 Obtaining the large data sets necessary for generalizability, transparency, and reducing bias has historically been difficult and time-consuming, due in large part to regulatory restrictions enacted to protect patient data privacy. That’s why the University of California, San Francisco (UCSF) collaborated with Microsoft, Fortanix, and Intel to create BeeKeeperAI. It enables secure collaboration between algorithm owners and data stewards (for example, health systems) in a Zero Trust environment (enabled by Azure Confidential Computing), protecting the algorithm intellectual property (IP) and the data in ways that eliminate the need to de-identify or anonymize Protected Health Information (PHI)—because the data is never visible or exposed.

Enabling better healthcare with AI

By uncovering powerful insights in vast amounts of information, AI and machine learning can help healthcare providers to improve care, increase efficiency, and reduce costs. For example:

AI analysis of chest x-rays predicted the progression of critical illness in COVID-19 patients with a high degree of accuracy.2

An image-based deep learning model developed at MIT can predict breast cancer up to five years in advance.3

An algorithm developed at the University of California, San Francisco can detect pneumothorax (collapsed lung) from CT scans, helping prioritize and treat patients with this life-threatening condition—the first algorithm embedded in a medical device to achieve FDA approval.4

At the same time, the adoption of clinical AI has been slow. More than 12,000 life-science papers described AI and machine learning in 2019 alone.5 Yet the U.S. Food and Drug Administration (FDA) has only approved a little over 30 AI- and machine learning-based medical technologies to date.6 Data access is a major barrier to clinical approval. The FDA requires proof that a model is generalizable, which means that it will perform consistently regardless of patients, environments, or equipment. This standard requires access to highly diverse, real-world data so that the algorithm can train against all the variables it will face in the real world. However, privacy protections and security concerns make such data difficult to access.


Breaking through barriers to model approval

As both an AI innovator and a healthcare data steward, UCSF wanted to break through these challenges. “We needed to find a way that allowed data owners and algorithm developers to share so we could develop bigger data sets, more representative data sets, as well as allowing [data owners] to get exposed to algorithm developers without risking the privacy of the data,” says Dr. Michael Blum, Executive Director of the Center for Digital Health Innovation (CDHI) at UCSF.7

With support from Microsoft, Intel, and Fortanix, UCSF created a platform called BeeKeeperAI. It allows data stewards and algorithm developers to securely collaborate in ways that provide access to real-world, highly diverse data sets from multiple institutions, where AI models are validated and tested without moving or sharing the data or revealing the algorithm. The result is a Zero Trust environment that can dramatically accelerate the development and approval of clinical AI.  ..... ' 

Ahead for Augmented Reality

Heads up for the road ahead.  Good piece.


The Road Ahead for Augmented Reality, By Keith Kirkpatrick

Communications of the ACM, December 2021, Vol. 64 No. 12, Pages 20-22, DOI: 10.1145/3490317

Automotive head-up displays (HUDs), systems that transparently project critical vehicle information into the driver's field of vision, were developed originally for military aviation use, with the origin of the name stemming from a pilot being able to view information with his or her head positioned "up" and looking forward, rather than positioned "down" to look at the cockpit gauges and instruments. The HUD projects and superimposes data in the pilot's natural field of view (FOV), providing the added benefit of eliminating the pilot's need to refocus when switching between the outside view and the instruments, which can impact reaction time, efficiency, and safety, particularly in combat situations.

In cars, the main concern is distracted driving, or the act of taking the driver's attention away from the road. According to the National Highway Traffic Safety Administration, distracted driving claimed 3,142 lives in 2019, the most recent year for which statistics have been published. Looking away from the road for even five seconds at a speed of 55 mph is the equivalent of driving the length of a football field with one's eyes closed.
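
That football-field comparison holds up to simple arithmetic; a quick back-of-the-envelope check (plain unit conversion, not taken from the article):

```python
# Distance covered in 5 seconds at 55 mph.
mph = 55
seconds = 5
feet_per_mile = 5280
seconds_per_hour = 3600

feet = mph * feet_per_mile / seconds_per_hour * seconds
print(round(feet))  # ~403 feet, longer than a 360-foot football field including end zones
```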

As such, the desire to ensure that drivers keep their eyes focused on the road, instead of looking down at the gauges on the dashboard, was the impetus for the development of HUDs suitable for use in production automobiles. The first automotive HUD that was included as original equipment was found on the 1988 Oldsmobile Cutlass Supreme and Pontiac Grand Prix; both were monochromatic, and displayed only a digital readout of the speedometer.

Thanks to the increasing inclusion of a variety of automotive sensors and cameras, advanced driver assistance system (ADAS) features and functions (such as automatic braking, forward collision avoidance, lane-keeping assist, and blind-spot monitoring, among others), and more powerful on-vehicle processors, automakers have been installing HUD units in commercial vehicles that provide more essential driving data, such as speed, engine RPMs, compass heading, directional signal indicators, fuel economy, and other basic information, allowing the driver to concentrate on the road instead of looking down to check the dash or an auxiliary screen.

The technology enabling most types of HUD is based on the use of a processor to generate a digital image of data coming from sensors. These images then are digitally projected from a unit located in the dash of the car onto a mirror or mirrors, which then reflect that image onto either a separate screen located behind the steering wheel, or onto the vehicle's windshield, directly in the driver's forward view. Common projection and display technologies used include liquid crystal display (LCD), liquid crystal on silicon (LCoS), digital micromirror devices (DMDs), and organic light-emitting diodes (OLEDs), which have replaced the cathode ray tube (CRT) systems used in the earliest HUDs, as they suffered from brightness degradation over time.

The HUDs that project the information onto a separate transparent screen are called combiner HUDs; these were popular because the physical space required to install the system was modest, and because the system was fully integrated, OEMs did not need to design a system that accounted for each vehicle's unique windshield angle or position. However, this type of HUD was limited by several factors; namely, the optical viewing path of a combiner HUD is shorter than looking through a windshield, and the driver's eyes must refocus slightly to the shorter visual distance when switching between looking out the windshield and checking the display. Furthermore, there is a practical limit to the size and field of view (FOV) offered by combiner units; adding mirrors and a larger combiner screen would appear obtrusive and less elegant in a modern vehicle than simply using the windshield as a display surface.

Because HUDs were far from being standard automotive equipment in most vehicles, companies such as HUDWAY and Navdy had produced phone mounts and screens designed to allow a smartphone to operate as a head-up display. Essentially, these designs functioned as combiner systems, in that they required a separate screen on which to view the display and suffered from many of the same limitations as OEM-equipped combiner HUD systems. While Navdy went out of business in 2018, HUDWAY is still accepting orders for its HUDWAY Drive system at a cost of $229 per unit.

The technical limitations of combiner systems have driven most automotive OEMs to offer HUDs that project information directly onto the windshield and contain a far greater amount of data, known as W-type HUDs. These more advanced systems incorporate ADAS status information (such as the status of adaptive cruise control, automatic braking, collision-avoidance, infrared night-vision, and lane-keeping assist systems) and, eventually, semi-autonomous self-driving system data.

The most advanced systems include augmented reality technology, which involves superimposing specific enhanced symbols or images into the HUD onto real-world objects or roadways to provide more information, detail, and clarity to the driver. Some systems will also incorporate data from GPS navigation systems, such as clear directional graphics, street names, augmented lane markings, signposts and route numbers, and even representations of other vehicles/objects on the road. Examples of vehicles that include this technology today include the Audi Q4 E-tron, Mercedes Benz S Class, and the Hyundai IONIQ 5.   ..... ' 

Sunday, November 21, 2021

Robots Using Echolocation

Recall this being proposed as a navigation solution, but noise in general was a problem and the needed processing speed was not there. Now solved?

Robots Can Use Their Own Whirring to Echolocate, Avoid Collisions

In New Scientist, November 19, 2021

Robots can navigate and avoid collisions using the sounds they produce through echolocation as bats do, according to researchers at Denmark's Aalborg University and France's Universite de Lorraine.  Aalborg's Jesper Rindom Jensen and colleagues suggested robots can detect obstacles like walls or other robots by picking up sounds reflected off those objects.

Onboard computers can measure the time it takes for noise from the robot to reach a surface and be reflected back to a microphone on the robot, detecting obstacles as far off as one meter (three feet). Earlier research yielded a device that beamed sound around itself to navigate, but laboratory experiments demonstrated that background noise created by the robot can accomplish the same task.  ... " 
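
The underlying geometry is a standard time-of-flight calculation; a minimal sketch (the numbers are illustrative, not taken from the study):

```python
# Time-of-flight ranging: the robot's own noise travels to an obstacle and back.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def distance_to_obstacle(round_trip_seconds: float) -> float:
    """Half the round-trip path is the one-way distance to the reflecting surface."""
    return SPEED_OF_SOUND * round_trip_seconds / 2

# An echo arriving ~5.8 ms after emission puts the obstacle at roughly 1 m,
# about the detection range reported above.
print(f"{distance_to_obstacle(0.0058):.2f} m")  # ~0.99 m
```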

Amazon Failing to Protect Your Data?

Implications for future use?

Amazon's Dark Secret: It Has Failed to Protect Your Data, By Wired, November 19, 2021  in ACM News

On September 26, 2018, a row of tech executives filed into a marble- and wood-paneled hearing room and sat down behind a row of tabletop microphones and tiny water bottles. They had all been called to testify before the U.S. Senate Commerce Committee on a dry subject—the safekeeping and privacy of customer data—that had recently been making large numbers of people mad as hell.

Committee chair John Thune, of South Dakota, gaveled the hearing to order, then began listing events from the past year that had shown how an economy built on data can go luridly wrong. It had been 12 months since the news broke that an eminently preventable breach at the credit agency Equifax had claimed the names, social security numbers, and other sensitive credentials of more than 145 million Americans. And it had been six months since Facebook was engulfed in scandal over Cambridge Analytica, a political intelligence firm that had managed to harvest private information from up to 87 million Facebook users for a seemingly Bond-villainesque psychographic scheme to help put Donald Trump in the White House.

To prevent abuses like these, the European Union and the state of California had both passed sweeping new data privacy regulations. Now Congress, Thune said, was poised to write regulations of its own. "The question is no longer whether we need a federal law to protect consumers' privacy," he declared. "The question is, what shape will that law take?" Sitting in front of the senator, ready to help answer that question, were representatives from two telecom firms, Apple, Google, Twitter, and Amazon.

From Wired   View Full Article  

Explosive sensing with insect-based biorobots

Smell is again something we examined in retail interaction; here the application is much more dramatic.

Explosive sensing with insect-based biorobots  in ScienceDirect

Highlights

A bio-robotic sensing system exploiting an insect's sense of smell is demonstrated. Neural signals from the insect brain were tapped and used for odor recognition. Ability to detect and distinguish amongst explosive chemical vapors is demonstrated. Target recognition within a few hundred milliseconds of exposure was achieved.

Abstract

Stand-off chemical sensing is an important capability with applications in several domains including homeland security. Engineered devices for this task, popularly referred to as electronic noses, have limited capacity compared to the broad-spectrum abilities of the biological olfactory system. Therefore, we propose a hybrid bio-electronic solution that directly takes advantage of the rich repertoire of olfactory sensors and sophisticated neural computational framework available in an insect olfactory system. We show that select subsets of neurons in the locust (Schistocerca americana) brain were activated upon exposure to various explosive chemical species (such as DNT and TNT). Responses from an ensemble of neurons provided a unique, multivariate fingerprint that allowed discrimination of explosive vapors from non-explosive chemical species and from each other. Notably, target chemical recognition could be achieved within a few hundred milliseconds of exposure. In sum, our study provides the first demonstration of how biological olfactory systems (sensors and computations) can be hijacked to develop a cyborg chemical sensing approach.   ... '
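
The "multivariate fingerprint" idea, treating the ensemble of neural responses as a vector and matching it against known odor classes, can be sketched with a toy nearest-centroid classifier. The vectors and labels below are invented for illustration; the paper's actual decoding pipeline is more involved.

```python
# Toy illustration of classifying a multivariate neural "fingerprint".
# Values and labels are made up; this is not the study's data or method.
import numpy as np

# Mean response vectors (one value per recorded neuron) for known vapors.
centroids = {
    "DNT":    np.array([0.9, 0.1, 0.4, 0.7]),
    "TNT":    np.array([0.8, 0.6, 0.2, 0.3]),
    "benign": np.array([0.1, 0.2, 0.1, 0.2]),
}

def classify(response: np.ndarray) -> str:
    """Assign a new response to the nearest known centroid (Euclidean distance)."""
    return min(centroids, key=lambda label: np.linalg.norm(response - centroids[label]))

print(classify(np.array([0.85, 0.15, 0.35, 0.65])))  # -> "DNT"
```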

Saturday, November 20, 2021

Moon Oxygen

But how usable?    Quite a claim.

The Moon’s Surface Has Enough Oxygen to Sustain 8 Billion People for 100,000 Years  in SingularityHub

By John Grant, Nov 14, 2021

Alongside advances in space exploration, we’ve recently seen much time and money invested into technologies that could allow effective space resource utilization. And at the forefront of these efforts has been a laser-sharp focus on finding the best way to produce oxygen on the moon.

In October, the Australian Space Agency and NASA signed a deal to send an Australian-made rover to the moon under the Artemis program, with a goal to collect lunar rocks that could ultimately provide breathable oxygen on the moon.

Although the moon does have an atmosphere, it’s very thin and composed mostly of hydrogen, neon, and argon. It’s not the sort of gaseous mixture that could sustain oxygen-dependent mammals such as humans.  ..... ' 

Securing Your Digital Life

Some good thoughts.

Securing your digital life, the finale: Debunking worthless “security” practices  in ArsTechnica

We tear down some infosec conventional wisdom—there's a lot of bad advice out there.

By SEAN GALLAGHER 

Information security and privacy suffer from the same phenomenon we see in fighting COVID-19: "I've done my own research" syndrome. Many security and privacy practices are things learned second- or third-hand, based on ancient tomes or stuff we've seen on TV—or they are the result of learning the wrong lessons from a personal experience.

I call these things "cyber folk medicine." And over the past few years, I've found myself trying to undo these habits in friends, family, and random members of the public. Some cyber folkways are harmless or may even provide a small amount of incidental protection. Others give you a false sense of protection while actively weakening your privacy and security. Yet some of these beliefs have become so widespread that they've actually become company policy.  ... ' 

Ethereum to Remove Miners?

Had looked at other non-mining approaches, but these had other issues.

Bye-Bye, Miners! How Ethereum’s Big Change Will Work

By The Washington Post, August 19, 2021

Ethereum currently handles about 30 transactions per second. With sharding, Vitalik Buterin, the inventor of Ethereum, thinks that could go to 100,000 per second.

Ethereum is making big changes. Perhaps the most important is the jettisoning of the "miners" who track and validate transactions on the world's most-used blockchain network. Miners are the heart of a system known as proof of work. It was pioneered by Bitcoin and adopted by Ethereum, and has come under increasing criticism for its environmental impact: Bitcoin miners now use as much electricity as some small nations. Along with being greener and faster, proponents say the switch, now planned to be phased in by early 2022, will illustrate another difference between Ethereum and Bitcoin: A willingness to change, and to see the network as a product of community as much as code.

1. How are Bitcoin and Ethereum transactions tracked?

Cryptocurrencies wouldn't work without a new type of technology called blockchain that performs an old-fashioned function: maintaining a ledger of time-ordered transactions. What's different from pen-and-paper records is that the ledger is shared on computers all around the world and operated not by a central authority, like a government or a bank, but by anyone who wants to take part. Satoshi Nakamoto is the mysterious and still-unknown creator of Bitcoin and its blockchain. What Nakamoto accomplished through the proof of work system was solving the so-called double-spend problem that plagued earlier digital cash projects: Because the blockchain records every single transaction on its network, someone trying to reuse a Bitcoin that has already been spent would be easily caught. ..... ' 

From The Washington Post

View Full Article  
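
The double-spend protection described above follows directly from the ledger being append-only and complete: a coin spent twice shows up as soon as the full history is scanned. A minimal sketch of that check, simplified to the point of toy code and nothing like real Bitcoin or Ethereum validation:

```python
# Toy double-spend check: every transaction references the coins (outputs) it spends;
# a coin that appears twice as an input across the ledger is a double spend.
from typing import Dict, List, Set

ledger: List[Dict] = [
    {"txid": "tx1", "spends": [],        "creates": ["coinA"]},
    {"txid": "tx2", "spends": ["coinA"], "creates": ["coinB"]},
    {"txid": "tx3", "spends": ["coinA"], "creates": ["coinC"]},  # reuses coinA
]

def find_double_spends(txs: List[Dict]) -> Set[str]:
    spent: Set[str] = set()
    doubled: Set[str] = set()
    for tx in txs:
        for coin in tx["spends"]:
            if coin in spent:
                doubled.add(coin)  # already consumed by an earlier transaction
            spent.add(coin)
    return doubled

print(find_double_spends(ledger))  # {'coinA'}
```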

Starbucks and Amazon Go Joint Concept Stores

 Seems a novel approach:

Starbucks and Amazon open first joint concept store with more to come

by George Anderson  in Retailwire   with expert comments

Starbucks and Amazon.com have teamed up to open the first of at least three concept stores between the two Seattle-based retailing giants. The location at 59th Street between Park and Lexington Avenues in New York City combines Starbucks’ Pickup concept with a cashierless Amazon Go convenience store.


The store is designed with Starbucks branding in mind, incorporating the chain’s traditional green blended with wood furnishings and stone countertops. The Pickup concept emphasizes the convenience of placing and paying for orders ahead using the Starbucks app. A digital screen in the store displays the status of customers’ orders, which they can pick up from a barista when complete. Walk-in customers are also welcome and have the option of ordering at the counter to take out or consume on-premises at a counter, booth or table.

Accessing the Amazon Go area of the store will require customers to call up an “in-store code” on the  Amazon app. As customers pull desired sandwiches and other items off store shelves, the app places them in a virtual cart and then submits the charges upon exiting the area.  ... ' 
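
The "just walk out" flow is, at heart, an event-driven cart keyed to the shopper's in-store code. A rough sketch of the idea follows; the class and method names are entirely hypothetical and not Amazon's actual system.

```python
# Hypothetical sketch of a cashierless virtual cart driven by shelf-sensor events.
from collections import Counter

class VirtualCart:
    def __init__(self, shopper_code: str):
        self.shopper_code = shopper_code   # the in-store code scanned at the gate
        self.items = Counter()

    def picked_up(self, sku: str, price: float):
        self.items[(sku, price)] += 1      # sensors saw the item leave the shelf

    def put_back(self, sku: str, price: float):
        if self.items[(sku, price)] > 0:
            self.items[(sku, price)] -= 1  # returned to the shelf, no charge

    def checkout(self) -> float:
        """Charges are submitted automatically when the shopper exits the Go area."""
        return sum(price * qty for (_, price), qty in self.items.items())

cart = VirtualCart(shopper_code="ABC123")
cart.picked_up("sandwich", 6.99)
cart.picked_up("cold-brew", 4.49)
cart.put_back("cold-brew", 4.49)
print(f"${cart.checkout():.2f}")  # $6.99
```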

Friday, November 19, 2021

Novartis and Pharma AI

Fairly broad look at how drug discovery and development can use AI:

Novartis empowers scientists with AI to speed the discovery and development of breakthrough medicines

By Bill Briggs

Here’s a cooking story unlike any you’ve heard before. That’s because the chefs are chemists, the ingredients are molecules, and the main course is a new medication designed to defeat illness.

At least, that’s Luca Finelli’s snackable description to explain in simple terms how scientists at Novartis are searching for breakthrough medicines powered by artificial intelligence (AI), part of a collaboration with Microsoft to get medicines to patients faster.

But that recipe hinges on the scientists’ ability to predict which blend of molecules can be transformed into medicines – a tedious process that traditionally takes decades and can cost billions.

“Creating the formulation to a drug is a bit like cooking,” says Finelli, vice president and head of insights, strategy and design at Novartis, a multinational pharmaceutical company headquartered in Basel, Switzerland.

“Typically, the formulation scientist needs to decide, ‘I will take this amount of this ingredient A and some amount of this ingredient B.’ They then try different combinations,” Finelli adds.

Each molecular combo must next be tested to gauge efficacy, stability, safety and more. Conducting those experiments can span years. And most promising drug candidates fail somewhere during that long journey.

But by leveraging the power of AI in collaboration with Microsoft, Novartis researchers may be able to shorten that process to weeks or even days.

How? Tools that use AI can sift quickly through stores of data and results from decades of laboratory experiments and suggest molecules with the desired characteristics that are optimized for the medicinal task at hand. Those drug leads might then be fast-tracked for additional testing and, if proven safe and effective, potentially be developed and manufactured as a remedy for illness. This AI-bolstered process could cut out years of trial-and-error experimenting with molecules that are less than ideal.

In fact, that functionality already has been “integrated into the decision-support system in front of our medicinal chemists,” says Shahram Ebadollahi, chief data and AI officer at Novartis.

The potential human impacts are vast, Ebadollahi says.

“If you look at every aspect of the pipeline – from early drug discovery and drug development to clinical trials and then on to manufacturing the drug at large scale – in 2020 alone, our medicines reached almost 800 million patients worldwide,” Ebadollahi says.

To accomplish this feat, Novartis scientists create molecules that have never been made, and these molecules will help develop new medicines to combat diseases for which there are no treatments, says Karin Briner, head of global discovery chemistry at Novartis Institutes for BioMedical Research.

The foundation for this work is the 2019 strategic partnership between Novartis and Microsoft to “reimagine medicine” by founding the Novartis AI Innovation Lab. The goal of that alliance is to help accelerate drug discovery for patients worldwide by augmenting scientists with cutting-edge technology platforms.

“Microsoft brings two things,” says Chris Bishop, lab director for Microsoft Research Europe.

“We bring our expertise in machine learning and our large-scale compute. Those don’t exist in the pharma world. And Microsoft can’t take this on (independently). We’re not a pharma company. So the partnership is absolutely crucial,” Bishop says. “That’s how the disruption will unfold. That collaboration is at the heart of this.”

Machine learning is a key part of AI, enabling computers to use algorithms to find patterns and trends within huge sets of data.

At Novartis, researchers can apply AI to comb through a trove of lab data from thousands of past drug-development experiments – findings buried in PDFs, Excel tables and written descriptions of the chemical properties of previously explored molecules.  .... " 
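
At its simplest, the "sift and suggest" step described above is a ranking problem: score each candidate molecule against the desired property profile learned from past experiments and surface the best few for wet-lab testing. A heavily simplified sketch; the property names, weights, and molecules are invented for illustration, and a real pipeline would use trained property-prediction models over decades of assay data.

```python
# Toy ranking of candidate molecules against desired property targets.
candidates = {
    "molecule_A": {"solubility": 0.72, "stability": 0.60, "toxicity": 0.10},
    "molecule_B": {"solubility": 0.40, "stability": 0.85, "toxicity": 0.05},
    "molecule_C": {"solubility": 0.90, "stability": 0.30, "toxicity": 0.40},
}

# Desired profile: high solubility and stability, low predicted toxicity.
weights = {"solubility": 1.0, "stability": 1.0, "toxicity": -2.0}

def score(props: dict) -> float:
    return sum(weights[k] * v for k, v in props.items())

ranked = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
print(ranked)  # fast-track the top-ranked candidates for further testing
```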

Imagine if Your Therapist Could Access Your Smartphone

A very interesting thought: will the contents of our phones provide maps to our psyches in the future? Useful, but beware of possible misuses and the privacy implications.

Imagine if Your Therapist Could Access Data From Your Smartphone

The Wall Street Journal, by Laura Landro

Scientists are designing and testing applications that collect smartphone data to enhance psychiatric therapy and help therapists make more timely interventions. Researchers at the Harvard University-affiliated Beth Israel Deaconess Medical Center have developed an app called mindLAMP designed to provide therapists with an overview of patient behavior and mental status. It uses smartphone sensors to collect behavioral data like screen time and sleep, and information from patients through surveys and cognitive tests; doctors review the data to evaluate patients' mental states and to tailor therapy regimens with them. Boston University researchers created the Motivation and Skills Support app to deliver targeted social-goal assistance to patients based on location, movement, and audio data continuously collected by their phones. University of Washington scientists are exploring the use of online search-history data to better understand suicide risk and design methods to detect and prevent it.

Thursday, November 18, 2021

Developers are Better with Automation

Will have to deal with this as automation progresses. How about the influence on, say, the security of the code being written?

Developers Are Better With Automation and Reusable Code

ACM CAREERS

Developers Are Better With Automation and Reusable Code, Report Says

By Tech Republic, November 18, 2021

Software teams are changing their coding processes to fit the new dynamics of remote work, according to GitHub's 2021 State of the Octoverse. That means reusing code, embracing automation, and getting better at documentation. The research combines telemetry from more than 4 million repositories and a survey of about 12,000 developers.

Automating software delivery is important to open source work and helps teams go faster at scale, the research found. Teams that use Actions "merge almost 2x more pull requests per day than before (61% increase) and they merge 31% faster," the report says. Automation helps teams communicate better and more clearly, which also helps build a better culture, according to the report.

Reuse is another key to making the development process go faster, with performance increasing by up to 87%. Reuse also helps open source projects, which see double the performance improvement compared with processes that are slow or have multiple approval layers.

Investing in documentation was found to have a direct impact on productivity. The research found that documentation gives developers a 50% increase in productivity.

From Tech Republic

View Full Article    

Towards Mind Mapping

Have used mind mapping for a number of serious applications, both for documenting and for building, but it seems to be less mentioned of late. Here Google talks up its usefulness.

How mind mapping can help creators make better content

Nov 02, 2021, Sarah Han, Google Web Creators team

Creativity can be a messy process. Great ideas and inspiration don’t come easily on command, or in any organized way. And even when we’re in the creative zone, our brains can sometimes get too overloaded and overwhelmed to actually get anything done. That’s why some people use mind mapping, or visual brainstorming, to stay on top of their game.

Markus Müller-Simhofer, founder of the digital mind mapping app MindNode, saw major changes when he started visualizing his creative process. He recalls the first time he realized what a powerful tool mind mapping could be. While developing an app, Markus found that although he had tons of ideas, he wasn’t making any progress. “Out of this frustration, I started to look into techniques to sort my ideas and find focus. Mind mapping best fit how my brain works,” he says.

Mind mapping worked so well for Markus that he eventually scrapped his original app idea and started developing MindNode. “This was 14 years ago and today, I am still working on it — together with a team of 10 people.”

We recently chatted with Markus about how creators can use mind mapping to make better content.  .... ' 

AI Risk and Coming Regulation

More needs to be done here, though predicting it will be hard. Consider risk analyses under varying scenarios.

Assess AI Risk to Prepare for Coming AI Regulations  

September 30, 2021 

AI regulations are coming, with multiple acts being proposed in the US Congress, and AI experts are sharing advice on how to prepare.  (Credit: European Commission) 

By John P. Desmond, AI Trends Editor 

Since the European Commission in April proposed rules and a legal framework in its Artificial Intelligence Act (See AI Trends, April 22, 2021), the US Congress and the Biden Administration have followed with a range of proposals that set the direction for AI regulation.  

“The EC has set the tone for upcoming policy debates with this ambitious new proposal,” stated authors of an update on AI regulations from Gibson Dunn, a law firm headquartered in Los Angeles.  

Unlike the comprehensive legal framework proposed by the European Union, regulatory guidelines for AI in the US are being proposed on an agency-by-agency basis. Developments include the US Innovation and Competition Act of 2021, described by Gibson Dunn as “sweeping bipartisan R&D and science-policy regulation,” which moved rapidly through the Senate.

“While there has been no major shift away from the previous “hands off” regulatory approach at the federal level, we are closely monitoring efforts by the federal government and enforcers such as the FTC to make fairness and transparency central tenets of US AI policy,” the Gibson Dunn update stated.  

Many in the AI community are acknowledging the lead role being taken on AI regulation by the European Commission, and many see it as the inevitable path.  

European Commission’s AI Act Seen as “Reasonable” 

Johan den Haan, CTO, Mendix

“Right now, every forward-thinking enterprise in the world is trying to figure out how to use AI to its advantage. They can’t afford to miss the opportunities AI presents. But they also can’t afford to be on the wrong side of the moral equation or to make mistakes that could jeopardize their business or cause harm to others,” stated Johan den Haan, CTO at Mendix, a company offering a model-driven, low-code approach for building AI systems, writing recently in Forbes.  .... ' 

Wednesday, November 17, 2021

Nokia to Launch Cloud-Based Software Subscription

Nokia to launch cloud-based software subscription service

Finnish telecommunications giant Nokia has announced several Software as a Service (SaaS) offerings aimed at communication service providers (CSPs).

The company says its software, some of which is already available and some of which will be released next year, will provide solutions for analytics, security, and data management.

For early 2022, Nokia has a couple of cloud-based services lined up, including NetGuard Cybersecurity Dome and Anomaly Detection. Both are related to cybersecurity and endpoint protection, with the former aiming to reduce malicious actors’ network dwell time and cut down on manual tasks and response times for 5G networks. The latter, on the other hand, does just what the name suggests: it’s a machine learning tool that learns what proper network behavior looks like and seeks to flag anomalies.  .... ' 
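
Anomaly detection of this kind typically amounts to learning a baseline of normal behavior and flagging observations that fall far outside it. A minimal z-score sketch of that pattern, illustrative only and not Nokia's product:

```python
# Minimal baseline-and-threshold anomaly detector for a network metric stream.
import statistics

baseline = [120, 118, 125, 122, 119, 121, 123, 117]  # e.g. normal requests/sec
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(value: float, threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the learned mean."""
    return abs(value - mean) / stdev > threshold

print(is_anomalous(124))  # False: within normal variation
print(is_anomalous(480))  # True: far outside the learned baseline
```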

Seeing around the Corner

Mentioned a number of times here.  The ultimate privacy loss?

Real-Time Video of Scenes Hidden Around Corners Now Possible

University of Wisconsin-Madison News, Eric Hamilton, November 11, 2021

Researchers at the University of Wisconsin-Madison (UW) and Italy's Polytechnic University of Milan have created a non-line-of-sight imaging technique that can display video of hidden scenes in real time. The technique combines ultra-fast and highly sensitive light sensors with an advanced video reconstruction algorithm. The method captures information about a scene by bouncing light off a surface and detecting the echoes of that light as it bounces back; it sees around corners by detecting the reflections of those echoes. UW’s Andreas Velten said, “It’s basically echolocation, but using additional echoes—like with reverb.”  .... ' 
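
The echo timing works the same way as acoustic ranging, just with the speed of light and an extra bounce off the relay surface; a simplified path-length calculation (the numbers are illustrative, not from the study):

```python
# Simplified non-line-of-sight path length from an echo's round-trip time.
C = 299_792_458.0  # speed of light, m/s

def total_path_length(round_trip_seconds: float) -> float:
    """Length of the full laser -> wall -> hidden object -> wall -> sensor path."""
    return C * round_trip_seconds

# A pulse returning ~20 ns after emission has travelled about 6 m in total;
# reconstruction algorithms turn many such timings into an image of the hidden scene.
print(f"{total_path_length(20e-9):.2f} m")  # ~6.00 m
```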

Open Source Intelligence

Interesting podcast from Recorded Future; the approach was new to me.

Maximizing the Value of Open Source Intelligence

Recorded Future Blog - Predictive Analyt...by Caitlin Mattingly 

Podcast Episode 230

Our guest this week is Harry Kemsley. He’s president of national security and government at defense intelligence organization, Janes. Prior to joining Janes, he spent 25 years in the Royal Air Force.

Harry Kemsley is author of a recent opinion piece published in The Hill, titled “In OSINT We Trust?” In it, he makes the case that many intelligence organizations around the world would do well to increase their use of open source intelligence. To do that, there are cultural issues regarding the reliance on classified sources that may need to be overcome, but in the end, he believes the benefits are worthwhile. .... ' 

AI Development in DOD, Government.

A brief examination of the problem.

Best Practices for Building the AI Development Platform in Government 

October 28, 2021

The US Army and other government agencies are defining best practices for building appropriate AI development platforms for carrying out their missions. (Credit: Getty Images) 

By John P. Desmond, AI Trends Editor 

The AI stack defined by Carnegie Mellon University is fundamental to the approach being taken by the US Army for its AI development platform efforts, according to Isaac Faber, Chief Data Scientist at the US Army AI Integration Center, speaking at the AI World Government event held in-person and virtually from Alexandria, Va., last week.  

Isaac Faber, Chief Data Scientist, US Army AI Integration Center

“If we want to move the Army from legacy systems through digital modernization, one of the biggest issues I have found is the difficulty in abstracting away the differences in applications,” he said. “The most important part of digital transformation is the middle layer, the platform that makes it easier to be on the cloud or on a local computer.” The desire is to be able to move your software platform to another platform, with the same ease with which a new smartphone carries over the user’s contacts and histories.  

Ethics cuts across all layers of the AI application stack, which positions the planning stage at the top, followed by decision support, modeling, machine learning, massive data management and the device layer or platform at the bottom.  

“I am advocating that we think of the stack as a core infrastructure and a way for applications to be deployed and not to be siloed in our approach,” he said. “We need to create a development environment for a globally-distributed workforce.”   

The Army has been working on a Common Operating Environment Software (Coes) platform, first announced in 2017, a design for DOD work that is scalable, agile, modular, portable and open. “It is suitable for a broad range of AI projects,” Faber said. For executing the effort, “The devil is in the details,” he said.   

The Army is working with CMU and private companies on a prototype platform, including with Visimo of Coraopolis, Pa., which offers AI development services. Faber said he prefers to collaborate and coordinate with private industry rather than buying products off the shelf. “The problem with that is, you are stuck with the value you are being provided by that one vendor, which is usually not designed for the challenges of DOD networks,” he said.