Wednesday, February 17, 2021

Fabricating Functional Devices and Drones

3D Printing and more.  Impressive too for spacecraft?  

Fabricating Fully Functional Drones  From MIT CSAIL

Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a three-dimensional (3D) printing system to manufacture functional, custom-made devices and robots like drones, without human intervention. CSAIL's Martin Nisser said, "By leveraging widely available manufacturing platforms like 3D printers and laser cutters, LaserFactory is the first system that integrates these capabilities and automates the full pipeline for making functional devices in one system." LaserFactory combines a software toolkit for custom design with a hardware platform, enabling users to fabricate structural geometry, print traces, and build electronic components like sensors and actuators. Said Nisser, "Beyond engineering, we're also thinking about how this kind of one-stop shop for fabrication devices could be optimally integrated into today's existing supply chains for manufacturing, and what challenges we may need to solve to allow for that to happen."


Perseverance Rover Landing, Testing Robotics

I am a long-time follower of advanced robotics. The Mars rover is set to arrive tomorrow, with new advances in torque and force sensing for robot arms in a harsh environment. I look forward to seeing more demonstrated.

NASA’s Mars Rover Required a Special Touch for Its Robotic Arms

ATI Industrial Automation brings its Force/Torque Sensor from the factory floor to the harsh environment of Mars’ surface.

In July, NASA launched the most sophisticated rover the agency has ever built: Perseverance. Scheduled to land on Mars in February 2021, Perseverance will be able to perform unique research into the history of microbial life on Mars in large part due to its robotic arms. To achieve this robotic capability, NASA needed to call upon innovation-driven contractors to make such an engineering feat a reality.

One of the companies that NASA enlisted to help develop Perseverance was ATI Industrial Automation. NASA looked to have ATI adapt the company's own Force/Torque Sensor to enable the robotic arm of Perseverance to operate in the environment of space. ATI Force/Torque sensors were initially developed to enable robots and automation systems to sense the forces applied while interacting with their environment in operating rooms or on factory floors. ... " 

On the Ethics of AI

Concise piece on the topic from Cisco

Analytics & Automation

Ethics of Artificial Intelligence  in Cisco Blog

Utkarsh Srivastava

Intelligent machines have helped humans in achieving great endeavors. Artificial Intelligence (AI) combined with human experience has resulted in quick wins for stakeholders across multiple industries, with use cases ranging from finance to healthcare to marketing to operations and more. There is no denying the fact that Artificial Intelligence (AI) has helped in quicker product innovation and an enriched user experience. Some of these use cases include context-aware marketing, sales forecasting, conversational analytics, fraud detection, credit scoring, drug testing, pregnancy monitoring, self-driving cars – a never-ending list of applications.

But the very idea of developing smart machines (AI-powered systems) raises numerous ethical concerns. What is the probability that these smart machines won’t harm humans or other morally relevant beings? Matthew Hutson, a research scientist from the Massachusetts Institute of Technology mentioned in one of his studies that AI algorithms embedded in digital and social media technologies can reinforce societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and impair mental well-being. Earlier discussions related to this concept of “Data and AI ethics” were only limited to non-profit organizations and academic institutions. But with the rapidly changing industry spectrum, global tech giants are putting together fast-growing teams to handle the ethics of AI. And as these companies have invested more due diligence into the challenge, they’ve discovered that the majority of these ethical issues arise during the lifecycle of data resulting from the widespread collection and processing of data to train AI models. ... '

GIS Data for Covid Vaccination Distribution

 Off to get my Covid vaccination this morning and saw this:

How GIS Data Can Help Fix Vaccine Distribution in Information Week

Esri Chief Medical Officer Este Geraghty explains how geographic data and maps can streamline COVID-19 vaccine distribution planning.

Chances are you know someone who has received a COVID-19 vaccine. About one in 10 Americans have been vaccinated so far. But as 50 states with 50 different plans scramble to get their populations vaccinated against the novel coronavirus, the race is on for the US to get to a point where all the people who want a shot get a shot.

Any effort of this magnitude is bound to run into logistical and execution challenges along the way. How do you allocate the correct number of doses to each state and to each facility delivering vaccines? How many workers do you need to administer the shots? How far do people have to travel to receive shots?

Another necessary factor that complicates vaccine delivery is that the US is phasing its approach, vaccinating healthcare workers and essential employees and older citizens first. Vaccines are being administered at veterans’ services centers, state sites, hospitals, and many other venues. The government is working to establish a retail pharmacy vaccination program. It's a complex delivery system.  ... "

Tuesday, February 16, 2021

IBM on AI Explainability

Think of how humans interact in a conversation. We require an appropriate, in-context level of explainability to support and trust what we hear. We would expect an intelligent agent to answer questions like: Tell me more about that. How did it get to that conclusion? What data was used? How will the results of the AI outputs be used? What are the risks involved in using the results? ....

IBM’s Arin Bhowmick explains why AI trust is hard to achieve in the enterprise  By Michael Vizard  @mvizard,   in Venturebeat  February 16, 2021  

While appreciation of the potential impact AI can have on business processes has been building for some time, progress has not been nearly as quick as many initial forecasts led organizations to expect.

Arin Bhowmick, chief design officer for IBM, explained to VentureBeat what needs to be done to achieve the level of AI explainability that will be required to take AI to the next level in the enterprise. ... ' 
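To make questions like "what data was used?" concrete, here is a minimal, generic sketch of one common explainability building block: ranking a model's inputs by learned importance with scikit-learn. This is an illustration of the general idea, not IBM's tooling.

```python
# Hedged sketch: global feature importances from a tree ensemble help answer
# "which inputs drove the model's conclusions?". Generic example, not IBM's.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(data.data, data.target)

# Rank inputs by how much they influenced the fitted model's decisions.
ranked = sorted(zip(data.feature_names, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Per-prediction explanations (SHAP values, for instance) go a step further, but even this global view supports the "how did it get to that conclusion?" conversation.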

US Cyber Command

 I did training at Ft Meade.  Have not taken a look at the US Cyber Command site for some time. Note their new training platform.  Worth following. 

https://www.cybercom.mil/

Fort Meade, Md. - PCTE is a training platform supporting standardized Joint Cyberspace Operations Forces individual sustainment training, team certification, mission rehearsal and provides the foundation for collective training exercises. It leverages existing connectivity to facilitate the sharing of resources, and provides additional cyber “maneuver space.” PCTE enables realistic training with variable conditions to increase readiness and lethality of our Cyberspace Operations Forces, while standardizing, simplifying, and automating the training management process.  ... 

AI Behind in Recognizing Emotions

Fairly clear, and consistent with our experience, especially when the emotion was of the 'satire' type. Faces can also be used to signal opposite emotions.

Artificial intelligence still lags behind humans at recognising emotions  by University College London

When it comes to reading emotions on people's faces, artificial intelligence still lags behind human observers, according to a new study involving UCL.

The difference was particularly pronounced when it came to spontaneous displays of emotion, according to the findings published in PLOS One.

The research team, led by Dublin City University, looked at eight "out of the box" automatic classifiers for facial affect recognition (artificial intelligence that can identify human emotions on faces) and compared their emotion recognition performance to that of human observers.

The researchers found that human recognition accuracy of emotions was 72%, whereas among the artificial intelligence systems tested, they observed recognition accuracy ranging from 48% to 62%.

Lead author Dr. Damien Dupré (Dublin City University) said: "AI systems claiming to recognize humans' emotions from their facial expressions are now very easy to develop. However, most of them are based on inconclusive scientific evidence that people are expressing emotions in the same way.  .... "

Boosting Inbound Sales

This happened to come up in a conversation last week.   Good summary below and more detail at the link.

4 Behaviors that Boost Inbound Sales  by Matthew Dixon, Ted McKenna, and Tom Shepherd  in  HBR

Summary.   

Particularly during the pandemic, when face-to-face visits with customers have been constrained, inbound selling in call centers has become more important to company revenue. New research uses recordings of millions of such calls, analyzes the way salespeople drive the conversation, and records whether the call results in a sale. This analysis shows four behaviors that play the biggest role in converting callers into buyers: disqualifying callers who shouldn't be dealing with a salesperson, prescribing a solution to the customer problem, digging into objections, and de-risking the purchase so callers don't get off the phone to "think it over." Only 1% of calls contained all four of these behaviors, but when they did, 70% of calls resulted in a sale.   ... '

Post-Quantum Crypto

Will quantum computing be able to cut through today's cryptography? And what are the effects on other solution expectations?

The Scramble for Post-Quantum Cryptography

By Samuel Greengard,  Commissioned by CACM Staff,  February 4, 2021

Researchers are working to counter the threat to current communications posed by the nascent quantum computing arena, which could undermine almost all of the encryption protocols used today.

History has demonstrated that where there are people, there are secrets. From elaborately coded messages on paper to today's sophisticated cryptographic algorithms, a desire to maintain privacy has persisted. Of course, as technology has advanced, the ability not only to cipher messages but also to crack the codes has grown.

"Today's encryption methods are excellent, but we are reaching an inflection point," says Chris Peikert, an associate professor in the Department of Science and Engineering at the University of Michigan Ann Arbor. "The introduction of quantum computing changes the equation completely. In principle, these devices could break any reasonably-sized public key."

Such an event would wreak havoc. "It would affect nearly everything we do with computers," says Dustin Moody, a mathematician whose focus at the U.S. National Institute of Standards and Technology (NIST) includes computer security. Within this scenario, he says, computing subsystems, virtual private networks (VPNs), and digital signatures would no longer be secure. As a result, personal data, corporate records, intellectual property, and online transactions would all be at risk.

Consequently, cryptographers are developing new encryption standards that would be resistant to the brute force power of quantum computing. At the center of this effort is an initiative at NIST to identify both lattice-based and code-based algorithms that could protect classical computing systems but also introduce new and more advanced capabilities.  ... '
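To give a flavor of what "lattice-based" means, here is a toy, deliberately insecure sketch of the learning-with-errors (LWE) idea underlying many of the NIST candidates. All parameters below are invented for illustration; real schemes use much larger dimensions and carefully analyzed error distributions.

```python
# Toy Regev-style LWE encryption -- NOT secure, for intuition only.
# Security rests on how hard it is to recover s from noisy inner products.
import random

q = 3329   # modulus (toy choice)
n = 8      # lattice dimension (real schemes: hundreds)
m = 16     # number of LWE samples in the public key

def small_noise():
    return random.randint(-2, 2)

def keygen():
    s = [random.randrange(q) for _ in range(n)]                  # secret vector
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    # b_i = <a_i, s> + e_i mod q: the noise is what hides the secret
    b = [(sum(a * si for a, si in zip(row, s)) + small_noise()) % q for row in A]
    return (A, b), s

def encrypt(pk, bit):
    A, b = pk
    idx = [i for i in range(m) if random.random() < 0.5]         # random subset
    u = [sum(A[i][j] for i in idx) % q for j in range(n)]
    v = (sum(b[i] for i in idx) + bit * (q // 2)) % q            # bit sits at q/2
    return u, v

def decrypt(sk, ct):
    u, v = ct
    d = (v - sum(ui * si for ui, si in zip(u, sk))) % q          # noise + bit*q/2
    return 1 if q // 4 < d < 3 * q // 4 else 0

pk, sk = keygen()
assert all(decrypt(sk, encrypt(pk, bit)) == bit for bit in (0, 1, 1, 0))
print("toy LWE round-trips correctly")
```

A quantum computer has no known shortcut for recovering the secret here, which is exactly why lattices anchor so much of the post-quantum effort.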

Data Lineage Platform

First I had heard of this kind of platform; we had used semantic web models to create models of enterprise data. Can this be used in conjunction with semantic models? It also addresses measuring data as an asset.

Solidatus raises $19.5 million to expand its enterprise data lineage platform

Paul Sawers @psawers  in Venturebeat,   February 15, 2021  

Solidatus, a U.K.-based data management and modeling platform for enterprises, has raised £14 million ($19.5 million) in a series A round of funding led by AlbionVC, with participation from HSBC Ventures and Citi — both Solidatus clients.

Founded out of London in 2011, Solidatus helps businesses monetize their data by charting the data journey from its origin while noting any transformations and presenting anything relevant in a visual format. This can be particularly pertinent for highly regulated industries, such as banking, where businesses may need to provide detailed accounts of all their data. Solidatus helps ensure that all that data is “cataloged and owned,” as the company puts it. ... ' 
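As a rough sketch of what "charting the data journey" can look like in code, a lineage graph is just datasets linked by the transformations that produced them. The structure and names below are hypothetical, not Solidatus' actual model.

```python
# Hedged sketch of a data-lineage graph: nodes are data elements, edges
# record the transformation that derived one from another.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str                                      # table, file, report column...
    parents: list = field(default_factory=list)    # (upstream Node, transformation)

def derive(name, *sources):
    """Register a dataset derived from (node, transformation) pairs."""
    return Node(name, parents=list(sources))

def trace(node, depth=0):
    """Walk upstream and print the full journey of a data element."""
    print("  " * depth + node.name)
    for parent, transform in node.parents:
        print("  " * (depth + 1) + f"<- via: {transform}")
        trace(parent, depth + 2)

raw = Node("crm.customers_raw")
cleaned = derive("staging.customers", (raw, "dedupe + normalize addresses"))
report = derive("finance.kyc_report", (cleaned, "join with sanctions list"))
trace(report)   # prints the report's lineage back to the raw source
```

Since this is only a labeled directed graph, it is easy to see how semantic-web models (RDF triples, for instance) could express the same relationships.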

US Infrastructure Attack

Broadly reported last week. It was later pointed out that such local systems are universally poorly funded and protected, and this is unlikely to improve given their current business models.

A relatively rare SCADA attack on infrastructure occurs in the US, enabled by permitted remote access.

Poor Password Security Led to Recent Water Treatment Facility Hack

By The Hacker News, February 11, 2021

New details have emerged about the remote computer intrusion at a Florida water treatment facility last Friday, highlighting a lack of adequate security measures needed to bulletproof critical infrastructure environments.

The breach, which occurred last Friday, involved an unsuccessful attempt on the part of an adversary to increase sodium hydroxide dosage in the water supply to dangerous levels by remotely accessing the SCADA system at the water treatment plant. The plant operator, who spotted the intrusion, quickly took steps to reverse the command, leading to minimal impact.

Now, according to an advisory published on Wednesday by the state of Massachusetts, unidentified cyber actors accessed the supervisory control and data acquisition (SCADA) system via TeamViewer software installed on one of the plant's several computers that were connected to the control system.

From The Hacker News 

Monday, February 15, 2021

IBM Uses Continual Learning

Forgetting in neural networks. The intro is good and well worth reading to understand the topic; it then becomes technical.

IBM Uses Continual Learning to Avoid The Amnesia Problem in Neural Networks

Tags: IBM, Learning, Neural Networks, Training  in KDNuggets

Using continual learning might avoid the famous catastrophic forgetting problem in neural networks.

By Jesus Rodriguez, Intotheblock.

I often joke that neural networks suffer from a continuous amnesia problem, in the sense that every time they are retrained they lose the knowledge accumulated in previous iterations. Building neural networks that can learn incrementally without forgetting is one of the existential challenges facing the current generation of deep learning solutions. Over a year ago, researchers from IBM published a paper proposing a method for continual learning that allows the implementation of neural networks that can build knowledge incrementally.

Neural networks have achieved impressive milestones in the last few years, from beating Go to multi-player games. However, neural network architectures remain constrained to very specific domains and unable to transfer their knowledge into new areas. Furthermore, current neural network models are only effective if trained over large stationary distributions of data, and struggle when training over changing, non-stationary distributions. In other words, neural networks can effectively solve many tasks when trained from scratch and continually sampled from all tasks many times until training has converged. Meanwhile, they struggle when training incrementally if there is a time dependence to the data received. Paradoxically, most real-world AI scenarios are based on incremental, not stationary, knowledge. Throughout the history of artificial intelligence (AI), there have been several theories and proposed models to deal with the continual learning challenge.  ... " 
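The IBM paper's method is more sophisticated, but a minimal sketch of one classic remedy, experience replay, shows the basic idea: keep a small memory of earlier tasks and rehearse it while training on new data. Everything below (data, buffer size, epochs) is invented for illustration.

```python
# Hedged sketch of replay-based continual learning with scikit-learn's
# incremental SGDClassifier; not the method from the IBM paper.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")
replay_X, replay_y = [], []                      # small memory of past tasks

def train_task(X, y, classes):
    if replay_X:                                 # rehearse old samples too
        X = np.vstack([X] + replay_X)
        y = np.concatenate([y] + replay_y)
    for _ in range(20):
        model.partial_fit(X, y, classes=classes)
    keep = rng.choice(len(X), size=min(50, len(X)), replace=False)
    replay_X.append(X[keep]); replay_y.append(y[keep])

classes = np.array([0, 1, 2])
# Task 1: two well-separated clusters, classes 0 and 1.
X1 = np.vstack([rng.normal((0, 0), 1, (100, 2)), rng.normal((4, 4), 1, (100, 2))])
train_task(X1, np.array([0] * 100 + [1] * 100), classes)
# Task 2: only class 2 -- without replay, classes 0 and 1 would be forgotten.
train_task(rng.normal((-4, 4), 1, (100, 2)), np.array([2] * 100), classes)

print(model.predict([[0, 0], [4, 4], [-4, 4]]))  # expect [0 1 2]
```

Dropping the replay buffer from `train_task` and rerunning makes the amnesia visible: after task 2, points from the earlier classes are typically misclassified.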

Robot Swarming Satellites

Recall our interest in swarming cooperating solutions here.

Robot Satellite Swarms Increase Communication Possibilities
By ZDNet, February 11, 2021, in ACM

Carnegie Mellon University researchers prepare CubeSats for space. 

A U.S. National Aeronautics and Space Administration (NASA) mission currently underway aims to maximize the effectiveness of CubeSats—tiny, cost-effective satellites—by showing how they could track and communicate with each other.

The V-R3x mission will test three CubeSats and the underlying technologies that could pave the way for autonomous swarming satellites.  Said Carnegie Mellon University's Zac Manchester, "This mission is a precursor to more advanced swarming capabilities and autonomous formation flying."

Swarms of satellites, potentially numbering in the thousands and working cooperatively, could be used for communications, imaging, and forecasting tasks.  Manchester added, "The satellites will wake up and do their thing autonomously. We mainly need to make sure that we get their data downloaded." ... '

Try Solving with Optimization

With all the talk about AI in the air, some of the other fundamental methods are being forgotten. I spent most of my career using and teaching direct optimization methods in the enterprise. Below is a quick overview. I am not particularly recommending this particular company, but they did a good job presenting the description. Consider optimization: it can be better, easier to use, and more direct than AI for the right applications. It often uses less data. It is often used for very complex models. Every decision-problem solver should have it in their capabilities. As a manager I always asked: have we tried an optimization? Have said it here many times; here it is again.

What Must I Do to Use a Solver?

To use a solver, you must build a model of your decision problem that specifies:

The decisions to be made, called decision variables,

The measure to optimize, called the objective,

Any logical restrictions on potential solutions, called constraints.

The solver will find values for the decision variables that satisfy the constraints while optimizing (maximizing or minimizing) the objective. ...  " 
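A minimal sketch of those three ingredients using SciPy's linear-programming solver, on a hypothetical product-mix problem (all numbers invented for illustration):

```python
# Decision variables: x, y = units of two products to make.
# Objective: maximize profit 40x + 30y (linprog minimizes, so negate it).
# Constraints: limited labor hours and material units.
from scipy.optimize import linprog

c = [-40, -30]                    # minimize -(40x + 30y)
A_ub = [[2, 1],                   # 2x +  y <= 100  (labor hours)
        [1, 3]]                   #  x + 3y <= 90   (material units)
b_ub = [100, 90]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x, y = res.x
print(f"make {x:.1f} X and {y:.1f} Y for profit {-res.fun:.0f}")  # 42, 16 -> 2160
```

The same pattern scales to thousands of variables and constraints, and mixed-integer or nonlinear solvers follow the identical model-building discipline.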

Sunday, February 14, 2021

Ready for the Next Pandemic

Let's do this without destroying the education of the current generation. 

A number of articles in IEEE Spectrum on the proposition that we will be ready.

COVID-19 has galvanized tech communities. The tens of billions we’re spending on vaccines, antivirals, tests, robots, and devices are transforming how we’ll respond to future outbreaks of infectious disease.

Here’s How We Prepare for the Next Pandemic

If we keep developing the tech that has been supercharged for COVID-19, it never has to be this bad again By Eliza Strickland and Glenn Zorpette  ... 

Quality Assurance and IOT

I have been involved in testing and using many IoT devices for the smart home, and in the process found many bugs in process and software. So quality assurance is a big deal if we hope to effectively test and deliver robust systems. If your IoT device is gathering data from sensors, a common approach, consider the implications for embedded downstream ML, analytics, and decision making.

The relationship between QA and the success of IoT devices

Posted by Hemanth Kumar Yamjala on February 1, 2021 in IoT Central

In an increasingly tech-driven world, the Internet of Things holds a special place. It helps to connect devices and establish communication among them through the use of embedded software. According to Statista, the global revenue projection for IoT devices in 2021 is worth 520 billion USD. This exemplifies how the Internet of Things is slowly but steadily taking the digital world by storm and is capable of adding economic value to diverse markets. At the core of such devices are the sensors with embedded software that help to automate processes, connect domains, and deliver superior user experiences. Terms like smart homes and smart cities are no longer in the realm of fiction but a reality, where data mined from myriad sensors is processed to perform specific activities that deliver great user experiences.

The Internet of Things (IoT) is a network of connected devices through sensors or embedded technologies that interact with the external and internal environment to arrive at intelligent decisions. The IoT ecosystem comprises three core components:

Things: The real-world physical objects or devices containing sensors and embedded software to interact or communicate with the external environment.

Communication: The networking component allowing communication between IoT devices and the external environment comprises protocols such as 4G for WAN, Wi-Fi for LAN, and Zigbee, BLE, and ANT+ for PAN.

Computing: It is executed on a computer or mobile device at two levels – to take intelligent decisions within the ecosystem and to create a vital link for data analysis. By analyzing mined data, the computing component makes intelligent decisions possible.

A real-life example related to the three components is the car’s navigation system. Here, the ‘thing’ is the actual hardware present in the console, which ‘communicates’ with satellite readings to ‘compute’ and deliver data for the driver to take notice.

Since the IoT ecosystem can have real-time implications for individuals, enterprises, and entities, IoT device testing should be accorded top priority. The critical role of Internet of Things QA testing is based on validating the software and hardware components and checking whether the transmitted data leads to real-time intelligence. Let us understand why it is important to apply QA to the IoT ecosystem.  ... " 
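As a small illustration of QA at the boundary between 'things' and 'computing', here is a hedged sketch of validating sensor payloads before they feed downstream analytics. The field names, schema, and ranges are hypothetical, invented only for illustration.

```python
# Hedged sketch: QA checks that reject malformed or physically implausible
# sensor messages before they reach ML/analytics downstream.
import json

SCHEMA = {
    "device_id": str,
    "timestamp": int,         # unix seconds
    "temperature_c": float,   # plausibility range checked below
}

def validate_reading(raw: str) -> dict:
    msg = json.loads(raw)
    for name, expected_type in SCHEMA.items():
        if not isinstance(msg.get(name), expected_type):
            raise ValueError(f"bad or missing field: {name}")
    if not -40.0 <= msg["temperature_c"] <= 85.0:    # typical sensor rating
        raise ValueError("temperature outside the sensor's rated range")
    return msg

# QA-style tests: a good payload passes, corrupted ones are rejected.
good = '{"device_id": "t-17", "timestamp": 1613300000, "temperature_c": 21.5}'
assert validate_reading(good)["temperature_c"] == 21.5
for bad in ('{"device_id": "t-17"}',
            '{"device_id": "t-17", "timestamp": 1613300000, "temperature_c": 999.0}'):
    try:
        validate_reading(bad)
        raise AssertionError("invalid payload was accepted")
    except ValueError:
        pass
print("payload QA checks passed")
```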

An Updated Look at No-Code/Low-Code

A long-time interest of mine: how do we make complex things with much less code ... with no code if possible? More productive. Making it more secure as well. This also includes much automated coding. Here is an update on the space: it's coming fast:

No-code/low-code: Why you should be paying attention

Setrag Khoshafian, Khosh Consulting  in VentureBeat,  February 14, 2021 6:20 AM

We’ve all been hearing the hype lately about low-code and no-code platforms. The promise of no-code platforms is that they’ll make software development just as easy as using Word or PowerPoint so that the average business user can move projects forward without the extra cost (in money and time) of an engineering team. Unlike no-code platforms, low-code platforms still require coding skills but promise to accelerate software development by letting developers work with pre-written code components.

According to Gartner, 65% of application development will be low code by 2024.

I was involved in an early comparative productivity benchmark test between traditional development (using Java) and a model-driven low-code/no-code development project back in 2017. The results were impressive: 5X to 7X productivity improvement with low-code/no-code development. A survey by No-Code Census in 2020 showed a 4.6X productivity gain over traditional programming.... '

Visualized Quantum Computing

Nicely done, largely non-technical. A visual look by an academic just learning about the topic:

Visualizing Quantum Computation  in Towards Data Science

From Zero to Understand What the Hack is Happening!

Alessandro Berti

Hi there! I’m a Ph.D. student at University of Pisa and my research topic is Quantum Computing!

My First Approach to Quantum Computing

I have to be honest, the very first approach to Quantum Computing is a struggle, especially for those who come from a CS degree ( just like me :) ). There is little you can hang on to from your classical computer scientist experience.

One problem in quantum computation is for sure a visualization problem.

Can you understand what the following quantum circuit does?

Figure 1. What is this Quantum Circuit? [Image by Author]

If you are a beginner, probably no. I had the same problem too!

I do believe that this way of representing the code of a quantum algorithm as a quantum circuit misses "a dimension". This kind of representation tricked me. It was implicitly forcing me to reason in 2D while the reality was in 3D, so I was unable to visualize it properly. ... " 
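For readers who want to try the notation themselves, here is a minimal Bell-state circuit in Qiskit (assuming it is installed). Printing it produces exactly the kind of flat 2D diagram the author finds limiting.

```python
# A two-qubit Bell-state circuit: the "hello world" of quantum computing.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                        # Hadamard: qubit 0 into superposition
qc.cx(0, 1)                    # CNOT: entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])     # read both qubits into classical bits

print(qc.draw())               # the flat circuit diagram discussed above
```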

AI Cyber Defense for Zero-Day Threats

I don't see how this works without lots of operational data to leverage. And might it not be thwarted by adapting the system in some way? It would also require a good understanding of system context. I like the experimental thought, though. As we predicted long ago, such systems will ultimately adapt, and defenses will counter-adapt. Zero-day cyber threats are those initially unknown to the owners/developers of a system, and thus can be leveraged before an active defense is mounted.

Algorithm May Be the Key to Timely, Inexpensive Cyber Defense
By Penn State News, February 12, 2021

A team of researchers led by The Pennsylvania State University (Penn State) has developed an adaptive cyber defense against zero-day attacks using machine learning.

The new technique offers a powerful, cost-effective alternative to the moving target defense method used to detect and respond to cyberattacks.

Reinforcement learning enables the decision maker to learn to make the right choices by choosing actions that maximize rewards.

Said Penn State's Peng Liu, "The decision maker learns optimal policies or actions through continuous interactions with an underlying environment, which is partially unknown. So, reinforcement learning is particularly well-suited to defend against zero-day attacks when critical information—the targets of the attacks and the locations of the vulnerabilities—is not available."

From Penn State News  .... 
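The Penn State team's algorithm is more involved, but a toy, one-state (bandit-style) reinforcement-learning sketch conveys the core idea: the defender learns from rewards alone which configurations are safe, without being told where the vulnerability is. The states, rewards, and attack model below are invented for illustration.

```python
# Hedged sketch: epsilon-greedy value learning for a moving-target defense.
import random

N_CONFIGS = 4          # defender rotates among 4 system configurations
VULNERABLE = 2         # hidden "zero-day": config 2 is exploitable
Q = [0.0] * N_CONFIGS  # learned value of choosing each configuration
alpha, epsilon = 0.1, 0.2

def reward(config):
    # Attacker exploits the vulnerable config 80% of the time it is used.
    return -10.0 if config == VULNERABLE and random.random() < 0.8 else 1.0

for step in range(5000):
    if random.random() < epsilon:                     # explore
        a = random.randrange(N_CONFIGS)
    else:                                             # exploit estimates
        a = max(range(N_CONFIGS), key=Q.__getitem__)
    Q[a] += alpha * (reward(a) - Q[a])                # incremental update

print([round(v, 2) for v in Q])   # the vulnerable config ends up lowest-valued
```

The appeal, as Liu notes, is that nothing about the vulnerability's location appears anywhere in the learner; it is inferred entirely from interaction.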

Saturday, February 13, 2021

Light Driven 3D Printing

See images at the link.   Improving speed and precision.

Dynamic 3D Printing Process Features Light-Driven Twist

By Northwestern McCormick School of Engineering,   February 12, 2021

Northwestern University engineers developed a method that uses light to improve three-dimensional printing speed and precision.

Researchers at Northwestern University have developed a three-dimensional (3D) printing technique that uses a liquid photopolymer activated by light and a high-precision robotic arm that allows each layer to be moved, rotated, or dilated as the structure is being built.

Said Northwestern's Cheng Sun, "Now we have a dynamic process that uses light to assemble all the layers but with a high degree of freedom to move each layer along the way.”

The continuous printing process allows 4,000 layers to be printed in about two minutes.

The researchers used their method to 3D-print a customized vascular stent, a soft pneumatic gripper made of one hard and one soft material, a double helix, and a mini Eiffel Tower.

From Northwestern McCormick School of Engineering  ...