
Monday, May 31, 2021

Cloud Services Attacks

The Misaligned Incentives for Cloud Security  in Schneier on Security

Russia’s Sunburst cyberespionage campaign, discovered late last year, impacted more than 100 large companies and US federal agencies, including the Treasury, Energy, Justice, and Homeland Security departments. A crucial part of the Russians’ success was their ability to move through these organizations by compromising cloud and local network identity systems to then access cloud accounts and pilfer emails and files.

Hackers said by the US government to have been working for the Kremlin targeted a widely used Microsoft cloud service that synchronizes user identities. The hackers stole security certificates to create their own identities, which allowed them to bypass safeguards such as multifactor authentication and gain access to Office 365 accounts, impacting thousands of users at the affected companies and government agencies.

It wasn’t the first time cloud services were the focus of a cyberattack, and it certainly won’t be the last. Cloud weaknesses were also critical in a 2019 breach at Capital One. There, an Amazon Web Services cloud vulnerability, compounded by Capital One’s own struggle to properly configure a complex cloud service, led to the disclosure of tens of millions of customer records, including credit card applications, Social Security numbers, and bank account information .....  '

This essay was written with Trey Herr, and previously appeared in Foreign Policy.

The comments on this post are very good and thought-provoking.

GDPR Compliance

Examining this topic

Is there such a thing as “GDPR compliant”?  in Gartner

By Nader Henein | May 27, 2021 

Recent approvals for two codes of conduct by the European Data Protection Board – the body that oversees the GDPR – have reinvigorated this question. The short answer is “No”; the longer answer is “Not yet, but it should be coming soon”, and you should be preparing. In this five-minute read, I’ll take you through the story so far and what IT leaders and vendors can do to demonstrate compliance and prepare for formal certification.

Vendor marketing claims aside, as of the writing of this post, there is no formal certification for GDPR compliance. BUT the GDPR does set out a process in Article 42 so that certification bodies can submit their schemes for formal approval. Even though the GDPR came into effect in May of 2018, a process to operationalize Art. 42 did not exist until early 2020, with the publication of the approval procedures.  ... " 

Results in Taskbot Challenge

Instructive piece on the competitive challenge.     Like the idea. Would be interesting to track any of the contributions to actual public applications.  Nice to see one of my universities was a winner.  Will try to look at some of the entries/results for a later piece.

Amazon Picks Alexa TaskBot Challenge Finalists  in Voicebot.ai

By Eric Hal Schwartz

Amazon has chosen the 10 finalists for the Alexa Prize TaskBot Challenge (https://voicebot.ai/2021/03/11/amazon-starts-new-alexa-prize-taskbot-challenge/), the newest contest run by the voice assistant developer. The chosen teams are based in universities in the U.S., Europe, and Asia, each with a voice app designed to handle multi-step, complex tasks conceived in a conversation with a user.

TAKING ON TASKBOTS

Each team’s entry is supposed to be able to perform as an aide for carrying out cooking and home construction and repair projects. The idea is to develop an AI that goes beyond the limited, single task per order system most voice assistants use. A TaskBot is supposed to extend the conversation and gain enough information from a user to complete a longer, more varied project. In this case, those tasks relate to home improvement and cooking. Both are good examples of sometimes long lists of tasks hidden in a brief request like “cook a meal” or “repair a chair.” The challenge is also multi-modal, so the teams will need visual components to go with the voice app.

“Alexa already assists millions of customers in goal-directed interactions, such as ‘Alexa, play ‘Your Power’ by Billie Eilish’, or ‘Alexa, what’s the weather forecast for the weekend?’ With this new Alexa Prize challenge, we are now turning to multi-step and multi-modal task completion that can span hours if not days,” Alexa Shopping vice president of research and science Yoelle Maarek said in a statement. “I am delighted to see that so many quality university teams have expressed interest in addressing this hard AI challenge. This is a wonderful example of our customer-obsessed science approach where we join forces with academia to push the boundaries of science with the goal of delighting our customers.”

The 10 winning teams are from: Carnegie Mellon University, National Taiwan University (NTU), NOVA School of Science and Technology (Portugal), Ohio State University, Texas A&M University, University College London, University of California Santa Barbara, University of Glasgow, University of Massachusetts (Amherst), and the University of Pennsylvania.  ...  

Smart Speaker Market

I consider the smart speaker phenomenon a means of testing how we can interact with computation and related intelligence, and how useful that is. So the measure seen here is how well we are doing to date. Still behind, I believe, but we are continuing to learn.

U.S. Smart Display User Base Grew by More Than 50% in 2020   By Bret Kinsella in Voicebot.ai

While the smart speaker market’s U.S. adoption rate barely grew in 2020, smart displays followed another path entirely. Voicebot has tracked smart display adoption multiple times per year since 2018, and 2020 witnessed a surge in device adoption. Just over 16% of U.S. adults who owned smart speakers at the beginning of 2020 had at least one smart display in their device collection. By September, that figure reached 24.1%, and it was 25.8% in January 2021. That is a significant rise considering the previous 12 months saw only about a three percentage point rise. The 2020 growth rate was three times higher.

These trend data were first reported in Voicebot’s U.S. Smart Speaker Consumer Adoption Report 2021. The report includes over 30 pages of analysis, hundreds of individual data points, and 35 charts, including several that address smart speaker and smart display market share by vendor and installed user base.

VIDEO CHAT IS THE DRIVER

It may seem counterintuitive that smart displays grew so quickly and yet the overall smart speaker market did not. The answer is embedded in the chart above. New smart display owners in 2020 were not new smart speaker owners. Instead, they were existing smart speaker owners who were acquiring their first smart displays during the global COVID-19 pandemic. So, they didn’t expand the smart speaker user base very much, but they did add a new voice interactive device type to many consumer homes.  ... '

Sunday, May 30, 2021

Use of Digital Humans Expands

Recall our own experiments in the space. Interesting to see these expanding ... would like to see more data on how effective they are in varying contexts. Are humans naturally warmer in these contexts?

Cookie, Candy Companies Among Those Fielding Digital Humans in Marketing

May 20, 2021 

Ruth the Cookie Coach is a “digital human” incorporating AI to help Nestle connect to customers around its Toll House brand, offering recipes and support. (Credit: Nestle)

By AI Trends Staff

Ruth the Cookie Coach is a digital human being introduced by the Toll House brand of Nestle Global to provide baking assistance on a 24-7 basis, using an avatar incorporating AI that exhibits a degree of emotional intelligence, according to the company.

Ruth is named after the creator of the Nestle Toll House original chocolate chip cookie, Ruth Wakefield. Customers have the option to see, speak, and chat with Ruth while following the dynamic, on-screen content, according to an account on the website of Soul Machines.

The avatar is the culmination of two years of effort between Soul Machines, which offers a Human OS platform with a Digital Brain, and Nestle. The effort leveraged data from customer questions that came through the call center and social channels, multiple recipes across the web, and the expertise of Nestle Corporate Pastry Chef Meredith Tomason.

Founded in 2016 in Auckland, New Zealand, Soul Machines has raised $65 million to date, according to Crunchbase. The company was spun out of the University of Auckland by Mark Sagar, CEO, and Greg Cross, chief business officer. The company combines AI researchers, neuroscientists, psychologists, and artists to create lifelike, emotionally responsive digital humans it calls Digital Heroes, with personality and character.   ... ' 

Early Warning System for Cars

Learning from real-world cases where humans have taken over. 

AI Recognizes Potentially Critical Traffic Situations Seven Seconds in Advance

New early warning system for self-driving cars

A team of researchers at the Technical University of Munich (TUM) has developed a new early warning system for vehicles that uses artificial intelligence to learn from thousands of real traffic situations. A study of the system was carried out in cooperation with the BMW Group. The results show that, if used in today’s self-driving vehicles, it can warn seven seconds in advance against potentially critical situations that the cars cannot handle alone – with over 85% accuracy.

To make self-driving cars safe in the future, development efforts often rely on sophisticated models aimed at giving cars the ability to analyze the behavior of all traffic participants. But what happens if the models are not yet capable of handling some complex or unforeseen situations?

A team working with Prof. Eckehard Steinbach, who holds the Chair of Media Technology and is a member of the Board of Directors of the Munich School of Robotics and Machine Intelligence (MSRM) at TUM, is taking a new approach. Thanks to artificial intelligence (AI), their system can learn from past situations where self-driving test vehicles were pushed to their limits in real-world road traffic. Those are situations where a human driver takes over – either because the car signals the need for intervention or because the driver decides to intervene for safety reasons.  ... " 
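As a rough sketch of the underlying idea, one can frame this as a classifier over recent telemetry windows, labeled by whether a human took over shortly afterward. Everything below (the features, labels, and model choice) is an assumption for illustration, not the TUM/BMW design:

    # Illustrative sketch only: a binary "takeover ahead?" classifier over
    # summarized telemetry windows; all data here is synthetic.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-in: 2,000 telemetry windows, each reduced to 12
    # features (speed statistics, steering variance, sensor confidence, ...).
    X = rng.normal(size=(2000, 12))
    # Label 1 if a human takeover followed within seven seconds.
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=2000) > 1).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    print("warning accuracy on held-out windows:", clf.score(X_te, y_te))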

Saturday, May 29, 2021

Comparison of Sentiment Analysis Methods

An approach we coded long before neural methods. Here is an excellent look at current methods, with pointers to code.

Sentiment Analysis — Comparing 3 Common Approaches: Naive Bayes, LSTM, and VADER

A Study on Strengths and Drawbacks for the Different Approaches (With Sample Code)

By Kevin C Lee

Sentiment Analysis, or Opinion Mining, is a subfield of NLP (Natural Language Processing) that aims to extract attitudes, appraisals, opinions, and emotions from text. Inspired by the rapid migration of customer interactions to digital formats e.g. emails, chat rooms, social media posts, comments, reviews, and surveys, Sentiment Analysis has become an integral part of the analytics organizations must perform to understand how they are positioned in the market. To be clear, Sentiment Analysis isn’t a novel concept. In fact, it has always been an important part of CRM (Customer Relationship Management) and Market Research — companies rely on knowing their customers better to evolve and innovate. The more recent rise is driven largely by the availability/accessibility of customer interaction records as well as improved computing capabilities to process these data. This advancement has really benefited consumers in meaningful ways. More than ever, organizations are listening to their constituents to improve. There are numerous approaches for Sentiment Analysis. In this article, we’ll explore three such approaches: 1) Naive Bayes, 2) Deep Learning LSTM, and 3) Pre-Trained Rule-Based VADER Models. We will focus on comparing simple out-of-the-box versions of the models with the recognition that each approach can be tuned to improve performance. The intention is not to go into great detail about how each methodology works but rather to offer a conceptual study of how they compare, to help determine when one should be preferred over another. .. "
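As a taste of the comparison, here is a minimal sketch of two of the three approaches: a Naive Bayes classifier trained on labeled examples versus the pre-trained, rule-based VADER analyzer. The toy data is mine; the packages (scikit-learn, NLTK's VADER) are common public choices and not necessarily those in the article's sample code:

    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    texts = ["great product, works perfectly", "terrible support, total waste",
             "pretty good overall", "awful, would not buy again"]
    labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

    # Naive Bayes: learns word likelihoods from the labeled examples.
    vec = CountVectorizer()
    nb = MultinomialNB().fit(vec.fit_transform(texts), labels)
    print(nb.predict(vec.transform(["works great, very good"])))  # expect positive

    # VADER: pre-trained and rule-based, so there is no training step.
    nltk.download("vader_lexicon", quiet=True)
    sia = SentimentIntensityAnalyzer()
    print(sia.polarity_scores("pretty good overall")["compound"])  # > 0 means positive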


Archaeological Classification via Deep Learning

A kind of natural application. Thought of it too while watching some programs that described archaeological techniques where experts had to be brought in to identify key finds. Useful generalization. I remember some examples of contamination classification on a packing line that could have been done similarly.

Archaeologists vs. Computers: Study Tests Who's Best at Sifting the Past

The New York Times, Heather Murphy, May 25, 2021

Computers can sort pottery shards into subtypes at least as accurately as human archaeologists, as demonstrated by Northern Arizona University researchers. The researchers pitted a deep learning neural network against four expert archaeologists in classifying thousands of images of Tusayan White Ware pottery among nine known types; the network outperformed two experts and equaled the other two. It also sifted through all 3,000 photos in minutes, while each expert's analysis took three to four months. In addition, the network could communicate its reasoning for certain categorizations more specifically than its human counterparts, and offered a single answer for each classification.
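A hedged sketch of how such a classifier is commonly built: fine-tune a stock image network, swapping its final layer for the nine pottery types. The NAU team's actual architecture and training setup are not described in the summary above; resnet18 is my stand-in choice:

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights="IMAGENET1K_V1")   # downloads ImageNet weights
    model.fc = nn.Linear(model.fc.in_features, 9)      # nine pottery types

    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch of sherd photos.
    images = torch.randn(8, 3, 224, 224)               # stand-in for real photographs
    types = torch.randint(0, 9, (8,))                  # stand-in labels
    loss = loss_fn(model(images), types)
    loss.backward()
    opt.step()
    print("step loss:", loss.item())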

Ohio's Vax a Million as Gamification

Followed this closely, good short description.   The embedded psych is well described.

Why Public Health & Civics Lotteries Are So Highly Effective: Gamification  By Gabe Zichermann in the Gamification Blog 

This year I got a pretty amazing birthday present: a public example of highly effective health gamification.

Namely, Ohio’s Vax a Million campaign, which is giving away $1M per week to residents that get vaccinated. Vaccinations jumped 28% total, with weekly vaccinations increasing by over 50% week over week, according to the state. Maryland and New York have followed suit, and several other states (and perhaps even the Federal government) are poised to follow suit.

Large scale social good gamification is not new, per se. And we’ve been talking about the importance of lotteries to incentivize good behavior for years, including in the fields of prize-linked savings and rescuing journalism. But with the COVID-19 pandemic looming large, and a sinking vaccination rate in the US, the idea has received some major new attention. So why do behavioral lotteries work so well, and how can we expand their use?

Behavioral Lotteries take advantage of several cognitive biases and psychological processes that are relevant for public health and social good:  ... ' 

Search is not a Conversation Yet

Search as AI, and fundamentally as conversation, putting it all together. Google started with search, while the assistant providers started with the conversation. Moving in the right direction. 

Google isn’t ready to turn search into a conversation

Despite Google’s whizzy AI demos at I/O, search is still served best by text  By James Vincent 

The future of search is a conversation — at least, according to Google.

It’s a pitch the company has been making for years, and it was the centerpiece of last week’s I/O developer conference. There, the company demoed two “groundbreaking” AI systems — LaMDA and MUM — that it hopes, one day, to integrate into all its products. To show off its potential, Google had LaMDA speak as the dwarf planet Pluto, answering questions about the celestial body’s environment and its flyby from the New Horizons probe.

GOOGLE’S DREAM IS TO SPEAK AND THE MACHINE WILL ANSWER

As this tech is adopted, users will be able to “talk to Google”: using natural language to retrieve information from the web or their personal archives of messages, calendar appointments, photos, and more.

This is more than just marketing for Google. The company has evidently been contemplating what would be a major shift to its core product for years. A recent research paper from a quartet of Google engineers titled “Rethinking Search” asks exactly this: is it time to replace “classical” search engines, which provide information by ranking webpages, with AI language models that deliver these answers directly instead?

There are two questions to ask here. First, can it be done? After years of slow but definite progress, are computers really ready to understand all the nuances of human speech? And second, should it be done? What happens to Google if the company leaves classical search behind? Appropriately enough, neither question has a simple answer.  ... 

Examining Hub and Spoke for Post Pandemic

In MIT Sloan Review.   Thoughts on Post-Pandemic business operations.

Why Companies Should Adopt a Hub-and-Spoke Work Model Post-Pandemic

By Ben Laker

As the COVID-19 pandemic upended the traditional model of a corporate headquarters where employees congregate daily, it has also highlighted how companies can more effectively use schedules, space, and technology to be more productive. Copresence is no longer essential for productivity because more jobs than ever can be conducted and monitored virtually. In the U.S., for example, remote working has doubled during the past 12 months, with 1 in 4 employees situated entirely at home.

But a significant majority of businesses — 77% — believe the lack of social contact during work hours has compromised employee wellness. As a result, many organizations believe it’s time to reinvent the working environment with a middle ground between packed offices and the isolation of working at home: the hub-and-spoke office model. This setup — in which a company operates a centralized main office (hub) with more localized satellite offices (spokes) — is a fundamental driver of workspace mobility. Offering an attractive yet accessible hybrid of both home and office work, the model increases the options and flexibility for employees by including the home as an essential spoke. ... 

The hub-and-spoke concept is not new. The term derives from the airport industry, where instead of sending half-empty flights directly between smaller spoke destinations, airlines have passengers change flights at a central hub between the two airports. More recently, the term has come to refer to a more flexible workspace and working style, given that hub-and-spoke offices allow employees to work from either their city hub; a dedicated, strategic spoke location such as a regional workspace; or a personal home-based spoke .... '

Friday, May 28, 2021

Google's Material You

Was just introduced to Google's Material You, the next stage for Material Design ... where form follows feeling. Looking to understand it.  

Unveiling Material You

The next stage for Material Design ... 

Today at I/O we unveiled Material You, a radical new way to think about design. Material You will transform design for Android, for Google, and for the entire tech industry. Over the coming months, we plan to share more details about Material You, and how it is shaping everything we do at Google. Let’s start with the vision.

Material You embraces emotion and expressiveness

When we introduced Material Design in 2014, our vision was to help make technology simple and beautiful for everyone, and to rationalize experiences across mobile and the web. The challenge today has broadened. Computing continues to grow with more screens appearing in more areas of our lives. Also, users are demanding more expressiveness and control over their personal devices. They’re seeking experiences that are more than just practical and functional—experiences that also evoke emotion.

Designers across Google from Hardware, Android, and App teams came together to respond to this challenge, asking themselves, “What if form did not just follow function, but also followed feeling?” Material You explores a more humanistic approach to design. One that celebrates the tension between design sensibility and personal preference, and does not shy away from emotion. Without compromising the functional foundations of our apps, Material You seeks to create designs that are personal for every style, accessible for every need, alive and adaptive for every screen.  .... 

See more also on YouTube:     http://youtube.com/MaterialDesign

Germany Says Level 4 Driverless by 2022.

Seems quite broad; further, a large number of commercial vehicle uses are mentioned. Level 4 ... meaning  " ... Level 4 is considered to be fully autonomous driving, although a human driver can still request control, and the car still has a cockpit. In level 4, the car can handle the majority of driving situations independently. ...  " 

Germany Greenlights Driverless Vehicles on Public Roads

TechCrunch, Rebecca Bellan, May 24, 2021

Legislation passed by the lower house of Germany's parliament would permit driverless vehicles on that nation’s public roads by 2022. The bill specifically addresses vehicles with the Society of Automotive Engineers' Level 4 autonomy designation, which means all driving is handled by the vehicle’s computer in certain conditions. The legislation also details possible initial applications for self-driving cars, including public passenger transport, business and supply trips, logistics, company shuttles, and trips between medical centers and retirement homes. Commercial driverless vehicle operators would have to carry liability insurance and be able to stop autonomous operations remotely, among other requirements. The bill still needs the approval of the upper chamber of parliament to be enacted into law.  ... " 

Autonomous Drones Attack

This will likely be a big piece of the defense and military future.

Drones May Have Attacked Humans Fully Autonomously for the First Time

New Scientist, David Hambling, May 27, 2021

  A recent report by the United Nations Security Council's Panel of Experts reveals that an incident in Libya last year may have marked the first time military drones autonomously attacked humans. Full details of the incident have not been released, but the report said retreating forces affiliated with Khalifa Haftar, commander of the Libyan National Army, were "hunted down" by Kargu-2 quadcopters during a civil war conflict in March 2020. The drones, produced by the Turkish firm STM, locate and identify targets in autonomous mode using on-board cameras with artificial intelligence, and attack by flying into the target and detonating. The report called the attack "highly effective" and said the drones did not require data connectivity with an operator.  .. " 

Exploring Interactions with Haptic Feedback in Virtual Reality

Will this mean that people will be able to immerse themselves in games and simulations? I am not much of a gamer, but I like the idea that people will have more realistic 'digital twins' to engage with physical objects and spaces. Consider the future of that. It's not only game-like controllers, but 'immersive interactions' that can enable us to be part of our physically enabled world. Inside a 'digital twin'? A powerful illusion indeed.

Microsoft Research collaborates with KAIST in Korea to explore bimanual interactions with haptic feedback in virtual reality

Published May 6, 2021

By Michel Pahud, Principal Research Software Development Engineer; Mike Sinclair, Senior Principal Researcher; and Andrea Bianchi, Associate Professor at KAIST

Editor’s Note: Bimanual controllers are frequently used to enhance the realism and immersion of virtual reality experiences such as games and simulations. Researchers have typically relied on mechanical linkages between the controllers to recreate the sensation of holding different objects with both hands. However, those linkages cannot quickly adapt to simulate dynamic objects. They also make for bulky controllers that can’t be disconnected to support free, independent movements. This is the problem that researchers seek to solve in the recent paper titled “GamesBond: Bimanual Haptic Illusion of Physically Connected Objects for Immersive VR Using Grip Deformation”.

GamesBond is the outcome of a recent collaboration between Michel Pahud and Mike Sinclair from Microsoft Research and Andrea Bianchi, associate professor at KAIST and director of the MAKinteract lab, with two of his students, Neung Ryu, the original author of the paper, and Hye-Young Jo. The paper was accepted at ACM CHI Conference on Human Factors in Computing Systems (CHI 2021), where it received an honorable mention award.

In this project, we explored a pair of novel 4-DoF controllers, without actual physical linkage between them, that can bend, twist, and stretch together in concert to create the illusion of being connected as a single device with a physical link. Each controller can bend from 0 to 30 degrees in any direction, twist from -70 degrees to 70 degrees and stretch from -2.5mm to 9.0mm (the paper provides all the details of the mechanism).  ... " 
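To make the stated ranges concrete, here is a small illustrative data structure that clamps a commanded controller state to them. The field names and units are my assumptions; only the numeric ranges come from the description above:

    from dataclasses import dataclass

    def clamp(value, lo, hi):
        return max(lo, min(hi, value))

    @dataclass
    class GripState:
        bend_deg: float      # 0 to 30 degrees, in any direction
        twist_deg: float     # -70 to 70 degrees
        stretch_mm: float    # -2.5 to 9.0 mm

        def clamped(self):
            return GripState(
                clamp(self.bend_deg, 0.0, 30.0),
                clamp(self.twist_deg, -70.0, 70.0),
                clamp(self.stretch_mm, -2.5, 9.0),
            )

    # A commanded pose outside the hardware limits gets clipped.
    print(GripState(bend_deg=45, twist_deg=-90, stretch_mm=12).clamped())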

Google Launches a New Operating System: Fuchsia

Interesting. What is it, just a transition for the 'Nest Hub'? Likely to persist. Lots more at the link, but still a bit unclear. Does this mean that a Google OS for the smart home is here to stay? See more below. Following. 

Google launches its third major operating system, Fuchsia

The Google Nest Hub is the world's first commercial Fuchsia device.   By Ron Amadeo   in Arstechnica

Excel as a Programming Language

Intriguing podcast. The mere notion will get considerable disdain from coders. But an interesting point is made about the idea. There is power here. Podcast and text transcript:  

Advancing Excel as a programming language with Andy Gordon and Simon Peyton Jones

Episode 120 | May 5, 2021   from Microsoft Research. 

Today, people around the globe—from teachers to small-business owners to finance executives—use Microsoft Excel to make sense of the information that occupies their respective worlds, and whether they realize it or not, in doing so, they’re taking on the role of programmer. 

In this episode, Senior Principal Research Manager Andy Gordon, who leads the Calc Intelligence team at Microsoft Research, and Senior Principal Researcher Simon Peyton Jones provide an inside account of the journey Excel has taken as a programming language, including the expansion of data types that has unlocked greater functionality and the release of the LAMBDA function, which makes the Excel formula language Turing-complete. They’ll talk specifically about how research has influenced Excel and vice versa, programming as a human-computer interaction challenge, and a future in which Excel is the first language for budding programmers and a tool for incorporating probabilistic reasoning into our decision-making.  
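For readers who have not seen it, a minimal illustration of LAMBDA, adapted from the style of example in Microsoft's announcement (the name PYTHAGORAS is simply whatever you choose to define in the Name Manager):

    =LAMBDA(x, y, SQRT(x*x + y*y))

Once named, it can be called from any cell as =PYTHAGORAS(3, 4). Because a named LAMBDA can call itself, the formula language gains recursion, which is the key step that makes it Turing-complete.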

Learn more: 

Excel Blog: “Announcing LAMBDA: Turn Excel formulas into custom functions” 

Microsoft Research Blog: “LAMBDA: The ultimate Excel worksheet function” 

Research Collection: “Innovation by (and beyond) the numbers: A history of research collaborations in Excel”    ... " 

Towards a Quantum Workforce

Hmmm, why a 'Quantum Age', is it not just a computing age using new tools?     But as a structure for training in a new space, OK. 

Building a Workforce for the Quantum Age  By Purdue University

With the emergence of quantum technology, Purdue University is working to build a quantum workforce, using an array of tools including an upcoming summer school, online "micromasters" degree, adapted coursework, clubs, and seminars.

 "Quantum has the potential to be revolutionary technology," says David Stewart, managing director of the Purdue Quantum Science and Engineering Institute. "But right now, there just aren't enough people to do the work that's needed."

Purdue is pioneering ways to give engineers and other scientists a workable quantum background quickly.

"We are developing programs to train the next generation of quantum scientists," says Alexandra Boltasseva, workforce development lead for the Quantum Science Center. "The more education that's available, the more people and events students have access to, the more likely they are to connect and spark cross-institutional collaboration, which will lead to future advances. Our vision is to equip scientists and engineers from all sorts of different disciplines to participate in the quantum workforce and open more doors for our young people."

From Purdue University

Fake Job Offers

No longer in the market, but while in between positions I received a number of 'too good to be true' offers to start employment, many via LinkedIn. Although they look targeted, a quick search finds them broadly scoped, and they quickly fall apart. No legitimate company will ask for detailed personal data up front. Caution is important.

How to Tell a Job Offer from an ID Theft Trap  in Krebs on Security

One of the oldest scams around — the fake job interview that seeks only to harvest your personal and financial data — is on the rise, the FBI warns. Here’s the story of a recent LinkedIn impersonation scam that led to more than 100 people getting duped, and one almost-victim who decided the job offer was too-good-to-be-true.

Last week, someone began posting classified notices on LinkedIn for different design consulting jobs at Geosyntec Consultants, an environmental engineering firm based in the Washington, D.C. area. Those who responded were told their application for employment was being reviewed and that they should email Troy Gwin — Geosyntec’s senior recruiter — immediately to arrange a screening interview.

Gwin contacted KrebsOnSecurity after hearing from job seekers trying to verify the ad, which urged respondents to email Gwin at a Gmail address that was not his. Gwin said LinkedIn told him roughly 100 people applied before the phony ads were removed for abusing the company’s terms of service.

“The endgame was to offer a job based on successful completion of background check which obviously requires entering personal information,” Gwin said. “Almost 100 people applied. I feel horrible about this. These people were really excited about this ‘opportunity’.”   ... ' 

Thursday, May 27, 2021

NVIDIA Predicting Earthquake Intensity

Predicting earthquakes and their intensity was something we proposed some years ago, mentioned here. Glad to see the idea taken much further. Looking at this application more closely. 

AI of Earthshaking Magnitude: DeepShake Predicts Quake Intensity   By Isha Salian


In a major earthquake, even a few seconds of advance warning can help people prepare — so Stanford University researchers have turned to deep learning to predict strong shaking and issue early alerts.

DeepShake, a spatiotemporal neural network trained on seismic recordings from around 30,000 earthquakes, analyzes seismic signals in real time. By observing the earliest detected waves from an earthquake, the neural network can predict ground shaking intensity and send alerts throughout the area. 

Geophysics and computer science researchers at Stanford used a university cluster of NVIDIA GPUs to develop the model, using data from the 2019 Ridgecrest sequence of earthquakes in Southern California. 

When tested with seismic data from Ridgecrest’s 7.1 magnitude earthquake, DeepShake provided simulated alerts to nearby seismic stations 7 to 13 seconds before the arrival of high intensity ground shaking.

Most early warning systems pull multiple information sources, first determining the location and magnitude of an earthquake before calculating ground motion for a specific area. 

“Each of these steps can introduce error that can degrade the ground shaking forecast,” said Stanford student Daniel Wu, who presented the project at the 2021 Annual Meeting of the Seismological Society of America. 

Instead, the DeepShake network relies solely on seismic waveforms for its rapid early warning and forecasting system. The unsupervised neural network learned which features of seismic waveform data best forecast the strength of future shaking. 

“We’ve noticed from building other neural networks for use in seismology that they can learn all sorts of interesting things, and so they might not need the epicenter and magnitude of the earthquake to make a good forecast,” said Wu. “DeepShake is trained on a preselected network of seismic stations, so that the local characteristics of those stations become part of the training data.”  ... ' 
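For readers who want a feel for the approach, the following is an illustrative toy model only: a small 1-D convolutional network mapping a multi-station waveform window to an intensity class. The station count, window size, and intensity bins are assumptions; DeepShake's actual spatiotemporal architecture is not described in this detail above:

    import torch
    import torch.nn as nn

    N_STATIONS, WINDOW = 8, 400    # assumed: 8 stations, 4 s of 100 Hz samples
    N_CLASSES = 5                  # assumed shaking-intensity bins

    net = nn.Sequential(
        nn.Conv1d(N_STATIONS, 32, kernel_size=7, padding=3), nn.ReLU(),
        nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(64, N_CLASSES),
    )

    waveforms = torch.randn(2, N_STATIONS, WINDOW)  # dummy batch of windows
    print(net(waveforms).shape)                     # torch.Size([2, 5])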

Programmable Matter for Product Design

With a zap of light, system switches objects’ colors and patterns

“Programmable matter” technique could enable product designers to churn out prototypes with ease.

Watch Video  https://news.mit.edu/2021/light-colors-patterns-surface-0504#article-video-inline

Daniel Ackerman | MIT News Office

With Zap of Light, System Switches Objects' Colors, Patterns

MIT News, May 4, 2021

A programmable matter system developed by researchers at the Massachusetts Institute of Technology (MIT) and Russia's Skolkovo Institute of Science and Technology can update imagery on object surfaces rapidly by projecting ultraviolet (UV) light onto items coated with light-activated dye. The ChromoUpdate system's UV light pulse changes the dye's reflective properties, creating colorful new images in minutes. The system’s UV projector can vary light levels across the surface, granting the operator pixel-level control over saturation levels. MIT's Michael Wessley said the researchers are investigating the technology's application to flexible, programmable textiles, "So we could have clothing—t-shirts and shoes and all that stuff—that can reprogram itself."  ..' 

AI's Competitive Advantage

Interesting podcast, and ongoing pieces I am now connected to:

Exponential View with Azeem Azhar / Season 5, Episode 32


AI’s Competitive Advantage

AI can offer a new type of competitive advantage, but entrepreneurs need to know what it is and how to unlock it. Ash Fontana, author of The AI First Company and managing director at Zetta Venture Partners – a firm that exclusively invests in early-stage AI startups, joins Azeem Azhar to explore the risks and rewards of applying AI to business problems.

They also discuss:

Why the high up-front cost of developing AI models favors multi-sector businesses.

Which is more important for an AI-focused company: domain expertise or AI expertise?

How AI startups should assess the risk of being usurped by Big Tech.

Why Amazon’s sophisticated AI regularly offers nonsensical recommendations.


“Creating an AI-First Business with Andrew Ng” (Exponential View podcast, 2019)

“Businesses are finding AI hard to adopt” (The Economist, 2020)

“Competing in the Age of AI” (Harvard Business Review, 2020)

The AI First Company: How to Compete and Win With Artificial Intelligence (Ash Fontana, 2021)

HBR Presents is a network of podcasts curated by HBR editors, bringing you the best business ideas from the leading minds in management. The views and opinions expressed are solely those of the authors and do not necessarily reflect the official policy or position of Harvard Business Review or its affiliates.  ... " 

Wednesday, May 26, 2021

Amazon to Buy MGM

As a long-time follower of film and the studios ... this is a very big historical wow. It's all about content. True, Prime does need a considerable boost.

Amazon to buy MGM for $8 billion in major boost to Prime Video library

Amazon announces purchase, promises "greater access" to historic studio's films.

Jon Brodkin  in Arstechnica

Amazon today announced a definitive agreement to buy MGM (Metro-Goldwyn-Mayer) for $8.45 billion. Amazon said that MGM's filmmaking prowess "complements the work of Amazon Studios, which has primarily focused on producing TV show programming."  ... " 

Getting to the Moon via the Cloud

Very high performance supercomputing on the cloud is adding to the ability to research, design, simulate, test, manufacture, deliver.

Going to the Moon via the Cloud

The New York Times, Craig S. Smith, May 25, 2021

The wide availability of high-performance computing accessed through the cloud is fostering creativity worldwide, allowing the Firefly Aerospace startup, for example, to build a rocket for lunar flights using high-performance computing simulations. Although the latest supercomputers can run 1 quadrillion calculations per second, they are prohibitively expensive and have huge space and power needs; less powerful but more nimble networked computer clusters can nearly equal supercomputers' capabilities. Moreover, most cloud computing firms supply access to high-performance computing hardware with more versatility than supercomputers. High-performance cloud computing company Rescale estimates roughly 12% of such computing is currently cloud-based, but that number—approximately $5.3 billion—is expanding 25% annually. Cloud services are growing increasingly popular among research and development groups and applied science fields, amid spiking demand for computing resources.

AI: A Taxonomy of Machine Learning and Deep Learning Algorithms

Once again an excellent post by Ajit Jaokar:   Thanks Ajit!

Below is just the intro overview; the much longer post comes through when you click through to LinkedIn. Nicely done; it includes as part of the taxonomy a number of typical usage descriptions.

Artificial Intelligence #5 : A taxonomy of machine learning and deep learning algorithms

Published on May 25, 2021   By Ajit Jaokar

Course Director: Artificial Intelligence: Cloud and Edge Implementations - University of Oxford

Like the Glossary I posted last week, there is no standard taxonomy for machine learning and deep learning algorithms.

Most ML/DL problems are classification problems, and a small subset of algorithms can be used to solve most of them (e.g., SVM, logistic regression, or XGBoost). In that sense, a full taxonomy may be overkill. However, if you really want to understand something, you need to acquire knowledge of a repertoire of algorithms – to overcome the known unknowns problem.
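To illustrate how far a small repertoire of defaults goes, here is a minimal sketch running those three on one synthetic classification task; the dataset and settings are arbitrary (xgboost is a separate install: pip install xgboost):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    # Three default classifiers, one dataset, five-fold cross-validation.
    for model in (SVC(), LogisticRegression(max_iter=1000), XGBClassifier()):
        score = cross_val_score(model, X, y, cv=5).mean()
        print(type(model).__name__, round(score, 3))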

In this post, rather than present a taxonomy, I present a range of taxonomy approaches for machine learning and deep learning algorithms. Some of these are mathematical. If you are just beginning data science, start from the non-mathematical approaches to taxonomy. Don't be tempted to go for the maths approach. But if you have an aptitude towards maths, you should consider the maths approach because it gives you a deeper understanding. Also, I am a bit biased because many in my network in Oxford, MIT, Cambridge, Technion etc would also take a similar maths-based approach.

Finally, I suggest one specific approach to taxonomy which I like and find most complete. It is complex but it is free to download.

Taxonomy approaches

Firstly, the approach from Jason Brownlee is always a good place to start because it's pragmatic and implementable in code, in A Tour of Machine Learning Algorithms. Note that these are machine learning algorithms (not deep learning algorithms). A more visual approach, from Packt, is below.  .... " 

Security Implications for 5G in IOT

Was unaware of these implications.  Intro below:

Is 5G Opening Security Holes in the Internet of Things?  By David Geer, Commissioned by CACM Staff, May 25, 2021

Market research company Research and Markets, looking at the intersection of the Internet of Things (IoT) and the increasingly popular fifth-generation cellular broadband technology 5G, said, "The global 5G IoT market size is expected to reach USD$11.35 billion by 2027."

The 5G technology and IoT devices are inextricably linked. According to the U.S. Government Accountability Office (GAO) report 5G Wireless Capabilities and Challenges for an Evolving Network, IoT devices are primary consumers of 5G networks.

In the 5G IoT market, IoT devices will multiply exponentially as 5G wireless connectivity enhances their capabilities. Smart factories in industry 4.0, for example, will leverage 5G and an abundance of industrial IoT to increase data visualization and enhance productivity while turning away from wired solutions, according to NetworkWorld .

Yet criminal hackers stand to benefit, too. With 5G wireless, sprawling IoT networks, and the flood of IoT device communications that follow, IoT becomes more vulnerable. As with all infant technologies, we hardly have an inkling about 5G wireless security flaws alone, and IoT is no less subject to attack as vendors trade native security capabilities for swift time-to-market.

Their combined shortcomings will open IoT to many more exploits.  ... '

MS Teams with Collaborative Apps

Makes good sense. Especially apps that aid in bringing in multiple opinions and decision- and task-oriented views. See some of the work we did in this space, well before the focus now being seen with apps like Teams and Zoom. See the 'Business Sphere' tag.

Microsoft wants Teams to be your go-to for collaborative apps  in Engadget

What if Teams could be the Windows for real-time apps?

You may not have noticed, but Microsoft Teams is slowly evolving from a Slack-like workplace chat app to a collaborative platform of its own. In addition to just talking with your coworkers, you can also install apps within Teams to manage Asana projects, or build a custom app that's specifically tuned to your company's needs. At Build 2021, Microsoft is making a bigger push to make Teams a platform for collaborative apps.  ... " 

Nokia Launches first AI Use Case Library for CSPs

Always like representative use cases, especially if they include useful detail, like the specifics of the data being used and its sources.

Nokia launches the first AI use case library on public cloud for CSPs     by TelecomLead

Nokia, in collaboration with Microsoft, today announced the world’s first deployment of multiple AI use cases delivered over public cloud.

Nokia AVA AI as a service integrates Nokia’s security framework with Microsoft Azure’s digital architecture, allowing communications service  providers (CSPs) to securely inject AI into their networks nine times faster than using private cloud and scale fast across their network.

AI use cases are essential for CSPs to manage the business complexity that 5G and cloud networks bring, and will help accelerate digital transformation, Nokia said. The AI as a service enables faster deployment while also eliminating the concerns around data sovereignty and security.

After the initial data set-up, CSPs can deploy additional AVA AI use cases within one week and ramp-up or ramp-down resources as needed within one day across multiple network clusters.

The Nokia security framework on Azure ensures data is segregated and isolated to provide the same level of security as a private cloud.

Australian mobile operator TPG was the first commercial adopter of Nokia AVA AI on public cloud, using a local instance of Microsoft Azure. This means TPG can deploy and scale additional AI use cases fast and has been able to optimize network coverage, capacity and performance.

Some of the capabilities include the following:

Detecting network anomalies with great accuracy.

Reducing radio frequency optimization cycle times by 50%, allowing them to be performed more frequently and at lower cost.

Decreasing CO2 emissions by eliminating drive-testing.

Friedrich Trawoeger, Vice President, Cloud and Cognitive Services, Nokia said: “CSPs are under constant pressure to reduce costs by automating business processes through AI and machine learning. To meet market demands, telcos are turning to us for Telco AI-as-a-Service and this launch represents an important milestone in our multi-cloud strategy.”

“Operators can achieve significantly faster implementation times and can access a library of AI use cases remotely to improve network performance, lower costs, and reduce environmental impact at the same time,” Trawoeger added.   .... '

Will AI ever be Smarter than a Baby?

Thoughtful piece, with many links ...

By Irving Wladawsky-Berger

A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.

Will AI Ever Be Smarter Than a Baby?

The Ultimate Learning Machines - WSJ

I recently listened to a fascinating podcast where NY Times columnist Ezra Klein interviewed Berkeley psychologist Alison Gopnik. Professor Gopnik is best known for her research in cognitive science, particularly the study of children’s learning and development. She’s written extensively on the developmental phases of the human brain from babies to adults.

Gopnik, a member of the Berkeley AI Research group, has also been exploring the differences between human and machine intelligence, more specifically, what babies can teach us about AI. She’s long argued that babies and young children are smarter than we might think. In some ways, they’re even smarter than adults, let alone way smarter than the most advanced AIs.  

What do we mean by intelligence?

In 1994 the Wall Street Journal published the Mainstream Science on Intelligence, an article that included a definition that was agreed to by 52 leading academic researchers in fields associated with intelligence:

“Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings - ‘catching on,’ ‘making sense’ of things, or ‘figuring out’ what to do.”

This is a very good definition of general intelligence - the ability to effectively address a wide range of goals in different environments. It’s the kind of intelligence that’s long been measured in IQ tests, and that, for the foreseeable future, only humans have. On the other hand, specialized intelligence - the ability to effectively address well-defined, specific goals in a given environment - is the kind of task-oriented intelligence that’s part of many human jobs. Over the past decade, our increasingly capable AI systems have achieved or surpassed human levels of performance in selected applications including image and speech recognition, language translation, skin cancer classification, and breast cancer detection.

Psychologists have further identified two distinct types of human intelligence: fluid and crystallized. Fluid intelligence is the ability to quickly learn new skills, adapt to new environments and solve novel reasoning problems. It requires considerable raw processing power, generally peaks in our 20s and starts diminishing as we get older. Crystallized intelligence is the know-how and expertise which we accumulate over decades. It’s the ability to use our stocks of knowledge and experiences to make wise decisions. It generally increases through our 40s, peaks in our 50s, and does not diminish until late in life.  ... '

Tuesday, May 25, 2021

Hacker Resistant Cloud Software

From a former employer of mine.   Don't understand the details as yet.  See that it is to be presented shortly.   A proof?  See also the paper mentioned below for additional details. 

Columbia Team Builds Hacker-Resistant Cloud Software System   By Columbia University

Columbia University researchers have developed a system that guarantees — through a mathematical proof — the security of virtual machines in the cloud.

They discuss the system in "A Secure and Formally Verified Linux KVM Hypervisor,"   to be presented at the 42nd IEEE Symposium on Security & Privacy on Wednesday (May 26).

"This is the first time that a real-world multiprocessor software system has been shown to be mathematically correct and secure," says Jason Nieh, professor of computer science at Columbia. "This means that users' data are correctly managed by software running in the cloud and are safe from security bugs and hackers."

The work is the first to verify the widely-used KVM hypervisor, which is used to run virtual machines by cloud providers. "We've shown that our system can protect and secure private data and computing uploaded to the cloud with mathematical guarantees," says Xupeng Li, a Ph.D. student and co-lead author of the paper.

From Columbia University 

LaMDA: Next Generation Chatbots

Short intro to Google's LaMDA.   Looking for smarter conversations.  Hope to use this in  upcoming applications.

Google’s LaMDA: The Next Generation of Chatbots

First, we had GPT-3. Now we have LaMDA.  By Alberto Romero

In mid-2020 OpenAI presented the all-powerful language system GPT-3. It revolutionized the world and landed headlines in major media outlets. This incredible technology can create fiction, poetry, music, code, and many other amazing things (I wrote a complete overview of GPT-3 for Towards Data Science if you want to check it out).

It was expected that other big tech companies wouldn’t fall behind. Indeed, some days ago at Google I/O annual conference, Google executives presented the last research and technologies of the big firm. One of them stole the show: LaMDA, a conversational AI capable of having human-like conversations.

In this article, I’m going to review the little we know today about this tech and how it works.

LaMDA — A conversational AI

LaMDA stands for “Language Model for Dialogue Applications.” Following from previous models such as BERT and GPT-3, LaMDA is also based on the transformer architecture, open-sourced by Google in 2017. This architecture allows the model to predict text focusing only on how previous words relate to each other (attention mechanism).  ... '
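For intuition, the attention mechanism at the core of the transformer can be sketched in a few lines. This is the generic scaled dot-product form, not LaMDA's specific implementation, which adds learned projections, multiple heads, masking, and positional information:

    import numpy as np

    def attention(Q, K, V):
        scores = Q @ K.T / np.sqrt(K.shape[-1])        # token-to-token relevance
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)             # softmax over positions
        return w @ V                                   # weighted mix of values

    rng = np.random.default_rng(0)
    Q = K = V = rng.normal(size=(5, 16))               # 5 tokens, 16-dim vectors
    print(attention(Q, K, V).shape)                    # (5, 16)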


How Does Your AI Work? (Not Just What It Is Called)

Interesting piece in Datanami. But I suggest that the implications are incomplete. The chart shown only mentions the general type of 'AI' used, not how it is used, what data was used, the completeness/bias of that data, or the contextual implications of how the system was applied. All leading to AI application hysteria. Yes, I do believe a C-level exec should understand which kinds of AI are being used, but there is much more than just that. Classical statistical forecasting can be as easily misapplied as many forms of AI. 

How Does Your AI Work? Nearly Two-Thirds Can’t Say, Survey Finds    By Alex Woodie

Nearly two-thirds of C-level AI leaders can’t explain how specific AI decisions or predictions are made, according to a new survey on AI ethics by FICO, which says there is room for improvement.

FICO hired Corinium to query 100 AI leaders for its new study, called “The State of Responsible AI: 2021,” which the credit report company released today. While there are some bright spots in terms of how companies are approaching ethics in AI, the potential for abuse remains high.

For example, only 22% of respondents have an AI ethics board, according to the survey, suggesting the bulk of companies are ill-prepared to deal with questions about bias and fairness. Similarly, 78% of survey-takers say it’s hard to secure support from executives to prioritize ethical and responsible use of AI.

More than two thirds of survey-takers say the processes they have to ensure AI models comply with regulations are ineffective, while nine out of 10 leaders who took the survey say inefficient monitoring of models presents a barrier to AI adoption.

There is a general lack of urgency to address the problem, according to FICO’s survey, which found that, while staff members working in risk and compliance and IT and data analytics have a high rate of awareness of ethics concerns, executives generally are lacking awareness.

Government regulations of AI have generally trailed adoption, especially in the United States, where a hands-off approach has largely been the rule (apart from existing regulations in financial services, healthcare, and other fields).



Source: FICO’s “The State of Responsible AI: 2021”

Seeing as how the regulatory environment is still developing, it’s concerning that 43% of respondents in FICO’s study found that “they have no responsibilities beyond meeting regulatory compliance to ethically manage AI systems whose decisions may indirectly affect people’s livelihoods,” such as audience segmentation models, facial recognition models, and recommendation systems, the company said.  ... "

What should a Robot do when it Cannot Trust the Model it was Trained on?

Also a thing we expect of useful 'intelligence': knowing its limitations. Or do we? How is this different?  


Helping Robots Learn What They Can and Can't Do in New Situations

The Michigan Engineer News Center, Dan Newman, May 19, 2021

University of Michigan researchers have developed a method of helping robots to predict when the model on which they were trained is unreliable, and to learn from interacting with the environment. Their approach involved creating a simple model of a rope's dynamics while moving it around an open space, adding obstacles, creating a classifier that learned when the model was reliable without learning how the rope interacted with the objects, and including recovery steps for when the classifier determined the model was unreliable. The researchers found their approach was successful 84% of the time, versus 18% for a full dynamics model, which aims to incorporate all possible scenarios. The approach also was successful in two real-world settings that involved grabbing a phone charging cable, and manipulating hoses and straps under a car hood. Michigan's Dmitry Berenson said, "This method can allow robots to generalize their knowledge to new situations that they have never encountered before."  ... ' 
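A hedged sketch of the gating idea as I read it: learn where the cheap dynamics model is trustworthy, and fall back to recovery elsewhere. The features, classifier, and recovery step below are stand-ins, not the Michigan implementation:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Pretend feature 0 measures proximity to obstacles, where the simple
    # rope model breaks down.
    states = rng.uniform(-1, 1, size=(500, 4))
    model_error = np.abs(states[:, 0]) + rng.normal(scale=0.1, size=500)
    reliable = (model_error < 0.5).astype(int)   # 1 = model was accurate here

    gate = LogisticRegression().fit(states, reliable)

    def plan(state):
        # Trust the fast learned model only where the classifier says so.
        if gate.predict(state.reshape(1, -1))[0]:
            return "use simple dynamics model"
        return "trigger recovery behavior"

    print(plan(np.array([0.05, 0.2, -0.3, 0.1])))  # far from obstacles
    print(plan(np.array([0.95, 0.2, -0.3, 0.1])))  # near an obstacle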

Deceiving AI

Made me think: usually the models we create are meant to determine some state, current or future, more accurately. But now we can build models that produce precisely deceptive results. Yes, I can see why DARPA is interested. Includes a visual overview.

Deceiving AI   By Don Monroe

Communications of the ACM, June 2021, Vol. 64 No. 6, Pages 15-16, 10.1145/3460218

Over the last decade, deep learning systems have shown an astonishing ability to classify images, translate languages, and perform other tasks that once seemed uniquely human. However, these systems work opaquely and sometimes make elementary mistakes, and this fragility could be intentionally exploited to threaten security or safety.

In 2018, for example, a group of undergraduates at the Massachusetts Institute of Technology (MIT) three-dimensionally (3D) printed a toy turtle that Google's Cloud Vision system consistently classified as a rifle, even when viewed from various directions. Other researchers have tweaked an ordinary-sounding speech segment to direct a smart speaker to a malicious website. These misclassifications sound amusing, but they could also represent a serious vulnerability as machine learning is widely deployed in medical, legal, and financial systems.

The potential vulnerabilities extend to military systems, said Hava Siegelman of the University of Massachusetts, Amherst. Siegelman initiated a program called Guaranteed AI Robustness against Deception (GARD) while she was on assignment to the U.S. Defense Advanced Research Projects Agency (DARPA). To illustrate the issue to colleagues there, she said, "I showed them an example that I did, and they all started screaming that the room was not secure enough." The examples she shares publicly are worrisome enough, though, such as a tank adorned with tiny pictures of cows that cause an artificial intelligence (AI)-based vision system to perceive it as a herd of cows because, she said, AI "works on the surfaces."

The current program manager for GARD at DARPA, Bruce Draper of Colorado State University, is more sanguine. "We have not yet gotten to that point where there's something out there that has happened that has given me nightmares," he said, adding, "We're trying to head that off."

Researchers, some with funding from DARPA, are actively exploring ways to make machine learning more robust against adversarial attacks, and to understand the principles and limitations of these approaches. In the real world, these techniques are likely to be one piece of an ongoing, multilayered security strategy that will slow attackers but not stop them entirely. "It's an AI problem, but it's also a security problem," Draper said. ... '
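To see how small these perturbations can be, here is the canonical fast gradient sign method (FGSM) of Goodfellow et al., shown on a toy model. It is a generic illustration of the attack family, not the specific turtle or smart-speaker attacks mentioned above:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
    loss_fn = nn.CrossEntropyLoss()

    x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "image"
    y = torch.tensor([3])                             # its true label

    loss = loss_fn(model(x), y)
    loss.backward()                                   # gradient of loss w.r.t. pixels

    eps = 0.1                                         # perturbation budget
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1)     # tiny, structured nudge
    # On a trained classifier this small step often flips the prediction.
    print("before:", model(x).argmax().item(), "after:", model(x_adv).argmax().item())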

Game Theory for Large Scale Data Analysis

Intriguing thought, technical.

Game theory as an engine for large-scale data analysis  By Brian McWilliams, Ian Gemp, Claire Vernade

EigenGame maps out a new approach to solve fundamental ML problems.

Modern AI systems approach tasks like recognising objects in images and predicting the 3D structure of proteins as a diligent student would prepare for an exam. By training on many example problems, they minimise their mistakes over time until they achieve success. But this is a solitary endeavour and only one of the known forms of learning. Learning also takes place by interacting and playing with others. It’s rare that a single individual can solve extremely complex problems alone. By allowing problem solving to take on these game-like qualities, previous DeepMind efforts have trained AI agents to play Capture the Flag and achieve Grandmaster level at Starcraft. This made us wonder if such a perspective modeled on game theory could help solve other fundamental machine learning problems.

Today at ICLR 2021 (the International Conference on Learning Representations), we presented “EigenGame: PCA as a Nash Equilibrium,” which received an Outstanding Paper Award. Our research explored a new approach to an old problem: we reformulated principal component analysis (PCA), a type of eigenvalue problem, as a competitive multi-agent game we call EigenGame. PCA is typically formulated as an optimisation problem (or single-agent problem); however, we found that the multi-agent perspective allowed us to develop new insights and algorithms which make use of the latest computational resources. This enabled us to scale to massive data sets that previously would have been too computationally demanding, and offers an alternative approach for future exploration. ... " 
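
For the technically curious, a sequential variant of the update can be sketched in a few lines of NumPy. This is my own much-simplified rendering of the paper's idea (the paper uses Riemannian updates and parallel players), not DeepMind's code: each "player" ascends its own utility, penalized for aligning with earlier players' directions.

    import numpy as np

    def eigengame(M, k=3, steps=2000, lr=0.01):
        """Much-simplified sequential EigenGame: player i maximizes its
        Rayleigh quotient, penalized for aligning with players j < i."""
        n = M.shape[0]
        V = np.linalg.qr(np.random.randn(n, k))[0]   # random orthonormal start
        for _ in range(steps):
            for i in range(k):
                v = V[:, i]
                grad = 2 * M @ v
                for j in range(i):                   # penalties vs. 'parents'
                    w = V[:, j]
                    grad -= 2 * (v @ M @ w) / (w @ M @ w) * (M @ w)
                v = v + lr * grad
                V[:, i] = v / np.linalg.norm(v)      # stay on the unit sphere
        return V

    A = np.random.randn(200, 5)
    V = eigengame(A.T @ A / 200)   # columns approximate top eigenvectors

At the Nash equilibrium of this game, the players' vectors line up with the principal components, which is what lets PCA be parallelized across many machines.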


Monday, May 24, 2021

Stanford Chatbot Study

Mostly obvious results, but useful characterizations. We found a 'concierge' model most important: how do you get people to the right humans, the ones with the best answer in context? I also recall measuring a 'competence in context' rating, and we also tried to measure 'ongoing engagement', which was useful for future marketing connections.

Do chatbots need to be more likable?    by Tom Ryan in Retailwire

A new Stanford university study finds people will more readily use a chatbot if they perceive it to be friendly and competent and less so if it projects overconfidence and arrogance. The challenge, the authors say, is finding the right balance.

Across three studies with 300 participants in the U.S., researchers tested reactions to AI-bots with the same underlying functionality but different descriptions.

Among the findings:

Low-competence descriptions (e.g., “this agent is like a toddler”) led to increases in perceived usability, intention to adopt and desire to cooperate relative to high-competence descriptions (e.g., “this agent is trained like a professional”). 

People are more likely to cooperate with and help an agent that projects higher warmth (e.g., “good-natured” or “sincere”).

Descriptions “are powerful,” helping drive user adoption and engagement with chatbots.

The authors suggested chatbots need to instill confidence that they are worthwhile to engage with. At the same time, acknowledging some errors may occur early on as the chatbot learns what users want will likely help people become more accepting of a chatbot’s mistakes. Pranav Khadpe, a co-author, told The Wall Street Journal, “You really want to manage the expectations you set before the first interaction.”  ... '

Microsoft in China Retail Tech?

Had visited Microsoft's retail tech facility in the past.

Microsoft Pushes into Growing Grocery Tech Market with Deal in China

CNBC, Evelyn Cheng, May 20, 2021  

Microsoft's Chinese branch last week announced its latest omnichannel retail push to develop cloud-based software for store operators, in partnership with Chinese retail technology provider Hanshow. Hanshow, whose clients are mainly Chinese and European supermarkets, said its products include electronic shelf labels that can display price changes in real time, a system that helps workers pack produce faster for delivery, and a cloud-based platform that lets retailers simultaneously view the temperatures of fresh produce in stores worldwide. The partnership also will develop Internet of Things technology, while Hanshow's Gao Bo said Hanshow will gain access to Microsoft Office 365 software such as Word, and Dynamics 365, a cloud-based customer relationship management system. Joe Bao at Microsoft's China unit said the partnership aims to extend the reach of China's grocery technology globally.

China to Ban Bitcoin Mining

Surprising, perhaps?   What are the details? 

China will likely ban all bitcoin mining soon

Country’s top financial regulator homes in on the source.    By Tim De Chant in Ars Technica

Bitcoin took investors on another rollercoaster ride over the weekend after a top regulator in China announced a crackdown on mining, a new tack in the country’s ongoing fight against the cryptocurrency.

The government will “crack down on bitcoin mining and trading behavior and resolutely prevent the transfer of individual risks to the society,” said the statement, which was issued by the Financial Stability and Development Committee of the State Council, the country’s cabinet equivalent. The committee is chaired by Vice Premier Liu He, who acts as President Xi Jinping’s top representative on economic and financial matters.  ..... " 

Virtual Events

 I don't particularly like 'rules', but these are reasonable considerations in some contexts of use:

10 rules for any executive navigating the new world of virtual conferences

Virtual events are here to stay. Two experiential leaders offer advice on how to do them right.  By Erica Boeke  in Fastcompany

Even as the world emerges from the pandemic, new platforms are popping up everywhere to help us navigate our increasingly virtual world. And some of their eye-popping valuations—Hopin reportedly is worth $5.7 billion—suggest that investors believe technology will have an outsized impact on group gatherings for the foreseeable future.

We lead companies that sit at the intersection of experience and technology, and we’re here to tell you: events are still—and always will be—about making good sh*t. In other words, content still rules, and no matter how elaborately your brand dresses up your conference stages or how seamlessly your chosen virtual event platform runs on event day, if you’re not delivering good content—if it’s ho-hum or played out—you will lose your audience and your credibility along with it. ... " 

Explaining AI in Context

Not that I am focused on the connection with autonomous cars, but I am very much interested in how we explain AI in all sorts of contexts. We built some AI systems in our early days that could have used very precise explanatory capabilities, in order to keep their credibility over many maintenance cycles, but explanation could only be done crudely at the time. Here is a nice case study in the here and now.

The Rocky Road Toward Explainable AI (XAI) For AI Autonomous Cars 

The AI systems doing the piloting of autonomous cars will need to provide explanations to curious passengers about the route being undertaken    By Lance Eliot, the AI Trends Insider  

Our lives are filled with explanations. You go to see your primary physician due to a sore shoulder. The doctor tells you to rest your arm and avoid any heavy lifting. In addition, a prescription is given. You immediately wonder why you would need to take medication and also are undoubtedly interested in knowing what the medical diagnosis and overall prognosis are. 

So, you ask for an explanation. 

In a sense, you have just opened a bit of Pandora’s box, at least in regard to the nature of the explanation that you might get. For example, the medical doctor could rattle off a lengthy and jargon-filled indication of shoulder anatomy and dive deeply into the chemical properties of the medication that has been prescribed. That’s probably not the explanation you were seeking.

It used to be that physicians did not expect patients to ask for explanations. Whatever was said by the doctor was considered sacrosanct. The very nerve of asking for an explanation was tantamount to questioning the veracity of a revered medical opinion. Some doctors would gruffly tell you to simply do as they have instructed (no questions permitted) or might utter something rather insipid like your shoulder needs help and this is the best course of action. Period, end of story.   

Nowadays, medical doctors are aware of the need for viable explanations. There is specialized “bedside” training that takes place in medical schools. Hospitals have their own in-house courses. Upcoming medical doctors are graded on how they interact with patients. And so on.

Though that certainly has opened the door toward improved interaction with patients, it does not necessarily completely solve the explanations issue. 

Knowing how to best provide an explanation is both art and science. You need to consider that there is the explainer that will be providing the explanation, and there is a person that will be the recipient of the explanation.    ... '

Matrix Multiplication Inches Closer to Mythic Goal

Mentioned previously; here it is a little more succinctly. Technical.

Matrix Multiplication Inches Closer to Mythic Goal  By Quanta Magazine,  March 24, 2021

A paper posted in October describes the fastest-ever method for multiplying two matrices together.

For computer scientists and mathematicians, opinions about "exponent two" boil down to a sense of how the world should be.

"It's hard to distinguish scientific thinking from wishful thinking," said Chris Umans of the California Institute of Technology. "I want the exponent to be two because it's beautiful."

"Exponent two" refers to the ideal speed — in terms of number of steps required — of performing one of the most fundamental operations in math: matrix multiplication. If exponent two is achievable, then it's possible to carry out matrix multiplication as fast as physically possible. If it's not, then we're stuck in a world misfit to our dreams.

Matrices are arrays of numbers. When you have two matrices of compatible sizes, it's possible to multiply them to produce a third matrix. For example, if you start with a pair of two-by-two matrices, their product will also be a two-by-two matrix, containing four entries. More generally, the product of a pair of n-by-n matrices is another n-by-n matrix with n² entries.

For this reason, the fastest one could possibly hope to multiply pairs of matrices is in n² steps — that is, in the number of steps it takes merely to write down the answer. This is where "exponent two" comes from.

And while no one knows for sure whether it can be reached, researchers continue to make progress in that direction....

From Quanta magazine
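
To make the exponent concrete: the schoolbook method below performs n × n × n scalar multiplications, hence exponent three. Strassen's 1969 trick multiplies 2×2 blocks with seven products instead of eight, which applied recursively gives an exponent of log₂(7) ≈ 2.81; the line of work Quanta describes has since pushed the best-known exponent down to roughly 2.37. A minimal Python sketch of the baseline:

    import numpy as np

    def schoolbook_multiply(A, B):
        """Naive multiply of two n-by-n matrices: n*n*n scalar products,
        i.e., exponent three."""
        n = A.shape[0]
        C = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    C[i, j] += A[i, k] * B[k, j]
        return C

Every improvement since has come from clever ways to trade many of those scalar products for additions, which are cheap by comparison.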

Sunday, May 23, 2021

England Vaccine Passport

Some intriguing details of the approach in play

What England’s new vaccine passport could mean for covid tech’s next act

As more countries roll out systems for proving that people are immunized, they can learn from last year’s flood of covid apps.   by Lindsay Muscato

Almost exactly a year ago, software developers rushed to build technologies that could help stop the pandemic. Back then, the focus was on apps that could track whether you’d been near someone with covid. Today the discussion is about digital vaccine credentials, often called “vaccine passports,” designed to work on your smartphone and show that you’ve been inoculated. 

The latest launch came on May 17 in England, with the National Health Service’s new digital credential for crossing borders. Here’s what we know about it: 

It’s only for people going out of the UK from England (Scotland, Wales, and Northern Ireland are not yet using the app, although it could expand to them soon).

It’s just for crossing borders. Using it at places around town (like pubs) has been suggested by some, but that remains a controversial idea. 

Not many countries accept proof of vaccination as an alternative to quarantining or showing a negative covid test, so those using the app still need to check the rules for their particular destination.

It’s an upgrade of an NHS app that connects people to their doctors’ offices and medical records—and not an addition to the NHS’s much-debated contact tracing app.

Right now it can only show vaccination status, not other information such as negative test results, although that could be added.

People without smartphones can request a letter that verifies they’ve had both doses of the vaccine.   ....  '

A Technical Introduction to the Concept and Value of Batteries

What has kept us moving fluidly, in all sorts of contexts, by storing energy and providing it as needed to an increasing number of devices, large and small? Batteries. What are they now, and how will they progress? Start with a Battery Day.

Battery Day   By Jessie Frazelle   ACM

Communications of the ACM, May 2021, Vol. 64 No. 5, Pages 52-59, DOI: 10.1145/3434222

Tesla held its first Battery Day on September 22, 2020. What a fantastic world we live in that we can witness the first Applelike keynote for batteries. Batteries are a part of everyday life; without them, the world would be a much different place. Your cellphone, flashlight, tablet, laptops, drones, cars, and other devices would not be portable and operational without batteries.

At the heart of it, batteries store chemical energy and convert it into electrical energy. The chemical reaction in a battery involves the flow of electrons from one electrode to another. When a battery is discharging, electrons flow from the anode, or negative electrode, to the cathode, or positive electrode. This flow of electrons provides an electric current that can be used to power devices. Electrons have a negative charge; therefore, as the flow of negative electrons moves from one electrode to another, an electrolyte is used to balance the charge by being the route for charge-balancing positive ions to flow.

Let's break this down a bit and uncover the chemical reactions at play within batteries. An electrical current requires a flow of electrons. Where do those electrons come from?  ... " 
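
As a back-of-the-envelope companion to the chemistry: a cell's stored energy is roughly its voltage times its charge capacity. A tiny illustrative calculation (my numbers, typical for a phone cell):

    # Energy ~= voltage x charge. A typical phone cell: 3.7 V, 3000 mAh.
    voltage_v = 3.7
    capacity_ah = 3.0                      # 3000 mAh
    energy_wh = voltage_v * capacity_ah    # ~11.1 watt-hours
    energy_j = energy_wh * 3600            # ~40,000 joules
    print(energy_wh, energy_j)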

Editorial Thoughts on Blog Posts Here

Editorial description of articles in this Blog.   

Most posts usually consist of these parts:

1.) A title that describes the essence of the article.
2.) A comment/review/expansion of my opinion of the topic involved, which usually takes the viewpoint of the readers I know I have.
3.) An excerpt of the article, quoted in italics. Usually short, quoted under 'fair use' review assumptions.
4.) If available, a link to the original article.
5.) A set of text tags, which link directly to other articles on similar topics, that you can readily use to search for more.
6.) Comments, if any. All are moderated.

Completeness cannot be assured. You may have to pay for the entire article to see it. My opinions are my own. Links to articles may stop working over time, and I usually don't fix them. I will usually update my outright errors if you inform me. If you have content I find useful to my readers, I may add it; ask me. I will usually discuss significant opportunities, but not just 'Ads'.

This blog has been around for a long time.   Previous years posts were done under somewhat differing assumptions. 

UK Navy Sub Commanded by AI

 An autonomous sub example, the first I have seen of such size and operational autonomy.

The (UK) Navy sub commanded by artificial intelligence  By Michael Dempsey in the BBC

On 20 April, the Royal Navy's latest nuclear-powered hunter-killer submarine, HMS Anson, emerged from a vast construction hall at Barrow-in-Furness, travelled down a slipway and entered the water. All 7,400 tonnes of it.

Around 260 miles away in Plymouth, another submarine made its debut that same day. A minnow compared to HMS Anson, this secretive nine-tonne craft may have greater implications for the future of the navy than the £1.3bn nuclear boat.

MSubs of Plymouth, a specialist in autonomous underwater vehicles, won a £2.5m Ministry of Defence contract to build and test an Extra-Large Unmanned Underwater Vehicle (XLUUV) that should be able to operate up to 3,000 miles from home for three months.

The big innovation here is the autonomy. The submarine's movements and actions will be governed entirely by Artificial Intelligence (AI).  Ollie Thompson is a recent graduate who is studying for a master's degree in robotics at Plymouth University. He also works for MarineAI, the MSubs arm that is fitting out the XLUUV's brain. .... '

Zoom Rooms and Alexa for Business

 Alexa linking with Zoom Rooms Appliances: collaboration with Zoom, and voice control of meeting management. Expanding the uses of Alexa for Business. A good move forward given the popularity of Zoom for meetings, and the use of AI to integrate voice and logical management of increasingly complex meetings in the post-pandemic world. I look forward to giving this a try.

Zoom Rooms and Alexa for Business to Empower Hands-Free Meetings  By Bobby Agarwal  in Amazon Developer

With employees around the country beginning to transition back into physical offices, many teams are making decisions related to how they can prepare their technical infrastructure for the post-pandemic world. One of the recurring themes has been the need for hands-free and voice-controlled meeting rooms. Even before the COVID-19 pandemic, Gartner predicted that by 2022, 40% of formal meetings will be facilitated by virtual AI and advanced analytics.

Simpler Voice-Controlled Meetings    

Starting today, Alexa will be integrated into Zoom Rooms Appliances and available for organizations of all sizes. This update extends support to Zoom Rooms Appliances manufactured by DTEN, Logitech, Neat, and Poly. By using natural language such as “Alexa, join my meeting,” or “Alexa, find me an available room” administrators can help employees focus on the meeting itself rather than the technology. No more touching screens, switching between technologies, or remembering how to use different devices.

This new offering enables administrators to set up Alexa for Business with a few clicks within the Zoom Rooms portal itself and without having to purchase additional hardware. Once enabled, for an individual room, floor, building, or campus, Zoom Rooms users can ask Alexa to join their scheduled meeting, start a new meeting, book a room for their ad-hoc meeting, or find an available room by simply using their voice. 

“How we work is changing,” said Jeff Smith, Head of Zoom Rooms at Zoom. “By integrating Alexa for Business into Zoom Rooms Appliances, we’re reducing friction for organizations as they look towards ways to bring back their employees safely.”    ... " 

More Robots in Retail

A topic we have covered here for a long time, both in surveys and actual lab trials and experiments.

Pandemic is Pushing Robots into Retail at Unprecedented Pace  By ZDNet  April 16, 2021

The results of a new survey by RetailWire and Brain Corp. support the conclusion that COVID-19 has hastened automation development and adoption.

The results of a survey by retail news and analysis firm RetailWire and commercial robotics company Brain Corp. indicate the Covid-19 pandemic has ramped up development and adoption of automation.

The poll estimated that 64% of retailers consider it important to have a clear, executable, and budgeted robotics automation strategy in place this year; almost half plan to participate in an in-store robotics project in the next 18 months.

Brain Corp.'s Josh Baylin said, "The global pandemic brought the value of robotic automation sharply into focus for many retailers, and we now see them accelerating their deployment timelines to reap the advantages now and into the future."

Heightened focus on cleanliness is one of the key drivers of adoption.  ... '

Saturday, May 22, 2021

IoT and Vision AI with NVIDIA AMA

Just brought to my attention:

AI, Robotics, and IoT video with NVIDIA, here is one episode: 

Everything you needed to know about IoT and vision AI with NVIDIA AMA video. Watch now.

Join the NVIDIA Jetson team for the latest episode of our AMA-style live stream, Jetson AI Labs.  This episode, we'll be talking all things IoT with guest panelist Paul DeCarlo, Principal Cloud Developer Advocate from Microsoft - along with our hosts Dustin Franklin, Dana Sheahen, and Jim Benson from JetsonHacks.  ... 

The stream will begin on Thursday, February 25, 2021 at 10am Pacific time:     https://www.youtube.com/watch?v=HQBqZEcMIrM

Please enter your questions in the live chat window, and we look forward to talking with you!  ... 

500K Jobs in Cybersecurity

Seems a very large number. As I see it, most of the jobs in this area today are broad and deep, and thus harder to fill. It depends on the definition. Perhaps the roles should be defined down to be less technical and more behavioral.

ACM TECHNEWS

U.S. Has Almost 500,000 Job Openings in Cybersecurity

The U.S. Commerce Department's Cyber Seek technology job-tracking database and the trade group CompTIA count about 465,000 current U.S. cybersecurity job openings.

Experts said private businesses and government agencies' need for more cybersecurity staff has unlocked a prime opportunity for anyone considering a job in that field.

The University of San Diego's Michelle Moore suggested switching to a cybersecurity career could be as simple as obtaining a Network+ or Security+ certification, while an eight-week online course could help someone gain an entry-level job earning $60,000 to $90,000 a year as a penetration tester, network security engineer, or incident response analyst.

Moore cited a lack of skilled cybersecurity personnel as a problem, while CompTIA's Tim Herbert said only a small percentage of computer science graduates pursue cybersecurity careers.  ... " 

Developing Digital Twins

Interesting; often this means modeling not only the twin but also the context of its use, metadata and all. I would like to see a full example.

Advanced Technique for Developing Digital Twins Makes Tech Universally Applicable  UT News, May 20, 2021

Researchers at the University of Texas at Austin (UT Austin), the Massachusetts Institute of Technology (MIT), and industry partner The Jessara Group have developed what they’re calling a universally applicable digital twin mathematical model. The framework was designed to facilitate predictive digital twins at scale. MIT's Michael Kapteyn said, "Using probabilistic graphical models, we create a mathematical model of the digital twin that applies broadly across application domains." The researchers used this technique to generate a structural digital twin of a custom-built unmanned aerial vehicle equipped with state-of-the-art sensors. Said Jacob Pretorius of the Jessara Group, “The value of integrated sensing solutions has been recognized for some time, but combining them with the digital twin concept takes that to a new level. We are on the cusp of an exciting future for intelligent engineering systems.”
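
The summary doesn't include the model itself, but the probabilistic-graphical-model framing can be illustrated with a toy Bayesian update: the twin maintains a belief over a hidden structural state and revises it as sensor data arrives. Everything below is my own hypothetical sketch, not the UT Austin/MIT framework:

    import numpy as np

    # Toy structural digital twin: belief over a hidden damage state,
    # updated once per flight from a (made-up) strain sensor model.
    belief = np.array([0.98, 0.015, 0.005])     # P(healthy, degraded, damaged)

    transition = np.array([[0.97, 0.02, 0.01],  # P(next state | current state)
                           [0.00, 0.95, 0.05],
                           [0.00, 0.00, 1.00]])

    p_high_strain = np.array([0.05, 0.40, 0.90])  # P(high reading | state)

    def update(belief, high_strain_observed):
        belief = transition.T @ belief            # predict forward in time
        like = p_high_strain if high_strain_observed else 1.0 - p_high_strain
        belief = belief * like                    # condition on the sensor
        return belief / belief.sum()

    belief = update(belief, high_strain_observed=True)

The appeal of the graphical-model formulation is exactly this kind of domain independence: the same predict-then-condition structure applies whether the asset is an aircraft wing or a wind turbine.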

Friday, May 21, 2021

Shape-Shifting Processor for Security

Now here is a rather remarkable thing. Would it work for post-quantum computing? Could it be used to ensure all kinds of threats are addressed? What recoding would be required to make sure of that? Nice idea. I like the approach of large-scale, multi-agent testing.

ACM NEWS

Shape-shifting Computer Chip Thwarts Hackers  By The Conversation   May 20, 2021

We have developed and tested a secure new computer processor that thwarts hackers by randomly changing its underlying structure, thus making it virtually impossible to hack.

Last summer, 525 security researchers spent three months trying to hack our Morpheus processor as well as others. All attempts against Morpheus failed. (This link also gives a technical outline of the approach.) This study was part of a program sponsored by the U.S. Defense Advanced Research Projects Agency to design a secure processor that could protect vulnerable software. DARPA released the results on the program to the public for the first time in January 2021.

A processor is the piece of computer hardware that runs software programs. Since a processor underlies all software systems, a secure processor has the potential to protect any software running on it from attack. Our team at the University of Michigan first developed Morpheus, a secure processor that thwarts attacks by turning the computer into a puzzle, in 2019. ...
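
As I read the article, Morpheus encrypts and continually re-randomizes low-level representations such as pointer encodings, so anything an attacker learns goes stale almost immediately. A loose software analogy (mine, greatly simplified; the real design does this in hardware on a rapid "churn" cycle):

    import secrets, time

    class ChurningEncoder:
        """Toy analogy: values are XOR-encoded under a key that is
        re-randomized on a short cycle, so leaked encodings go stale."""
        def __init__(self, churn_seconds=0.05):
            self.key = secrets.randbits(64)
            self.churn_seconds = churn_seconds
            self.last_churn = time.monotonic()

        def maybe_churn(self, live_values):
            """Periodically pick a new key and re-encode all live values."""
            if time.monotonic() - self.last_churn >= self.churn_seconds:
                new_key = secrets.randbits(64)
                live_values[:] = [v ^ self.key ^ new_key for v in live_values]
                self.key, self.last_churn = new_key, time.monotonic()

        def encode(self, value):
            return value ^ self.key

        def decode(self, encoded):
            return encoded ^ self.key

An attacker who exfiltrates an encoded value has only until the next churn to exploit it, which is the puzzle-that-keeps-changing property the researchers describe.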

Adjusting Fare Algorithms

 This came up in discussion this week ... the algorithms are classic approaches. But the algorithms are adaptive, so I would expect them to be continually adjusted. And that is if they are maintaining them, as I always suggest ....

COVID-19 Wrecked the Algorithms That Set Airfares, but They Won't Stay Dumb

The Wall Street Journal, Jon Sindreu, May 17, 2021

The COVID-19 pandemic crippled the reliability of algorithms used to set air fares based on historical data and has accelerated a hybrid model that combines historical and live data. Before the pandemic, airlines used the algorithms to predict how strong ticket demand would be on a particular day and time, or exactly when people will fly to visit relatives before a holiday. Corporate travel constitutes a large share of airline profits, with business fliers avoiding Tuesdays and Wednesdays, favoring short trips over week-long ones, and booking late. The pandemic undermined historical demand patterns while cancellations undercut live data, causing the algorithms to post absurd prices. Overall, the pandemic has stress-tested useful advancements to the algorithms, like assigning greater weight to recent booking numbers, and applying online searches to forecast when and where demand will manifest.  ... '
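
The "hybrid model" can be as simple as a weighted blend in which the weight shifts toward live bookings when history stops being predictive. A hypothetical sketch (no airline's actual system):

    def blended_forecast(historical, live_pace, alpha=0.7):
        """Blend a historical demand forecast with a live-bookings signal;
        alpha near 1 leans on recent pace -- the post-pandemic adjustment."""
        return alpha * live_pace + (1 - alpha) * historical

    # History says 120 seats for this flight; current booking pace implies 40.
    print(blended_forecast(historical=120, live_pace=40))   # -> 64.0

Tuning alpha is the hard part: too high and the system overreacts to noise in a handful of bookings, too low and it keeps pricing for a world that no longer exists.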

Google Builds a Reader

Strangely, and still only experimentally, Google has decided to unearth their RSS reader. I was an early test user of Google Reader, and remember thinking how useful this was to make sense of an expanding Web, especially if you had many interests. It had many followers. Then they canned it. It's back, or is it, for good? I have used Feedly ever since.

Undead Again, Google Brings Back Reader     in Techcrunch  By Frederic Lardinois   @fredericl 

Chrome, at least in its experimental Canary version on Android (and only for users in the U.S.), is getting an interesting update in the coming weeks that brings back RSS, the once-popular format for getting updates from all the sites you love in Google Reader and similar services.

In Chrome, users will soon see a “Follow” feature for sites that support RSS and the browser’s New Tab page will get what is essentially a (very) basic RSS reader — I guess you could almost call it a “Google Reader.”

Now we’re not talking about a full-blown RSS reader here. The New Tab page will show you updates from the sites you follow in chronological order, but it doesn’t look like you can easily switch between feeds, for example. It’s a start, though.  ...  '
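
Under the hood, a minimal reader is just fetch, parse, and sort. A small sketch using the third-party feedparser library (the feed URLs are placeholders):

    import feedparser

    feeds = ["https://example.com/feed.xml",   # placeholder feed URLs
             "https://example.org/rss"]

    entries = []
    for url in feeds:
        parsed = feedparser.parse(url)
        source = parsed.feed.get("title", url)
        for e in parsed.entries:
            when = e.get("published_parsed")
            entries.append((tuple(when) if when else (), source, e.get("title", "")))

    # One merged, newest-first list -- roughly what the New Tab page shows.
    entries.sort(key=lambda t: t[0], reverse=True)
    for _, source, title in entries[:20]:
        print(source, "-", title)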

Germany to Support Quantum Computing with €2 Billion

Germany to Support Quantum Computing with €2 Billion

U.S. News & World Report, Michael Nienaber, May 11, 2021

Germany's economy and science ministries announced an approximately €2-billion ($2.4-billion) allocation to develop the country's first competitive quantum computer and associated technologies in the next four years. The science ministry will invest €1.1 billion ($1.3 billion) by 2025 to support quantum computing research and development, while the economy ministry will spend €878 million ($1.06 billion) on practical applications. The economy ministry said most subsidies will go to Germany's Aerospace Center, which will partner with industrial companies, midsized enterprises, and startups to establish two consortia. Economy Minister Peter Altmaier cited management of supply and demand in the energy sector, improved traffic control, and faster testing of new active substances as areas that quantum computing could potentially revolutionize. ...  

Computing History: Looms and Babbage

Just a bit of history. Mostly pictures at the link, but great if you were unaware of the background.

BLOG@CACM

Charles Babbage and the Loom   By Herbert Bruderer

Charles Babbage's analytical engine (see Fig. 1), which already provided for conditional branching, is regarded as the ancestor of the modern-day computer. He wanted to control his programmable machine with punched cards similar to the automatic looms from France.

Punched tapes or punched cards joined to tapes simplified work on looms (pattern control). Among the pioneers were Basile Bouchon (see Fig. 2), Jean-Baptiste Falcon (see Fig. 3), and Joseph-Marie Jacquard (see Fig. 4). Their achievements are on view in the Musée des arts et métiers in Paris.  .... " 

Collaborating Robotic Teams

 Continuing to look at this space; now that robotics is getting more advanced, the potential expands. We examined some very early warehouse management approaches. Mentioned previously:

Helping Robots Collaborate to Get the Job Done, By MIT News

An algorithm developed by researchers at the Massachusetts Institute of Technology (MIT), the University of Pennsylvania, and the University of California, San Diego aims to foster cooperation between information-gathering robot teams.

The algorithm balances data collection and energy expenditure to avoid having robots perform maneuvers that waste energy to gain only a small amount of data.

Using the researchers' Distributed Local Search approach, each robot proposes a potential trajectory for itself as part of the team, and the algorithm accepts or rejects them based on whether it will increase the likelihood of achieving the team's objective function.

A test involving a simulated team of 10 robots showed that the algorithm required more computation time, but guaranteed their mission would be completed successfully.

 .... Massachusetts Institute of Technology researchers have developed an algorithm that coordinates the performance of robot teams for missions like mapping or search-and-rescue in complex, unpredictable environments.

From MIT News
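
The summary compresses the method a lot, but the accept/reject loop reads like classic local search. A hypothetical Python sketch of the idea (my paraphrase, not the authors' code):

    import random

    def distributed_local_search(robot_ids, initial, objective, propose, rounds=200):
        """Each robot proposes a new trajectory for itself; the team accepts
        it only if the shared objective (data gained minus energy spent) rises."""
        plan = dict(initial)
        best = objective(plan)
        for _ in range(rounds):
            r = random.choice(robot_ids)        # one robot speaks up
            candidate = dict(plan)
            candidate[r] = propose(r, plan)     # its proposed new trajectory
            score = objective(candidate)
            if score > best:                    # keep only improvements
                plan, best = candidate, score
        return plan

    # Toy usage: 'trajectories' are scalar effort levels with diminishing
    # returns on data and a linear energy cost.
    ids = [0, 1, 2]
    plan = distributed_local_search(
        ids, {i: 0.0 for i in ids},
        objective=lambda p: sum(2 * e ** 0.5 - 0.5 * e for e in p.values()),
        propose=lambda r, p: max(0.0, p[r] + random.uniform(-1.0, 1.0)))

The energy term is what keeps a robot from, say, detouring across the whole map for a marginal scrap of data, which matches the balance the MIT summary describes.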

Thursday, May 20, 2021

What is a Public Interest Technologist?

I happened on this description and supporting information in Bruce Schneier's blog on security. Liked the idea.   A related discussion had come up when some colleagues talked about our roles as consultants, bloggers and historians of active and important areas of technology.  

( How about Collaborating Public Interest Technologists?   Want to develop the idea?  Support its development?   Contact me.)

Public-Interest Technology Resources

Maintained by Bruce Schneier. Last updated April 30, 2021

Introduction

As technology—especially computer, information, and Internet technology—permeates all aspects of our society, people who understand that technology need to be part of public-policy discussions. We need technologists who work in the public interest. We need public-interest technologists.

Defining this term is difficult. One Ford Foundation blog post described public-interest technologists as “technology practitioners who focus on social justice, the common good, and/or the public interest.” A group of academics in this field wrote that “public-interest technology refers to the study and application of technology expertise to advance the public interest/generate public benefits/promote the public good.” (continued)