Recently signed up for this and missed it; may be of interest:
Checking in to see if you watched the replay of our smart home webinar featuring iRobot.
It features iRobot's CTO, Chris Jones, talking in-depth about the future of the smart home, and how they were able to solve the dual challenges of customer experience and integrations by working with IFTTT.
Smart Home hardware companies are placing increased strategic importance on software services to address future opportunities.
Join IFTTT Founder and CEO, Linden Tibbets and iRobot Chief Technology Officer, Chris Jones, as they discuss the integration choices Smart Home companies face and the future of the Smart Home.
In this webinar we discuss:
The ongoing transition of hardware companies to software services companies
Connecting Smart Home products to other products in the Smart Home and beyond
Direct integrations, Developer platforms and iPaaS
IFTTT Data Insights – Integration trends in the Smart Home
Why customer experience will ultimately decide ....
Best, Skyler
Skyler Saulovich, Sales at IFTTT ... "
Tuesday, March 31, 2020
Procter Donates Face Masks, Hand Sanitizer
UPDATE: P&G to make and donate face masks, hand sanitizer in response to coronavirus By Barrett J. Brunsman – Staff reporter, Cincinnati Business Courier
Mar 26, 2020, 2:47pm EDT Updated Mar 26, 2020, 6:14pm EDT
Instead of selling the new products, Procter & Gamble will use them to safeguard its employees and to donate needed supplies of face masks and hand sanitizer to hospitals, health authorities and relief organizations. ... "
Towards a Taxonomy for Automated Assistants
Like the idea of identifying and constructing tasks for assistants so they can more readily be challenged and compared. This article suggests this be done and gives some examples.
A Taxonomy of Automated Assistants
By Jerrold M. Grochow
Communications of the ACM, April 2020, Vol. 63 No. 4, Pages 39-41 10.1145/3382746
Automated cars are in our future—and starting to be in our present. In 2014, the Society of Automotive Engineers (SAE) published the first version of a taxonomy for degree of automation in vehicles, from Level 0 (not automated) to Level 5 (fully automated, no human intervention necessary).[8] Since then, this taxonomy has gained wide acceptance—to the point where everyone from the U.S. government (it is used by the NHTSA[5]) to auto manufacturers to the popular press is talking in terms of "skipping level 3" or "everyone wants a level 5 car."[1] As technology gets developed and improved, having an accepted taxonomy helps ensure people can talk to each other and know they are talking about the same thing. It is time for one of our computing organizations (perhaps ACM?) to develop an analogous taxonomy for automated assistants. With Siri, Alexa, Cortana, and cohorts selling in the "tens of millions"[2] and with more than 20 competitors on the market,[7] having an easily understandable taxonomy will help practitioners and end users alike.
There is already a significant body of literature aimed at improving the design and use of automated assistants in both industry and academic arenas (with a variety of category names for these devices and systems, using some combination of "automated," "digital," "smart," "intelligent," "personal," "agent," and "assistant"), as the bibliographies of cited works show. Some recent work has focused on task content, use cases, and features. The task content of human activity has been widely studied over a long period of time, but Trippas et al.[9] note that "how intelligent assistants are used in a workplace setting is less studied and not very well understood." While not presenting a taxonomy of assistants, this type of task content analysis could be used as an aid in intelligent assistant design. Similarly, Mehrotra et al.[4] studied interaction with a desktop-based digital assistant with an eye to "help guide development of future user support systems and improve evaluations of current assistants." Knote et al.[3] evaluated 115 "smart personal assistants" by literature and website review to create a taxonomy based on cluster analysis of design characteristics such as communications mode, direction of interaction, adaptivity, and embodiment (virtual character, voice), and so forth—a technology and features-based taxonomy. A commercial study of 22 popular "intelligent ... or automated personal assistants"[7] reported "Intelligent Agents can be classified based on their degree of perceived intelligence and capability such as simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents and learning agents." While this is an arguably useful taxonomy, it also primarily addresses the technology used and not the actual use of the automated assistant. The website additionally presents editor and user ratings of ease of use, features, and performance that may be of value to end users. .... '
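Knote et al.'s approach above is essentially unsupervised clustering over categorical design features. As a rough illustration (not their actual data or pipeline), a minimal Python sketch might encode a handful of invented assistant characteristics and cluster them:

```python
# Minimal sketch of feature-based cluster analysis over assistant design
# characteristics, loosely in the spirit of Knote et al. The devices and
# feature values below are illustrative stand-ins, not the paper's data.
import pandas as pd
from sklearn.cluster import KMeans

assistants = pd.DataFrame([
    # name,        comm_mode, direction, adaptivity, embodiment
    ("VoiceHub",   "voice",   "two-way", "high",     "none"),
    ("DeskBot",    "text",    "two-way", "medium",   "virtual-character"),
    ("HomeSensor", "voice",   "one-way", "low",      "none"),
    ("ChatWidget", "text",    "two-way", "low",      "virtual-character"),
], columns=["name", "comm_mode", "direction", "adaptivity", "embodiment"])

# One-hot encode the categorical design characteristics.
features = pd.get_dummies(assistants.drop(columns="name"))

# Cluster into candidate taxonomy groups (k chosen purely for illustration).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
assistants["cluster"] = kmeans.labels_
print(assistants[["name", "cluster"]])
```

A real taxonomy effort would of course need far more devices, carefully chosen characteristics, and validation of the resulting clusters.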
Monday, March 30, 2020
Can Tech Disrupt the Virus?
Here is a challenge: Can AI and Tech confront and disrupt evolved biology in new ways? Can we think beyond methods that currently exist?
Tech's Next Disruption Target: The Coronavirus
The Wall Street Journal
Asa Fitch; Rolfe Winkler; Deepa Seetharaman
March 25, 2020
Silicon Valley technology experts are pursuing various projects to combat the coronavirus, with thousands of volunteers contributing to hundreds of hastily organized initiatives in their spare time. Projects range from developing applications to deliver groceries to vulnerable seniors to simulating the virus' spread and sharing findings with specialists. Instagram co-founder Kevin Systrom built a model that predicts virus propagation and published it online. Alphabet enlisted its DeepMind artificial intelligence unit to find a vaccine, and its Verily life-sciences research unit to develop virus-detection techniques. Alphabet's Brian McClendon sees the pandemic as an opportunity to design a smartphone app for tracking health status, using blockchain to protect privacy; he hopes it will give people confidence to return to normal life after the crisis passes..... "
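Systrom's published work estimates the effective reproduction number Rt; the sketch below is not his model, just the simplest compartmental (SIR) simulation of the kind that underlies many spread-prediction dashboards, with invented parameters:

```python
# Minimal SIR (susceptible-infected-recovered) simulation. Parameters
# are illustrative, not fitted to real data, and this does not reproduce
# Systrom's Bayesian Rt estimate.
def sir(population, infected0, beta, gamma, days):
    s, i, r = population - infected0, infected0, 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population  # contact-driven spread
        new_recoveries = gamma * i                  # recovery at rate gamma
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

for day, (s, i, r) in enumerate(sir(1_000_000, 10, beta=0.3, gamma=0.1, days=120)):
    if day % 30 == 0:
        print(f"day {day:3d}: infected={i:10.0f} recovered={r:10.0f}")
```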
Hybrid AI Examined
Big proponent of the idea. Neural methods solve specific problems well, yet we solve many other problems symbolically, logically. Math gives us solutions with algorithms, but the applied use of these methods is logically driven. The next AI decade should seek the power of both methods.
The case for hybrid artificial intelligence By Ben Dickson in bdTechtalks
This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.
Deep learning, the main innovation that has renewed interest in artificial intelligence in the past years, has helped solve many critical problems in computer vision, natural language processing, and speech recognition. However, as deep learning matures and moves from its hype peak to its trough of disillusionment, it is becoming clear that it is missing some fundamental components.
This is a reality that many of the pioneers of deep learning and its main component, artificial neural networks, have acknowledged in various AI conferences in the past year. Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, the three “godfathers of deep learning,” have all spoken about the limits of neural networks.
The question is, what is the path forward?
At NeurIPS 2019, Bengio discussed system 2 deep learning, a new generation of neural networks that can handle compositionality, out-of-distribution generalization, and causal structures. At the AAAI 2020 Conference, Hinton discussed the shortcomings of convolutional neural networks (CNNs) and the need to move toward capsule networks.
But for cognitive scientist Gary Marcus, the solution lies in developing hybrid models that combine neural networks with symbolic artificial intelligence, the branch of AI that dominated the field before the rise of deep learning. In a paper titled “The Next Decade in AI: Four Steps Toward Robust Artificial Intelligence,” Marcus discusses how hybrid artificial intelligence can solve some of the fundamental problems deep learning faces today.
Connectionists, the proponents of pure neural network–based approaches, reject any return to symbolic AI. Hinton has compared hybrid AI to combining electric motors and internal combustion engines. Bengio has also shunned the idea of hybrid artificial intelligence on several occasions.
But Marcus believes the path forward lies in putting aside old rivalries and bringing together the best of both worlds. .... "
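To make the hybrid idea concrete, here is a toy sketch (my construction, not code from Marcus's paper): a stubbed-out "neural" perception step emits a symbol with a confidence, and a symbolic rule layer applies hard domain knowledge on top:

```python
# Toy hybrid pipeline: a learned perception component emits symbols, and
# a symbolic rule layer reasons over them. The "neural" part is stubbed
# out here; in practice it would be a trained network.
def neural_perception(image_patch):
    # Stand-in for a trained classifier: returns (label, confidence).
    return ("stop_sign", 0.92)

RULES = {
    # perceived symbol -> action implied by domain knowledge
    "stop_sign":   "brake_to_stop",
    "green_light": "proceed",
    "pedestrian":  "yield",
}

def decide(image_patch, threshold=0.8):
    label, confidence = neural_perception(image_patch)
    # Symbolic layer: apply hard rules to the perceived symbol, falling
    # back to a safe default when the neural component is unsure.
    if confidence < threshold:
        return "slow_and_request_human"
    return RULES.get(label, "slow_and_request_human")

print(decide(image_patch=None))  # -> brake_to_stop
```

The point of the split is that the rules remain auditable and editable even when the perception component is an opaque network.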
Anomaly Response: How we Respond to the Unexpected
Intriguing piece. Made me think of our current predicament as well. Can we model human adaptation to expected and unexpected events? I like the notion of exploring different responses. I would add that we could then simulate them via a properly constructed agent or twin model. Technical article.
Cognitive Work of Hypothesis Exploration During Anomaly Response
A look at how we respond to the unexpected
Marisa R. Grayson in Queue ACM
Web-production software systems operate at an unprecedented scale today, requiring extensive automation to develop and maintain services. The systems are designed to adapt regularly to dynamic load to avoid the consequences of overloading portions of the network. As the software systems scale and complexity grows, it becomes more difficult to observe, model, and track how the systems function and malfunction. Anomalies inevitably arise, challenging incident responders to recognize and understand unusual behaviors as they plan and execute interventions to mitigate or resolve the threat of service outage. This is anomaly response.1
The cognitive work of anomaly response has been studied in energy systems, space systems, and anesthetic management during surgery.9,10 Recently, it has been recognized as an essential part of managing web-production software systems. Web operations also provide the potential for new insights because all data about an incident response in a purely digital system is available, in principle, to support detailed analysis. More importantly, the scale, autonomous capabilities, and complexity of web operations go well beyond the settings previously studied.7,8
Four incidents from web-based software companies reveal important aspects of anomaly response processes when incidents arise in web operations, two of which are discussed in this article. One particular cognitive function examined in detail is hypothesis generation and exploration, given the impact of obscure automation on engineers' development of coherent models of the systems they manage. Each case was analyzed using the techniques and concepts of cognitive systems engineering.9,10 The set of cases provides a window into the cognitive work "above the line" (see "Above the Line, Below the Line" by Richard Cook in this issue) in incident management of complex web-operation systems (cf. Grayson, 2018). .... "
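The article's focus is the human side, but the trigger for all of this cognitive work is usually an automated detector. A minimal, hypothetical sketch of that trigger side, a rolling z-score alarm over a latency metric:

```python
# Sketch of the detection side only: a rolling z-score alarm over a
# service metric, the kind of signal that kicks off the human anomaly
# response the article studies. Thresholds and data are invented.
from collections import deque
import statistics

def zscore_alerts(samples, window=30, threshold=3.0):
    recent = deque(maxlen=window)
    for t, value in enumerate(samples):
        if len(recent) == window:
            mean = statistics.fmean(recent)
            stdev = statistics.pstdev(recent) or 1e-9  # avoid divide-by-zero
            if abs(value - mean) / stdev > threshold:
                yield t, value  # candidate anomaly: hand off to responders
        recent.append(value)

latency_ms = [100 + (i % 5) for i in range(60)] + [450] + [100] * 10
for t, v in zscore_alerts(latency_ms):
    print(f"anomaly at t={t}: {v} ms")
```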
New Rules for Remote Work
Reasonably done look at things that make sense. We have done much remote work for some time now, years in many cases, but now it's just a lot more of it.
HBSWK: Business Research for Business Leaders
The New Rules for Remote Work: Pandemic Edition by Dina Gerdeman
Welcome to the new world of remote work, where employees struggle to learn the rules, managers are unsure how to help them, and organizations get a glimpse into the future.
With many people working remotely for the first time, many of us have experienced a videoconference interrupted by barking dogs or hungry kids demanding snacks, punctuated, perhaps, by slamming cabinet doors and grinding ice makers in the background. We all understand, of course—we’re living it, too.
Welcome to the new world of remote work, pandemic style.
Before the coronavirus hit, 5.2 percent of US employees reported telecommuting most of the time, while 43 percent worked from home at least some of the time. Now, with the pandemic shuttering workplaces, that figure has skyrocketed globally.
But remote work during this bizarre time is certainly not business as usual, even for work-from-home veterans. While some of the typical remote work rules apply, others don’t. Business leaders need a new game plan.
We asked Harvard Business School professors to provide practical advice for managing large-scale, long-term remote work at a time when many employees are not only distracted by the commotion in their homes, but are shaken by the crisis unfolding outside their doors.
“MANAGERS SHOULD MAKE THE CALL ON HIGH-LEVEL PRIORITIES, SO EMPLOYEES CAN FOCUS ON THEIR BEST WORK.”
Here are 10 ways that leaders can support employees who are working remotely during an unprecedented and uncertain time: ...
Virtualitics for Insight Extraction
A company of interest for advanced data visualization. We worked with them in the Enterprise.
Virtualitics Selected for the Air Force Strategic $7 Million Award
March 16, 2020 By Amy Gunzenhauser
Virtualitics is pleased to announce that it has been selected for the Air Force’s first ever Strategic Fund Increase (STRATFI) award through AF Ventures. The $7 million contract will enable Virtualitics to provide our groundbreaking AI data analytics software, Virtualitics Immersive Platform (VIP), to Air Force Global Strike Command to help airmen solve pressing data challenges in the U.S. strategic bomber fleet.
The award was announced by the Secretary of the Air Force, Barbara Barrett, and Assistant Secretary of the Air Force for Acquisition, Technology, and Logistics, Dr. Will Roper, at the Air Force’s virtual “Pitch Bowl” event.
Dr. Roper has described the winners of the STRATFI award as companies providing “game-changing” technologies to the Air Force. “The thing that we’re working on now is the big bets, the 30 to 40 big ideas, disruptive ideas that can change our mission and hopefully change the world,” Roper said. “We’re looking for those types of companies.”
Virtualitics is proud to be one of the “big bet” startups the Air Force is counting on to preserve the U.S. military’s technological advantage.
Receiving the STRATFI award at the Pitch Bowl culminates an impressive trend of recent Department of Defense contracts for Virtualitics. Virtualitics is the only commercial startup to win contract awards at the Air Force’s first ever Space Pitch Day and the F-35 Pitch Day, in addition to the STRATFI. Winning the “triple crown” of contract awards at the Air Force’s seminal innovation events is a clear indication of product-market fit for our solution in the DoD.
We are very proud of our work with the DoD. We have found great satisfaction in helping our men and women in uniform unlock actionable insights in their data...."
Sunday, March 29, 2020
Data Resources: Our World in Data
As part of a larger project that is looking at Data Sources, Open Source Data, Data as an Asset, Data Quality, Data for Machine Learning, Semantic Data, Knowledge Mapping, Metadata, and related topics. This looks to be a great resource; just examining it now.
Specific Data of the Coronavirus/COVID-19 (Updated frequently)
And via the Centers for Disease Control: https://www.cdc.gov/coronavirus/2019-ncov/index.html
Our World in Data: (Used widely for teaching, research etc)
About:
Research and data to make progress against the world’s largest problems
Poverty, disease, hunger, climate change, war, existential risks, and inequality: The world faces many great and terrifying problems. It is these large problems that our work at Our World in Data focuses on.
Thanks to the work of thousands of researchers around the world who dedicate their lives to it, we often have a good understanding of how it is possible to make progress against the large problems we are facing. The world has the resources to do much better and reduce the suffering in the world.
We believe that a key reason why we fail to achieve the progress we are capable of is that we do not make enough use of this existing research and data: the important knowledge is often stored in inaccessible databases, locked away behind paywalls and buried under jargon in academic papers.
The goal of our work is to make the knowledge on the big problems accessible and understandable. As we say on our homepage, Our World in Data is about Research and data to make progress against the world’s largest problems. ... "
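For the data-access side, a minimal sketch of pulling an Our World in Data dataset with pandas; the CSV URL and column names below are the ones OWID has published, but check ourworldindata.org if they have moved:

```python
# Minimal sketch of loading the Our World in Data COVID-19 dataset.
# URL and columns are assumptions based on OWID's published dataset.
import pandas as pd

URL = "https://covid.ourworldindata.org/data/owid-covid-data.csv"
df = pd.read_csv(URL, parse_dates=["date"])

# Latest reported totals for a few locations.
latest = df.sort_values("date").groupby("location").tail(1)
cols = ["location", "date", "total_cases", "total_deaths"]
print(latest[latest.location.isin(["United States", "Germany"])][cols])
```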
Recent Posts relating to Pandemic Conditions and Related Emergent Tech
Recent posts that relate to the coronavirus and COVID-19 in this blog. Primarily dealing with data, prediction, data visualization, modeling, communication, AI, and emergent technologies' roles in addressing pandemic issues. Please pass along anything you would like added to this stream.
IBM, Oracle Team for WHO Virus Data Hub: MiPasa
Now have looked at lots of sites claiming to provide accurate data in this space, and it is clear you can derive wrong results from much of what is out there. So this effort addresses a need.
World Health Organization Teams With IBM, Oracle on Blockchain-Based Coronavirus Data Hub
In Coindesk
Big names including IBM, Oracle and the World Health Organization (WHO) are among the collaborators on an open-data hub that will use blockchain technology to check the veracity of data relating to the coronavirus pandemic.
The solution, dubbed MiPasa, is launching as a “COVID-19 information highway,” said Jonathan Levi, CEO of Hacera, the company that built the platform.
MiPasa, built on Hyperledger Fabric, is expected to evolve as a range of data analytics tools are added, followed by testing data and other information to assist with the precise detection of COVID-19 infection hotspots.
“We feel that there isn't enough information out there to make informed decisions,” said Levi. “How can we help all the people that would like to get access to data, analyze it and provide insights?” ... "
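MiPasa itself is built on Hyperledger Fabric, which I won't reproduce here; the toy Python hash chain below just illustrates the underlying tamper-evidence idea: each record commits to the previous one, so any later edit breaks verification:

```python
# Toy hash chain illustrating tamper-evident records (not Fabric code).
import hashlib
import json

def add_record(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    prev_hash = "0" * 64
    for entry in chain:
        body = {"payload": entry["payload"], "prev": entry["prev"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != recomputed:
            return False  # chain broken: data was altered
        prev_hash = entry["hash"]
    return True

chain = []
add_record(chain, {"region": "A", "cases": 120})
add_record(chain, {"region": "B", "cases": 85})
print(verify(chain))             # True
chain[0]["payload"]["cases"] = 5  # tamper with an early record
print(verify(chain))             # False: tampering detected
```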
On Multisensory Adventures
Vint Cerf on a favorite topic. The senses. He points to two inspirational books on the topic. We are starting to 'understand' senses, at least if we use as a measure how we are able to mimic them.
Multisensory Adventures
By Vinton G. Cerf
Communications of the ACM, April 2020, Vol. 63 No. 4, Page 7
10.1145/3383671
Google Vice President and Chief Internet Evangelist Vinton G. Cerf
In this column, I want to draw your attention to two books. One has been published to great acclaim and the other is still in process. They resonate with a visceral intensity for which I was honestly unprepared and surprised. The first, Multisensory Experiences, Where the Senses Meet Technology, by Carlos Velasco and Marianna Obrist, is to be published by Oxford University Press. The authors explore concepts we experience every day but don't necessarily understand fully. We are familiar with the five senses (sight, sound, touch, taste, smell). Our brains transduce these physical phenomena into neural pulses that flood along many pathways and interact in many ways. Interestingly, all these senses are translated into essentially similar neural signals but they are processed in a complex and interconnected neural web producing what we call experience. .... "
Recognizing Objects
More advances in fast vision systems.
Optical System Could Lead to Devices That Can Recognize Objects Instantly
UCLA Newsroom
Matthew Chin
March 4, 2020
An optical neural network developed at the University of California, Los Angeles (UCLA) Henry Samueli School of Engineering that concurrently works with multiple wavelengths of light could potentially lead to devices that instantly recognize objects without additional computer processing, with potential applications for robots and autonomous vehicles. The network is a maze with an array of translucent wafers made of different materials like plastic or glass, engineered at a smaller scale than the wavelength of light to split beams into various directions. Said UCLA's Aydogan Ozcan, “There is richer information when you can see colors through different wavelengths of light. Most scenes naturally contain information in vivid color, so the more wavelengths that a network can ‘see,’ the more it increases the amount of information it can process.” .... '
Saturday, March 28, 2020
Emergence of the AI Risk Manager
In our day this was integrated with other kinds of analysis and management. It may well make sense to focus this more generally.
The emergence of the professional AI risk manager
By Kenn So in Venturebeat
When the 1970s and 1980s were colored by banking crises, regulators from around the world banded together to set international standards on how to manage financial risk. Those standards, now known as the Basel standards, define a common framework and taxonomy on how risk should be measured and managed. This led to the rise of professional financial risk managers, which was my first job. The largest professional risk associations, GARP and PRMIA, now have over 250,000 certified members combined, and there are many more professional risk managers out there who haven’t gone through those particular certifications.
We are now beset by data breaches and data privacy scandals, and regulators around the world have responded with data regulations. GDPR is the current role model, but I expect a global group of regulators to expand the rules to cover AI more broadly and set the standard on how to manage it. The UK ICO just released a draft but detailed guide on auditing AI. https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-consultation-on-the-draft-ai-auditing-framework-guidance-for-organisations/ The EU is developing one as well. Interestingly, their approach is very similar to that of the Basel standards: specific AI risks should be explicitly managed. This will lead to the emergence of professional AI risk managers. ... "
Rand on Force Planning
RAND piece on the topic, developed for military scenarios, but could it be used for systems that include coopetition as well? Costs, resources, strategies.
Force Planning in the New Era of Strategic Competition
The RAND Blog by Jacob L. Heim
March 28, 2020
The U.S. Department of Defense announced (PDF) in 2018 that it was elevating the priority it placed on developing the capabilities necessary to deter Chinese and Russian aggression. That means that the Department needs new analytical frameworks to reassess what force development looks like during an era of peacetime military competition. In particular, analysts need techniques for estimating how much it costs each side to maintain a fixed military balance over time. ... "
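As a toy illustration of the cost-over-time framing (entirely invented numbers, not RAND's analysis), one can sketch what holding a fixed balance costs when one side builds up each year and the other must offset at some cost ratio:

```python
# Toy cost-exchange sketch: if the competitor's capability grows yearly
# and each increment must be offset at some cost ratio to hold a fixed
# balance, what does each side spend over a planning horizon? All
# numbers are invented for illustration.
def balance_costs(years, competitor_growth=0.05, offset_cost_ratio=1.3,
                  base_cost=10.0):
    competitor_spend, own_spend = [], []
    competitor = base_cost
    for _ in range(years):
        competitor *= 1 + competitor_growth        # their yearly buildup
        offset = competitor * offset_cost_ratio    # our cost to hold balance
        competitor_spend.append(competitor)
        own_spend.append(offset)
    return sum(competitor_spend), sum(own_spend)

theirs, ours = balance_costs(10)
print(f"10-year spend: competitor={theirs:.1f}, own={ours:.1f} "
      f"(ratio {ours / theirs:.2f})")
```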
Low-Code Taking Over
Ultimately this will take over for most coding.
Low Code And No Code: A Looming Trend In Mobile App Development, by Nitin Nimbalkar
Today’s businesses are enriching their operations with new capabilities little by little, across a variety of different products. But the trend is clear: before you know it, the distinction will blur between tools powerful enough for professional development teams and tools simple enough for citizen developers.
At that point, low code and no code will merge into a single market segment serving enterprise-class and citizen developers at the same time.
Before going further, let's identify the distinct functions of low code and no code in app development by comparing them.
Difference between low code and no code development
• Low Code: Low code is a development approach that automates time-consuming manual processes with little hand coding, using a visual IDE, automated connections to backends, and an application management system.
• No code: Much like low-code platforms, no-code platforms use a visual application builder that allows users to create applications without coding, usually including drag-and-drop functions. An example is Salesforce CRM, which lets people with coding skills write code, while those without them can build simple apps without writing any.
Further, as the need for low code and no code surges, trends depict how the picture of coding might change. ... "
The COVID-19 Virus and What the Recommendations Mean
Very nicely done: an eight-minute-plus animated video about the topic, for the whole family to understand. By the German Kurzgesagt animation studio.
Kurzgesagt Animation Studio (with other language translations)
And via the Centers for Disease Control: https://www.cdc.gov/coronavirus/2019-ncov/index.html
Friday, March 27, 2020
Watermarking Control Data for Safety from Hackers
Thoughtful and apparently simple idea.
Approach Could Protect Control Systems From Hackers
IEEE Spectrum
Michelle Hampson
March 26, 2020
Researchers at Siemens and Croatia’s University of Zagreb have developed a technique to more easily identify attacks against industrial control systems (ICS), like those used in the electric power grid, or to control traffic. The researchers applied the concept of "watermarking" data during transmission to ICS, in a manner that is broadly applicable without requiring details about the specific ICS. In such a scenario, when data is transmitted in real time over an unencrypted channel, it is accompanied by a specialized algorithm in the form of a recursive watermark (RWM) signal; any disruption to the RWM signal indicates an attack is underway. Said Siemens' Zhen Song, “If attackers change or delay the real-time channel signal a little bit, the algorithm can detect the suspicious event and raise alarms immediately."
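The paper's RWM algorithm is Siemens' own; the sketch below is just the general shape of the idea: sender and receiver share a seed, the sender adds a pseudo-random watermark, and the receiver alarms when the residual after removing its own copy of the watermark grows, as it does under a replay attack:

```python
# Sketch of the watermarking idea (not Siemens' actual RWM algorithm).
import random

def watermarked_stream(values, seed=42, amplitude=0.5):
    """Sender side: add a seeded pseudo-random watermark to each sample."""
    rng = random.Random(seed)
    return [v + amplitude * (2 * rng.random() - 1) for v in values]

def detect(received, setpoint, seed=42, amplitude=0.5, tol=0.05):
    """Receiver side: remove our copy of the watermark; alarm on residual."""
    rng = random.Random(seed)
    alarms = []
    for t, r in enumerate(received):
        watermark = amplitude * (2 * rng.random() - 1)
        if abs(r - watermark - setpoint) > tol:
            alarms.append(t)
    return alarms

clean = watermarked_stream([1.0] * 50)   # plant holds setpoint 1.0
attacked = clean[:25] + clean[:25]       # replay attack on second half
print(detect(clean, 1.0))                # [] -> no alarms
print(detect(attacked, 1.0))             # alarms at most of t=25..49
```

Because the replayed samples carry old watermark values, they no longer match the receiver's rolling copy, and the residual spikes.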
Linden Lab Gives Up on VR Spin-off
Note our examination of virtual world retail, mentioned here earlier. This piece gives a look at the current status of virtual world tech and supporting tech like VR. Last I looked, SL was not heavily used.
Why 'Second Life' developer Linden Lab gave up on its VR spin-off
'We decided that Linden Lab wanted to become cash-positive.’
Nick Summers, @nisummers in Engadget
Second Life developer Linden Lab has sold Sansar, a platform for virtual 'scenes' that could be explored with a VR headset or traditional PC setup.
Back in 2016, I described the service as "WordPress for social VR." A foundation that allowed creators to import custom assets and quickly build their own shareable world. The company hoped that this mix would attract commercial clients — think museums, car manufacturers and record labels — that want their own VR experience but don't have the technical expertise to deal with game engines and digital distribution.
Similarly, Linden Lab hoped Sansar would attract users who crave diverse worlds — like those promised in movies such as Ready Player One — and, if they have a creative spark, possibly make their own assets that can be shared and sold to the rest of the community.
Sansar's VR compatibility was a big draw. At the time, there were many 3D chat room experiences — including Second Life — but few that allowed large groups to strap on a headset and freely converse. Linden Lab knew that the number of people with high-end VR headsets was small, though. And the team didn't want to dilute the experience so it could run on mobile-powered hardware like Google Cardboard and Samsung's Gear VR. ... "
Tesla Autopilot Detecting Traffic Lights
An example of systems including more environmental context for making decisions, ultimately essential.
A video shows a Tesla stopping autonomously at a red light. ....
By Christine Fisher, @cfisherwrites in Engadget
On Novel Risks in the Enterprise
Something we studied in some detail; the solution was to have sufficient knowledge and resources, internal plus access to external, to address the context of such problems, making them less 'novel'. Not sure how well that works in the current situation.
Novel Risks by Robert S. Kaplan, Dutch Leonard, and Anette Mikes in HBSWK
Companies can manage known risks by reducing their likelihood and impact. But such routine risk management often prevents them from recognizing and responding rapidly to novel risks, those not envisioned or seen before. Setting up teams, processes, and capabilities in advance for dealing with unexpected circumstances can protect against their severe consequences.
Author Abstract
All organizations now practice some form of risk management to identify and assess routine risks for compliance—in their operations, supply chains, and strategy, as well as from envisioned external events. These risk management policies, however, fail when employees do not recognize the potential for novel risks to occur during apparently routine operations. Novel risks—arising from circumstances that haven’t been thought of or seen before—make routine risk management ineffective, and, more seriously, delude management into thinking that risks have been mitigated when, in fact, novel risks can escalate to serious if not fatal consequences. The paper discusses why well-known behavioral and organizational biases cause novel risks to go unrecognized and unmitigated. Based on best practices in several organizations, the paper describes the processes that private and public entities can institute to identify and manage novel risks. These risks require organizations to launch adaptive and nimble responses to avoid being trapped in routines that are inadequate or even counterproductive when novel circumstances arise. ....
Paper: http://www.hbs.edu/faculty/pages/download.aspx?name=20-094.pdf
Attacks on Deep Reinforcement Learning
On the safety of Reinforcement Learning. Considerable, largely technical piece.
Physically Realistic Attacks on Deep Reinforcement Learning, BAIR (Berkeley AI Research), by Adam Gleave
Deep reinforcement learning (RL) has achieved superhuman performance in problems ranging from data center cooling to video games. RL policies may soon be widely deployed, with research underway in autonomous driving, negotiation and automated trading. Many potential applications are safety-critical: automated trading failures caused Knight Capital to lose USD 460M, while faulty autonomous vehicles have resulted in loss of life.
Consequently, it is critical that RL policies are robust: both to naturally occurring distribution shift, and to malicious attacks by adversaries. Unfortunately, we find that RL policies which perform at a high-level in normal situations can harbor serious vulnerabilities which can be exploited by an adversary.... "
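Gleave et al. attack via an adversarial opponent policy acting in the environment, which needs a full RL training loop; as a much smaller illustration of the same fragility, the classic FGSM observation perturbation against a stand-in (untrained) policy network looks like this:

```python
# FGSM observation attack against a stand-in policy network. Note this
# is a simpler, better-known attack than the adversarial *policies* the
# paper studies; it only illustrates how brittle learned policies can be.
import torch
import torch.nn as nn

torch.manual_seed(0)
policy = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 4))

obs = torch.randn(1, 8, requires_grad=True)   # stand-in observation
logits = policy(obs)
action = logits.argmax(dim=1)                 # policy's chosen action

# Ascend the loss of the chosen action w.r.t. the observation (FGSM).
loss = nn.functional.cross_entropy(logits, action)
loss.backward()
adv_obs = obs + 0.5 * obs.grad.sign()

print("original action:", action.item())
print("action under perturbed obs:", policy(adv_obs).argmax(dim=1).item())
# With a trained policy and a small epsilon this step often flips the
# action; here the network is untrained, so results vary by seed.
```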
Thursday, March 26, 2020
Wal-Mart Joins Hyperledger
Wal-Mart's early looks at this seem to be identification- and tracking-related supply chain applications. Is this an indication of further dives into the space?
Walmart Joins Hyperledger Alongside 7 Other Companies By Samuel Haig
Walmart has become the latest major conglomerate to join open-source blockchain consortium Hyperledger. Walmart is among eight new members to join the platform. The new members were announced on March 3 at the Hyperledger Global Forum 2020 in Phoenix, Arizona.
Sanjay Radhakrishnan, the vice president of Walmart Global Tech, expressed excitement in joining the platform, stating:
“We've seen strong results through our various deployments of blockchain, and believe staying involved in open source communities will further transform the future of our business." ... '
Positive Tools for Challenging Times
I see that long-time correspondent Sunnie Southern of ViableSynergy has a newsletter about 'Positive Tools for Challenging Times'. Check it out:
Positive Tools for Challenging Times
This is the second in a series of emails with resources our team has personally compiled to make life a little easier during this difficult time. We have added a couple of new categories based on your feedback and suggestions from last week's message. Please take our short survey and check-out new innovations. .....
If you have a resource or an inspiring story that you would like to share, please email us at Hello@ViableSynergy.com and we'll share in a future message. ....
It has been a while since connecting with many of you. We'd love to reconnect and exchange updates. Send us a message at Hello@ViableSynergy.com or via your favorite social media channel (click below) and let's get something on the calendar. ... '
IBM Tracks Virus 'Weather'
A nicely done Covid-19 tracking feature is part of IBM's Weather Channel app. It shows status and trends for your location, continually updated, reachable from a red button at the bottom of the app, along with other virus news and video. I see the IBM CEO talks about it below. I am following it on my smartphone.
What are the best ways to make this influence behavior? Some sort of simple behavior-effect prediction?
Later I noted that the warning included: (Some locations do not currently provide all data). So we have the classic problem of incomplete and even faulty data.
IBM CEO: Covid-19 tracking app can help modify behavior
Ginni Rometty, CEO of IBM, explains what the company is doing to help during the coronavirus crisis. It launched a tool on its Weather Channel app that tracks the outbreak. ....
Read in CNN Business: https://apple.news/A_YGufs01RRiCOpb8GNl2eg
Neural Networks Search for New Materials
Mentioned previously here. Novel use of 'creativity' to search among possible solutions.
Neural networks facilitate optimization in the search for new materials
by David L. Chandler, Massachusetts Institute of Technology
An iterative, multi-step process for training a neural network, as depicted at top left, leads to an assessment of the tradeoffs between two competing qualities, as depicted in graph at center. The blue line represents a so-called Pareto front, defining the cases beyond which the materials selection cannot be further improved. This makes it possible to identify specific categories of promising new materials, such as the one depicted by the molecular diagram at right.
When searching through theoretical lists of possible new materials for particular applications, such as batteries or other energy-related devices, there are often millions of potential materials that could be considered, and multiple criteria that need to be met and optimized at once. Now, researchers at MIT have found a way to dramatically streamline the discovery process, using a machine learning system.
As a demonstration, the team arrived at a set of the eight most promising materials, out of nearly 3 million candidates, for an energy storage system called a flow battery. This culling process would have taken 50 years by conventional analytical methods, they say, but they accomplished it in five weeks.
The findings are reported in the journal ACS Central Science, in a paper by MIT professor of chemical engineering Heather Kulik, Jon Paul Janet Ph.D. '19, Sahasrajit Ramesh, and graduate student Chenru Duan. ... "
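The screening step at the heart of this, keeping only candidates that are not dominated on competing objectives, is easy to sketch. Below, random scores stand in for the neural-network surrogate's predictions; the objectives and scale are assumptions for illustration, not the paper's.

```python
# Illustrative Pareto screening over candidate materials. Two competing
# objectives per candidate (higher is better); random numbers stand in
# for the trained surrogate's predicted scores.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random((10000, 2))

def pareto_front(points):
    # Keep a candidate unless some other candidate dominates it
    # (>= on every objective and > on at least one).
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        if keep[i]:
            dominated = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
            keep &= ~dominated
    return points[keep]

front = pareto_front(scores)
print(f"{len(front)} non-dominated candidates out of {len(scores)}")
```

At the scale in the article, the same screen runs over millions of surrogate-scored candidates, which is where the trained network earns its keep.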
Augmented Analytics
For now, not replacing people but augmenting them. Unlike the article, I would say this has been around for a long time.
Augmented Analytics Drives Next Wave of AI, Machine Learning, BI
Business intelligence will move beyond dashboards, and AI and machine learning will become easier for less skilled workers as augmented analytics are embedded into platforms.
Enterprises struggling to get their data management and machine learning practices up to speed in an era of more and more data may be in for a nice surprise. After years of bending under the weight of more data, more need for insights, and a shortage of data science talent, augmented analytics is coming to the rescue. What's more, it could also help with putting machine learning into production, something that has been an issue for many enterprises.
Identified as a major trend by Gartner at its Symposium event last year, augmented analytics has been around for several years already, according to Rita Sallam, distinguished research VP and Gartner fellow. But in recent years the concept has expanded to encompass automation of many of the processes that are required by the entire data pipeline. That includes tasks such as profiling, cataloging, storage, data management, generating insights, assisting with data science and machine learning models, and operationalization, according to Sallam, who was set to present a session about augmented analytics at the now postponed Gartner Data and Analytics Summit that has been rescheduled for September ... "
Wednesday, March 25, 2020
Timing is the Thing for Modeling the Risk
Forecasting once again is essential for knowing how to react.
Supply chain outlook: The timing of the slowdown
MIT Professor David Simchi-Levi forecast the mid-March manufacturing pause. Now he looks ahead.
Peter Dizikes | MIT News Office
March 25, 2020
With the Covid-19 virus disrupting the global economy, what is the effect on the international supply chain? In a pair of articles, MIT News examines core supply-chain issues, in terms of affected industries and the timing of unfolding interruptions.
The rapid spread of the Covid-19 virus is already having a huge impact on the global economy, which is rippling around the world via the long supply chains of major industries.
MIT supply chain expert David Simchi-Levi has been watching those ripples closely in 2020, as they have moved from China outward to the U.S. and Europe. His tracking of supply chain problems provides insight into what is happening in the global economy — and what could happen in a variety of scenarios.
“This is a significant challenge,” says Simchi-Levi, who is a professor of engineering systems in the School of Engineering and in the Institute for Data, Systems, and Society within the MIT Stephen A. Schwarzman College of Computing. The global public health crisis, he adds, “is not only affecting the supply chain. There is a significant impact on demand, and as a result, a significant impact on the financial performance on all these businesses.” ... "
Crowdsource the Problem, Distribute Solutions.
Lots of possibilities here; we need to find ways to surface them.
Folding@Home Network More Powerful Than World's Top 7 Supercomputers Combined
Tom's Hardware
by Paul Alcorn
The Folding@Home distributed computing network is currently churning out 470 petaflops of raw computing power, which is more powerful than the collective computing muscle of world's top seven supercomputers, in a push to defeat the coronavirus pandemic. That compares to the 149 petaflops of sustained output generated by the world's fastest supercomputer, the Oak Ridge National Laboratory (ORNL)'s Summit system. ORNL announced two weeks ago that Summit had been enlisted in the fight against COVID-19. Folding@Home said the number of contributors in its fight against the pandemic has risen 1,200%. ... "
Generating Videos
Video synthesis to supplement real-world data.
IBM’s AI generates new footage from video stills
Kyle Wiggers @KYLE_L_WIGGERS in VentureBeat
A paper coauthored by researchers at IBM describes an AI system — Navsynth — that generates videos seen during training as well as unseen videos. While this in and of itself isn’t novel — it’s an acute area of interest for Alphabet’s DeepMind and others — the researchers say the approach produces superior quality videos compared with existing methods. If the claim holds water, their system could be used to synthesize videos on which other AI systems train, supplementing real-world data sets that are incomplete or marred by corrupted samples.
As the researchers explain, the bulk of work in the video synthesis domain leverages GANs, or two-part neural networks consisting of generators that produce samples and discriminators that attempt to distinguish between the generated samples and real-world samples. They’re highly capable but suffer from a phenomenon called mode collapse, where the generator generates a limited diversity of samples (or even the same sample) regardless of the input. ... "
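For readers unfamiliar with the GAN framing being contrasted here, a toy generator/discriminator pair on one-dimensional data makes it concrete. This is illustrative PyTorch, not Navsynth, which notably avoids the adversarial setup.

```python
# Toy GAN on 1-D data to make the generator/discriminator framing concrete.
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # "real" data: N(3, 0.5)
    fake = G(torch.randn(64, 8))
    # Discriminator learns to label real samples 1 and generated samples 0.
    loss_d = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator learns to fool the discriminator into labeling fakes as 1.
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(f"generated mean {samples.mean():.2f}, std {samples.std():.2f}")
```

Mode collapse, mentioned above, is when G converges on a narrow sliver of the target distribution; even this toy can exhibit it with unlucky seeds.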
Personality Key for Open Source Contribution?
Depends on whether we can really detect useful personality measures of this type, and whether they will be stable under differing contexts and goals.
Personality Key in Whether Developers Can Contribute to Open Source Projects
Waterloo News
The results of a study by researchers at the University of Waterloo in Canada suggest a software developer's personality could affect their ability to contribute to open source projects. Although social factors are the primary determinant of acceptance or rejection of online contributors' work, Waterloo's Meiyappan Nagappan said personality also is important to consider because it governs how contributors' behaviors manifest in their interactions with others. The researchers assessed data from the GitHub open source platform to analyze the personality traits of 16,935 active developers from 1,860 projects, and extracted the five leading developer personalities—openness, conscientiousness, extraversion, agreeableness, and neuroticism—with the IBM Watson Personality Insights service. Waterloo's Alex Yun said the analysis suggested that biases may be involved in the acceptance or rejection of contributions to work on open source platforms. Said Yun, "Managers are more likely to accept a contribution from someone they know, or someone more agreeable than others, even though the technical contribution might be similar."
Defending Retail Against the Coronavirus
Useful approaches outlined.
Defending Retail against the Coronavirus
Companies can brace themselves for lasting changes to the sector even as they grapple with short-term disruption.....
By Marc-André Kamel and Joëlle de Montgolfier in Bain ...
Tires Get Embedded Tech
As the article suggests, a bit unexpected, but it's where the system and its uses meet the real world, and it's useful to know what is being sensed, in real time and over time.
The Humble Tire Gets Kitted Out with Technology
The Wall Street Journal
Sara Castellanos
March 19, 2020
Tire manufacturers are designing intelligent tires to improve the braking of self-driving vehicles. Goodyear Tire & Rubber is developing tires equipped with a sensor and proprietary machine learning algorithms, in the hope they will help autonomous vehicles brake at a shorter distance and communicate with self-driving systems. Goodyear currently sells tires that measure temperature and pressure, but the new intelligent tire incorporates a sensor that monitors wear, inflation, and road surface conditions; data from the sensor is tracked continuously and analyzed in real time with machine learning algorithms. Said Goodyear CEO Rich Kramer, "With the onset of autonomous vehicles, the role of the tire in the performance and safety of the vehicle would increase if we can make that tire intelligent."
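Goodyear's sensing and models are proprietary, but the flavor of the problem, recovering a slow wear trend from a noisy sensor stream and projecting when the tread limit is reached, can be sketched simply. The linear wear model, noise level, and limit below are invented for illustration.

```python
# Illustrative only: estimate a slow tread-wear trend from a noisy depth
# sensor and extrapolate to the legal limit. All values are assumptions.
import numpy as np

rng = np.random.default_rng(1)
km = np.arange(0, 40000, 100)                        # odometer, every 100 km
true_depth = 8.0 - 1.0e-4 * km                       # mm, assumed linear wear
readings = true_depth + rng.normal(0, 0.3, km.size)  # noisy depth sensor

alpha, smooth, smoothed = 0.05, readings[0], []
for r in readings:                                   # exponential moving average
    smooth = alpha * r + (1 - alpha) * smooth
    smoothed.append(smooth)

rate, intercept = np.polyfit(km, smoothed, 1)        # wear rate in mm per km
km_at_limit = (1.6 - intercept) / rate               # extrapolate to 1.6 mm
print(f"wear rate {-rate * 1000:.3f} mm per 1000 km; "
      f"limit reached near {km_at_limit:,.0f} km")
```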
Tuesday, March 24, 2020
China Launches Blockchain Network
Decreasing costs and increasing the ease of blockchain application building. Seems a considerable effort.
China to Launch National Blockchain Network in 100 Cities
IEEE Spectrum
Nick Stockton
An alliance of Chinese government groups, banks, and technology firms plans to launch the Blockchain-based Service Network (BSN), one of the first blockchain networks constructed and maintained by a central government, in April. Advocates say the BSN will slash the costs of blockchain-based business by 80%, with nodes hopefully installed in 100 Chinese cities by launch time. The network will allow programmers to develop blockchain applications more easily, but apps running on the BSN will have closed or "permissioned" membership by default. North Carolina State University's Hong Wan suggests China's government aims to make the BSN the core component of a digital currency and payment system that competes with other services. The BSN Alliance hopes the platform will become the global standard for blockchain operations, but the Chinese government's retention of the BSN's root key means it can monitor all transactions made via the platform. ... "
Running Simulations to Train Analyses
We did versions of the same thing to get data that would create more detailed and thus more useful models of industrial scenarios. Especially useful when it is hard to get enough examples from live, instrumented operation. One value of all simulations is to create training examples that are too difficult or risky to create in the real world.
System Trains Driverless cars in simulation before they hit the road
Using a photorealistic simulation engine, vehicles learn to drive in the real world and recover from near-crash scenarios.
Rob Matheson | MIT News Office
A simulation system invented at MIT to train driverless cars creates a photorealistic world with infinite steering possibilities, helping the cars learn to navigate a host of worst-case scenarios before cruising down real streets.
Control systems, or “controllers,” for autonomous vehicles largely rely on real-world datasets of driving trajectories from human drivers. From these data, they learn how to emulate safe steering controls in a variety of situations. But real-world data from hazardous “edge cases,” such as nearly crashing or being forced off the road or into other lanes, are — fortunately — rare.
Some computer programs, called “simulation engines,” aim to imitate these situations by rendering detailed virtual roads to help train the controllers to recover. But the learned control from simulation has never been shown to transfer to reality on a full-scale vehicle.
The MIT researchers tackle the problem with their photorealistic simulator, called Virtual Image Synthesis and Transformation for Autonomy (VISTA). It uses only a small dataset, captured by humans driving on a road, to synthesize a practically infinite number of new viewpoints from trajectories that the vehicle could take in the real world. The controller is rewarded for the distance it travels without crashing, so it must learn by itself how to reach a destination safely. In doing so, the vehicle learns to safely navigate any situation it encounters, including regaining control after swerving between lanes or recovering from near-crashes.
In tests, a controller trained within the VISTA simulator was able to be safely deployed onto a full-scale driverless car and to navigate through previously unseen streets. In positioning the car at off-road orientations that mimicked various near-crash situations, the controller was also able to successfully recover the car to a safe driving trajectory within a few seconds. A paper describing the system has been published in IEEE Robotics and Automation Letters and will be presented at the upcoming ICRA conference in May. ... "
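The reward structure described, distance traveled until a crash ends the episode, is worth seeing in miniature. The toy lane-keeping loop below is not VISTA, just a sketch of the incentive it gives the controller; the dynamics and noise levels are invented.

```python
# Toy version of the incentive: reward is distance traveled, and the
# episode ends when the car leaves the lane (a "crash"). Not VISTA.
import numpy as np

def episode(steer_gain, rng, lane_half_width=1.0, max_steps=500):
    offset, heading, distance = 0.0, 0.0, 0.0
    for _ in range(max_steps):
        action = -steer_gain * (offset + heading)   # simple steering rule
        heading += action + rng.normal(0, 0.05)     # noisy toy dynamics
        offset += 0.1 * heading
        distance += 1.0                             # reward: surviving distance
        if abs(offset) > lane_half_width:           # crash ends the episode
            break
    return distance

rng = np.random.default_rng(0)
for gain in (0.0, 0.2, 0.8):
    avg = np.mean([episode(gain, rng) for _ in range(50)])
    print(f"steer gain {gain}: average distance {avg:.0f}")
```

A controller that steers back toward the lane center survives longer and so collects more reward, which is the learning signal VISTA exploits at photorealistic scale.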
Scaling AI Training
Scaling up training while scaling down its costs. Technical.
Google open-sources framework that reduces AI training costs by up to 80% By Kyle Wiggers
Google researchers recently published a paper describing a framework — SEED RL — that scales AI model training to thousands of machines. They say that it could facilitate training at millions of frames per second on a machine while reducing costs by up to 80%, potentially leveling the playing field for startups that couldn’t previously compete with large AI labs.
Training sophisticated machine learning models in the cloud remains prohibitively expensive. According to a recent Synced report, the University of Washington’s Grover, which is tailored for both the generation and detection of fake news, cost $25,000 to train over the course of two weeks. OpenAI racked up $256 per hour to train its GPT-2 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks. ... "
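As I understand it, SEED RL's central idea is moving policy inference from the distributed actors onto the learner, so actors ship observations instead of running copies of the model. Below is a conceptual sketch of that split, with Python queues standing in for the gRPC transport; this is not the actual SEED RL API.

```python
# Conceptual sketch of the centralized-inference split: actors send
# observations to one learner, which runs the policy and returns actions.
import queue
import threading
import numpy as np

NUM_ACTORS, STEPS = 4, 100
obs_q = queue.Queue()                                  # actors -> learner
act_qs = [queue.Queue() for _ in range(NUM_ACTORS)]    # learner -> actor i
rngs = [np.random.default_rng(i) for i in range(NUM_ACTORS)]

def actor(i):
    obs = np.zeros(4)
    for _ in range(STEPS):
        obs_q.put((i, obs))           # ship the observation, not the model
        action = act_qs[i].get()      # block until the learner replies
        obs = obs + rngs[i].normal(0, 1, 4) * (action + 1)   # toy env step

def learner():
    w = np.zeros(4)                   # stand-in policy parameters
    for _ in range(NUM_ACTORS * STEPS):
        i, obs = obs_q.get()          # SEED batches these; one at a time here
        act_qs[i].put(int(w @ obs > 0))   # inference happens on the learner

actors = [threading.Thread(target=actor, args=(i,)) for i in range(NUM_ACTORS)]
learn = threading.Thread(target=learner)
for t in actors + [learn]:
    t.start()
for t in actors + [learn]:
    t.join()
print("4 actors stepped 100 times each; all inference on the central learner")
```

Keeping the model in one place means cheap CPU actors and one accelerator-bound learner, which is where the claimed cost reduction comes from.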
Teaching AI to be Better at Second-Guessing
Intent is an element of context that humans frequently use in adapting to conversation.
How humans are teaching AI to become better at second-guessing
by Lachlan Gilbert, University of New South Wales
One of the holy grails in the development of artificial intelligence (AI) is giving machines the ability to predict intent when interacting with humans.
We humans do it all the time and without even being aware of it: we observe, we listen, we use our past experience to reason about what someone is doing, why they are doing it to come up with a prediction about what they will do next.
At the moment, AI may do a plausible job at detecting the intent of another person (in other words, after the fact). Or it may even have a list of predefined, possible responses that a human will respond with in a given situation. But when an AI system or machine only has a few clues or partial observations to go on, its responses can sometimes be a little… robotic. .... "
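A toy example of the simplest version of the problem, guessing intent from a partial utterance, using scikit-learn. The labels and phrases are invented; the UNSW work concerns much richer, observation-based prediction than this bag-of-words toy.

```python
# Toy intent classifier: guess what the user wants from a partial utterance.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

examples = [
    ("turn on the kitchen lights", "device_control"),
    ("switch off the lights", "device_control"),
    ("what is the weather today", "information"),
    ("how hot is it outside", "information"),
    ("remind me to call mom", "reminder"),
    ("set a reminder for noon", "reminder"),
]
texts, labels = zip(*examples)

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

for partial in ("turn on the", "what is", "remind me"):
    print(f"{partial!r} -> {clf.predict([partial])[0]}")
```

The interesting research question above is exactly what this toy dodges: predicting intent from a few partial, non-verbal observations rather than from convenient text.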
Monday, March 23, 2020
Crowdsourcing Creativity
Spent much time on the broad idea; I like the direction of the process.
Crowdsourcing Plot Lines to Help the Creative Process
Penn State News
Jessica Hallman
March 13, 2020
Researchers at the Pennsylvania State University (Penn State) College of Information Sciences and Technology have launched a crowdsourced system that provides writers with story ideas from the online crowd to facilitate the creative process. The Heteroglossia system lets authors share sections of their story drafts using a text editor, and online workers are tasked with brainstorming plot ideas from the perspective of fictional characters they are assigned. Penn State's Ting-Hao (Kenneth) Huang said human workers currently power the system, but artificial intelligence could be incorporated into the platform in the future. "I believe if we learn how to help creative writing or creative processes in general, we can learn more about how to build systems that can be creative."
IBM Partners for Coronavirus Research
Interesting directions for collaborating on technology.
IBM Partners with White House to Direct Supercomputing Power for Coronavirus Research
CNN
Clare Duffy
March 22, 2020
IBM will help coordinate an initiative to supply more than 330 petaflops of computing power to scientists researching COVID-19, as part of the COVID-19 High-Performance Computing Consortium, in partnership with the White House Office of Science and Technology Policy and the U.S. Department of Energy. The initiative will harness 16 supercomputing systems from IBM, national laboratories, several universities, Amazon, Google, Microsoft, and others. Computing power will be provided via remote access to researchers whose projects are approved by the consortium's leadership board. The consortium also will connect researchers with top computational scientists. Said IBM Research director Dario Gil, "We're bringing together expertise ... even across competitors, to work on this. We think it's important to bring a sense of community and to bring science and capability against this goal."
AI and Big Data
A frequent question I have received is: what is the difference between Big Data and AI? My answer: Big Data is a means of applying analytical methods to much more of the available data, while AI uses a particular set of machine learning methods to find complex patterns in that data. In general, AI methods are still less transparent, but more powerful in some domains. The two can be used in conjunction, and we did. Sometimes there is little difference; both depend on large, sometimes very complex, data.
Evolving Relationship Between Artificial Intelligence and Big Data, in ReadWrite, By Nitin Garg, 11 Jan 2020
Find the evolving relationship between big data and artificial intelligence. The growing popularity of these technologies offers engaging audience experience. It encourages newcomers to come up with an outstanding plan.
AI and Big Data help you transform your idea into substance. They help you make full use of visuals, graphs, and multimedia to give your targeted audience a great experience. According to Markets And Markets, the worldwide market for AI in accounting is expected to grow from $666 million in 2019 to $4,791 million by 2024.
The critical component of delivering an outstanding pitch is taking a step further with an incredible plan of assuring success. Big data and Artificial intelligence help you contribute to multiple industries bringing an effective plan. It can directly speak to investors and your targeted audience, covering essential aspects and representing your idea in a nutshell.
According to Techjury, The big data analytics market is set to reach $103 billion by 2023, and in 2019, the big data market is expected to grow by 20%. .... "
Linking Gamification and AI
Steve Omohundro talks about some favorite topics of mine in a recent presentation; slides at the link. Integration of gamification is a favorite. We used gamification as a means of alternative 'expert crowdsourcing' to explore alternative solutions to wicked problems in the supply chain space. Makes mention of Bytedance, which I had not heard of; will take a look.
Talk: The AI Platform Business Revolution, Matchmaking, Empathetic Technology, and AI Gamification
On October 15, Steve Omohundro spoke at FXPAL (FX Palo Alto Laboratory) about “The AI Platform Business Revolution, Matchmaking, Empathetic Technology, and AI Gamification”:
Abstract
Popular media is full of stories about self-driving cars, video deepfakes, and robot citizens. But this kind of popular artificial intelligence is having very little business impact. The actual impact of AI on business is in automating business processes and in creating the “AI Platform Business Revolution”. Platform companies create value by facilitating exchanges between two or more groups. AI is central to these businesses for matchmaking between producers and consumers, organizing massive data flows, eliminating malicious content, providing empathetic personalization, and generating engagement through gamification. The platform structure creates moats which generate outsized sustainable profits. This is why platform businesses are now dominating the world economy. The top five companies by market cap, half of the unicorn startups, and most of the biggest IPOs and acquisitions are platforms. For example, the platform startup Bytedance is now worth $75 billion based on three simple AI technologies.
In this talk we survey the current state of AI and show how it will generate massive business value in coming years. A recent McKinsey study estimates that AI will likely create over 70 trillion dollars of value by 2030. Every business must carefully choose its AI strategy now in order to thrive over coming decades. We discuss the limitations of today’s deep learning based systems and the “Software 2.0” infrastructure which has arisen to support it. We discuss the likely next steps in natural language, machine vision, machine learning, and robotic systems. We argue that the biggest impact will be created by systems which serve to engage, connect, and help individuals. There is an enormous opportunity to use this technology to create both social and business value. .... '
Sunday, March 22, 2020
Technology and the Future of Marketing
An interesting Podcast re the future of marketing and its implications.
Why Omni-channel Personalization Is the Future of Marketing
Podcast:
Netcore's Rajesh Jain talks about how technology is shaping and transforming the future of marketing.
All customers want a unique, personalized experience, irrespective of how they interact with a brand – be it in-store, on an app, via a website, or wherever. With the prevalence of mobile and connected devices which give marketers access to vast customer data, and technologies such as analytics and machine learning, it is increasingly possible for companies to offer omni-channel personalization. But marketers also need to focus on identifying their “best customers,” instead of spreading their resources thin, says Rajesh Jain, founder and managing director of Netcore, a global marketing technology firm.
Jain defines “best customers” as those who “spend more, stay longer with you and spread your message more.” These customers, says Jain, have the greatest lifetime value for a company. In a conversation with Knowledge@Wharton, Jain talks about how technology is shaping and transforming the future of marketing.
Below is an edited version of the interview. .... "
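Jain's three axes, spend more, stay longer, spread more, map naturally onto a simple scoring pass over transaction history. A sketch with invented fields, data, and weights:

```python
# Sketch: rank "best customers" by spend (monetary), stay (tenure),
# and spread (referrals). Everything here is invented for illustration.
from collections import defaultdict

transactions = [                       # (customer, day, amount)
    ("ana", 10, 120.0), ("ana", 300, 80.0),
    ("ben", 5, 40.0), ("ben", 20, 35.0),
    ("cam", 350, 500.0),
]
referrals = {"ana": 3, "ben": 0, "cam": 1}

spend, first, last = defaultdict(float), {}, {}
for cust, day, amount in transactions:
    spend[cust] += amount
    first[cust] = min(first.get(cust, day), day)
    last[cust] = max(last.get(cust, day), day)

def score(cust):
    tenure = last[cust] - first[cust]  # days between first and last purchase
    return 0.5 * spend[cust] + 0.3 * tenure + 20.0 * referrals[cust]

for cust in sorted(spend, key=score, reverse=True):
    print(cust, round(score(cust), 1))
```

Real lifetime-value models are far richer, but the point stands: focus resources on the customers this kind of ranking surfaces, rather than spreading them thin.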
AI at the Edge
Some useful thoughts about the use of AI in devices, enabled by faster ubiquitous connections like 5G.
AI at the Edge Enabling a New Generation of Apps, Smart Devices By AI Trends Staff
Enabling an edge-computing architecture with AI is seen as a way forward for advances in strategic applications. And at the advent of 5G network speeds, AI is seen as essential to the endpoints.
A new network paradigm based on virtualization enabled by Software Defined Networking (SDN) and Network Function Virtualization (NFV), presents an opportunity to push AI processing out to the edge in a distributed architecture, suggests a recent report from Strategy Analytics.
Three types of edge computing are foreseen: device as the edge, in which an IoT device generates and consumes data and has embedded AI that can send and receive data to and from additional AI systems; enterprise premise network edge, that can support AI processing on a piece of hardware in a vehicle, drone or machinery, and can collect and process data from smart devices; and operator network edge, with an AI stack/platform to host applications and services, which may be located at a micro data center in a radio tower, edge router, base station or internet gateway. ... "
Fractal Uncertainty and Quantum
Intriguing but technical view:
Finding solutions amidst fractal uncertainty and quantum chaos
Math professor Semyon Dyatlov explores the relationship between classical and quantum physics.
Jonathan Mingle | MIT News correspondent
Semyon Dyatlov calls himself a “mathematical physicist.”
He’s an associate editor of the journal Probability and Mathematical Physics. His PhD dissertation advanced understanding of wave decay in black hole spacetimes. And much of his research focuses on developing new ways to understand the correspondence between classical physics (which describes light as rays that travel in straight lines and bounce off surfaces) and quantum systems (wherein light has wave-particle duality).
So it may come as a surprise that, as a student growing up in Siberia, he didn’t study physics in depth.
“Much of my work is deeply related to physics, even though I didn’t receive that much physics education as a student,” he says. “It took when I started working as a mathematician to slowly start understanding things like general relativity and modern particle physics.”... '
Saturday, March 21, 2020
Multi-Agent VR for Task Application?
Might this be useful for simulating complex, interactive multi-agent tasks in VR? We encountered such problems when assigning multiple people to a job, where each needed to be informed of the location, status, actions, and so on, of others on the team; this inhibited the use of VR solutions. I also like that this could be done with simple devices.
Novel System Allows Untethered Multi-Player VR
Purdue University News By Chris Adam
Purdue University researchers have created a virtual reality (VR) system that allows untethered multi-player gameplay on smartphones. The Coterie system manages the rendering of high-resolution virtual scenes to fulfill the quality of experience of VR, facilitating 4K resolutions on commodity smartphones and accommodating up to 10 players to engage in the same VR application at once. Purdue's Y. Charlie Hu said Coterie "opens the door for enterprise applications such as employee training, collaboration and operations, healthcare applications such as surgical training, as well as education and military applications."
Drone Detection and Usage Database
This would also seem to create a useful database for detailed matching to contexts and task usage.
Research Improves Drone Detection
Aalto University
March 18, 2020
Researchers at Aalto University in Finland, Universite Catholique de Louvain in Belgium, and New York University have compiled radar measurement data from different types of aerial drones, to enhance the detection and identification of unmanned aerial vehicles. The researchers measured the Radar Cross Section of commercially available and custom-built drones, which indicates how each reflects radio signals, as a way of identifying their size, shape, and structural materials. Aalto's Vasilii Semkin said the publicly available results are intended to form the basis of a uniform drone database. Said Semkin, “There is an urgent need to find better ways to monitor drone use. We aim to continue this work and extend the measurement campaign to other frequency bands, as well as for a larger variety of drones and different real-life environments.” .... '
Google Virus Information Site
I see that Google has put out a generalized site on the coronavirus pandemic, containing lots of links to other resources. See: https://www.google.com/covid19/ Will be following, especially as it relates to new technology capabilities. Would like to see more coverage of what Google, as a major emerging technology resource, is doing in this space, and how we might help.
Edison Research: Infinite Dial on Podcasts, Voice Use
A short excerpt about podcast use; it has been some time since I followed this:
Infinite Dial from Edison Research:
" .... Podcasting awareness and consumption in the U.S. continue to rise, according to the most recent information from the Infinite Dial 2020® from Edison Research and Triton Digital. Seventy-five percent of Americans age 12+ (approximately 212 million people) are now familiar with podcasting, up from 70% in 2019, and 37% (104 million) listen monthly, up from 32% in 2019. This continues the growth trend that The Infinite Dial® has measured since 2009.
“Podcasts now reach over 100 million Americans every month,” said Tom Webster, SVP of Edison Research, “and are attracting an increasingly diverse audience. Also, with 62% of Americans now saying they have used some kind of voice assistance technology, audio is becoming a bigger part of our everyday lives.”
In addition, the Infinite Dial® also found that 62% of those in the U.S. age 12+ use voice-operated assistants, and 45% of those in the U.S. age 12+ have listened to audio in a car through a cell phone. This year’s study also continues the legacy of measuring developing technologies, with the finding that 18% of Americans age 18+ own a car with an in-dash information and entertainment system. ... "
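The quoted percentages and headcounts hang together; a quick sanity check (the implied U.S. 12+ population is my inference, not a figure reported by the study):

```python
# Sanity check: both quoted pairs imply roughly the same U.S. 12+ population.
aware_pct, aware_n = 0.75, 212e6       # familiar with podcasting
monthly_pct, monthly_n = 0.37, 104e6   # listen monthly

print(f"implied 12+ population (awareness): {aware_n / aware_pct / 1e6:.0f}M")
print(f"implied 12+ population (listening): {monthly_n / monthly_pct / 1e6:.0f}M")
```

Both work out to roughly 281-283 million, so the study's numbers are internally consistent.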
Thinking about the Laws of Voice
Note that this article points to an interesting book on the topic. Have not read, but plan to.
Voice Assistants in Pre and Post-Operative Care and the Duty to Warn Patients of Remote Risks – A Legal Discussion By Eric Hal Schwartz
This guest post is an edited extract from Voice Technology in Healthcare, Chapter 14, The Laws of Voice, by Bianca Phillips, an officer of the Supreme Court of Victoria, Australia and Heather B. Deixler, a senior associate at Latham & Watkins LLP.
Voice assistants will increasingly provide patients with pre- and post-surgery information. The information may be of a general nature, or the skill could be more personalized. If a potential risk of surgery is so remote that it is rarely seen in practice, and only appears in archaic medical literature, does the voice assistant need to advise of that risk?
THE SCENARIO
Imagine the following hypothetical scenario. A patient, Stacey, is about to undergo cataract surgery. In the pre-op appointment, Stacey’s surgeon highlights the risks of surgery and, in her concluding remarks, advises Stacey that “you may also consult your voice assistant to learn more about your surgery, and ask questions about the risks and post-op process.”
That evening Stacey goes home and says to her voice assistant (VA) “[VA name], what are the risks of cataract surgery?” The VA responds “the risks of cataract surgery include: Posterior capsule opacity (PCO), Intraocular lens dislocation, Eye inflammation, Light sensitivity, Photopsia (perceived flashes of light), Macular edema (swelling of the central retina), Ptosis (droopy eyelid) and Ocular hypertension (elevated eye pressure).” Stacey continues to ask a range of questions about post-operative care and the recovery process.
Surgery seems to have gone well. However, two months after the surgery Stacey advises her doctor of considerably decreased vision in her left eye, which the surgeon determines is sympathetic ophthalmia. The estimated post-operative occurrence is between 0.01%–0.05%. Attempts to treat the sympathetic ophthalmia fail and the patient sustains a permanent loss of vision in her left eye. The surgeon failed to inform Stacey of the risk of sympathetic ophthalmia during the pre-op consultation, and Stacey’s VA also did not inform her of this risk. Stacey wants to know whether the surgeon and/or the VA had a duty to inform her of the remote risk. .... "
Nokia Introduces Beacon for Wi-Fi 6 Mesh
Good to see more coming out of Wi-Fi for enhanced communications. Nokia puts out a useful overview of the capabilities.
Nokia adds the Beacon 6 to its whole-home WiFi portfolio, which already includes the Beacon 1, the Beacon 3 and a family of mesh fiber gateways
Beacon 6 is the first Nokia Wi-Fi device to support Wi-Fi 6 and Wi-Fi Certified EasyMesh™ along with the Nokia WiFi Cloud Controller to manage mesh WiFi remotely.
Nokia first to deliver a seamless transition for mobile devices moving between 5G and Wi-Fi 6 to maintain throughput and low latency for video streaming and cloud gaming applications
Nokia also pioneering support for low-latency technology innovations for Wi-Fi networks that revolutionize the way users experience the internet and gaming applications
Nokia Bell Labs researchers have significantly contributed to development of Wi-Fi 6
19 March 2020
Espoo, Finland – Nokia today announced it is adding a new Wi-Fi 6 Beacon to its whole-home WiFi portfolio, helping operators to deliver a powerful user experience. Providing a high-capacity, high-performance in-home solution, the new Beacon 6 uses Wi-Fi 6 to deliver 40 percent faster speeds than previous Wi-Fi generations.
To further enhance the in-home experience, Nokia is also adding low-latency technology built on Nokia Bell Labs innovations to its Wi-Fi portfolio. Drastically improving residential Wi-Fi networks, the Nokia Beacon 6 provides operators with an easy to install solution that can support low-latency applications such as gaming and gigabit speeds essential for creating a seamless end-to-end 5G experience.
Ben Wood, chief of research at CCS Insight said: “The benefits of 5G are going to change user experiences and customers’ expectations. The blend of the latest Wi-Fi 6 technology, low latency performance and in-home Wi-Fi mesh solutions linked to 5G will allow operators to deliver a seamless communications platform for next generation applications and solutions.”
Currently more than 25 percent of homes globally are connected to a Wi-Fi network and there are about 5 billion Wi-Fi-connected devices in the home including home computing, smart TVs and smart home devices. To cope with this growth, operators and end-users are investing in devices that support Wi-Fi 6, a new Wi-Fi standard that improves speed by at least four times in dense areas and reduces latency by 75 percent.
The Beacon 6 is the first Nokia WiFi device to showcase several new technologies working seamlessly together. This includes: ... "
Microsoft Healthcare Bot for COVID-19
How quickly we can create and deliver these kinds of context-specific tools for epidemics will be important. Here the Microsoft example looks very interesting. Can it learn and adapt to new contexts as they emerge? Make predictions of new goal states? Evaluate risks based on human reactions and predict results over time? See below:
Delivering information and eliminating bottlenecks with CDC’s COVID-19 assessment bot Mar 20, 2020 | Hadas Bitran, Group Manager, Microsoft Healthcare Israel, and Jean Gabarra, General Manager, Health AI
In a crisis like the COVID-19 pandemic, it’s not only important to deliver medical care but to also provide information to help people make decisions and prevent health systems from being overwhelmed.
Microsoft is helping with this challenge by offering its Healthcare Bot service powered by Microsoft Azure to organizations on the frontlines of the COVID-19 response to help screen patients for potential infection and care.
For example, the U.S. Centers for Disease Control and Prevention (CDC) just released a COVID-19 assessment bot that can quickly assess the symptoms and risk factors for people worried about infection, provide information and suggest a next course of action such as contacting a medical provider or, for those who do not need in-person medical care, managing the illness safely at home.
The bot, which utilizes Microsoft’s Healthcare Bot service, will initially be available on the CDC website. ... "
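As a rough illustration of the screening-bot pattern described here, consider a toy triage flow. The questions, thresholds, and advice strings below are invented for the sketch and are not the CDC's or Microsoft's actual clinical logic.

```python
from dataclasses import dataclass

# Toy triage flow illustrating the screening-bot pattern. The questions,
# thresholds, and advice strings are invented; they are NOT the CDC's or
# Microsoft's clinical logic.

@dataclass
class Answers:
    fever: bool
    cough: bool
    shortness_of_breath: bool
    age: int
    chronic_condition: bool

def triage(a: Answers) -> str:
    symptomatic = a.fever or a.cough
    high_risk = a.age >= 65 or a.chronic_condition
    if a.shortness_of_breath:
        return "Seek emergency care now."
    if symptomatic and high_risk:
        return "Contact a medical provider today."
    if symptomatic:
        return "Manage the illness at home and monitor symptoms."
    return "No screening flags; follow general prevention guidance."

print(triage(Answers(fever=True, cough=True, shortness_of_breath=False,
                     age=70, chronic_condition=False)))
```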
Friday, March 20, 2020
Machine Learning Flaw?
Musing on this: is there really a flaw here? Hmm. OK, you need enough data to deliver the accuracy needed. So a reasonable caution.
Widely Used Machine Learning Method Doesn't Work as Claimed
UC Santa Cruz Newscenter
Tim Stephens
March 16, 2020
A study by researchers at the University of California, Santa Cruz (UCSC), Google, and Stanford University found fundamental flaws in a widely used machine learning (ML) technique for modeling complex networks. The researchers said low-dimensional embeddings have drawbacks, and mathematically showed that significant structural aspects of networks are lost in the embedding process. UCSC's C. Seshadhri warned that any embedding technique yielding a small list of numbers will basically fail because a low-dimensional geometry is insufficiently expressive for social networks and other complex networks. Seshadhri said the research shows the need to check the validity of underlying ML assumptions, because "in this day and age when machine learning is getting more and more complicated, it's important to have some understanding of what can and cannot be done."
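A small experiment makes the concern tangible. The sketch below, my own construction rather than the paper's method, embeds a triangle-rich graph via a rank-k truncated eigendecomposition of its adjacency matrix and shows that the low-rank reconstruction loses most of the triangles.

```python
import numpy as np

# Demonstration in the spirit of the finding: a low-rank (low-dimensional)
# reconstruction of a graph's adjacency matrix can erase its triangles.
# The graph and rank are invented for this sketch.

# Build a graph of 30 disjoint triangles: 90 nodes, 90 triangles.
n = 90
A = np.zeros((n, n))
for i in range(0, n, 3):
    for u, v in [(i, i + 1), (i + 1, i + 2), (i, i + 2)]:
        A[u, v] = A[v, u] = 1.0

def triangle_count(adj):
    # trace(A^3) counts each triangle 6 times (3 starting nodes x 2 directions).
    return round(np.trace(adj @ adj @ adj) / 6)

# Rank-k reconstruction from the k largest-magnitude eigenpairs,
# re-thresholded to a 0/1 adjacency matrix.
k = 4
vals, vecs = np.linalg.eigh(A)
idx = np.argsort(np.abs(vals))[::-1][:k]
A_k = vecs[:, idx] @ np.diag(vals[idx]) @ vecs[:, idx].T
A_k = (A_k > 0.5).astype(float)
np.fill_diagonal(A_k, 0)

print("triangles in the original graph:", triangle_count(A))
print(f"triangles after rank-{k} reconstruction:", triangle_count(A_k))
```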
Pattern Recognition for Dead Languages
A fascinating application of language pattern recognition, which shows this technology can be taken into unexpected and complex places.
Dead Languages Come to Life
By Gary Anthes in ACM
Communications of the ACM, April 2020, Vol. 63 No. 4, Pages 13-15 10.1145/3381908
Driven by advanced techniques in machine learning, commercial systems for automated language translation now nearly match the performance of human linguists, and far more efficiently. Google Translate supports 105 languages, from Afrikaans to Zulu, and in addition to printed text it can translate speech, handwriting, and the text found on websites and in images.
The methods for doing those things are clever, but the key enabler lies in the huge annotated databases of writings in the various language pairs. A translation from French to English succeeds because the algorithms were trained on millions of actual translation examples. The expectation is that every word or phrase that comes into the system, with its associated rules and patterns of language structure, will have been seen and translated before.
Now researchers have developed a method that, in some cases, can automatically translate extinct languages, those for which these big parallel data sets do not exist. Jiaming Luo and Regina Barzilay at the Massachusetts Institute of Technology (MIT) and Yuan Cao at Google were able to automate the "decipherment" of Linear B—a Greek language predecessor dating to 1450 B.C.—into modern Greek. Previous translations of Linear B to Greek were only possible manually, at great effort, by language and subject-matter experts. The same automated methods were also able to translate Ugaritic, an extinct Semitic language, into Hebrew.
How It Works: ....
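As a taste of one ingredient of decipherment, here is a deliberately simplified sketch: matching words of a lost language to candidates in a known related language by character similarity. The word forms are hypothetical, and the actual MIT/Google model learns character embeddings and alignments jointly rather than using edit-distance heuristics.

```python
from difflib import SequenceMatcher

# Highly simplified illustration of one ingredient of decipherment: matching
# words of a lost script to cognates in a known related language by character
# similarity. Word forms are hypothetical; the actual model learns character
# embeddings and the alignment jointly.

lost_words = ["ko-no-so", "pa-i-to", "tu-ri-so"]             # hypothetical forms
known_words = ["knossos", "phaistos", "tulissos", "athana"]  # hypothetical forms

def similarity(lost: str, known: str) -> float:
    # Strip syllable separators before comparing character sequences.
    return SequenceMatcher(None, lost.replace("-", ""), known).ratio()

for w in lost_words:
    best = max(known_words, key=lambda k: similarity(w, k))
    print(f"{w:10s} -> {best} (score {similarity(w, best):.2f})")
```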
Robotic Light Finger Feels
An idea that creates haptics for robots.
This Clever Robotic Finger Feels With Light
By Wired in ACM
Photo caption: Robot finger meets actual finger.
Researchers at Columbia University have developed a robotic skeleton equipped with 32 photodiodes and 30 adjacent LEDs, covered by a squishy skin of reflective silicone that keeps the device's own light in and outside light out.
When the robot finger touches an object, the soft exterior deforms, and the photodiodes detect changing light levels from the LEDs.
The system can determine where contact is being made with the finger, and the intensity of that contact. The 32 photodiodes and the 30 LEDs produce 960 signals, a massive amount of data from a single poke.
The system relies on machine learning to analyze all of the information.
This type of tactile sensing can facilitate robot manipulation, and this new system is a significant improvement over previous robotic fingers that used electrodes overlaid with rubber to sense touch. .... "
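The learning step can be sketched as a regression from the 960 light signals to a contact location. The code below uses synthetic signals as stand-ins for measured data, so it only illustrates the shape of the problem, not the Columbia group's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Sketch of the learning step: regress a 2D contact location from the 960
# light signals. The signals here are synthetic stand-ins for measured data.

rng = np.random.default_rng(1)
n_samples, n_signals = 2000, 960  # 32 photodiodes x 30 LEDs

# Ground-truth contact points on a unit-square patch of fingertip skin.
contact_xy = rng.uniform(0, 1, size=(n_samples, 2))

# Synthetic signals: a random linear response to contact location plus noise.
response = rng.normal(size=(2, n_signals))
signals = contact_xy @ response + 0.05 * rng.normal(size=(n_samples, n_signals))

X_train, X_test, y_train, y_test = train_test_split(
    signals, contact_xy, test_size=0.2, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
error = np.abs(model.predict(X_test) - y_test).mean()
print(f"mean absolute localization error: {error:.3f} (fingertip units)")
```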
Thursday, March 19, 2020
Folding@home Supports Coronavirus Research
Related to the SETI@home effort, this is a distributed protein-folding approach to searching for pharma solutions.
FOLDING@HOME UPDATE ON SARS-COV-2 (10 MAR 2020)
March 10, 2020
by John Chodera
Healthcare-Technical article.
This is an update on Folding@home’s efforts to assist researchers around the world taking up the global fight against COVID-19.
After initial quality control and limited testing phases, the Folding@home team has released an initial wave of projects simulating potentially druggable protein targets from SARS-CoV-2 (the virus that causes COVID-19) and the related SARS-CoV virus (for which more structural data is available) into full production on Folding@home. Many thanks to the large number of Folding@home donors who have assisted us thus far by running in beta or advanced modes.
This initial wave of projects focuses on better understanding how these coronaviruses interact with the human ACE2 receptor required for viral entry into human host cells, and how researchers might be able to interfere with them through the design of new therapeutic antibodies or small molecules that might disrupt their interaction. .... "
..... Brian Venturo, co-founder and CTO of CoreWeave, the largest Ethereum miner in the US. The firm is redirecting the processing power of 6,000 graphics processing units (GPUs), from crypto-mining to hunting for coronavirus drug targets as part of a project started by Stanford University... "
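The underlying volunteer-computing pattern is simple: fetch a work unit, compute locally, return the result, repeat. Here is a minimal self-contained sketch of my own, with an in-memory queue standing in for the work server; the real client speaks Folding@home's own work-server protocol.

```python
from queue import Queue

# Minimal sketch of the volunteer-computing loop: fetch a work unit, compute
# locally, return the result. An in-memory queue stands in for the remote
# work server; payload fields are invented placeholders.

work_server = Queue()
for i in range(3):
    work_server.put({"id": i, "target": "placeholder protein system"})

def run_simulation(work_unit):
    # Placeholder for the real molecular-dynamics computation.
    return {"id": work_unit["id"], "energy_estimate": -42.0}

results = []
while not work_server.empty():
    unit = work_server.get()              # 1. fetch a work unit
    results.append(run_simulation(unit))  # 2. compute and 3. return the result

print(f"completed {len(results)} work units")
```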
From MIT: 34 Coronavirus Pieces in TechnologyReview
34 pieces from MIT resources. I have read a number of these; nicely done and largely nontechnical. Free, and you can register at no cost for daily updates.
Remote Detection of Virus
This makes sense: remotely sensing heat. Of course an elevated temperature does not confirm the virus, but it is a first test for at-a-distance scanning. Here is an existing system; I am unsure of its availability.
This AI camera detects people who may have COVID-19 in FastCompany
Austin-based Athena Security first gained recognition using AI to detect firearms. It’s now tackling another public health threat.
By Mark Sullivan
With the U.S. lagging other countries in the distribution of coronavirus testing kits, health authorities have had to look to other means of detection, like the infrared ear thermometers used in some countries. And now one Austin-based company says its security cameras use thermal imaging and computer vision tech to detect people who have fever possibly related to the virus.
Unlike the thermometers, which work one person at a time and at close range, Athena Security‘s security camera detection system may be far better for scanning larger numbers of people in places like airports, grocery stores, hospitals, and voting locations. .... "
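A toy version of the screening idea: flag faces whose estimated skin temperature exceeds a fever threshold in a calibrated thermal frame. Everything below is invented for illustration; Athena's actual detection and calibration pipeline is proprietary.

```python
import numpy as np

# Toy fever-screening step: flag face regions whose estimated temperature
# exceeds a threshold in a thermal frame. Calibration and face detection are
# assumed to happen upstream; all values here are invented.

FEVER_THRESHOLD_C = 38.0

def flag_fevers(thermal_frame_c, face_boxes):
    """Return the (x, y, w, h) boxes whose hot region exceeds the threshold."""
    flagged = []
    for (x, y, w, h) in face_boxes:
        face_region = thermal_frame_c[y:y + h, x:x + w]
        # A high percentile resists hot-pixel noise better than the max.
        if np.percentile(face_region, 95) >= FEVER_THRESHOLD_C:
            flagged.append((x, y, w, h))
    return flagged

frame = 36.5 + 0.3 * np.random.default_rng(2).normal(size=(240, 320))
frame[100:140, 150:190] += 2.5  # simulate one person with elevated temperature
print(flag_fevers(frame, [(150, 100, 40, 40), (20, 20, 40, 40)]))
```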
Detecting Odors
I often mention research in this area because we spent considerable time looking at the idea of an 'artificial nose' to effectively test quality in products, especially coffee, but in other areas as well. Here is more in the area. Note the use of 'neuromorphic,' or brain-inspired, chips to address the problem.
Intel Trains Neuromorphic Chip to Detect Odors
in VentureBeat
By Kyle Wiggers
Intel and Cornell University researchers have trained Intel's Loihi neuromorphic processor to identify 10 materials from their odors, demonstrating how neuromorphic computing could be applied to detect precursor smells and potentially find explosives and narcotics, diagnose diseases, and notice signs of smoke and carbon monoxide. The chip was trained by configuring the circuit schematic of biological olfaction, using a dataset compiling the activity of 72 chemical sensors in response to various scents. The researchers said the method kept Loihi's memory of the scents intact, and the chip has "superior" recognition accuracy compared with conventional techniques. Said Intel's Nabil Imam, "This work is a prime example of contemporary research at the crossroads of neuroscience and artificial intelligence and demonstrates Loihi's potential to provide important sensing capabilities that could benefit various industries." ... '
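For a sense of the data side of the task, here is a conventional, non-spiking classifier over synthetic 72-sensor response vectors. It does not reproduce Loihi's spiking architecture; it only illustrates classifying materials from chemical-sensor readings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Conventional (non-spiking) stand-in for the odor task: classify 10
# materials from 72-sensor response vectors. The sensor data is synthetic.

rng = np.random.default_rng(3)
n_sensors, n_materials, samples_each = 72, 10, 40

# Each material gets a characteristic sensor response plus noise.
prototypes = rng.uniform(0, 1, size=(n_materials, n_sensors))
X = np.vstack([p + 0.1 * rng.normal(size=(samples_each, n_sensors))
               for p in prototypes])
y = np.repeat(np.arange(n_materials), samples_each)

# Train on alternating samples, test on the rest.
clf = KNeighborsClassifier(n_neighbors=3).fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```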
Update on RFID Tags
Related updates for specific RFID applications. See also continuous updates on this technology.
Vizinex RFID, manufacturer of RFID tags:
Vizinex RFID, which manufactures RFID tags for specific applications, has launched its Sentry Midrange II, the second generation of the company's tag for tracking large metal assets. With a read range of 25 feet when mounted on metal, the Sentry Midrange II allows for the tagging of a variety of assets, the company reports, including reusable containers, oil-field assets and construction equipment.
The Sentry Midrange II is 30 percent smaller than its predecessor, according to the company, but provides a broader frequency response. The tag's optional cover provides extra protection for the RF device and the flexibility to use bolts, rivets or zip-ties for attachment. Industries that use this type of RFID tag include government, military, manufacturing, waste management, rental equipment, and oil and gas. Applications include special transport items, reusable containers, logistics and yard management, and tool and die tracking. ... "
Wednesday, March 18, 2020
Bring Best Digital Means to Physical Stores
More changes in stores, but these will also be adapted to address future emergency situations like those we are experiencing now.
How can retailers bring the best of digital commerce to physical stores? Plus expert comment
By Lauren Goldberg
Photo caption: Nike App at Retail digital tech in Foot Locker's new Washington Heights, NYC store
There are many benefits to e-commerce — speed to market and the ability to quickly react and optimize merchandising strategy and rich data to personalize the customer shopping experience, to name a few. At the recent 2020 Future Stores conference in Miami, a frequent theme was working out how retailers take these elements and leverage them in brick and mortar store environments.
When Foot Locker designed its new community store prototype, speed to market and the ability to react quickly was top of mind. According to Kambiz Hemati, former VP, global retail design for the footwear chain, fixtures were designed to be modular and flexible so they could quickly re-merchandise the store based on sales trends, customer behavior and local events. ... "
Tuesday, March 17, 2020
Uses of Inactive Pill Ingredients
Some interesting AI-driven uses of 'inactive' pill ingredients.
“Inactive” pill ingredients could raise the dose of your medication
With help from artificial intelligence, researchers identify hidden power of vitamin A and ordinary chewing gum glaze.
Kim Martineau | MIT Quest for Intelligence
March 17, 2020
The average medication contains a mix of eight “inactive” ingredients added to pills to make them taste better, last longer, and stabilize the active ingredients within. Some of those additives are now getting a closer look for their ability to cause allergic reactions in some patients. But now, in a new twist, MIT researchers have discovered that two other inactive ingredients may actually boost medication strength to the benefit of some patients.
In a study published March 17 in Cell Reports, researchers report that vitamin A palmitate, a common supplement, and gum resin, a popular glazing agent for pills and chewing gum, could make hundreds of drugs more effective, from blood-clotting agents and anti-cancer drugs to over-the-counter pain relievers. They also outline a method for using machine learning to find other inactive ingredients with untapped therapeutic value.
“Anything you ingest has a potential effect, but tracing that effect to the molecular level can be a Herculean effort,” says the study’s senior author Giovanni Traverso, an assistant professor in the Department of Mechanical Engineering and a gastroenterologist at Brigham and Women’s Hospital. “Machine learning gives you a way to narrow down the search space.” .... '
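One way to picture that "narrow the search space" step: score excipients by how closely their molecular descriptors resemble known active drugs. The sketch below is a generic heuristic with invented descriptor vectors and compounds, not the study's trained drug-target interaction models.

```python
import numpy as np

# Generic "narrow the search space" heuristic: score excipients by cosine
# similarity of molecular descriptor vectors to known active drugs. All
# descriptors and compounds are invented placeholders; the study used
# trained drug-target interaction models, not this heuristic.

rng = np.random.default_rng(4)
n_descriptors = 16

known_actives = rng.normal(size=(5, n_descriptors))  # drugs active at a target
excipients = {
    "vitamin_A_palmitate": rng.normal(size=n_descriptors),
    "gum_resin": rng.normal(size=n_descriptors),
    "talc": rng.normal(size=n_descriptors),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for name, vec in excipients.items():
    score = max(cosine(vec, active) for active in known_actives)
    print(f"{name:20s} max similarity to known actives: {score:+.2f}")
```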
Google Creates Transcribe
I had seen and noted this before; it is likely quite helpful for business applications. I remember noting its need to be embedded in the enterprise. Looking forward to seeing it on Assistant and iOS devices.
Google Translate launches Transcribe for Android in 8 languages By Khari Johnson
Google Translate today launched Transcribe for Android, a feature that delivers a continual, real-time translation of a conversation. Transcribe will begin by rolling out support for 8 languages in the coming days: English, French, German, Hindi, Portuguese, Russian, Spanish and Thai. With Transcribe, Translate is now capable of translating classroom or conference lectures with no time limits, whereas before speech-to-text AI in Translate lasted no longer than a word, phrase, or sentence. Google plans to bring Transcribe to iOS devices at an unspecified date in the future. ..... "
Alexa Accelerator
Amazon seeks more startups for Alexa:
Alexa Fund Opens Virtual Startup Accelerator Applications
Eric Hal Schwartz in Voicebot
Amazon announced a new startup accelerator program as part of the Alexa Fund on Monday. Alexa Next Stage will supersede the three-year-old Alexa Accelerator program, recruiting startups beyond just the founding point. Those chosen will take part in virtual classes and workshops this summer aimed at helping founders grow their companies.
EVOLVING ACCELERATOR
The Alexa Fund and Techstars started running the Alexa Accelerator in 2017, graduating 27 startups from the program. Alexa Next Stage is designed in response to feedback over the years on how to improve the program. Instead of just brand-new startups, Alexa Next Stage is for companies that have a foundation laid and are trying to acquire talent, capital, and customers as they scale up their businesses.
Startups based in North and South America and Europe can apply until April 13 for a spot. Amazon will pick participants based on the idea and plan the company has for adding Alexa’s abilities to their product. The program will take place from June to August, with a Demo Night event in Seattle at the conclusion. The reason for the geographic limits is that, unlike earlier Alexa Accelerator programs, the program will be entirely remote.
“This year, we’re running the program virtually so founders can choose to stay close to home and connected to their networks and customers,” the Alexa Fund’s Rodrigo Prudencio wrote in the announcement. “The program’s curriculum and workshops will be delivered in real time and participants will attend these sessions together. Like our past program, these sessions offer the opportunity to engage with Amazon and Techstars mentors distributed throughout the major tech hubs around the world so they can help companies address their next scale and growth challenges.”
ALEXA BOOST
The application doesn’t mention what, if any, funding will be included with acceptance to the program. Techstars has a standard $20,000 baseline for its accelerator programs, but there may be differences in the investments as the kind of companies and the program they are in have both changed. It’s also not clear if it’s just a coincidence that the new program is entirely virtual just as concerns about the COVID-19 pandemic are canceling many people’s travel plans. We’ve reached out to Amazon for information and will update if we learn more. ... "