Interesting for non-programmers who want to use APIs. Some technical examples at the link.
Searching Websites the Way You Want
MIT CSAIL
Adam Conner-Simons
May 18, 2020
Researchers at the Massachusetts Institute of Technology's Computer Science & Artificial Intelligence Laboratory (CSAIL) have developed ScrAPIr, a tool that enables non-programmers to access, query, save, and share Web data through application programming interfaces (APIs). Traditionally, APIs could only be accessed by users with strong coding skills, leaving non-coders to laboriously copy-paste data or use Web scrapers that download a site's webpages and search the content for desired data. To integrate a new API into ScrAPIr, a non-programmer only needs to fill out a form telling the tool about certain aspects of the API. Said MIT's David Karger, "APIs deliver more information than what website designers choose to expose, but this imposes limits on users who may want to look at the data in a different way. With this tool, all of the data exposed by the API is available for viewing, filtering, and sorting." ... "
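To make concrete what ScrAPIr spares non-programmers from, here is the kind of pagination-and-filter boilerplate a coder would normally write against a typical JSON API. The endpoint shape and field names here are hypothetical, stubbed out so the sketch runs on its own; in real use the stub would be an HTTP GET via a library such as requests.

```python
# Illustrative sketch: the query/filter/sort loop that ScrAPIr's form-based
# interface automates. Endpoint shape and field names are hypothetical.

def fetch_page(page):
    """Stub standing in for an HTTP GET such as
    requests.get(url, params={"q": "...", "page": page}).json()."""
    data = {
        1: {"items": [{"title": "A", "score": 3}, {"title": "B", "score": 9}],
            "next_page": 2},
        2: {"items": [{"title": "C", "score": 7}], "next_page": None},
    }
    return data[page]

def query_all(min_score):
    """Walk every page, keep records matching a filter, sort the result."""
    results, page = [], 1
    while page is not None:
        body = fetch_page(page)
        results.extend(r for r in body["items"] if r["score"] >= min_score)
        page = body["next_page"]
    return sorted(results, key=lambda r: r["score"], reverse=True)

print(query_all(min_score=5))
# [{'title': 'B', 'score': 9}, {'title': 'C', 'score': 7}]
```

ScrAPIr's point is that the loop above is the same for nearly every API; only the URL, parameters, and field names change, which is exactly what its form captures.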
Sunday, May 31, 2020
Anti Wal-Mart Shoplifting AI by Everseen
Does it work? I am a frequent user of self-checkout systems and am always thinking about the implications versus other methods. Here, a piece from Wired.
Walmart Employees Are Out to Show Its Anti-Theft AI Doesn't Work in Wired
The retailer denies there is any widespread issue with the software, but a group expressed frustration—and public health concerns.
In January, my coworker received a peculiar email. The message, which she forwarded to me, was from a handful of corporate Walmart employees calling themselves the “Concerned Home Office Associates.” (Walmart’s headquarters in Bentonville, Arkansas, is often referred to as the Home Office.) While it’s not unusual for journalists to receive anonymous tips, they don’t usually come with their own slickly produced videos.
The employees said they were “past their breaking point” with Everseen, a small artificial intelligence firm based in Cork, Ireland, whose technology Walmart began using in 2017. Walmart uses Everseen in thousands of stores to prevent shoplifting at registers and self-checkout kiosks. But the workers claimed it misidentified innocuous behavior as theft, and often failed to stop actual instances of stealing.
They told WIRED they were dismayed that their employer—one of the largest retailers in the world—was relying on AI they believed was flawed. One worker said that the technology was sometimes even referred to internally as “NeverSeen” because of its frequent mistakes. WIRED granted the employees anonymity because they are not authorized to speak to the press.
The workers said they had been upset about Walmart’s use of Everseen for years, and claimed colleagues had raised concerns about the technology to managers, but were rebuked. They decided to speak to the press, they said, after a June 2019 Business Insider article reported Walmart’s partnership with Everseen publicly for the first time. The story described how Everseen uses AI to analyze footage from surveillance cameras installed in the ceiling, and can detect issues in real time, such as when a customer places an item in their bag without scanning it. When the system spots something, it automatically alerts store associates.
“Everseen overcomes human limitations. By using state-of-the-art artificial intelligence, computer vision systems, and big data we can detect abnormal activity and other threats,” a promotional video referenced in the story explains. “Our digital eye has perfect vision and it never needs a day off.”
In an effort to refute the claims made in the Business Insider piece, the Concerned Home Office Associates created a video, which purports to show Everseen’s technology failing to flag items not being scanned in three different Walmart stores. Set to cheery elevator music, it begins with a person using self-checkout to buy two jumbo packages of Reese’s White Peanut Butter Cups. Because they’re stacked on top of each other, only one is scanned, but both are successfully placed in the bagging area without issue. ... " (More detail behind paywall)
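The core reconciliation idea the article describes, comparing what a camera system believes moved into the bagging area against what the register scanned, can be reduced to a multiset difference. This toy sketch illustrates the concept only; it is in no way Everseen's actual algorithm, and the item names are made up.

```python
# Toy checkout reconciliation: flag items seen entering the bag more times
# than they were scanned. Concept illustration only, not Everseen's system.
from collections import Counter

def missed_scans(detected_items, scanned_items):
    """Items detected at the bagging area in excess of scan events."""
    gap = Counter(detected_items) - Counter(scanned_items)  # keeps positives
    return sorted(gap.elements())

# Two stacked packages detected, only one scanned -- the scenario in the video.
alerts = missed_scans(
    detected_items=["peanut_butter_cups", "peanut_butter_cups"],
    scanned_items=["peanut_butter_cups"],
)
print(alerts)  # ['peanut_butter_cups']
```

The hard part in practice is not this arithmetic but the computer vision feeding it; the employees' complaint is precisely that the detection step produces false positives and misses.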
Why Consumers Are Willing to Share Personal Information on Smartphones
Intriguing difference between phones and laptops.
Why Consumers Are Willing to Share Personal Information on Smartphones
Wharton’s Shiri Melumad speaks with Wharton Business Daily on Sirius XM about why consumers share personal information on smartphones.
Nearly everyone has experienced some version of phubbing, a term to describe being snubbed by someone who is more engrossed in their smartphone screen than the conversation or activity taking place in front of them. These powerful little devices have changed virtually everything about human communication, including the way we interact with each other. New research from Wharton marketing professors Shiri Melumad and Robert Meyer finds that people are more willing to share deeper and more personal information when communicating on a smartphone compared with a personal computer. In their paper, “Full Disclosure: How Smartphones Enhance Consumer Self-Disclosure,” the professors explain that it’s the device that makes all the difference. Smartphones are always at hand, and their tiny screens and keypads require laser-focused attention, which means the user is more likely to block out other concerns.
The findings are important for marketers looking to make the most out of user-generated content, especially the kind that can be shared with other potential customers. “The more personal and intimate nature of smartphone-generated reviews results in content that is more persuasive to outside readers, in turn heightening purchase intentions,” the professors write in their paper. Melumad recently joined the Wharton Business Daily radio show on Sirius XM to discuss the research. (Listen to the podcast at the top of this page.)
An edited transcript of the conversation follows. ..."
Microsoft Builds Supercomputer for OpenAI
Some useful hints here about what is being contemplated. And, of course, big companies like MS, Google, Amazon, Apple, IBM, and others have access to huge amounts of data to work with, and exposure to rich problem types too. So expect new things from them. Note the statement that we are nowhere near 'AGI' (Artificial General Intelligence) yet; I have been asked about that several times lately.
Microsoft Just Built a World-Class Supercomputer Exclusively for OpenAI By Jason Dorrier in SingularityHub
Last year, Microsoft announced a billion-dollar investment in OpenAI, an organization whose mission is to create artificial general intelligence and make it safe for humanity. No Terminator-like dystopias here. No deranged machines making humans into paperclips. Just computers with general intelligence helping us solve our biggest problems.
A year on, we have the first results of that partnership. At this year’s Microsoft Build 2020, a developer conference showcasing Microsoft’s latest and greatest, the company said they’d completed a supercomputer exclusively for OpenAI’s machine learning research. But this is no run-of-the-mill supercomputer. It’s a beast of a machine. The company said it has 285,000 CPU cores, 10,000 GPUs, and 400 gigabits per second of network connectivity for each GPU server.
Stacked against the fastest supercomputers on the planet, Microsoft says it’d rank fifth.
The company didn’t release performance data, and the computer hasn’t been publicly benchmarked or included on the widely followed Top500 list of supercomputers. But even absent official rankings, it’s likely safe to say it’s a world-class machine.
“As we’ve learned more and more about what we need and the different limits of all the components that make up a supercomputer, we were really able to say, ‘If we could design our dream system, what would it look like?’” said OpenAI CEO Sam Altman. “And then Microsoft was able to build it.”
What will OpenAI do with this dream-machine? The company is building ever bigger narrow AI algorithms—we’re nowhere near AGI yet—and they need a lot of computing power to do it. ... "
Saturday, May 30, 2020
Indoor Positioning for Retail Innovation
We worked on this problem for our own innovation center tests.
A Smart Indoor Positioning System for Retail Automation
For the last few years we’ve been hearing about the retail apocalypse, though we would characterize it more as a retail extinction event. The difference being that until a couple of months ago, the demise of many brick-and-mortar businesses was a long, drawn out affair. No more. COVID-19 has caused a true retail apocalypse – goodbye, JCPenney, Pier One, J Crew, and company – by hastening the death of these ailing giants. There’s obviously a knock-on effect to tech companies shopping retail automation solutions. However, for one small Silicon Valley startup spun out of MIT, the novel coronavirus brought an unexpected opportunity for its thermal-based indoor positioning system.
You’re Being Followed
We’ve been covering the automation of retail for quite some time, from cashierless stores to robotic fulfillment centers in retailers like Walmart. The real money, of course, is in marketing. But it’s no longer necessary to blast messages and advertisements across a black hole and hope a few escape the gravitational pull of customer indifference. Today’s retail tech promises marketing precision by attempting to track and predict customer behavior in real-time both online and in the physical world.
A company like Zenreach, for example, makes a pretty good case of ROI for its WiFi hotspot marketing software that tracks how well the store’s message is doing based on how many people walk through the door. Audio beacons are another way that stores can directly track customers by using sound to locate people through their smartphones. These technologies do start to leak into the creepy zone when you realize that the devices are communicating about you and your shopping behavior at a frequency you can’t detect. They also think you’ve put on a bit too much weight with all of the COVID-19 stress eating.
In fact, there are quite a few indoor positioning systems that scientists and startups have developed to track motion and trajectory. Not all of it is directed at marketing. WiFi motion sensors, for example, use WiFi signals for various smart home applications like security and even in-home elderly care, with algorithms trained to detect falls and too many trips to the cookie jar.
Briefly About Butlr .... "
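For a sense of what these indoor-positioning systems compute, here is a minimal sketch of two standard ingredients: estimating distance from received signal strength with the log-distance path-loss model, then locating a device from three such estimates by trilateration. The constants are typical textbook values for illustration, not parameters of Butlr, Zenreach, or any product mentioned above.

```python
# Minimal indoor-positioning sketch: RSSI -> distance via the log-distance
# path-loss model, then position from three anchors by trilateration.
# Constants are illustrative, not from any product discussed above.
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Invert RSSI = tx_power - 10 * n * log10(d) for distance d in meters."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    """Position from three anchors (x, y) and measured distances to each."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    # Subtract circle equations pairwise to get two linear equations.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (4.0, 3.0)
dists = [math.dist(true_pos, a) for a in anchors]
print(trilaterate(anchors, dists))  # recovers roughly (4.0, 3.0)
```

In real deployments RSSI is noisy enough that raw inversion like this gives meter-scale errors, which is part of why thermal and audio approaches (as in the article) compete with WiFi.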
Algorithm Selection, Design, as a Learning Problem
Good way to look at it. Many good points, ultimately technical.
Technical Perspective: Algorithm Selection as a Learning Problem
By Avrim Blum
Communications of the ACM, June 2020, Vol. 63 No. 6, Page 86
10.1145/3394623
The following paper by Gupta and Roughgarden—"Data-Driven Algorithm Design"—addresses the issue that the best algorithm to use for many problems depends on what the input "looks like." Certain algorithms work better for certain types of inputs, whereas other algorithms work better for others. This is especially the case for NP-hard problems, where we do not expect to ever have algorithms that work well on all inputs: instead, we often have various heuristics that each work better in different settings. Moreover, heuristic strategies often have parameters or hyperparameters that must be set in some way. ... "
To view the accompanying paper, visit doi.acm.org/10.1145/3394625
Data-Driven Algorithm Design
By Rishi Gupta, Tim Roughgarden
Communications of the ACM, June 2020, Vol. 63 No. 6, Pages 87-94
10.1145/3394625
The best algorithm for a computational problem generally depends on the "relevant inputs," a concept that depends on the application domain and often defies formal articulation. Although there is a large literature on empirical approaches to selecting the best algorithm for a given application domain, there has been surprisingly little theoretical analysis of the problem.
We model the problem of identifying a good algorithm from data as a statistical learning problem. Our framework captures several state-of-the-art empirical and theoretical approaches to the problem, and our results identify conditions under which these approaches are guaranteed to perform well. We interpret our results in the contexts of learning greedy heuristics, instance feature-based algorithm selection, and parameter tuning in machine learning.
Back to Top
1. Introduction
Rigorously comparing algorithms is hard. Two different algorithms for a computational problem generally have incomparable performance: one algorithm is better on some inputs but worse on the others. How can a theory advocate one of the algorithms over the other? The simplest and most common solution in the theoretical analysis of algorithms is to summarize the performance of an algorithm using a single number, such as its worst-case performance or its average-case performance with respect to an input distribution. This approach effectively advocates using the algorithm with the best summarizing value (e.g., the smallest worst-case running time).
Solving a problem "in practice" generally means identifying an algorithm that works well for most or all instances of interest. When the "instances of interest" are easy to specify formally in advance—say, planar graphs—the traditional analysis approaches often give accurate performance predictions and identify useful algorithms. However, the instances of interest commonly possess domain-specific features that defy formal articulation. Solving a problem in practice can require designing an algorithm that is optimized for the specific application domain, even though the special structure of its instances is not well understood. Although there is a large literature, spanning numerous communities, on empirical approaches to data-driven algorithm design (e.g., Fink [11], Horvitz et al. [14], Huang et al. [15], Hutter et al. [16], Kotthoff et al. [18], Leyton-Brown et al. [20]), there has been surprisingly little theoretical analysis of the problem. One possible explanation is that worst-case analysis, which is the dominant algorithm analysis paradigm in theoretical computer science, is intentionally application agnostic. .... "
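A tiny instance of the framework the paper describes: treat a heuristic's parameter as something to be learned from sample inputs drawn from the application domain, by picking the value with the best empirical performance. The sketch below does this for a parameterized greedy knapsack rule (order items by value / size**rho), which is one of the example families in this line of work; the instance distribution here is synthetic and purely illustrative.

```python
# Data-driven parameter selection: choose rho for a greedy knapsack heuristic
# by empirical performance on sample instances (the "learning" view of
# algorithm design). rho = 0 ignores size; rho = 1 is the classic ratio rule.
import random

def greedy_knapsack(values, sizes, capacity, rho):
    """Greedily pack items in decreasing value / size**rho order."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / sizes[i] ** rho, reverse=True)
    total_value = used = 0
    for i in order:
        if used + sizes[i] <= capacity:
            used += sizes[i]
            total_value += values[i]
    return total_value

def select_rho(instances, candidates):
    """Empirical risk minimization over a grid of parameter values."""
    def total_value(rho):
        return sum(greedy_knapsack(v, s, c, rho) for v, s, c in instances)
    return max(candidates, key=total_value)

random.seed(0)
train = [([random.randint(1, 50) for _ in range(30)],   # values
          [random.randint(1, 20) for _ in range(30)],   # sizes
          60)                                            # capacity
         for _ in range(50)]
best = select_rho(train, candidates=[i / 10 for i in range(21)])
print("best rho on this sample:", best)
```

The paper's contribution is the theory behind this recipe: how many sample instances suffice before the empirically best parameter generalizes to fresh inputs from the same domain.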
Alexa Monologues
Alexa is expanding the possibilities. We need more of this, with clear understanding and response.
Amazon Upgrades Alexa’s Long-Form Speech and Vastly Expands Custom Voice Languages and Styles
Eric Hal Schwartz in Voicebot.ai
There are more than a dozen new voices and styles from which to choose, and developers can adjust the voice assistant to sound more natural when speaking for more than a few sentences.
THE ALEXA MONOLOGUES
A lot of interaction with Alexa involves short responses or rote lines. That starts to sound strange when the voice assistant speaks for more than a few seconds. Alexa’s new long-form speaking style is designed to address that disconnect and make using Alexa feel as comfortable as talking to another human. Since people don’t speak the same way when uttering a sentence as they do when expounding for multiple paragraphs, the addition is likely to be popular with voice apps that read magazines, books, or transcribed conversations from a podcast out loud. For now, this style is only an option for Alexa in the United States.
“For example, you can use this speaking style for customers who want to have the content on a web page read to them or listen to a storytelling section in a game,” Alexa developer Catherine Gao explained in Amazon’s blog post about the new feature. “Powered by a deep-learning text-to-speech model, the long-form speaking style enables Alexa to speak with more natural pauses while going from one paragraph to the next or even from one dialog to another between different characters.” .... '
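For developers, speaking styles like this are requested through SSML markup in the skill's response. The sketch below builds such a response; the amazon:domain tag is how Alexa exposes speaking styles, but treat the exact tag name and its availability (US English only, per the article) as details to confirm against the Alexa Skills Kit documentation.

```python
# Sketch: wrap paragraphs in SSML requesting Alexa's long-form speaking
# style. Verify tag details against the Alexa Skills Kit SSML reference.
def long_form_ssml(paragraphs):
    """Return an SSML string asking for the long-form speaking style."""
    body = "".join(f"<p>{p}</p>" for p in paragraphs)
    return ('<speak><amazon:domain name="long-form">'
            f"{body}</amazon:domain></speak>")

ssml = long_form_ssml(["First paragraph of a story.",
                       "Second paragraph, read with natural pauses."])
print(ssml)
```

The paragraph tags matter here: the article's point is that the long-form model inserts natural pauses at paragraph and dialog boundaries, so the markup should preserve that structure rather than concatenating everything into one run of text.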
Why is AI so Confused by Language?
From the Elemental blog; well worth reading in full there:
Why is AI so confused by language? It’s all about mental models. By David Ferrucci
In my last post, I shared some telling examples where computers failed to understand what they read. The errors they made were bizarre and fundamental. But why? Computers are clearly missing something, but can we more clearly pin down what?
Let’s examine one specific error that sheds some light on the situation. My team ran an experiment where we took the same first-grade story I discussed last time, but truncated the final sentence:
Fernando and Zoey go to a plant sale. They buy mint plants. They like the minty smell of the leaves.
Fernando puts his plant near a sunny window. Zoey puts her plant in her bedroom. Fernando’s plant looks green and healthy after a few days. But Zoey’s plant has some brown leaves.
“Your plant needs more light,” Fernando says.
Zoey moves her plant to a sunny window. Soon, ___________.
[adapted from ReadWorks.org]
Then we asked workers on Amazon Mechanical Turk to fill in the blank. Here’s what the workers suggested:
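One way to see why this cloze task is hard for machines: a system that leans on surface statistics will favor completions that echo words already in the story, while the completion humans give ("her plant gets healthy again") requires a causal model of light and plants. The deliberately shallow scorer below, which is my own illustration and not any system Ferrucci's team tested, makes the failure concrete.

```python
# A deliberately shallow "model": score a candidate completion by the
# fraction of its words already present in the story. Illustrates how
# surface statistics can prefer a stale echo over the completion a mental
# model of the situation would give. (Author's toy example, not from the post.)
import re

STORY = """Fernando and Zoey go to a plant sale. They buy mint plants.
They like the minty smell of the leaves. Fernando puts his plant near a
sunny window. Zoey puts her plant in her bedroom. Fernando's plant looks
green and healthy after a few days. But Zoey's plant has some brown leaves.
"Your plant needs more light," Fernando says. Zoey moves her plant to a
sunny window. Soon,"""

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

def overlap_score(candidate, story=STORY):
    story_words = set(tokens(story))
    words = tokens(candidate)
    return sum(w in story_words for w in words) / len(words)

right = "her plant turns green and healthy again"  # needs a causal model
wrong = "Zoey's plant has some brown leaves"       # stale echo of the text
print(overlap_score(wrong) > overlap_score(right))  # True: the echo wins
```

Word-overlap scorers are of course cruder than modern language models, but the post's examples suggest the same bias toward textual echo persists in far more sophisticated systems.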
Friday, May 29, 2020
Steve Gibson on Why Contact Tracing Won't Work
I have inserted links below to this analysis of Apple/Google attempts at generalized software-based contact tracing.
See also related Bruce Schneier article: https://www.schneier.com/blog/archives/2020/05/me_on_covad-19_.html With considerable and often thoughtful discussion.
Steve Gibson in Podcast:
Contact Tracing Apps R.I.P. https://twit.tv/shows/security-now/episodes/768
https://www.grc.com/sn/SN-768-Notes.pdf
Software-based Contact Tracing is Doomed
Amazon Echo Look Experiment Ends
A look at the history, and now the end, of fashion advice from Echo Look. Some people tried it, but it drew only rare interest, and the experiment ended quickly. Passing this along for closure to those who were interested.
Amazon Echo Look No More – Another Alexa Device Discontinued By Bret Kinsella in voicebot.ai
Amazon quietly introduced the Echo Look Alexa-enabled smart speaker for fashion advice in April 2017. Yesterday, the company quietly informed its few thousand users that Echo Look would be discontinued. In providing background on the latest shakeup of Amazon’s Alexa portfolio, an Amazon spokesperson shared the text of an email sent to Echo Look users yesterday saying:
“When we introduced Echo Look three years ago, our goal was to train Alexa to become a style assistant as a novel way to apply AI and machine learning to fashion. With the help of our customers we evolved the service, enabling Alexa to give outfit advice and offer style recommendations. We’ve since moved Style by Alexa features into the Amazon Shopping app and to Alexa-enabled devices making them even more convenient and available to more Amazon customers. For that reason, we have decided it’s time to wind down Echo Look. Beginning July 24, 2020, both Echo Look and its app will no longer function. Customers will still be able to enjoy style advice from Alexa through the Amazon Shopping app and other Alexa-enabled devices. We look forward to continuing to support our customers and their style needs with Alexa.” ....
Amazon Echo Look always seemed like an experiment. It was launched in a closed, invite-only beta three years ago. A year later it was made available to the public for purchase though there was never much attention paid to the device in Amazon’s product launch events.
Echo Look was the first Alexa-enabled product that included a camera. However, it notably had no screen. The first smart display, Echo Show, would launch two months later and get a product refresh within 15 months of launch. Echo Look never received a formal product update and wasn’t even billed as a smart speaker. The company made clear that Echo Look could not do many of the things that were popular features on Echo smart speakers. .... "
Street Lamps as a Platform for the Urban Smart City
Considerable piece on using street lights as a platform for the urban smart city. Have seen this proposed a number of times; a great place to start, but how often has it been done successfully?
Street Lamps as a Platform
By Max Mühlhäuser, Christian Meurisch, Michael Stein, Jörg Daubert, Julius Von Willich, Jan Riemann, Lin Wang
Communications of the ACM, June 2020, Vol. 63 No. 6, Pages 75-83
10.1145/3376900
Street lamps constitute the densest electrically operated public infrastructure in urban areas. Their changeover to energy-friendly LED light quickly amortizes and is increasingly leveraged for smart city projects, where LED street lamps double, for example, as wireless networking or sensor infrastructure. We make the case for a new paradigm called SLaaP—street lamps as a platform. SLaaP denotes a considerably more dramatic changeover, turning urban light poles into a versatile computational infrastructure. SLaaP is proposed as an open, enabling platform, fostering innovative citywide services for the full range of stakeholders and end users—seamlessly extending from everyday use to emergency response. In this article, we first describe the role and potential of street lamps and introduce one novel base service as a running example. We then discuss citywide infrastructure design and operation, followed by addressing the major layers of a SLaaP infrastructure: hardware, distributed software platform, base services, value-added services and applications for users and 'things.' Finally, we discuss the crucial roles and participation of major stakeholders: citizens, city, government, and economy.
Recent years have seen the emergence of smart street lamps, with very different meanings of 'smart'—sometimes related to the original purpose as with usage-dependent lighting, but mostly as add-on capabilities like urban sensing, monitoring, digital signage, WiFi access, or e-vehicle charging. Research about their use in settings for edge computing or car-to-infrastructure communication (for example, traffic control, hazard warnings, or autonomous driving) hints at their great potential as computing resources. The future holds even more use cases: for example, after a first wave of 5G mobile network rollouts from 2020 onward, a second wave shall apply mm-wave frequencies for which densely deployed light poles can be appropriate 'cell towers.'
Street lamps: A (potential) true infrastructure. Given the huge potential of street lamps evident already today and given the broad spectrum of use cases, a city's street lamps may obviously constitute a veritable infrastructure. However, cities today do not consider street lamps—beyond the lighting function—as an infrastructure in the strict sense. Like road, water, energy, or telecommunication, infrastructures constitute a sovereign duty: provision and appropriate public access must be regulated, design and operation must balance stakeholder interests, careful planning has to take into account present and future use cases and demands, maintenance, threat protection, and more. Well-considered outsourcing or privatization may be aligned with these public interests.
The LED dividend: A unique opportunity. The widespread lack of such considerations in cities is even more dramatic since a once-in-history opportunity opens up with the changeover to energy efficient LED lighting, expected to save large cities millions in terms of energy cost, as we will discuss, called 'LED dividend' in the following. Given their notoriously tight budgets, cities urgently need to dedicate these savings if they want to 'own' and control an infrastructure, which, once built, can foster innovation and assure royalties and new business as sources of city and citizen prosperity ... "
Microsoft Buying its Way into RPA
Microsoft buys its way into RPA capabilities, announced last week. Late to the space, it seems, but now a flurry of activity, with claims of AI capabilities.
Microsoft + Softomotive: What Does This Deal Mean For RPA (Robotic Process Automation)?
Tom Taulli in Forbes
At Microsoft’s Build conference this week, CEO Satya Nadella announced the acquisition of Softomotive, which is a top RPA (Robotic Process Automation) vendor. This technology allows for the automation of repetitive and tedious processes, such as with legacy IT systems.
Keep in mind that Microsoft recently launched a new version of its own platform, called Power Automate, and also led a venture round in an AI-based RPA company, FortressIQ (here’s a Forbes.com post I wrote about it).
So let’s get a quick background on Softomotive: Founded in 2005, the company was one of the pioneers of the RPA industry. It initially focused on developing a visual scripting system, using VBScript, for desktop automation. Softomotive would then go on to evolve the platform, such as by creating ProcessRobot for larger enterprises.
As of now, there are over 9,000 customers across the globe. Interestingly enough, Microsoft will make Softomotive’s WinAutomation application free for those who have an attended licence for Power Automate. ... "
Simulating Loaded Dice
Intriguing. We spent lots of time updating how we generated random numbers, sometimes laboriously checking internal random number generators. I can think of ways this could be used to generate numbers more clearly understandable to decision makers, since many real systems use numbers that are 'loaded' by context.
Algorithm quickly simulates a roll of loaded dice by Steve Nadis, Massachusetts Institute of Technology in TechExplore
A new algorithm, called the Fast Loaded Dice Roller (FLDR), simulates the roll of dice to produce random integers. The dice, in this case, could have any number of sides, and they are “loaded,” or weighted, to make some sides more likely to come up than others. Credit: Jose-Luis Olivares, MIT
The fast and efficient generation of random numbers has long been an important challenge. For centuries, games of chance have relied on the roll of a die, the flip of a coin, or the shuffling of cards to bring some randomness into the proceedings. In the second half of the 20th century, computers started taking over that role, for applications in cryptography, statistics, and artificial intelligence, as well as for various simulations—climatic, epidemiological, financial, and so forth.
MIT researchers have now developed a computer algorithm that might, at least for some tasks, churn out random numbers with the best combination of speed, accuracy, and low memory requirements available today. The algorithm, called the Fast Loaded Dice Roller (FLDR), was created by MIT graduate student Feras Saad, Research Scientist Cameron Freer, Professor Martin Rinard, and Principal Research Scientist Vikash Mansinghka, and it will be presented next week at the 23rd International Conference on Artificial Intelligence and Statistics.
Simply put, FLDR is a computer program that simulates the roll of dice to produce random integers. The dice can have any number of sides, and they are "loaded," or weighted, to make some sides more likely to come up than others. A loaded die can still yield random numbers—as one cannot predict in advance which side will turn up—but the randomness is constrained to meet a preset probability distribution. One might, for instance, use loaded dice to simulate the outcome of a baseball game; while the superior team is more likely to win, on a given day either team could end up on top.... "
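The article doesn't detail FLDR's internals, but the problem it solves can be sketched with a naive baseline: a minimal inverse-CDF sampler for a weighted die. FLDR's contribution is doing this with near-optimal speed, memory, and entropy use; the function and weights below are illustrative, not from the paper.

```python
import random
from collections import Counter

def roll_loaded_die(weights, rng=random):
    """Sample a side index from a weighted ('loaded') die via inverse CDF.

    This illustrates the problem FLDR addresses; FLDR itself uses a more
    efficient scheme based on a binary expansion of the weights.
    """
    total = sum(weights)
    r = rng.uniform(0, total)
    cumulative = 0.0
    for side, w in enumerate(weights):
        cumulative += w
        if r < cumulative:
            return side
    return len(weights) - 1  # guard against float rounding at the top edge

# A six-sided die loaded heavily toward side 5 (zero-indexed).
weights = [1, 1, 1, 1, 1, 5]
counts = Counter(roll_loaded_die(weights) for _ in range(10_000))
# Side 5 should come up roughly half the time, the rest about a tenth each.
```

The randomness is still genuine (no single roll is predictable), but the long-run frequencies follow the preset distribution, exactly the baseball-game situation described above.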
Thursday, May 28, 2020
AI Making Personality Distinctions via Images
Intriguing paper, but I have my doubts that you can determine useful personality distinctions this way.
Artificial intelligence can make personality judgments based on photographs
National Research University Higher School of Economics
Russian researchers from HSE University and Open University for the Humanities and Economics have demonstrated that artificial intelligence is able to infer people's personality from 'selfie' photographs better than human raters do. Conscientiousness emerged to be more easily recognizable than the other four traits. Personality predictions based on female faces appeared to be more reliable than those for male faces. The technology can be used to find the 'best matches' in customer service, dating or online tutoring.
The article, "Assessing the Big Five personality traits using real-life static facial images," will be published on May 22 in Scientific Reports.
Physiognomists from Ancient Greece to Cesare Lombroso have tried to link facial appearance to personality, but the majority of their ideas failed to withstand the scrutiny of modern science. The few established associations of specific facial features with personality traits, such as facial width-to-height ratio, are quite weak. Studies asking human raters to make personality judgments based on photographs have produced inconsistent results, suggesting that our judgments are too unreliable to be of any practical importance. ...
Kachur, A., Osin, E., Davydov, D., Shutilov, K., & Novokshonov, A. (2020). Assessing the Big Five personality traits using real-life static facial images. Scientific Reports. https://www.nature.com/articles/s41598-020-65358-6
Wearable Vitamin Sensor
Another wearable sensor example.
Wearable Sensor Tracks Vitamin C Levels in Sweat
UC San Diego News Center
By Alison Caldwell
A team of University of California, San Diego (UCSD) researchers developed a new wearable sensor that monitors vitamin C levels in perspiration, which could offer a highly personalized option for users to track daily nutritional consumption and dietary compliance. The wearable is an adhesive patch that includes a system to stimulate sweating, and an electrode sensor to rapidly detect vitamin C concentrations. The flexible electrodes contain the enzyme ascorbate oxidase, which converts vitamin C to dehydroascorbic acid; the resulting rise of oxygen triggers a current that the device measures. UCSD's Juliane Sempionatto said, "Ultimately, this sort of device would be valuable for supporting behavioral changes around diet and nutrition." ... '
Samsung Health on Newer TVs
Preloading TVs with software that addresses current conditions, like stay-at-home workplaces.
A good way to drive TV sales. No real mention of 'assistant' functions, and no mention of Samsung's Bixby. Note the potential of having cross-device function enabled in the system, a possible assistance application.
Samsung Health Now Available as a Comprehensive In-Home Fitness and Wellness Platform on 2020 Samsung Smart TVs
With free access to Samsung Health, 5,000 hours of content on the TV and over 250 instructive videos from barre3, Calm, Fitplan, Jillian Michaels Fitness, obé fitness, and Echelon available
Samsung Electronics announced today that its Samsung Health platform is now available on 2020 Samsung Smart TV models. Designed to revolutionize the concept of at-home workouts, Samsung Health is a user-centric wellness platform that goes beyond fitness. It is a companion that syncs across various digital devices – smartphones, wearables and now Samsung Smart TVs. With Samsung Health, users will be able to enjoy free premium classes, start new wellness routines and even get the whole household moving with family challenges and more – all from the comfort of home.
“The whole intention of Samsung Health is to motivate our consumers to live healthier lives by meeting them wherever they are, across Samsung platforms,” said Won-Jin Lee, Executive Vice President of Service Business at Samsung Electronics. “We knew that to do this, we needed to develop a user-centric and immersive platform that offered a variety of in-home fitness and wellness options. Given the current climate, we hope that the launch of Samsung Health makes it easier for our consumers to prioritize their physical and mental wellbeing on a daily basis.” ... '
Beware of Assumptions
Bob Herbold tells a good story about the danger of assumptions for leaders.
There is a powerful lesson for leaders here:
Beware of Assumptions – Regularly isolate key assumptions that are being used, constantly probe the basis for those assumptions, and experiment appropriately.
Also, thank goodness the America’s Cup team had an out-spoken sceptic in the crew. We all need those kinds of people on our team! .... "
Predictive Maintenance Driving 3D Printing
A long-term interest and application area. You could do a good job of prediction, but had to have a complex array of replacement parts in inventory. Here, for the right kind of application, the inventory could be minimized. Could the parts even be produced to address certain kinds of predicted degradation by altering the manufacturing design?
Army 3D Printing Study Shows Promise for Predictive Maintenance
U.S. Army Research Laboratory
May 19, 2020
A study by researchers at the U.S. Army's Combat Capabilities Development Command (CCDC) Army Research Laboratory (ARL), the National Institute of Standards and Technology, CCDC Aviation and Missile Center, and Johns Hopkins University detailed a method for monitoring the performance of three-dimensionally (3D)-printed parts. The technique uses sensors to detect and track the wear and tear of 3D-printed maraging steel (known to possess superior strength and toughness without losing ductility), to help forecast degradation or malfunctions that warrant replacement. ARL's Todd C. Henry said the study was as much about understanding the specific performance of a 3D-printed material as it was about understanding the ability to monitor and detect the performance and degradation of 3D-printed materials.
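As a rough sketch of the general idea (not the Army study's actual method), forecasting a replacement point from sensed wear can be as simple as fitting a trend line to the readings and extrapolating to an allowable limit. The function name, readings, and wear limit below are hypothetical.

```python
def forecast_replacement_cycle(cycles, wear_readings, wear_limit):
    """Fit a least-squares line to sensor wear readings and extrapolate
    the load-cycle count at which wear would cross the allowable limit."""
    n = len(cycles)
    mean_x = sum(cycles) / n
    mean_y = sum(wear_readings) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(cycles, wear_readings))
             / sum((x - mean_x) ** 2 for x in cycles))
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # no degradation trend detected
    return (wear_limit - intercept) / slope

# Hypothetical readings: wear grows roughly linearly with load cycles.
cycles = [0, 100, 200, 300, 400]
wear = [0.00, 0.11, 0.19, 0.32, 0.41]
predicted = forecast_replacement_cycle(cycles, wear, wear_limit=1.0)
```

Real predictive-maintenance models are far richer than a straight line, but the workflow is the same: continuous sensing, a degradation model, and a forecast horizon that triggers a replacement order before failure.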
Wednesday, May 27, 2020
Bot Activity During Coronavirus
Don't know quite what to make of this. How accurate is the machine learning model working to identify bots? Looking for the Carnegie Mellon piece supporting this to get an idea. Here is one CMU article which covers the research.
Researchers: Nearly Half Of Accounts Tweeting About Coronavirus Are Likely Bots By Bobby Allyn
Computer scientists at Carnegie Mellon University have determined that nearly half of all Twitter accounts spreading messages about the COVID-19 pandemic are likely bots. The team analyzed more than 200 million tweets discussing the virus since January, and found about 45% were sent by accounts that behave more like computerized bots than humans. In addition, the researchers identified more than 100 false narratives about the novel coronavirus that bot-controlled accounts are spreading on the platform. The researchers used a bot-hunter tool to flag accounts that post messages more often than is humanly possible, or which claim to be in multiple countries within a period of a few hours. Said Carnegie Mellon researcher Kathleen Carley, "We're seeing up to two times as much bot activity as we'd predicted based on previous natural disasters, crises, and elections." ... '
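As an illustration of one signal mentioned above, posting "more often than is humanly possible," here is a minimal sliding-window rate check. The threshold and function are hypothetical sketches, not CMU's actual bot-hunter tool, which combines many features.

```python
from datetime import datetime, timedelta

# Hypothetical cutoff for illustration; a real detector would learn
# thresholds from labeled data and combine many behavioral features.
MAX_HUMAN_TWEETS_PER_HOUR = 50

def looks_like_bot(timestamps):
    """Flag an account if any sliding one-hour window contains more
    tweets than a human could plausibly post."""
    ts = sorted(timestamps)
    window_start = 0
    for i, t in enumerate(ts):
        # Shrink the window until it spans at most one hour.
        while t - ts[window_start] > timedelta(hours=1):
            window_start += 1
        if i - window_start + 1 > MAX_HUMAN_TWEETS_PER_HOUR:
            return True
    return False

base = datetime(2020, 5, 1, 12, 0)
burst = [base + timedelta(seconds=30 * i) for i in range(120)]   # 120 tweets in an hour
steady = [base + timedelta(minutes=90 * i) for i in range(20)]   # one tweet every 90 minutes
```

A single heuristic like this is easy to evade, which is presumably why the researchers also check signals such as claimed locations that jump between countries within hours.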
Kubernetes Workflows
This presentation was just brought to my attention. Especially useful when maintenance will be required, which should always be built in for serious operations.
The Evolution of Distributed Systems on Kubernetes
Bilgin Ibryam takes us on a journey exploring Kubernetes primitives, design patterns and new workload types.
Bilgin Ibryam is a product manager and a former architect at Red Hat. In his day-to-day job, he works with customers of all sizes and locations, helping them to be successful with the adoption of emerging technologies through proven and repeatable patterns and practices. His current interests include enterprise blockchains, cloud-native data and serverless.
About the conference
Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in the ... "
Talk: The ACM Code of Ethics vs Snake Oil and Dodgy Development
And directly related to the last post, note the excellent archive of past talks linked to below.
Register Now: "Leveraging the ACM Code of Ethics Against Ethical Snake Oil and Dodgy Development"
Register now for the upcoming ACM TechTalk "Leveraging the ACM Code of Ethics Against Ethical Snake Oil and Dodgy Development," presented on Monday, June 8 at 12:00 PM ET/9:00 AM PT by Don Gotterbarn, Professor Emeritus at East Tennessee State University and Co-Chair, ACM Committee on Professional Ethics (COPE); and Marty Wolf, Professor at Bemidji State University; Co-Chair, ACM Committee on Professional Ethics (COPE). Keith Miller, Professor at the University of Missouri – Saint Louis, will moderate the questions and answers session following the talk. Continue the discussion on ACM's Discourse Page. You can view our entire archive of past ACM TechTalks on demand at https://learning.acm.org/techtalks-archive.
The Ethics of Dark Patterns
A look at 'Dark Patterns'; I had not heard the term. Linking to an ACM look at the ethics of the people who build such interfaces.
Dark Patterns: Past, Present, and Future in ACMQueue
The evolution of tricky user interfaces
Arvind Narayanan, Arunesh Mathur, Marshini Chetty, and Mihir Kshirsagar
Dark patterns are user interfaces that benefit an online service by leading users into making decisions they might not otherwise make. Some dark patterns deceive users while others covertly manipulate or coerce them into choices that are not in their best interests. A few egregious examples have led to public backlash recently: TurboTax hid its U.S. government-mandated free tax-file program for low-income users on its website to get them to use its paid program;9 Facebook asked users to enter phone numbers for two-factor authentication but then used those numbers to serve targeted ads;31 Match.com knowingly let scammers generate fake messages of interest in its online dating app to get users to sign up for its paid service.13 Many dark patterns have been adopted on a large scale across the web. Figure 1 shows a deceptive countdown timer dark pattern on JustFab. The advertised offer remains valid even after the timer expires. This pattern is a common tactic—a recent study found such deceptive countdown timers on 140 shopping websites. ... "
Touch Sensors
Touch sensors are advancing. In our own work we sought to understand and adjust how products felt via sensors.
OmniTact: A Multi-Directional High-Resolution Touch Sensor
Akhil Padmanabha and Frederik Ebert May 14, 2020
Touch has been shown to be important for dexterous manipulation in robotics. Recently, the GelSight sensor has caught significant interest for learning-based robotics due to its low cost and rich signal. For example, GelSight sensors have been used for learning inserting USB cables (Li et al, 2014), rolling a die (Tian et al. 2019) or grasping objects (Calandra et al. 2017).
The reason why learning-based methods work well with GelSight sensors is that they output high-resolution tactile images from which a variety of features such as object geometry, surface texture, normal and shear forces can be estimated that often prove critical to robotic control. The tactile images can be fed into standard CNN-based computer vision pipelines allowing the use of a variety of different learning-based techniques: In Calandra et al. 2017 a grasp-success classifier is trained on GelSight data collected in self-supervised manner, in Tian et al. 2019 Visual Foresight, a video-prediction-based control algorithm is used to make a robot roll a die purely based on tactile images, and in Lambeta et al. 2020 a model-based RL algorithm is applied to in-hand manipulation using GelSight images.
Unfortunately applying GelSight sensors in practical real-world scenarios is still challenging due to its large size and the fact that it is only sensitive on one side. Here we introduce a new, more compact tactile sensor design based on GelSight that allows for omnidirectional sensing, i.e. making the sensor sensitive on all sides like a human finger, and show how this opens up new possibilities for sensorimotor learning. We demonstrate this by teaching a robot to pick up electrical plugs and insert them purely based on tactile feedback. ... "
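The pipeline the authors describe (a tactile image in, convolutional features out, a learned decision on top) can be sketched in miniature. The single hand-rolled edge filter and the toy "grasp score" below are illustrative stand-ins, not the actual GelSight or OmniTact models.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution via sliding windows (illustrative, unoptimized)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def grasp_success_score(tactile_img):
    """Toy stand-in for a grasp-success classifier on a tactile image.

    An edge-detecting kernel responds to contact geometry pressed into the
    gel; strong responses read as 'object firmly in contact'. A real model
    stacks many learned filters and trained layers instead.
    """
    sobel_x = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], float)
    features = np.abs(conv2d(tactile_img, sobel_x))
    return features.max()

# Fake tactile frames: flat gel (no contact) vs. an edge pressed into the gel.
flat = np.zeros((16, 16))
contact = np.zeros((16, 16))
contact[:, 8:] = 1.0  # sharp edge across the sensing surface
```

The point is only that tactile images drop into ordinary vision machinery: the same convolution that finds edges in photos finds contact geometry in gel deformation images.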
Vizient and IBM Watson Health
Been following for some time how Watson is being embedded in real-world business processes and problem solving. Here is another example in healthcare. Note the replacement of some of IBM's performance improvement offerings.
Vizient Inc. Partners with IBM Watson Health to Help Support Healthcare Providers’ Performance Improvement Needs
IRVING, Texas (BUSINESS WIRE), May 26, 2020 - Vizient Inc., the largest member-driven healthcare performance improvement company in the United States, today announces it has entered into a partnership with IBM Watson Health. The partnership will allow each company to help better serve its customers in areas including performance benchmarking, strategic planning, and clinical and operational performance.
Through the partnership agreement, IBM Watson Health is withdrawing the following offerings from its portfolio and offering a transition path for its clients to Vizient’s analytics portfolio:
IBM ActionOI®: Vizient will support the ActionOI functionality through Vizient Operational Data Base (ODB) which provides operational benchmarking plus services for peer networking.
IBM CareDiscovery®: Vizient will offer Vizient Clinical Data Base (CDB) which provides clinical performance improvement plus OPPE reporting and new peer networking opportunities focused on operational and clinical performance improvement.
IBM Market Expert®: Vizient will offer Vizient Sg2 Market Edge, which integrates strategic growth intelligence with advanced analytics to enable full continuum strategic planning. ... "
Tuesday, May 26, 2020
Refraction AI Contactless Food Delivery
Here is an excerpt from Spectrum IEEE on robotic solutions, featuring Refraction AI, a contactless food delivery system. Plus a number of videos of related solutions; go to the link below to see them.
Video Friday: Robot Startup Refraction AI Testing Contactless Food Delivery By Evan Ackerman, Erico Guizzo and Fan Shi
Refraction AI, founded by University of Michigan researchers, has developed a three-wheeled autonomous vehicle called REV-1 that is providing delivery services from local restaurants in Ann Arbor.
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
ICRA 2020 – June 01, 2020 – [Virtual Conference]
RSS 2020 – July 12-16, 2020 – [Virtual Conference]
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
Let us know if you have suggestions for next week, and enjoy today’s videos.
Refraction AI, a University of Michigan startup that began delivering food in late 2019, says its pilot deployment of five “Rev-1” robots is doing four times as many runs since the COVID-19 crisis began. The small fleet of delivery robots helps keep employees and patrons safer by limiting human to human contact while also helping restaurants save money on delivery services due to the lower cost of Refraction AI’s service. .... "
Ideo Designer Designing Smarts to Compete with Apple
We worked with Ideo in some of our shopping labs and problem-solving spaces. So the emergence of a new smart speaker from an Ideo designer was of interest. Look forward to seeing what this looks like. Audio only, or an aim at the assistant-enabled markets of Amazon and Google, Samsung and Baidu too?
Ex-Apple designer to launch a product that competes with Apple itself
Christopher Stringer worked on many of Apple’s biggest hits. Now, he’s getting ready to release a new series of smart speakers that could compete with Apple HomePod and Sonos.
By Mark Wilson in Fastcompany
Christopher Stringer was a designer at Ideo, who helped create Dell’s hit ’90s design language, before he got the call from Jony Ive in 1995 to join Apple. Stringer went on to become a key figure in one of the most influential industrial design teams in history, launching dozens of products from PowerBooks to the iPhone.
In 2017, Stringer left Apple to build something new. According to a new report in the Financial Times, Stringer is now raising money to launch a product that will compete with Apple itself.
Stringer’s startup is called Syng. He cofounded it with Damon Way, who launched the tech protector brand Incase, and Afrooz Family, a master coder and former sound engineer at Apple who worked on the HomePod. Set up in Venice Beach, Los Angeles, Syng describes itself as “a future of sound company,” and it’s working on a new series of smart speakers, dubbed “Cell,” to rival the HomePod and Sonos, with the first of the line coming out in the fourth quarter of 2020. The Cell will undoubtedly be an impressive piece of machinery: Syng has lured designers from Apple, Nest, and Nike. .... "
Basic, Non-technical look at Quantum Computing
Looks to be a good, basic, non-technical introduction, and it also links to further free courses.
Quantum Computing for the Newb
Introductory Concepts
By Amelie Schreiber
In this article, we will give a basic introduction to our free course material for quantum computing. This is an introductory course for the absolute beginner. If you are not an expert in computer programming or mathematics, this course is for you! It introduces all of the basic math needed for quantum computing and the basic programming skills you need will also be introduced. Best of all, everything is in interactive, online, Jupyter notebooks. So there is no need to download anything or learn terminal. If you already feel comfortable with Python, then much of this will be a breeze! You can focus more on the concepts from quantum physics such as state vectors, quantum gates represented as matrices, and quantum circuit diagrams. To get to the material, follow this Github pages link. ... "
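The concepts the course names (state vectors, gates as matrices, the Born rule) reduce to small linear algebra. This numpy sketch applies a Hadamard gate to the |0⟩ state, something the linked notebooks would do with a quantum framework; here it is just a matrix-vector product.

```python
import numpy as np

# A qubit state is a length-2 complex vector; |0> = [1, 0].
ket0 = np.array([1, 0], dtype=complex)

# Quantum gates are unitary matrices; the Hadamard gate creates superposition.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0             # apply the gate: matrix-vector product
probs = np.abs(state) ** 2   # Born rule: measurement probabilities

print(probs.real)            # roughly [0.5, 0.5]: equal chance of 0 or 1
```

Applying H twice returns |0⟩, since the gate is its own inverse: that is the "gates are unitary matrices" idea in one line of algebra.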
Wal-Mart Grounding Jet.com
No new news, but some of the motivations provided are of interest. Would they have thought differently had they known the virus was coming? Will it fundamentally change how people buy?
Walmart to ground Jet.com
By Dan Berthiaume - 05/19/2020 in CSA
Walmart is winding down a digitally-native retailer it bought for $3 billion in 2016.
The discount giant will cease operating its Jet.com e-commerce subsidiary at an unspecified date. Walmart announced its intention to shutter Jet.com in a two-sentence statement in its first quarter earnings report: “Due to continued strength of the Walmart.com brand, the company will discontinue Jet.com,” Walmart said. “The acquisition of Jet.com nearly four years ago was critical to accelerating our omni strategy.”
On the company's quarterly earnings call with analysts, Walmart CEO Doug McMillon credited the acquisition of Jet.com as “jump-starting the progress we have made the last few years.” He cited the growth of Walmart’s curbside pickup, delivery to the home and expansion of categories beyond groceries, including apparel and home decor. McMillon also said "we're seeing the Walmart brand resonate regardless of income, geography or age." .... "
Google Letting You Buy with Voice
This has been possible directly with Amazon Alexa for a long time; I use it fairly often. But the claim here is that Google Assistant is much more secure, using a 'voiceprint'. Security like that might also enable secure business and healthcare transactions. Google Assistant already detects the language you are using and adapts the response.
Google is working on voice confirmation for purchases with Assistant
Buy stuff with your voice.
Rachel England, @rachel_england in Engadget
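Google has not published how the Assistant's purchase confirmation works. A common approach to voice matching is to compare speaker embeddings with cosine similarity against an enrolled voiceprint; the vectors and threshold below are invented for illustration.

```python
import numpy as np

MATCH_THRESHOLD = 0.85  # illustrative; real systems tune this against error rates

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def voice_matches(enrolled_embedding, utterance_embedding):
    """Confirm the purchase only if the utterance matches the enrolled voiceprint."""
    return cosine_similarity(enrolled_embedding, utterance_embedding) >= MATCH_THRESHOLD

# Toy embeddings standing in for a speaker-encoder's output vectors.
owner = np.array([0.9, 0.1, 0.4])
same_speaker = np.array([0.85, 0.15, 0.42])   # small variation, same voice
other_speaker = np.array([0.1, 0.9, 0.2])     # a different voice
```

In practice the embeddings come from a trained speaker-encoder network, and the threshold trades off false accepts (someone else buying) against false rejects (the owner being refused).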
Plasma Jets may Propel Aircraft
Some of the interesting details in the link below.
Plasma Jets May One Day Propel Aircraft
Plasma thrusters could help jet planes fly without fossil fuels
By Charles Q. Choi in Spectrum IEEE Energy
Jet planes may one day fly without fossil fuels by using plasma jets, new research from scientists in China suggests.
A variety of spacecraft, such as NASA’s Dawn space probe, generate plasma from gases such as xenon for propulsion. However, such thrusters exert only tiny propulsive forces, and so can find use only in outer space, in the absence of air friction.
Now researchers have created a prototype thruster capable of generating plasma jets with propulsive forces comparable to those from conventional jet engines, using only air and electricity.
An air compressor forces high-pressure air at a rate of 30 liters per minute into an ionization chamber in the device, which uses microwaves to convert this air stream into a plasma jet blasted out of a quartz tube. Plasma temperatures could exceed 1,000 °C. ... "
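A rough back-of-the-envelope, not from the paper: thrust from any jet is mass flow rate times exhaust velocity (F = ṁ·v), so the stated 30 L/min air feed bounds the absolute thrust once you assume an air density and an exhaust speed. Both assumed values below are for illustration only.

```python
# Back-of-the-envelope thrust estimate, F = mdot * v_exhaust.
# The density and exhaust-velocity values are assumptions for illustration,
# not figures from the researchers' paper.

AIR_DENSITY = 1.2      # kg/m^3, roughly room conditions
FLOW_L_PER_MIN = 30    # stated air feed into the ionization chamber

mdot = AIR_DENSITY * (FLOW_L_PER_MIN / 1000) / 60   # kg/s, about 0.6 g/s
v_exhaust = 500        # m/s, assumed plasma jet exhaust speed

thrust = mdot * v_exhaust  # newtons
print(f"mass flow ~{mdot * 1000:.2f} g/s, thrust ~{thrust:.2f} N")
```

At this small flow the absolute thrust is modest, which is why comparisons to conventional jet engines in such work are made per unit of device size rather than in raw newtons; scaling up means more air and more microwave power.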
Facebook's Blenderbot
Good indication of what Facebook has been doing in this space. From O'Reilly:
Facebook open-sources BlenderBot
Facebook AI has open-sourced BlenderBot, an open domain chatbot. It’s said to “feel more human,” because it blends a diverse set of conversational skills—including empathy, knowledge, and personality—together in one system. The model has 9.4 billion parameters, which is 3.6 times more than the largest existing system. Here’s the complete model, code, and evaluation setup.
Monday, May 25, 2020
Embedding Machine Learning into RPA Process
Something we did, but with BPM models and process flow. Makes sense because you can better understand the context involved. Process models, even simple visualizations, can help sell the model, get useful data, and promote the contextual design and value. Rules are understandable to decision makers, but algorithms usually are not.
Small ML is the next big leap in RPA
Instead of doing big ML projects, embed ML into your day-to-day RPA work and be amazed. By Eljas Linna in TowardsDataScience
The boom in robotic process automation (RPA) over the past few years has made it pretty clear that business processes in nearly every industry have an endless amount of bottlenecks to be resolved and efficiency improvements to be gained. Years before the full surge of RPA, McKinsey already estimated the annual impact of knowledge work automation to be around $6 trillion in 2025.
Having followed the evolution of RPA from python scripts towards generalized platforms, I’ve witnessed quite a transformation. The tools and libraries available in RPA have improved over time, each iteration widening the variety of processes that can be automated and improving the overall automation rates further. I believe the addition of machine learning (ML) in the everyday toolbox of RPA developers is the next huge leap in the scope and effectiveness of process automation. And I am not alone. But there’s a catch. It will look very different from what all the hype would lead you to believe.
Why even care about machine learning?
Imagine RPA without if-else logic or variables. You could only automate simple and completely static click-through processes. As we gradually add in some variables and logic, we can start automating more complex and impactful processes. The more complex the process you want to automate, the more logic rules you need to add, and the more edge cases you need to consider. The burden on the RPA developer’s rule system grows exponentially. See where I’m going with this? ... "
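The exponential-rules problem is easy to see in code: a hypothetical RPA document-routing step, first as the if-else version, then with a tiny learned scorer in its place. The categories, keywords, and weights are invented for illustration; a real deployment would use a trained classifier.

```python
# A hypothetical RPA step: route an incoming document to a work queue.

def route_by_rules(text):
    """The if-else version: every new edge case means another branch."""
    t = text.lower()
    if "invoice" in t and "overdue" not in t:
        return "accounts_payable"
    elif "overdue" in t or "final notice" in t:
        return "collections"
    elif "resume" in t or "cv" in t:
        return "recruiting"
    else:
        return "manual_review"   # the pile that keeps growing

# The "small ML" version: a keyword-weight scorer stands in for a real
# trained classifier. New behavior comes from new labeled data updating
# the weights, not from new branches in the automation script.
LEARNED_KEYWORDS = {
    "accounts_payable": {"invoice": 2.0, "payment": 1.0},
    "collections":      {"overdue": 2.0, "notice": 1.5},
    "recruiting":       {"resume": 2.0, "candidate": 1.0},
}

def route_by_model(text):
    t = text.lower()
    scores = {label: sum(w for kw, w in kws.items() if kw in t)
              for label, kws in LEARNED_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "manual_review"
```

The rule version and the model version plug into the same RPA flow; the difference is where the complexity lives as edge cases accumulate.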
Could You Run your Business as a Sim Game?
The overall idea here was pitched to us, especially the supply chain aspects of such a sim. Since we were already doing related analytics, you could see how such a system could mimic, diagnose, and propose solutions for the real world, in particular for unusual conditions. We thought there was a place to include gamification of the system's reaction to that. But considerable funding was required to make it work, and we were not ready for that, so we said no. Apparently, according to the article, it never got a real client. Note the health systems example; could that be useful today? Worth a thought.
Go read this incredible history of the SimCity studio’s forgotten business games division. Maxis Business Simulations was strange, ambitious, and doomed ....
By Adi Robertson @thedextriarchy ... in TheVerge. ...
Evolution of Distributed Systems on Kubernetes
Ultimately in delivery, workflow design is key; here is a presentation on the topic.
Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management ...
The Evolution of Distributed Systems on Kubernetes
Bilgin Ibryam takes us on a journey exploring Kubernetes primitives, design patterns and new workload types.
Bio
Bilgin Ibryam is a product manager and a former architect at Red Hat. In his day-to-day job, he works with customers of all sizes and locations, helping them to be successful with adoption of emerging technologies through proven and repeatable patterns and practices. His current interests include enterprise blockchains, cloud-native data and serverless.
About the conference
Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in the ... "
Useful Plans vs the Activity of Planning
Below is the intro to former IBMer Irving Wladawsky-Berger's article on planning vs. plans ... much more at the link. I especially like the distinctions between the different kinds of events you meet and how you seek cures for them. Resilience has to address all of them meaningfully; risk management is one key tool.
Even When Plans Are Useless, Planning Is Indispensable
“As scientists race to develop a cure for the coronavirus, businesses are trying to assess the impact of the outbreak on their own enterprises,” wrote MIT professor Yossi Sheffi in a February 18 article in the Wall Street Journal. “Just as scientists are confronting an unknown enemy, corporate executives are largely working blind because the coronavirus could cause supply-chain disruptions that are unlike anything we have seen in the past 70 years.”
Sheffi is Director of the MIT Center for Transportation and Logistics. He’s written extensively on the critical need for resilience in global enterprises and their supply chains, including The Power of Resilience and The Resilient Enterprise, so they can better react to major unexpected events. Covid-19 is the kind of massively disruptive event he had in mind when he wrote those books.
While learning from historical precedents is always a good idea, recent supply chain disruptions (the 2003 outbreak of SARS in Asia, the 2011 Fukushima nuclear disaster, and the 2011 Thailand floods) were very different from our current pandemic. Those events were much more localized, lasted a relatively short time, and mostly impacted supply, not demand. The impact of Covid-19 is much bigger, affecting consumer demand as well as supply chains all over the world, and likely to last quite a bit longer. “Today’s supply chains are global and more complex than they were in 2003,” with factories all over the world affected by lockdowns and quarantines. Apple, for example, works with suppliers in 43 countries. ... " (much more below in the article)
AI is Watching you Work
Expect much more of this in the future, starting with aggregated group data, but likely soon at the individual level. It can then be readily matched to goal achievements and predicted over time.
AI Is Watching You Work, With Mixed Results
As AI monitors more workers, in call centers and on the manufacturing floor, the technology is challenged to deliver empathy for humans.
By AI Trends Staff
Advances in AI and sensors are providing new ways to digitize manual labor, giving managers new insights and potentially new leverage on employees
Many jobs in manufacturing require a dexterity and creativity that robots and software are unlikely to match any time soon. However, manufacturing jobs are likely to change based on how it seems best to work with the AI going forward.
An experiment has been going on at an auto parts manufacturing plant in Battle Creek, Mich. since 2017 to capture worker movements all day long in the hopes of identifying bottlenecks in production. The plant of Denso, the global auto parts manufacturer, has been piping video into machine learning software from startup Drishti, according to a recent account in Wired.
“In the past, we would take a line that was struggling and bring a bunch of people down with stopwatches to try to make it better,” stated Tony Huffman, a production supervisor at the plant. The Drishti system logs the “cycle time” for every worker all day, for every shift. Plant managers analyze the data to look for sometimes subtle bottlenecks. “Everything flows better and is smoother,” Huffman stated. ... "
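The replacement for those stopwatches is essentially continuous cycle-time logging plus analysis. A minimal illustration of the analysis half might look like the sketch below; the station names and timings are invented, and Drishti's real system derives these numbers from video rather than hand-entered logs.

```python
# Illustrative only: log cycle times per station, then look for the
# bottleneck. Numbers and station names are invented for the example.
from statistics import mean

def find_bottleneck(cycle_logs: dict) -> str:
    """Return the station with the highest average cycle time (seconds)."""
    return max(cycle_logs, key=lambda station: mean(cycle_logs[station]))

logs = {
    "station_1": [41.0, 39.5, 40.2],
    "station_2": [55.3, 58.1, 60.0],   # consistently slower: the bottleneck
    "station_3": [42.7, 44.0, 41.8],
}
```

With data for every worker on every shift, the same comparison can be run per hour or per shift to surface the "sometimes subtle" bottlenecks the article mentions.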
Free Fundamentals of Machine Learning Book
Free 185-page PDF: A Comprehensive Guide to Machine Learning. Technical, but parts are useful generally.
Free Book: A Comprehensive Guide to Machine Learning (Berkeley University) via Capri Granville in DSC
By Soroush Nasiriany, Garrett Thomas, William Wang, Alex Yang. Department of Electrical Engineering and Computer Sciences, University of California, Berkeley. Dated June 24, 2019. This is not the same book as The Math of Machine Learning, also published by the same department at Berkeley, in 2018, and also authored by Garret Thomas....
Sunday, May 24, 2020
COVID-19 Resources from IEEE
Some good free resources, news highlights and more from an electrical engineering perspective.
YOUR IEEE RESOURCES
As we weather the COVID-19 pandemic together, check here for updates about IEEE members developing technologies to fight the virus, the resources available to you from across IEEE, coping strategies from engineers around the world, and opportunities to get involved in the fight. ... "
IEEE Spectrum is the flagship magazine and website of the IEEE, the world’s largest professional organization devoted to engineering and the applied sciences. Our charter is to keep over 400,000 members informed about major trends and developments in technology, engineering, and science. Our blogs, podcasts, news and features stories, videos and interactive infographics engage our visitors with clear explanations about emerging concepts and developments with details they can’t get elsewhere.
IEEE Spectrum touches our members on every platform, whether they are reading the print editions, coming to the site directly on their desktop, tablet or smartphone, through email newsletters or our digital facsimile edition, or following us via social networks like Facebook, Twitter and LinkedIn. ... '
Microsoft LearnTV: Python Coding
I was just asked about a simple intro to Python. There are many out there, but I noticed that the just-introduced Microsoft LearnTV has a video on it. See:
A good intro, and also an example of LearnTV.
Take your first steps with Python
4 hr 33 min
Learning Path
Interested in learning a programming language but aren't sure where to start? Start here! Learn the basic syntax and thought processes required to build simple applications using Python.
In this learning path, you'll:
Write your first lines of Python code
Store and manipulate data to modify its type and appearance
Execute built-in functionality available from libraries of code
Add logic to your code to enable complex business functionality
Once you complete this learning path, you will have a great foundation to build upon in subsequent Python Learning Paths. ... "
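For a taste of what the learning path covers, here is a tiny example I put together touching each of its listed steps; it is my own illustration, not material from the course itself.

```python
# A tiny end-to-end example touching each step in the learning path above.

# 1. First lines of Python code
greeting = "Hello, Python"

# 2. Store and manipulate data: change its type and appearance
price_text = "19.99"
price = float(price_text)          # str -> float
label = f"Total: ${price:.2f}"     # back to a formatted string

# 3. Execute built-in functionality from a library
import math
shelf_space = math.ceil(price / 5)  # round up to whole units

# 4. Add logic for business-style behavior
if price > 10:
    category = "premium"
else:
    category = "standard"
```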
Neuroimaging Data with Varying Results
We examined fMRI for possible uses in neuromarketing applications, and we also found such variation in analysis results. Standards in the analysis approach are important. This is another general caution about the results you achieve, even with very large databases.
Neuroimaging Results Altered by Varying Analysis Pipelines
Nature
A survey of neuroimaging studies found that nearly every study used a different analysis pipeline, and the analytical choices of individual researchers significantly impacted findings gleaned from a functional magnetic resonance imaging (fMRI) dataset. The team provided the same dataset to 70 independent research groups and asked them to test nine hypotheses, each of which asserted that activity in a specific brain region correlated with a specific task feature. There were considerable variations between each team's results, even when their underlying maps were highly correlated. The finding highlights the potential consequences of a lack of standardized pipelines for processing complex data. ... "
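The pipeline-dependence the study reports can be shown with a toy demonstration (all numbers invented): identical "activation" data can pass or fail a significance threshold depending on a single preprocessing choice, here the smoothing window.

```python
# Toy demonstration (invented numbers): the same signal can pass or fail a
# threshold depending on preprocessing choices, echoing how 70 pipelines
# reached different conclusions on one fMRI dataset.
def smooth(values, window):
    """Simple moving average with the given window size."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - window // 2)
        hi = min(len(values), i + window // 2 + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def region_active(signal, window, threshold):
    """One 'pipeline': smooth, then test whether the peak clears a threshold."""
    return max(smooth(signal, window)) > threshold

signal = [0.2, 0.3, 1.6, 0.4, 0.2]   # one sharp spike

pipeline_a = region_active(signal, window=1, threshold=1.0)  # no smoothing
pipeline_b = region_active(signal, window=5, threshold=1.0)  # heavy smoothing
```

Pipeline A reports the region as active; pipeline B, on the same data, does not. Multiply this by dozens of such choices and the study's variation across 70 teams is less surprising.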
Robots and Humor
A long-term thread: the value of humor to learn, instruct, and collaborate.
Comedy Club Performances Provide Insights on How Robots, Humans Connect via Humor
Oregon State University News
Steve Lundeberg
May 18, 2020
Two studies by Oregon State University (OSU) researchers evaluated a robot comedian's performance at comedy clubs to gather data to enable more effective robot-human interplay through humor. Human comics helped develop material used by John the Robot in 22 performances in Los Angeles and 10 in Oregon. The Los Angeles study concluded that audiences found a robot comic with good timing to be much funnier than one without good timing. The Oregon study found that an "adaptive performance"—delivering post-joke "tags" that acknowledge an audience's response—was not necessarily funnier overall, but nearly always improved audience perception of individual jokes. OSU's Naomi Fitter said the research has implications for artificial intelligence projects to understand group responses to entertaining social robots in the real world. ... "
Microsoft Learn TV Announced
At its remote Build conference last week, Microsoft announced free TV programming on technical topics, including some content from Build. The programming will continue to expand in the coming weeks. I will mention some examples that I find particularly interesting.
Microsoft Learn TV (preview)
Learn how to build solutions and use Microsoft products from the experts that built them! Learn TV is the place to find the latest digital content so you can always keep updated on the latest announcements, features and products from Microsoft. Let us know what you think with the hashtag LearnTV. ... '
Saturday, May 23, 2020
Microsoft Project Cortex
Brought to my attention from this week's Build meetings as about to be launched. A form of knowledge management; I had been shown a very early version of this. The notion of a semantic Web has been around for a long time, though it is not used often enough. I will be following this.
Project Cortex
Today, we’re pleased to introduce Project Cortex, the first new service in Microsoft 365 since the launch of Microsoft Teams. Project Cortex uses advanced AI to deliver insights and expertise in the apps you use every day, to harness collective knowledge and to empower people and teams to learn, upskill and innovate faster.
Project Cortex uses AI to reason over content across teams and systems, recognizing content types, extracting important information, and automatically organizing content into shared topics like projects, products, processes and customers. Cortex then creates a knowledge network based on relationships among topics, content, and people.
New topic pages and knowledge centers—created and updated by AI—enable experts to curate and share knowledge with wiki-like simplicity. And topic cards deliver knowledge just-in-time to people in Outlook, Microsoft Teams, and Office. ... "
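The "knowledge network" idea, topics linked to each other and to people through shared content, can be sketched in a few lines. This is purely my own hypothetical illustration of the concept; it does not reflect Microsoft's actual implementation, and the document and topic names are invented.

```python
# Hypothetical sketch of a "knowledge network": documents mention topics,
# people author documents, and topics become connected when they co-occur.
from collections import defaultdict
from itertools import combinations

def build_topic_network(documents):
    """documents: list of (author, {topics}) pairs.
    Returns (topic -> related topics) and (topic -> experts) maps."""
    related = defaultdict(set)
    experts = defaultdict(set)
    for author, topics in documents:
        for topic in topics:
            experts[topic].add(author)
        for a, b in combinations(sorted(topics), 2):
            related[a].add(b)
            related[b].add(a)
    return related, experts

docs = [
    ("ana", {"Project Alpha", "Supplier X"}),
    ("ben", {"Project Alpha", "Onboarding"}),
]
related, experts = build_topic_network(docs)
```

From a structure like this, a topic page for "Project Alpha" could list both its related topics and the people most associated with it, which is roughly what the announcement describes topic cards doing.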
Cyborg Eye Powered by Sunlight
An implantable sensor? Privacy considerations?
Human-like Cyborg Eye Could Power Itself Using Sunlight
New Scientist
Donna Lu
May 20, 2020
Researchers from the Hong Kong University of Science and Technology (HKUST) and the University of California, Berkeley, have developed a spherical visual sensor that mimics the structure of the human eye. The sensor contains a lens to focus light and a hemispherical retina filled with densely packed light-sensitive nanowires made from perovskite, which is commonly used in solar cells and could enable the eye to be self-powering, said HKUST's Zhiyong Fan, although the current version of the eye requires an outside power source. When images of letters were projected onto the artificial lens to test how well it worked, a computer hooked up to the eye successfully recognized the letters E, I, and Y. The artificial eye’s low image resolution compared with commercial sensors limits its utility, but existing visual prosthetic devices use a flat object for image sensing, said Fan, limiting the possible field of view compared with a human eye. .... "
Supply Chain Priorities with COVID
A look at consumer brands and priorities in the pandemic.
Supply Chain Storytelling: Freeman Previews CBA's New Priorities Moving Through COVID
By Lisa Johnston - 05/19/2020 in Consumergoods
Geoff Freeman, CBA president and CEO
The role of federal manufacturing regulations for consumer goods companies received a fresh look this spring when multiple meat processing plants were forced to shut down because of the spread of COVID-19.
In a call with CGT, Consumer Brands Association (CBA) president and CEO Geoff Freeman shared insight into the work the CBA is undertaking to implement such uniform standards, and previewed a set of new priorities the association is preparing to move forward through the health crisis.
An association founded upon a principle of driving uniform standards, the CBA has been working with a broad coalition of consumer goods stakeholders during the pandemic, as well as frequently engaging with the White House Coronavirus Task Force, the Centers for Disease Control and Prevention (CDC), the Occupational Safety and Health Administration (OSHA) and the Food and Drug Administration. ... '
“I'm pleased to say that in recent discussions there has been more and more acknowledgment from government leaders on the value of clear and specific guidance,” Freeman said. “I think at the outset of this — and let’s face it, this was a new challenge for just about everybody — there was a fear that if we give clear and specific guidance, we're somehow violating states’ rights, and states’ rights is an obvious principle that our country was founded upon.” ... '
IFTTT and Brands in the Smart Home
A recording of yesterday's webinar.
... How smart home companies decide which integrations to prioritize, why successful smart home companies optimize their integration strategies for scalability, and the different integration strategies of 3 innovative smart home companies.
Successful smart home companies like iRobot, LIFX, and Ring understand the importance of third-party integrations for driving product adoption and engagement. But in order to do so, smart home companies must navigate the overwhelming number of apps and devices to determine which integrations matter most to their customers and their business.
Join IFTTT Founder and CEO, Linden Tibbets, as he reviews the integration strategies of 3 smart home companies to understand how integrations help them scale their products in the smart home and beyond.
Linden_Tibbets, IFTTT Founder and CEO
Linden Tibbets is Founder and CEO of IFTTT, the essential integration and discovery platform for businesses to continuously connect, grow and maximize the value of their customers. ... '
How to Learn without Supervision
Ultimately technical, but note the similarity to the alternate ways we learn: by rote memorization, and by continuous repetition in a contextual background, confirmed or not by the context. Each is useful for different things. The latter is more common, but also suited to different kinds of tasks.
Unsupervised Meta-Learning: Learning to Learn without Supervision
By Benjamin Eysenbach and Abhishek Gupta May 1, 2020
This post is cross-listed on the CMU ML blog.
The history of machine learning has largely been a story of increasing abstraction. In the dawn of ML, researchers spent considerable effort engineering features. As deep learning gained popularity, researchers then shifted towards tuning the update rules and learning rates for their optimizers. Recent research in meta-learning has climbed one level of abstraction higher: many researchers now spend their days manually constructing task distributions, from which they can automatically learn good optimizers. What might be the next rung on this ladder? In this post we introduce theory and algorithms for unsupervised meta-learning, where machine learning algorithms themselves propose their own task distributions. Unsupervised meta-learning further reduces the amount of human supervision required to solve tasks, potentially inserting a new rung on this ladder of abstraction.
We start by discussing how machine learning algorithms use human supervision to find patterns and extract knowledge from observed data. The most common machine learning setting is regression, where a human provides labels Y for a set of examples X. The aim is to return a predictor that correctly assigns labels to novel examples. Another common machine learning problem setting is reinforcement learning (RL), where an agent takes actions in an environment. In RL, humans indicate the desired behavior through a reward function that the agent seeks to maximize. To draw a crude analogy to regression, the environment dynamics are the examples X, and the reward function gives the labels Y. Algorithms for regression and RL employ many tools, including tabular methods (e.g., value iteration), linear methods (e.g., linear regression) kernel-methods (e.g., RBF-SVMs), and deep neural networks. Broadly, we call these algorithms learning procedures: processes that take as input a dataset (examples with labels, or transitions with rewards) and output a function that performs well (achieves high accuracy or large reward) on the dataset. ... "
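The post's notion of a "learning procedure", a process that takes a dataset in and returns a well-performing function, has a minimal concrete instance in ordinary least-squares regression. The sketch below is my own illustration of that input/output shape, not code from the post.

```python
# A minimal "learning procedure" in the post's sense: it takes a dataset
# (examples X with labels Y) and returns a function that predicts labels
# for novel examples. Here: one-dimensional least-squares regression.
def fit_linear(xs, ys):
    """Learning procedure: dataset in, predictor out."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return lambda x: slope * x + intercept   # the learned function

# Fit on points lying on y = 2x + 1, then predict a novel example.
predict = fit_linear([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

Meta-learning, as the post goes on to describe, operates one level up: instead of hand-picking a procedure like `fit_linear`, it learns the procedure itself from a distribution of tasks.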
Unsupervised Meta-Learning: Learning to Learn without Supervision
By Benjamin Eysenbach and Abhishek Gupta May 1, 2020
This post is cross-listed on the CMU ML blog.
The history of machine learning has largely been a story of increasing abstraction. In the dawn of ML, researchers spent considerable effort engineering features. As deep learning gained popularity, researchers then shifted towards tuning the update rules and learning rates for their optimizers. Recent research in meta-learning has climbed one level of abstraction higher: many researchers now spend their days manually constructing task distributions, from which they can automatically learn good optimizers. What might be the next rung on this ladder? In this post we introduce theory and algorithms for unsupervised meta-learning, where machine learning algorithms themselves propose their own task distributions. Unsupervised meta-learning further reduces the amount of human supervision required to solve tasks, potentially inserting a new rung on this ladder of abstraction.
We start by discussing how machine learning algorithms use human supervision to find patterns and extract knowledge from observed data. The most common machine learning setting is regression, where a human provides labels Y for a set of examples X. The aim is to return a predictor that correctly assigns labels to novel examples. Another common machine learning problem setting is reinforcement learning (RL), where an agent takes actions in an environment. In RL, humans indicate the desired behavior through a reward function that the agent seeks to maximize. To draw a crude analogy to regression, the environment dynamics are the examples X, and the reward function gives the labels Y. Algorithms for regression and RL employ many tools, including tabular methods (e.g., value iteration), linear methods (e.g., linear regression), kernel methods (e.g., RBF-SVMs), and deep neural networks. Broadly, we call these algorithms learning procedures: processes that take as input a dataset (examples with labels, or transitions with rewards) and output a function that performs well (achieves high accuracy or large reward) on the dataset. ... "
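The post's notion of a "learning procedure" (dataset in, predictor out) can be made concrete with a toy sketch. The closed-form 1-D least-squares fit below is my own minimal illustration of the idea, not code from the post:

```python
# A "learning procedure" in the sense described above: a process that takes a
# dataset (examples X with labels Y) and returns a predictor that performs
# well on it. This toy version is 1-D least-squares regression in pure Python.

def learning_procedure(X, Y):
    """Fit y = w*x + b by minimizing squared error; return the predictor."""
    n = len(X)
    mean_x = sum(X) / n
    mean_y = sum(Y) / n
    # Closed-form least-squares solution for slope and intercept.
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(X, Y)) / \
        sum((x - mean_x) ** 2 for x in X)
    b = mean_y - w * mean_x
    return lambda x: w * x + b

# The returned function should assign sensible labels to novel examples.
predict = learning_procedure([0, 1, 2, 3], [1, 3, 5, 7])  # underlying rule: y = 2x + 1
print(predict(10))  # → 21.0
```

Swapping in a different model family (kernel machine, neural network) or a different supervision signal (rewards instead of labels) changes the internals, but the input/output contract of the procedure stays the same.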
Friday, May 22, 2020
A Virtual Innovation Challenge
My former employer puts on a challenge:
P&G Ventures Announces Procter & Gamble's First-Ever Virtual Innovation Challenge
CINCINNATI, May 21, 2020 /PRNewswire/ -- P&G Ventures, the early-stage startup studio within P&G (NYSE:PG), is inviting entrepreneurs, inventors, and startups to submit a product pitch for its first-ever Virtual Innovation Challenge. The challenge, now in its third iteration, seeks out budding entrepreneurs who are driving the next generation of technologies that will change consumers' lives, and helps fuel innovation amidst the global pandemic. Three finalists will be selected to pitch online .... "
Another Take on the Search for Quantum Supremacy
This has come up a few times since last year, with claims from major tech companies. IEEE Spectrum looks at the supremacy claim again; intro below, more at the link. How do we measure real results on real problems of value?
How Many Qubits Are Needed For Quantum Supremacy?
Whether Google achieved quantum supremacy depends on perspective
By Charles Q. Choi in IEEE Spectrum
Quantum computers theoretically can prove more powerful than any supercomputer, and now scientists calculate just what quantum computers need to attain such "quantum supremacy," and whether or not Google achieved it with their claims last year.
Whereas classical computers switch transistors either on or off to symbolize data as ones and zeroes, quantum computers use quantum bits or qubits that, because of the bizarre nature of quantum physics, can be in a state of superposition where they are both 1 and 0 simultaneously.
Superposition lets one qubit perform two calculations at once, and if two qubits are linked through a quantum effect known as entanglement, they can help perform 2², or four, calculations simultaneously; three qubits, 2³, or eight; and so on. In principle, a quantum computer with 300 qubits could perform more calculations in an instant than there are atoms in the visible universe.
It remains controversial how many qubits are needed to achieve quantum supremacy over standard computers. Last year, Google claimed to achieve quantum supremacy with just 53 qubits, performing a calculation in 200 seconds that the company estimated would take the world's most powerful supercomputer 10,000 years, but IBM researchers argued in a blog post “that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity.” ... '
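The scaling claim in the excerpt is easy to sanity-check. The quick computation below compares 2^300 against a common rough estimate of ~10^80 atoms in the visible universe (the 10^80 figure is a standard ballpark, not from the article):

```python
# n entangled qubits span 2**n basis states. Check the article's scaling claim.
n_states_53 = 2 ** 53        # qubit count in Google's supremacy experiment
n_states_300 = 2 ** 300      # the "more than atoms in the universe" claim
atoms_estimate = 10 ** 80    # common rough estimate, assumed here

print(n_states_53)                    # → 9007199254740992 (~9e15 amplitudes)
print(n_states_300 > atoms_estimate)  # → True
```

The ~9e15 amplitudes for 53 qubits also hint at why classical simulation of that experiment is hard but, as IBM argued, not strictly impossible: it sits near the edge of what supercomputer memory and clever tensor contractions can handle.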
USAF Looks at Blockchain Database for Data Share
A novel approach, with implications for supply chain applications as well as other sensor-based data inputs.
U.S. Air Force to pilot blockchain-based database for data sharing
The blockchain ledger will enable data ingest from legacy data systems and wikis, as well as a native web-application build platform
By Lucas Mearian Senior Reporter, Computerworld
The U.S. Air Force (USAF) is planning to test a blockchain-based graph database that will allow it to share documents internally as well as throughout the various branches of the Department of Defense and allied governments.
The permissioned blockchain ledger comes from a small Winston-Salem, N.C. start-up, Fluree PBC, which announced the government contract this week. Fluree is working with Air Force’s Small Business Innovation Research AFWERX technology innovation program to launch a proof of concept of the distributed ledger technology (DLT) later this year.
The ledger could include intelligence gathered during military operations and supply chain parts tracking. ... "
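The article doesn't detail Fluree's data model, but the core property a permissioned ledger provides (tamper-evident, append-only records) can be sketched with a simple hash chain. Everything below, from the class name to the record fields, is my own illustration, not Fluree's design:

```python
import hashlib
import json

# Minimal sketch of a tamper-evident ledger: an append-only log where each
# record commits to the previous one via a hash, so altering any past entry
# breaks the chain and is detected on verification.

class Ledger:
    def __init__(self):
        self.blocks = []

    def append(self, record):
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        # Canonical serialization so the hash is deterministic.
        payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
        block_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.blocks.append({"record": record, "prev": prev_hash, "hash": block_hash})

    def verify(self):
        # Recompute every hash from the current contents; any tampering breaks it.
        for i, block in enumerate(self.blocks):
            prev = self.blocks[i - 1]["hash"] if i else "0" * 64
            payload = json.dumps({"record": block["record"], "prev": prev}, sort_keys=True)
            if block["prev"] != prev or \
               hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
                return False
        return True

ledger = Ledger()
ledger.append({"part": "turbine-blade", "status": "shipped"})
ledger.append({"part": "turbine-blade", "status": "received"})
print(ledger.verify())                          # → True
ledger.blocks[0]["record"]["status"] = "lost"   # tamper with history
print(ledger.verify())                          # → False
```

A permissioned system like the one described adds access control and consensus among known parties on top of this basic integrity mechanism, which is what makes it suitable for sharing supply chain or operational data across organizations.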
Deloitte Resilience Podcast
Overall have liked the Deloitte writings on corporate strategic responses and risk analysis.
Resilient podcast: How businesses can confront the COVID-19 crisis
Actionable insights to help businesses respond and recover
In this special edition Resilient series, we shift our focus to the evolving COVID-19 crisis. From supply chain disruptions and economic scenarios to remote working challenges and crisis response strategies, these episodes feature actionable insights from leaders to help you think through what to do now—and next.
Recovering losses: A business perspective on expediting financial recovery
As the economic impact of the coronavirus pandemic continues to grow, many business leaders may be trying to understand the magnitude of their organizations’ financial losses. But attempting to quantify incurred loss during an ongoing and evolving crisis is nearly impossible—only after it’s over can the full business impact be determined. What steps can companies take now to calculate and recover losses sustained from the COVID-19 crisis? This episode of Resilient features Katie Pavlovsky, principal with Deloitte Risk & Financial Advisory, and Vincent Morgan, partner, Insurance Recovery and Advisory at Pillsbury Winthrop Shaw Pittman LLP. Together they explore what company executives may be able to do to better understand loss exposure and recovery coverage in the near term, and how they might tap into potential relief. They also discuss the importance of cross-functional collaboration and data-driven analyses when evaluating the impact. And in a world where little is predictable, they share insights on how business leaders may be able to identify and plan for ongoing risk related to the crisis.
Business leaders in this episode:
Vincent Morgan, partner, Pillsbury Winthrop Shaw Pittman LLP
Katie Pavlovsky, principal, Deloitte Risk & Financial Advisory
----
Economic stimulus: Exploring the impact of phase three and beyond
The US government continues to execute on its multi-phased plan for providing economic relief to the many individuals and organizations impacted by COVID-19. Its most recent legislation (phase 3.5) earmarks an additional $484B for the Paycheck Protection Program, hospitals, and COVID-19 testing. In this episode, three Deloitte leaders share insights on the likely collective impact of the government relief packages to date, as well as the business and economic implications of the most recently enacted legislation. They also look ahead to the next wave of stimulus (phase 4.0), exploring how this developing legislation may help spur economic recovery for businesses and consumers. As the US Congress considers potential tax policy changes and funding for infrastructure enhancements to support telehealth, telework, tele-education, rural broadband, and more, we discuss how business leaders might consider the possible opportunities and risks related to these various sources of aid. ... "
Better Bionic Eye
New imaging capabilities to improve eyesight.
New Bionic Eye Might See Better Than We Do
By Joel Hruska in Extremetech
The ability to restore sight to the blind is one of the most profound acts of healing medicine can achieve, in terms of the impact on the affected patient’s life — and one of the most difficult for modern medicine to achieve. We can restore vision in a limited number of scenarios and there are some early bionic eyes on the market that can restore limited vision in very specific scenarios. Researchers may have taken a dramatic step towards changing that in the future, with the results of a new experiment to design a bionic retina.
The research team in question has published a paper in Nature describing the construction of a hemispherical retina built out of high-density nanowires. The spherical shape of the retina has historically been a major challenge for biomimetic devices.... "
Thursday, May 21, 2020
IBM Invests in Blockchain Network
More indications of the seriousness of the technology
IBM Takes 7% in Trade-Finance Blockchain Network We.Trade
IBM has become a shareholder in we.trade, the blockchain-based trade-finance platform jointly owned by 12 European banks.
Ciaran McGowan, we.trade’s CEO, said the deepening relationship with Big Blue will help the platform in its next phase of global expansion.
“Now we’ve got a very strong partnership with IBM for scaling globally, and we are working closely together on Asia, Africa and Latin America,” McGowan said.
We.trade has the distinction of being the first enterprise blockchain consortium to go live, which happened back in early 2018. The platform was formed by a group of banks to help European small and medium-sized enterprises (SMEs) get better access to trade finance. IBM has been the project’s technology partner from inception... "
How Significant is Climate on Coronavirus Transmission?
Considerable depth in this piece about the topic.
The Trillion-Dollar Question With COVID-19
How significant is climate on coronavirus transmission?
by Luke Shors and Michael Ferrari
Right now, India is completing more than a month of national lockdown and moving toward some loosening of restrictions in less-infected areas. The implications of stopping the movement and activity of 1.3 billion people would be hard to overstate. Ironically, one of the many side effects of the policy has been the migration of millions of poor people who lacked the resources to shelter in place for the duration of the lockdown. Indian Prime Minister Narendra Modi faced a terrible choice: lock down the country with massive economic and social repercussions, or leave the country open and allow an outbreak of perhaps unparalleled scale in the history of the world.... "
Albertsons to Launch Virtual Assistant
Details of its use and depth will be of interest.
So far, Albertsons has implemented the Nuance virtual assistant and live chat at its Vons supermarkets in Southern California.
Russell Redman
With online grocery sales booming during the coronavirus pandemic, Albertsons Cos. is stepping up customer service with digital assistance technology from Nuance Communications.
Albertsons plans to deploy virtual assistant and live chat solutions from the Nuance Intelligent Engagement Platform to provide real-time support to customers shopping for their groceries online or via the retailer’s mobile apps, Burlington, Mass.-based Nuance said yesterday.
The artificial intelligence (AI)-powered application helps customers through their shopping experience and answers questions on how the delivery service works, item availability, online order tracking status, and store locations and hours, among other inquiries .... "
Surge of Secondary Brands
Speculation as to how surging demand for new and secondary brands will affect the market and the supply chain, and how it can be better managed with digital strategies.
As secondary brands surge, CPG leaders need a digital playbook
Margo Kahnrose in Smartbrief
In the mid ‘80s, Cabbage Patch Kids were all the rage. I wanted one, badly. But to my mother, those dolls symbolized everything detestable about the mega-consumerism rampant in the ‘60s and ‘70s, epitomizing the superficiality her generation had willfully rebelled against. She got me a knock-off -- a cheaper, also-cute, unbranded alternative. I was unimpressed. A fake held no social currency.
Until recently, generics stood for one thing only: price. As a result, a knockoff in any category may have done the job well enough, but there was a reluctance around them. They were a last resort.
Fast-forward to today, and alternatives to name brand goods have been steadily on the rise in all ways: proliferation, quality, adoption, even cool-factor. As of the last three months, when the COVID-19 global health pandemic began driving unprecedented e-commerce demand -- and simultaneous supply chain constraints -- generics have seen an even bigger boost, and not only in fast-moving “essentials” categories like toilet paper, but in a vast array of consumer goods, including beauty products, exercise equipment, consumer electronics and home furnishings. The reason comes down to a perfect storm of counterintuitively intersecting influences -- more impetus than ever to shop, and less money than before to spend.... " ... '
GE Bores Like an Earthworm
Novel kind of robotic task and environment, example of biomimicry.
GE’s soft robot bores holes like a giant earthworm by Brian Heater in TechCrunch
A giant earthworm robot was not on the list of things I expected to see when I logged in this morning. But it’s here, and I’m here for it. Designed by a team at GE Research, the robot in question nabbed a $2.5 million award as part of DARPA’s Underminer. The program was created to foster rapid tunnel digging in military environments. As is all the rage in robotics these days, the GE team turned to biological inspiration to execute the task. What they came up with is a large, segmented and soft ..."
Wednesday, May 20, 2020
Geometry Points to Drug Target Candidates
The fact that geometry and math are indicative of targets is intriguing. It points perhaps to a similarity to protein-folding problems, with huge combinatorics.
Geometry Points to Coronavirus Drug Target Candidates
Scientific American
Michael Dhar
A study by Robert Penner at France’s Institute of Advanced Scientific Studies incorporates a mathematical model that predicts protein sites on viruses that might be particularly susceptible to disabling treatments, which has been used to identify potential drug and vaccine targets in the fight against the coronavirus that causes COVID-19. The technique exploits the fact that certain viral proteins change shape when viruses penetrate cells; Penner mathematically localized exotic sites that mediate this shift. John Yin, who studies viruses at the University of Wisconsin–Madison, said much work remains to verify the study’s predictions through experimentation. Said Arndt Benecke, a biological researcher at the French National Center for Scientific Research, “This could go far beyond the viruses.” ... "
Can AI Tell Moral Right from Wrong?
As suggested, moral choices have much to do with context, and are hard to extract from words alone. I like that changes over time are being considered. Time is often the most important metadata, and it implies the trajectories of the choices being made. We did something similar when seeking to understand the future of marketing messages in context.
Scientists claim they can teach AI to judge ‘right’ from ‘wrong’
The moral machine shows how moral values change over time
Story By Thomas Macaulay in NextWeb
Scientists claim they can “teach” an AI moral reasoning by training it to extract ideas of right and wrong from texts.
Researchers from Darmstadt University of Technology (DUT) in Germany fed their model books, news, and religious literature so it could learn the associations between different words and sentences. After training the system, they say it adopted the values of the texts.
As the team put it in their research paper:
The resulting model, called the Moral Choice Machine (MCM), calculates the bias score on a sentence level using embeddings of the Universal Sentence Encoder since the moral value of an action to be taken depends on its context.
This allows the system to understand contextual information by analyzing entire sentences rather than specific words. As a result, the AI could work out that it was objectionable to kill living beings, but fine to just kill time. ... "
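The sentence-level idea in the excerpt can be illustrated with a deliberately crude sketch. The real Moral Choice Machine uses Universal Sentence Encoder embeddings; below, the "embedding" is just a bag-of-words vector over a tiny hand-picked vocabulary, and the anchor phrases and resulting scores are all illustrative assumptions, not the paper's method:

```python
import math

# Toy sketch of a sentence-level bias score: embed a whole action phrase and
# compare it against "acceptable" vs "objectionable" anchors, so context words
# (not just the verb) determine the score.

VOCAB = ["kill", "people", "time", "waste", "harm"]  # tiny illustrative vocabulary

def embed(sentence):
    """Crude bag-of-words embedding: word counts over VOCAB."""
    words = sentence.lower().split()
    return [words.count(w) for w in VOCAB]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def bias_score(action):
    # Positive: closer to the acceptable anchor; negative: closer to the
    # objectionable one. Anchors are assumptions chosen for the demo.
    good = embed("waste time")
    bad = embed("harm people")
    v = embed(action)
    return cosine(v, good) - cosine(v, bad)

# Both phrases contain "kill", but context flips the sign of the score.
print(bias_score("kill time"))    # → 0.5 (leans acceptable)
print(bias_score("kill people"))  # → -0.5 (leans objectionable)
```

The point of the sketch is only the mechanism: because the whole phrase is embedded, "kill time" and "kill people" land in different places even though word-level scoring would treat them identically. A real system's judgment quality depends entirely on the embeddings and the training texts.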