Good piece in O'Reilly on this question, largely non-technical and with useful thoughts. Passing it on ...
Why Best-of-Breed is a Better Choice than All-in-One Platforms for Data Science
All-in-one platforms built from open source software make it easy to perform certain workflows, but make it hard to explore and grow beyond those boundaries.
By Matthew Rocklin and Hugo Bowne-Anderson, in O'Reilly
Do you buy a solution from a big integration company like IBM, Cloudera, or Amazon? Do you engage many small startups, each focused on one part of the problem? A little of both? We see trends shifting towards focused best-of-breed platforms. That is, products that are laser-focused on one aspect of the data science and machine learning workflows, in contrast to all-in-one platforms that attempt to solve the entire space of data workflows.
This article, which examines this shift in more depth, is an opinionated result of countless conversations with data scientists about their needs in modern data science workflows. ... "
(Much more at the link)
Monday, August 31, 2020
Doorbell Cameras Used by Criminals too
I have recently been asked about security issues with doorbell and other smart-home camera systems. The following excerpt from The Verge shows a perhaps less-than-obvious use by criminals too: they could record, or be alerted, whenever the FBI shows up at the door. An embedded link in the article points to details. A notable alternative view for security concerns in the smart home. More in the full Verge article pointed to.
FBI worried Ring and other doorbell cameras could tip owners off to police searches
Cameras capture everybody — even the cops
By Adi Robertson (@thedextriarchy) in The Verge (excerpt)
.... Federal Bureau of Investigation documents warned that owners of Amazon’s Ring and similar video doorbells can use the systems — which collect video footage sometimes used to investigate crimes — in order to watch police instead.
The Intercept spotted the files in the BlueLeaks data trove aggregated from law enforcement agencies. One 2019 analysis describes numerous ways police and the FBI could use Ring surveillance footage, but it also cites “new challenges” involving sensor- and camera-equipped smart home devices. Specifically, they can offer an early warning when officers are approaching a house to search it; give away officer locations in a standoff; or let the owner capture pictures of law enforcement, “presenting a risk to their present and future safety.”
These are partly hypothetical concerns. The standoff issue, for instance, was noted in a report about motion-activated panoramic cameras. But the FBI points to a 2017 incident where agents approached the home of someone with a video doorbell, seeking to search the premises. The resident wasn’t home but saw them approach by watching a remote video feed, then preemptively contacted his neighbor and landlord about the FBI’s approach. He may also have “been able to covertly monitor law enforcement activity” with the camera. .... "
Amazon Prime Air Approved for Trials
Promised some time ago; now FAA-approved. In which parts of the supply chain, specifically, is this expected to be useful? And is it still some years away?
Amazon's Prime Air can officially begin drone delivery trials in the US
The FAA granted Amazon 'air carrier' status.
Christine Fisher, @cfisherwrites in Engadget
As of today, Amazon is officially an “air carrier.” The Federal Aviation Administration (FAA) granted Amazon Prime Air the designation, which allows Amazon to begin its first commercial delivery trials in the US, Bloomberg reports. The company will use the hexagon-shaped next-gen hybrid drone it showed off last year.
Amazon has not revealed when or where it will begin its commercial delivery trials, but as Bloomberg points out, it does have test sites in the Northwest and in the nearby Vancouver area. Amazon has also tested drones in the UK. Still, we’re probably a few years away from a commercial drone delivery service. In part because the FAA still needs to define regulations beyond the trial phase. ... '
Ikea Tries Virtual Influence, Happiness and More
So what is this? An assistant? An influencer in what sense? Just to engage us? An android? To do what, for whom? Assemble furniture or prepare meatballs? It explores 'happiness at home'. There is a video at the link, but we are warned that not much happens. Ikea's innovation always gets my attention, at least.
Ikea turned a virtual influencer into a physical installation, in The Verge
Imma stopped by a store in Harajuku
By Andrew Webster in The Verge
This past weekend, the Ikea Harajuku location in Tokyo was home to something unique: an installation starring a virtual person. The retail giant partnered with Imma, a virtual influencer, in order to explore the concept of “happiness at home.” Over three days, those passing by the first floor could peer into Imma’s living room, watching as she lounged on a couch, mindlessly browsing her phone. Meanwhile, a view into her bedroom was streamed on a screen on the second floor, viewable from Harajuku Station.
Of course, these weren’t real living spaces, since Imma is a CG model and not an actual human. But Ikea says that it created the installation using LED screens inside of the physical rooms, which were “curated” by Imma, to give the appearance of Imma being in a real place. You can watch a recap of the lengthy event below, though be warned not much happens. ... "
Knowledge Graphs vs Property Graphs
Robert Coyne writes, and points to a number of TopQuadrant's resources on this topic:
Thank you for attending our webinar: "Knowledge Graphs vs. Property Graphs -- a brief overview and comparison". We hope you enjoyed the event. ....
The recording and slides from the webinar are available here: https://www.topquadrant.com/knowledge-graphs-vs-property-graphs-a-brief-overview-and-comparison/. We have also linked a blog post with the questions and answers from attendees here: https://www.topquadrant.com/graphs-overview-and-comparison/
Also, as mentioned in the webinar, you can find the "Property Graphs vs. Knowledge Graphs" white paper here: https://www.topquadrant.com/knowledge-assets/whitepapers/
Should you have any follow-up questions, or would like to explore all of the capabilities of TopBraid EDG, see http://www.topquadrant.com/products/topbraid-enterprise-data-governance/ ....
Quantum Accelerated Computation as Disruption
Some bits of thought on the topic. A new form of high-performance computing (HPC)? Ultimately, what are the limits of the capabilities here, versus more standard supercomputing?
Is Quantum-Accelerated Computation the Next Big Disruption? by 7wData
Many people are looking to quantum computing as the next revolutionary technology. Nature analyzed that in 2017 and 2018 alone, more than $450 million of private funding was poured into the quantum industry. Even the classical finance community starts to smell an opportunity. Xavier Rolet, the former CEO of the London Stock Exchange and well-respected industry veteran, told The Quantum Daily that he considers such investments a solid bet on the future and believes in the transformational change of quantum computers.
Not least, the exciting topic made its way to a more mainstream audience. Even the tabloids have been writing extensively, and with very catchy headlines, about a Nature article published in 2019. Researchers at Google announced that they achieved what is called quantum supremacy. On their quantum processor, named Sycamore, they ran in 200 seconds calculations that they claim would have taken the world's most powerful (classical) supercomputer 10,000 years. It must be added that the setup was very specific, and the results are heavily debated by competitor IBM. But certainly, expectations for the field have started to skyrocket.
As smart and quirky physicists move towards the field of quantum computation, build hyped startups and get huge funding, it is very interesting to follow this space. Will we have the chance to see disruptive innovation live and in action? ... (More with sign in to 7wData ) ... '
Sunday, August 30, 2020
More Seeing Around the Corner
More on the topic, mentioned a number of times here; a seemingly impossible idea, but very useful for driving applications to anticipate and avoid problems. Though I have no application for this idea, the metaphor of getting data from sensors that can see beyond normal barriers is interesting to consider. Sensing.
Engineering Researchers Develop Camera System to See Around Corners
UCLA Samueli Newsroom
July 14, 2020
Researchers at the University of California, Los Angeles (UCLA) and the Nara Institute of Science and Technology in Japan have developed a camera that can see around corners using the same phenomenon one observes when looking through polarized sunglasses. The researchers set in front of the camera lens a polarizer that only allows certain oscillation states to reach the lens. They also developed a novel algorithm that rearranges the polarization of light off a wall to show objects hidden around a corner. Said UCLA's Achuta Kadambi, "If the technology can be successfully applied to cameras enabling them to see around corners, it could help autonomous cars avoid accidents at blind spots, or allow biomedical engineers to create endoscopes that can help doctors see around organs." ... '
Exoskeletons Advancing
The exoskeleton idea is fascinating ... already in use in automotive. Anywhere that needs improved physical strength and manipulation. Military another area very likely.
U.S. Marines to Get 'Alpha' Exoskeleton for Super Strength
ZDNet
Greg Nichols
July 28, 2020
The U.S. Marine Corps will test use cases of Guardian XO Alpha, a wearable robotic exoskeleton from the defense-focused subsidiary of Sarcos Robotics. The powered exoskeleton, developed for industrial use, aims to give users enhanced strength. Its immediate applications will be in logistics, particularly heavy lifting. Said Sarcos Defense's Ben Wolff, "Our military branches need to regularly address changing personnel issues and reduce the risk of injury from performing heavy-lifting tasks. We believe that our full-body, powered exoskeletons will be a huge benefit to the Marines, as well as the U.S. Air Force, U.S. Navy, and [the U.S. Army Special Operations Command], who we are also working with on our exoskeleton technology." ... '
Surveillance Capitalism
I was just pointed to Shoshana Zuboff's latest book, The Age of Surveillance Capitalism:
Our company was interviewed for her 1988 book, In the Age of the Smart Machine: The Future of Work and Power, as we ramped up our AI efforts. It had a similar take, but came before tech companies had become powerful, and it bought into much of the hype of those times about how powerful AI was. Although we attempted to push back on that, we were not heard. Looking at that book now, it is a weak take on the reality of the times.
A quick scan of reviews of the latest book suggests, as the title does, that Google and other tech giants are taking over capitalism via surveillance, to the detriment of all. But that power is now available to many more players than it was then. If I read it, I will talk about it some more here.
Content Factories Emerge
We have many means of converting data and hypotheses into useful, testable directions.
The Data-Driven Tech Engine at the Heart of Hollywood's Content Factories
The Wall Street Journal
Christopher Mims
July 11, 2020
A new breed of film-industry technology companies are developing data-driven tools to help producers generate and refine content in the hope of capturing and retaining audiences. One such firm is audience-research software startup Pilotly, which streams content to people in their homes. Bryon Schafer at music-video distributor Vevo said this capability ensures a larger test audience size, and enables more granular queries to audiences by creatives and marketers. Other systems like MarketCast passively collect information as audiences watch content, gathering biometric data like facial expressions to measure responses. Pilotly's James Norman said the industry relies on a continuous feedback loop of recommendation algorithms, viewing-habit trackers, and studio production teams to sustain audience interest by keeping content fresh; Pilotly replicates those loops across a wide array of audiences by applying the expertise of the company's diverse workforce. ... "
Saturday, August 29, 2020
Connecting Computers to the Brain: Update NeuraLink
Had heard of this some time ago; apparently this is an update. Still skeptical as to what the word 'connecting' means here. We can read brain signals readily, but can we control things that way? Looking forward to seeing more. No comments from the pigs. I suggest some caution before you volunteer.
Elon Musk is one step closer to connecting a computer to your brain
Neuralink has demonstrated a prototype of its brain-machine interface that currently works in pigs.
By Rebecca Heilweil in Vox
At a Friday event, Elon Musk revealed more details about his mysterious neuroscience company Neuralink and its plans to connect computers to human brains. While the development of this futuristic-sounding tech is still in its early stages, the presentation was expected to demonstrate the second version of a small, robotic device that inserts tiny electrode threads through the skull and into the brain. Musk said ahead of the event he would “show neurons firing in real-time. The matrix in the matrix.”
And he did just that. At the event, Musk showed off several pigs that had prototypes of the neural links implanted in their head, and machinery that was tracking those pigs’ brain activity in real time. The billionaire also announced that the Food and Drug Administration had awarded the company a breakthrough device authorization, which can help expedite research on a medical device.
Like building underground car tunnels and sending private rockets to Mars, this Musk-backed endeavor is incredibly ambitious, but Neuralink builds on years of research into brain-machine interfaces. A brain-machine interface is technology that allows for a device, like a computer, to interact and communicate with a brain. Neuralink, in particular, aims to build an incredibly powerful brain-machine interface, a device with the power to handle lots of data that can be inserted in a relatively simple surgery. Its short-term goal is to build a device that can help people with specific health conditions.
The actual status of Neuralink’s research has been somewhat murky, and Friday’s big announcement happened as ex-employees complain of internal chaos at the company. Musk has already said the project allowed a monkey to control a computer device with its mind, and as the New York Times reported in 2019, Neuralink had demonstrated a system with 1,500 electrodes connected to a lab rat. Since then, Musk has hinted at the company’s progress (at times on Twitter), though those involved have generally been close-lipped about the status of the research. .... "
Considering the AI Factory
Nicely covered by Irving:
The AI Factory: A New Kind of Digital Operating Model
“Whether you’re leading a digital start-up or working to revamp a traditional enterprise, it’s essential to understand the revolutionary impact AI has on operations, strategy, and competition,” wrote Harvard professors Marco Iansiti and Karim Lakhani in “Competing in the Age of AI”, a recently published article in the Harvard Business Review (HBR). Earlier this year, they also published a book of the same title, which expands on the ideas in the article and illustrates them with a number of concrete use cases.
The age of AI is being ushered in by the emergence of a new kind of digital firm. Rather than just relying on traditional business processes operated by their workers, these firms are leveraging software and data-driven algorithms to eliminate traditional constraints and transform the rules of competition. Managers and engineers are responsible for the design of the new AI-based operational systems, but the system then runs the operations pretty much on its own.
“At the core of the new firm is a decision factory - what we call the AI factory,” note the authors. “[T]he AI factory treats decision-making as a science. Analytics systematically convert internal and external data into predictions, insights, and choices, which in turn guide and automate operational workflows… As digital networks and algorithms are woven into the fabric of firms, industries begin to function differently and the lines between them blur.”
The Industrial Revolution transformed the economy by developing a scalable, repeatable approach to manufacturing. The AI factory is now driving another fundamental transformation by industrializing the data gathering, decision making, and overall digital operations of 21st century firms.
The AI factory involves four key components:
Data pipeline - the process that systematically gathers, cleans, integrates, and safeguards data;
Algorithm development - the component that generates predictions about the future states of the business and drives its most critical operating activities;
Experimentation platform - the mechanism on which predictions are tested to ensure that they will have the intended effect; and
Software infrastructure - the systems that embed these various components in software and connect them as needed to the appropriate internal and external users. .....
(Much more at the link) Will look at this more closely.
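To make the four components concrete, here is a minimal sketch of how they might be wired together. All of the names and the trivial 'model' below are my own illustration, not anything from the article or the book:

```python
# A toy AI-factory loop: pipeline -> algorithm -> experiment -> deploy.
# Everything here is illustrative; real systems are far more elaborate.
import random

def data_pipeline(raw_records):
    """Gather and clean: drop records with missing fields."""
    return [r for r in raw_records if r.get("spend") is not None]

def develop_algorithm(history):
    """Fit a trivial 'model': predict spend as the historical mean."""
    mean = sum(r["spend"] for r in history) / len(history)
    return lambda record: mean

def experiment(model, holdout):
    """Test the prediction against a naive zero baseline."""
    err_model = sum(abs(model(r) - r["spend"]) for r in holdout)
    err_naive = sum(abs(r["spend"]) for r in holdout)
    return err_model < err_naive

def software_infrastructure(model, live_records):
    """Embed the model in the operational workflow."""
    return [(r, model(r)) for r in live_records]

random.seed(0)
records = [{"spend": random.uniform(10, 100)} for _ in range(100)]
clean = data_pipeline(records)
model = develop_algorithm(clean[:80])
if experiment(model, clean[80:]):
    decisions = software_infrastructure(model, clean[80:])
    print(f"Deployed; first automated decision: {decisions[0][1]:.2f}")
```

The point of the sketch is the shape, not the model: decisions flow out of a loop of data, prediction, and testing rather than out of a manual process.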
Honeywell Talks Future of Quantum Computing
Honeywell gives a non-technical (but necessarily incomplete) view of the state and future of quantum computing, and they are apparently bullish on it. Note the mention of their work with Microsoft on the topic.
The Future of Quantum Computing
The three things you will want to know about the technology
It's time — quantum computing is going from something that's theoretical to practical. And it's on its way to having real impact.
A new partnership between Microsoft’s Azure Quantum and Honeywell offers another way for organizations across the globe to be introduced to quantum computing.
“The era of quantum computing is just beginning, and we are looking forward to bringing that capability to a broader audience,” said Tony Uttley, president of Honeywell Quantum Solutions.
Here are three to know:
1. What’s a ‘qubit’ and how it works
Computers traditionally use bits to process information. But quantum computing depends on bits that have properties of quantum physics – called qubits.
Traditional computing bits are either “0” or “1,” but qubits can be in both states at the same time, a quantum property called superposition.
Another quantum property, called entanglement, allows for qubits to be quantum mechanically connected to other qubits in the system.
As a result, quantum computers leverage entanglement and superposition to solve previously impossible computational problems. ... (More at link) .... '
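The superposition and entanglement described above can be illustrated with a few lines of plain linear algebra. This is a generic state-vector toy of mine, not Honeywell's or Microsoft's tooling:

```python
# Toy state-vector illustration of superposition and entanglement.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Superposition: a Hadamard gate puts |0> into (|0> + |1>)/sqrt(2),
# so a measurement yields 0 or 1 with equal probability.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
plus = H @ ket0
print("P(0), P(1) =", np.abs(plus) ** 2)          # [0.5, 0.5]

# Entanglement: Hadamard on the first qubit, then CNOT, gives the
# Bell state (|00> + |11>)/sqrt(2) -- the two outcomes are perfectly
# correlated, never |01> or |10>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(plus, ket0)
print("P(00), P(01), P(10), P(11) =", np.abs(bell) ** 2)  # [0.5, 0, 0, 0.5]
```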
Training Autonomous Drones
More advances in the use of data to train autonomous drones in many contexts. Likely to see many more autonomous drone applications. Moving the sensors to the data.
CMU Researchers Train Autonomous Drones Using Cross-Modal Simulated Data
Carnegie Mellon University
Virginia Alvino Young
August 25, 2020
Researchers at Carnegie Mellon University (CMU) developed a two-step approach to teaching autonomous drones perception and action, providing a safe way to deploy drones trained entirely on simulated data into real-world course navigation. In the first step, the researchers used a photorealistic simulator to train the drone on image perception by creating an environment including the drone, a soccer field, and elevated red square gates positioned randomly to create a track. Thousands of randomly generated drone and gate configurations were used to create a large dataset employed in the second step to teach the drone perception of positions and orientations in space. Said CMU's Rogerio Bonatti, "The robot is not learning to recreate going through any specific track. Rather, by strategically directing the simulated drone, it's learning all of the elements and types of movements to race autonomously." .... '
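As a rough illustration of the first step, randomized configuration generation might look something like the sketch below. This is my own toy geometry, not CMU's code; their pipeline renders a photorealistic image for each configuration, where I keep only the geometric label:

```python
# Illustrative sketch: random drone/gate configurations and the
# relative-pose labels a perception model would learn to predict.
import math
import random

def random_configuration(field_size=50.0):
    """Place a gate and a drone at random on a square 'soccer field'."""
    gate = {"x": random.uniform(0, field_size),
            "y": random.uniform(0, field_size),
            "yaw": random.uniform(-math.pi, math.pi)}
    drone = {"x": random.uniform(0, field_size),
             "y": random.uniform(0, field_size)}
    return drone, gate

def relative_pose(drone, gate):
    """Label: gate range, bearing, and orientation from the drone."""
    dx, dy = gate["x"] - drone["x"], gate["y"] - drone["y"]
    return {"range": math.hypot(dx, dy),
            "bearing": math.atan2(dy, dx),
            "gate_yaw": gate["yaw"]}

random.seed(42)
dataset = []
for _ in range(10_000):          # "thousands of configurations"
    drone, gate = random_configuration()
    dataset.append(relative_pose(drone, gate))
print(dataset[0])
```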
Friday, August 28, 2020
Amazon's Smart Carts Launch in LA
Update on Amazon's work with the 'smart cart,' which I have mentioned many times in this space. Note also Alexas as assistants, a new integration. Could an Alexa assistant provide enough help? Will this get other grocery players to join in?
Amazon Launches Grocery Store with 'Smart' Shopping Carts, Alexa Guides
The Washington Post
Hamza Shaban
August 27, 2020
Amazon on Thursday opened its first Fresh grocery store in the Woodland Hills neighborhood of Los Angeles, featuring no checkout lines, smart shopping carts, and Alexa virtual assistant-powered guides. Shoppers sign into the Amazon app and put items in a sensor-equipped Dash Cart that identifies each item as it is added and enables customers to use a dedicated checkout lane to pay for those groceries. The store also is integrated with Alexa and Alexa shopping lists, while Echo Show devices can help shoppers navigate the outlet. Unlike the smaller Amazon Go markets, Fresh customers can choose between smart or traditional shopping carts, pay at a checkout lane with cashiers if they prefer, and ask Alexa guides for help. Forrester Research's Sucharita Kodali said, "Obviously, they thought that building something from scratch would be better than to try to retrofit a Whole Foods store. .... "
Simulating Mask Effectiveness
Was always surprised this was not better understood. Had for years done simulation of complex systems using high-performance computing methods; why not for mask applications? Too many possible variants in initial conditions? Here is something related, done with supercomputing. Could this kind of simulation be improved with quantum computing?
Do Cloth Masks Work? Supercomputer Fugaku Says Yes
Nikkei Asian Review
Yuki Misumi
Japan's Riken Institute said the Fugaku supercomputer, recently crowned the world's fastest, developed a model that showed nonwoven fabric masks block virus-laden respiratory droplets more effectively than cotton or polyester masks (although all three types were deemed effective at slowing the spread of the coronavirus). The system simulated the performance of the three types of fabric masks in blocking the spray of virus-carrying respiratory droplets from coughing by the wearer, demonstrating that all three types stopped at least about 80% of spray. The team also simulated a virus spreading through a 2,000-seat auditorium, and found little danger of proliferation if visitors are masked and sitting spaced apart.... '
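The Fugaku work used serious fluid dynamics, but the basic layered-filtration intuition can be caricatured with a tiny Monte Carlo. The per-layer capture probabilities below are invented for illustration, only loosely tuned to the roughly 80% figure quoted above:

```python
# Back-of-the-envelope Monte Carlo sketch of layered droplet capture.
# Per-layer probabilities are assumptions, not measured values.
import random

CAPTURE_PER_LAYER = {"nonwoven": 0.65, "cotton": 0.56, "polyester": 0.55}

def blocked_fraction(fabric, layers=2, droplets=100_000):
    """Fraction of droplets stopped by `layers` of the given fabric."""
    p = CAPTURE_PER_LAYER[fabric]
    blocked = 0
    for _ in range(droplets):
        # A droplet escapes only if it slips past every layer.
        if any(random.random() < p for _ in range(layers)):
            blocked += 1
    return blocked / droplets

random.seed(1)
for fabric in CAPTURE_PER_LAYER:
    print(f"{fabric:9s}: ~{blocked_fraction(fabric):.0%} of droplets blocked")
```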
Faster Testing
Improving a sensor to be better, faster, or cheaper was one way we did quick classification of possibilities. Also worth thinking about is the risk attached to the application.
For Quick Coronavirus Testing, Israel Turns to Clever Algorithm
The New York Times
David M. Halbfinger
August 21, 2020
Israeli scientists have created a coronavirus testing procedure that they claim is faster and more efficient than any currently in use, testing samples in pools of up to 48 people simultaneously. The Pooling-Based Efficient SARS-CoV-2 Testing (P-Best) method requires only a single round of testing, based on an algorithm created by the Open University of Israel's Noam Shental. The P-Best algorithm optimizes pool design based on the expected prevalence of the virus, enabling all positive individuals in a batch to be localized, provided the total number of positives does not sharply surpass the expected number. Although the technique is less effective the higher a community's positivity rate is, it offers dramatically greater efficiency when rates are lower. P-Best accurately screened 1,115 healthcare workers with just 144 tests. ... "
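For intuition, here is a simplified one-round pooling scheme in the spirit of group testing. It uses random pool assignments, whereas P-Best uses a carefully constructed combinatorial design, so treat this only as a sketch:

```python
# Simplified single-round pooled testing (NOT the published P-Best design).
import random

def make_pools(n_samples, n_pools, pools_per_sample):
    """Assign each sample to a few pools at random."""
    return {s: random.sample(range(n_pools), pools_per_sample)
            for s in range(n_samples)}

def run_tests(assignment, infected, n_pools):
    """A pool tests positive if it contains any infected sample."""
    positive = [False] * n_pools
    for s in infected:
        for p in assignment[s]:
            positive[p] = True
    return positive

def decode(assignment, positive):
    """Flag samples whose pools are all positive (may include a few
    false positives when prevalence climbs, as the article notes)."""
    return [s for s, pools in assignment.items()
            if all(positive[p] for p in pools)]

random.seed(7)
n_samples, n_pools = 384, 48
assignment = make_pools(n_samples, n_pools, pools_per_sample=6)
infected = set(random.sample(range(n_samples), 3))   # ~1% prevalence
positive = run_tests(assignment, infected, n_pools)
print(f"true positives {sorted(infected)}, decoded {decode(assignment, positive)}")
```

With low prevalence, 48 pooled tests localize the positives among hundreds of samples in one round, which is the efficiency the article describes.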
Robotic Smartphone Cases
Clever idea, but for what purpose? It made me think. To test some proposed interaction with people or things?
CaseCrawler Adds Tiny Robotic Legs to Your Phone
A phone case with legs is the accessory your life has been missing
By Evan Ackerman ... "
Pandemic Will Change Uncertainty
We have never looked at uncertainty enough, but has the pandemic made us more sensitive to it? Also, how much risk is associated with the uncertainty? Good piece:
An Agent of Change
A look into the Covid-19 pandemic's influence on how we think, spend, and manage our businesses.
By Q Ethan McCallum and Mike Loukides in O'Reilly
The Covid-19 pandemic has changed how people and businesses spend and operate. Over the coming pages we’ll explore ways in which our current world is already very different from the one we knew just a few months ago, as well as predictions of our “new normal” once the proverbial boat stops rocking. Specifically, we’ll see this through the lens of decision-making: how has Covid-19 changed the way we think? And what does this mean for our purchase patterns and business models?
Welcome to Uncertainty
You’re used to a certain level of uncertainty in your life, sure. But the pandemic has quickly turned up the uncertainty on even basic planning.
Your dishwasher, piano, or clothes dryer is making an odd sound. Do you proactively call a repair service to check it out? Your ounce of prevention will also cost you two weeks’ wondering whether the repair technician was an asymptomatic carrier. If you hold off, you’re placing a bet that the appliance lasts long enough for treatment to become widely available, because you certainly don’t want it to break down just as infection rates spike.
Stresses on a system reveal that some of our constants were really variables in disguise. “I can always leave my house.” “I can get to the gym on Friday.” “If I don’t go grocery shopping tonight, I can always do it tomorrow. It’s not like they’ll run out of food.” These weren’t exactly bold statements in January. But by March, many cities’ shelter-in-place orders had turned those periods into question marks. Even as cities are starting to relax those restrictions, there’s the worry that they may suddenly return as the virus continues to spread.
As this reality sets in, some of us are even weighing what we call “acceptance purchases”: items which show that we’re in this for the long haul. Your gym isn’t closed, but it’s as good as closed since the city can quickly order it to shut down if local case counts climb again. So maybe it’s time to buy that fancy exercise bike. And ride-hailing services were appealing until using them increased your exposure to the virus. Maybe now you’ll buy that car you sometimes think about? You had considered downsizing your home, but you’ll appreciate the extra space if you’re spending more time indoors. .... "
Thursday, August 27, 2020
Valuing Spatio-Temporal Information
The valuation of data is a long-time interest; I have consulted with large companies on the topic, lately about automotive connections. Technical detail on the topic at the link.
Computing Value of Spatio-temporal Information
By Heba Aly, John Krumm, Gireeja Ranade, Eric Horvitz
Communications of the ACM, September 2020, Vol. 63 No. 9, Pages 85-92
10.1145/3410387
Location data from mobile devices is a sensitive yet valuable commodity for location-based services and advertising. We investigate the intrinsic value of location data in the context of strong privacy, where location information is only available from end users via purchase. We present an algorithm to compute the expected value of location data from a user, without access to the specific coordinates of the location data point. We use decision-theoretic techniques to provide a principled way for a potential buyer to make purchasing decisions about private user location data. We illustrate our approach in three scenarios: the delivery of targeted ads specific to a user's home location, the estimation of traffic speed, and the prediction of location. In all three cases, the methodology leads to quantifiably better purchasing decisions than competing approaches.
1. Introduction
As people carry and interact with their connected devices, they create spatiotemporal data that can be harnessed by them and others to generate a variety of insights. Proposals have been made for creating markets for personal data [1] rather than for people either to provide their behavioral data freely or to refuse sharing. Some of these proposals are specific to location data [6]. Several studies have explored the price that people would seek for sharing their GPS data [5, 13, 9]. However, little has been published on determining the value of location data from a buyer's point of view. For instance, a Wall Street Journal blog says [10]:
"What groceries you buy, what Facebook posts you 'like' and how you use GPS in your car:
Companies are building their entire businesses around the collection and sale of such data. The problem is that no one really knows what all that information is worth. Data isn't a physical asset like a factory or cash, and there aren't any official guidelines for assessing its value."
We present a principled method for computing the value of spatiotemporal data from the perspective of a buyer. Knowledge of this value could guide pursuit of the most informative data and would provide insights about potential markets for location data.
We consider situations where a buyer is presented with a set of location data points for sale, and we provide estimates of the value of information (VOI) for these points. Because the coordinates of the location data points are unknown, we compute the VOI based on the prior knowledge that is available to the buyer and on side information that a user may provide (e.g., the time of day or location granularity). The VOI computation is customized to the specific goals of the buyer, such as targeting ad delivery for home services, offering efficient driving routes, or predicting a person's location in advance. We account for the fact that location data and user state are both uncertain. Additional data purchases can help reduce this uncertainty, and we quantify this reduction as well.
In the next section, we introduce a decision-making framework with a detailed analysis of geo-targeted advertising. We focus on the buyer's goal of delivering ads to people living within a certain region. We show that our method performs better than alternate approaches in terms of inferential accuracy, data efficiency, and cost. In Section 3, we apply the methodology to a traffic estimation scenario using real and simulated spatiotemporal data. We present our last scenario in Section 4, where we show how to make good data-buying decisions for predicting a person's future location. ...
"
Automated Math Reasoning
In our earliest AI courses, we learned about theorem proving using AI. And no, it was not automated math reasoning, but it gave you hope that it could be done, if only you could state the problem at hand, or even parts of it, in purely mathematical terms. It was never so. As the article says, math rarely intersects exactly with the real world, except for elements of the real world that are themselves approximations within contexts. Bottom line: it's still hard. A good article that explains it, with hopes for the next generation.
How Close Are Computers to Automating Mathematical Reasoning? By Stephen Ornes, Contributing Writer, in Quanta Magazine
AI tools are shaping next-generation theorem provers, and with them the relationship between math and machine.
In the 1970s, the late mathematician Paul Cohen, the only person to ever win a Fields Medal for work in mathematical logic, reportedly made a sweeping prediction that continues to excite and irritate mathematicians — that “at some unspecified future time, mathematicians would be replaced by computers.” Cohen, legendary for his daring methods in set theory, predicted that all of mathematics could be automated, including the writing of proofs.
A proof is a step-by-step logical argument that verifies the truth of a conjecture, or a mathematical proposition. (Once it’s proved, a conjecture becomes a theorem.) It both establishes the validity of a statement and explains why it’s true. A proof is strange, though. It’s abstract and untethered to material experience. “They’re this crazy contact between an imaginary, nonphysical world and biologically evolved creatures,” said the cognitive scientist Simon DeDeo of Carnegie Mellon University, who studies mathematical certainty by analyzing the structure of proofs. “We did not evolve to do this.”
Computers are useful for big calculations, but proofs require something different. Conjectures arise from inductive reasoning — a kind of intuition about an interesting problem — and proofs generally follow deductive, step-by-step logic. They often require complicated creative thinking as well as the more laborious work of filling in the gaps, and machines can’t achieve this combination.
Computerized theorem provers can be broken down into two categories. Automated theorem provers, or ATPs, typically use brute-force methods to crunch through big calculations. Interactive theorem provers, or ITPs, act as proof assistants that can verify the accuracy of an argument and check existing proofs for errors. But these two strategies, even when combined (as is the case with newer theorem provers), don’t add up to automated reasoning. .... "
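To make the brute-force flavor of an ATP concrete, here is a toy of my own, not something from the article: the crudest possible automated theorem prover for propositional logic, which proves a formula by exhausting all truth assignments.

from itertools import product

def implies(a, b):
    # Material implication: a -> b is (not a) or b.
    return (not a) or b

def is_tautology(f, n_vars):
    # Brute force over all 2**n_vars truth assignments -- the crudest ATP.
    return all(f(*vals) for vals in product([False, True], repeat=n_vars))

# Peirce's law, ((p -> q) -> p) -> p, is a classical theorem.
peirce = lambda p, q: implies(implies(implies(p, q), p), p)
print(is_tautology(peirce, 2))   # True

Real ATPs search for proofs rather than enumerating truth tables, and ITPs instead check a human-guided proof script, but the division of labor is the same: mechanical search versus mechanical verification.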
Update on Fusion Power
Following this for years. Another decade needed? In IEEE Spectrum, an update on progress.
ITER Celebrates Milestone, Still at Least a Decade Away From Fusing Atoms
Machine assembly has commenced, but this gigantic nuclear fusion experiment costing tens of billions of dollars is nowhere near starting up.
By Payal Dhar
It was a twinkle in U.S. President Ronald Reagan’s eye, an enthusiasm he shared with General Secretary Mikhail Gorbachev of the Soviet Union: boundless stores of clean energy from nuclear fusion.
That was 35 years ago.
On July 28, 2020, the product of these Cold Warriors’ mutual infatuation with fusion, the International Thermonuclear Experimental Reactor (ITER) in Saint-Paul-lès-Durance, France inaugurated the start of the machine assembly phase of this industrial-scale tokamak nuclear fusion reactor.
An experiment to demonstrate the feasibility of nuclear fusion as a virtually inexhaustible, waste-free and non-polluting source of energy, ITER has already been 30-plus years in planning, with tens of billions invested. And if there are new fusion reactors designed based on research conducted here, they won’t be powering anything until the latter half of this century.
Speaking from Elysée Palace in Paris via an internet link during last month’s launch ceremony, President Emmanuel Macron said, “[ITER] is proof that what brings together people and nations is stronger than what pulls them apart. [It is] a promise of progress, and of confidence in science.” Indeed, as the COVID-19 pandemic continues to baffle modern science around the world, ITER is a welcome beacon of hope. ... "
Robo Teammates in their Context
A kind of digital twin, with context interaction via sensors and real-time updates.
U.S. Army Robo-Teammate Can Detect, Share 3D Changes in Real Time
U.S. Army Research Laboratory Public Affairs
Researchers at the U.S. Army Combat Capabilities Development Command's Army Research Laboratory (ARL) have shown that robots equipped with LiDAR can detect physical changes in a three-dimensional, real-world environment, and share that information with a person in real time via augmented reality eyewear. The human observer then can evaluate the information and determine how to respond. Said ARL's Christopher Reardon, "This could let robots inform their soldier teammates of changes in the environment that might be overlooked by or not perceptible to the soldier, giving them increased situational awareness and offset from potential adversaries.” Robots equipped with LiDAR, Reardon said, “could detect anything from camouflaged enemy soldiers to IEDs.” ... '
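As a rough illustration of the change-detection idea, here is a toy of my own with hypothetical data, not ARL's pipeline: voxelize a baseline scan and a new scan, then report voxels occupied only in the new one.

import numpy as np

# Baseline scan: points on the floor of a 10 m x 10 m room (z = 0).
xs, ys = np.meshgrid(np.arange(0, 10, 0.25), np.arange(0, 10, 0.25))
before = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
# New scan: the same floor plus an object about 1 m tall at (4, 7).
after = np.vstack([before, [[4.0, 7.0, 1.0]]])

def occupied(points, voxel=0.5):
    # A voxel is "occupied" if any point quantizes into it.
    return set(map(tuple, np.floor(points / voxel).astype(int)))

new = occupied(after) - occupied(before)
print("newly occupied voxels:", new)   # {(8, 14, 2)}

In the ARL system, the equivalent of this set of changed voxels is what gets rendered into the soldier's augmented reality eyewear.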
Wednesday, August 26, 2020
And Smaller Robots Yet
Only 5 microns in size, paramecium-sized. Likely applications in healthcare. And?
Microscopic robots 'walk' thanks to laser tech in TechXPlore
by Cornell University
A Cornell University-led collaboration has created the first microscopic robots that incorporate semiconductor components, allowing them to be controlled—and made to walk—with standard electronic signals.
These robots, roughly the size of paramecium, provide a template for building even more complex versions that utilize silicon-based intelligence, can be mass produced, and may someday travel through human tissue and blood.
The collaboration is led by Itai Cohen, professor of physics, Paul McEuen, the John A. Newman Professor of Physical Science and their former postdoctoral researcher Marc Miskin, who is now an assistant professor at the University of Pennsylvania.
The team's paper, "Electronically Integrated, Mass-Manufactured, Microscopic Robots," published in Nature.
The walking robots are the latest iteration, and in many ways an evolution, of Cohen and McEuen's previous nanoscale creations, from microscopic sensors to graphene-based origami machines. ... "
Electronically integrated, mass-manufactured, microscopic robots, Nature (2020). DOI: 10.1038/s41586-020-2626-9 , www.nature.com/articles/s41586-020-2626-9
Virtual Collaboration in the Pandemic
Nice look at how the pandemic has changed collaboration among computer science students and faculty. Very similar to the experiences we have had; in the past week, for example, we used three different collaboration tools.
Virtual Collaboration in the Age of the Coronavirus By Paul Marks
Communications of the ACM, September 2020, Vol. 63 No. 9, Pages 21-23
10.1145/3409803
When the COVID-19 pandemic began sweeping the globe early in the year, and governments began enforcing lockdowns that forced people to stay at home to depress infection rates, videoconferencing technologies rocketed into public consciousness as never before.
Professional apps including Zoom, Skype, Webex, and Microsoft Teams were suddenly thrown into the hands of people who had never used them, alongside more social-media-oriented ones like Houseparty and Whereby, as people sought virtual connection and collaboration tools to cope with the stay-at-home and work-from-home orders.
The effect of this rapid adoption of video chat systems was dramatic. Suddenly, debate in the media and on social networks centered on which was the best app or desktop package, with users treating it almost like an exercise in comparative religion.
Uses for the technologies flourished along with those ballooning user numbers, with video livestreams suddenly dominating locked-down domestic and work agendas. From live exercise workouts to yoga and meditation sessions before breakfast, to gaming at a distance, to attending virtual church services and craft lessons, to online school classes and workplace meetings, as well as convivial drinking and socializing sessions with friends of an evening, Internet-based videoconferencing finally came into its own.
Enduring memes were born, too: perhaps one of the most memorable being Sting's online, at-home jam with The Roots, aired on NBC's Tonight Show. The combo played a "quarantine remix" of "Don't Stand So Close To Me" (surely one of the social distancing anthems of the lockdown) with improvised musical instruments. .... " (full article at the link)
Creation of AI, Quantum Research Institutes
Yet more is being put in place to address emerging applications and infrastructure. I'd like to see the USDA connections, where I think there are still many possibilities.
White House Announces New AI, Quantum Research Institutes
VentureBeat via ACM
Kyle Wiggers
August 26, 2020
The White House announced the creation of 12 new artificial intelligence (AI) and quantum information science research institutes, to be funded by federal agencies. The Trump Administration said the U.S. National Science Foundation will invest $100 million in five AI institutes over five years, in partnership with the Department of Agriculture (USDA)'s National Institute of Food and Agriculture, the Department of Homeland Security's Security Science and Technology Directorate, and the Department of Transportation's Federal Highway Administration. USDA will separately fund two institutes of its own, with focus areas including "user-driven trustworthy AI" for weather, climate, and coastal hazards applications, and theoretical challenges like neural architecture optimization. Meanwhile, the Department of Energy will invest $625 million in five quantum information science research centers, whose objectives will include surmounting obstacles in quantum state resilience, controllability, and scalability. ... "
Exxon Mobil Removed from Dow
More indication that technology continues its climb:
" .... CHANGE: Exxon Mobil dropped from the Dow after nearly a century. “Exxon Mobil, which joined the Dow Jones Industrial Average in 1928, is being removed from the blue-chip stock market index. Its replacement: enterprise software company Salesforce.com.” ... "
(Via NewsAlert.)
" .... CHANGE: Exxon Mobil dropped from the Dow after nearly a century. “Exxon Mobil, which joined the Dow Jones Industrial Average in 1928, is being removed from the blue-chip stock market index. Its replacement: enterprise software company Salesforce.com.” ... "
(Via NewsAlert.)
Taking Vital Signs with a Dog-Like Robot
The well-known Boston Dynamics dog Spot becomes 'Dr. Spot' and can take some contactless vital signs during the coronavirus pandemic.
MIT, Boston Dynamics Team Up on Robot for Remote Covid-19 Vital Sign Measurement
TechCrunch
Darrell Etherington
August 19, 2020
The Massachusetts Institute of Technology, Brigham and Women's Hospital, Boston Dynamics, and other collaborators have developed a robot designed to remotely measure vital signs in patients for Covid-19 with contactless equipment. Dubbed Dr. Spot, the robot is a customized version of Boston Dynamics' four-legged robot, outfitted with a tablet to enable medical staff to conduct "face-to-face" interviews with patients while they perform exams. The robot can measure multiple vital signs like skin temperature, respiratory rate, heart rate, and blood oxygen saturation at once. Dr. Spot was deployed in a hospital as a test study to offer proof of the technology's potential application. ... "
P&G among Others Lead Patents in 3D Printing
Was unaware that my former employer, P&G, was so involved in the patenting, innovation and use of 3D printing capabilities:
Future of 3D Printing Is in U.S., Europe Patenting
Bloomberg
By Susan Decker; Ryan Beene
July 14, 2020
A study by the European Patent Office (EPO) found that the U.S. and Europe are spearheading innovation in three-dimensional (3D) printing, which is the fastest-growing technology field. The agency determined that established multinationals like General Electric, Airbus, Johnson & Johnson, and Procter & Gamble are producing the majority of 3D printing-related patents. However, 20% of new 3D printing-related European patents are owned by small companies, and another 10% by universities; top patent recipients among research institutions include Harvard University, the Massachusetts Institute of Technology, and the University of California. EPO's Antonio Campino said the boom in 3D printing reflects the rapid growth of digital technologies overall. ... "
Processing, Communication in One
Tuning communications for lower error rates. This is a tough one; more at the link.
Processing, Communication in One
MIT News
Michaela Jarvis
Massachusetts Institute of Technology (MIT) researchers have unveiled a quantum computing architecture that performs low-error computations while rapidly sharing quantum data between processors. Key to this was the construction of "giant atoms" from superconducting quantum bits (qubits), linked in a tunable configuration to a waveguide. This enables researchers to tune the strength of qubit-waveguide interactions to protect the qubits from decoherence, while conducting high-fidelity operations. Once those computations are completed, the strength of the qubit-waveguide pairings is readjusted, allowing the qubits to emit quantum data into the waveguide as photons. MIT's Bharath Kannan said, "The tricks we employed are relatively simple and, as such, one can imagine using this for further applications without a great deal of additional overhead." ... '
Tuesday, August 25, 2020
AI Based Traffic Management
Or other kinds of process management? Could it be integrated with RPA?
AI-Based Traffic Management Gets Green Light By ZDNet
The new system switches traffic-light coordination from a timer-based model to one based on demand.
The NoTraffic autonomous traffic management company has deployed an artificial intelligence (AI)-driven traffic management system in Phoenix, AZ, switching traffic-light coordination from a timer-based model to one based on demand.
The goal is to improve traffic flow and cut vehicle and pedestrian delays at intersections, and the system has reduced vehicle delays by up to 40% in some instances.
The NoTraffic platform monitors road assets as they approach an intersection and calculates optimal service for the intersection in real time, autonomously changing signals accordingly.
Phoenix Street Transportation director Kini Knudson said, "We are now seeing the convergence of technology-enabled automobiles and traffic management systems working together to move vehicles more effectively through busy corridors."
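A toy simulation of my own, not NoTraffic's algorithm, shows why demand-based switching beats a blind timer at an asymmetric intersection: serving the longer queue each cycle cuts accumulated delay.

import random

random.seed(1)
ARRIVE_P = (0.30, 0.10)   # per-second arrival probability on two approaches
SERVE = 1                 # cars released per second on the green approach
CYCLE = 30                # seconds between switching decisions

def simulate(demand_based, seconds=3600):
    q = [0, 0]            # queue lengths on the two approaches
    green = 0
    delay = 0
    for t in range(seconds):
        if t % CYCLE == 0:
            if demand_based:
                green = 0 if q[0] >= q[1] else 1   # serve the longer queue
            else:
                green = (t // CYCLE) % 2           # blind alternation
        for i in range(2):
            q[i] += random.random() < ARRIVE_P[i]  # new arrival this second?
        q[green] = max(0, q[green] - SERVE)        # discharge on green
        delay += q[0] + q[1]                       # car-seconds of waiting
    return delay

print("fixed-timer delay :", simulate(False))
print("demand-based delay:", simulate(True))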
Update on IOTA
I had taken a closer look at the IOTA system, primarily because it operates as a mining-less blockchain capability and has potential for smart-contract-style operation. It is characterized as using a 'fast probabilistic consensus' mechanism. Here now an update:
IOTA Foundation Enters Base Layer Race With ‘2.0’ Testnet
IOTA is addressing the technical feature that nuked the blockchain-like network for nearly two weeks earlier this year.
The IOTA Foundation announced Wednesday it is doing away with the “coordinator” that previously validated the blockchain’s transactions.
The new “coordinator-less” network, billed as IOTA 2.0, is meant to rival other smart-contract platforms such as Ethereum, EOS, Tron and Cardano.
IOTA’s MIOTA token currently ranks as the 24th largest cryptocurrency by market cap, according to CoinGecko data, besting zcash (ZEC), cosmos (ATOM) and basic attention token (BAT), among others.
IOTA’s new “Pollen” testnet will serve as a research testbed for a new “fast probabilistic consensus” mechanism.
The IOTA Foundation claims the new network will support decentralized applications (dapps) and smart contracts that can transact without incurring fees. .... "
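The flavor of such a mechanism can be sketched in a few lines. This is my own simplification, not IOTA's actual protocol: each node repeatedly polls a few random peers and adopts the majority opinion, and the whole network converges to a single value with high probability.

import random

random.seed(7)
N, K, ROUNDS = 1000, 10, 12
opinions = [random.random() < 0.52 for _ in range(N)]   # slight initial bias

for r in range(ROUNDS):
    # Each node samples K random peers and adopts their majority opinion.
    opinions = [sum(random.choice(opinions) for _ in range(K)) > K / 2
                for _ in range(N)]
    print(f"round {r}: {sum(opinions) / N:.3f} agree")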
AI and Machine Learning Imperative of a Strategy
I only rarely see complete strategies in this space. It's mostly solving problems in narrow contexts. It would be good to at least lay out an outline strategy for implementation.
THE AI & MACHINE LEARNING IMPERATIVE
The Building Blocks of an AI Strategy
Organizations need to transition from opportunistic and tactical AI decision-making to a more strategic orientation.
By Amit Joshi and Michael Wade in MIT Sloan Review
The AI & Machine Learning Imperative
“The AI & Machine Learning Imperative” offers new insights from leading academics and practitioners in data science and artificial intelligence. The Executive Guide, published as a series over three weeks, explores how managers and companies can overcome challenges and identify opportunities by assembling the right talent, stepping up their own leadership, and reshaping organizational strategy.
As the popularity of artificial intelligence waxes and wanes, it feels like we are at a peak. Hardly a day goes by without an organization announcing “a pivot toward AI” or an aspiration to “become AI-driven.” Banks and fintechs are using facial recognition to support know-your-customer guidelines; marketing companies are deploying unsupervised learning to capture new consumer insights; and retailers are experimenting with AI-fueled sentiment analysis, natural language processing, and gamification.
A close examination of the activities undertaken by these organizations reveals that AI is mainly being used for tactical rather than strategic purposes — in fact, finding a cohesive long-term AI strategic vision is rare. Even in well-funded companies, AI capabilities are mostly siloed or unevenly distributed. ... "
THE AI & MACHINE LEARNING IMPERATIVE
The Building Blocks of an AI Strategy
Organizations need to transition from opportunistic and tactical AI decision-making to a more strategic orientation.
By Amit Joshi and Michael Wade in MIT Sloan Review
The AI & Machine Learning Imperative
“The AI & Machine Learning Imperative” offers new insights from leading academics and practitioners in data science and artificial intelligence. The Executive Guide, published as a series over three weeks, explores how managers and companies can overcome challenges and identify opportunities by assembling the right talent, stepping up their own leadership, and reshaping organizational strategy.
As the popularity of artificial intelligence waxes and wanes, it feels like we are at a peak. Hardly a day goes by without an organization announcing “a pivot toward AI” or an aspiration to “become AI-driven.” Banks and fintechs are using facial recognition to support know-your-customer guidelines; marketing companies are deploying unsupervised learning to capture new consumer insights; and retailers are experimenting with AI-fueled sentiment analysis, natural language processing, and gamification.
A close examination of the activities undertaken by these organizations reveals that AI is mainly being used for tactical rather than strategic purposes — in fact, finding a cohesive long-term AI strategic vision is rare. Even in well-funded companies, AI capabilities are mostly siloed or unevenly distributed. ... "
Monday, August 24, 2020
Amazon-Go to be Put into Whole Foods?
This would be interesting, broadening the AI item-identification capability that determines purchases. Still, it seems to be a rumor. It would seem other grocers would be driven the same way because of lower labor costs.
Amazon Go’s cashierless tech may come to Whole Foods as soon as next year
Amazon introduced its cashierless tech in 2016
By Taylor Lyles@TayNixster
Amazon may be looking to bring the cashierless tech found at its Go convenience stores to Whole Foods supermarkets as early as next year, the New York Post reports.
Amazon may start implementing the tech in Whole Foods sometime during the second quarter of 2021, according to the New York Post’s source. This technology, which is currently available in more than 20 Amazon Go convenience store locations, uses cameras, sensors, and computer vision to let customers walk out the store with groceries in hand and avoid cashier checkout lines.
The New York Post’s source claims the rollout of the new technology into Whole Foods is one of two final projects that Jeff Wilke, CEO of Amazon’s worldwide consumer division, is focusing on before he retires early next year. ... "
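As a very rough abstraction of my own, not Amazon's implementation, the flow reduces to turning 'take' and 'return' detections from the vision pipeline into a virtual cart that is charged on walk-out.

from collections import defaultdict

carts = defaultdict(lambda: defaultdict(int))

def on_event(shopper, item, action):
    # 'take'/'return' events would come from cameras, sensors, and vision models.
    carts[shopper][item] += 1 if action == "take" else -1
    if carts[shopper][item] <= 0:
        del carts[shopper][item]

def on_exit(shopper, prices):
    # Charged to the shopper's account as they walk out; no cashier line.
    return sum(prices[i] * n for i, n in carts.pop(shopper, {}).items())

on_event("s1", "milk", "take")
on_event("s1", "salsa", "take")
on_event("s1", "salsa", "return")
print(on_exit("s1", {"milk": 3.49, "salsa": 2.99}))   # 3.49

The hard part, of course, is not the cart arithmetic but attributing each shelf event to the right shopper and item, which is where the computer vision comes in.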
DARPA Test AI Beats Human Fighter Pilot
Quite an event. Will this be the future of many kinds of military interactions? Some of the details of the engagement, comparing AI with human training and ingrained human behaviors, are quite interesting.
Artificial Intelligence Easily Beats Human Fighter Pilot in DARPA Trial
Aug. 20, 2020 | By Brian W. Everstine in Air Force Magazine
In the battle of artificial intelligence versus a human fighter pilot, it wasn’t even close.
The artificial intelligence algorithm, developed by Heron Systems, swept a human F-16 pilot in a simulated dogfight 5-0 in the Defense Advanced Research Projects Agency’s AlphaDogfight Trials on Aug. 20. The company beat out seven other companies before going head to head with “Banger,” a pilot from the District of Columbia Air National Guard and a recent graduate of the Air Force Weapons School’s F-16 Weapons Instructor Course. The pilot, whose full name was not provided, is an operational fighter pilot with more than 2,000 hours in the F-16.
Banger and Heron Systems’ AI fought in five different basic fighter maneuver scenarios with the simulated fight only using the Fighting Falcon’s guns, and each time the AI was able to out maneuver and take out Banger. The algorithm operated within the limits of the F-16—meaning it did not pull Gs beyond what a real-world aircraft could do. However, Banger said after the event that the jet was not limited by the training and thinking that is engrained in an Air Force pilot.
For example, Air Force Instructions outline how an F-16 pilot performs basic fighter maneuvers and establishes some limits such as not passing within 500 feet or a limit on the angle of attack when firing the gun. The AI did not need to follow these instructions, which helped it gain an advantage. Pilots' habits are built based on procedures and adhering to training rules, and the AI exploited that.
The AI also is able to make adjustments on a “nanosecond level” where the human “OODA loop”—observe, orient, decide, and act—takes longer, giving the algorithm another advantage.
Banger survived longer in each successive round, though he was not able to hit the AI’s F-16, which was “flying” with the callsign “Falco.” He started the contest following the basic rules, and in following rounds tried to learn the methods of the algorithm, which flew more aggressively. ... "
AI Where it is Needed
A reasonable explanation ... though not enough about security concerns, which need to be better addressed as we move more AI to the edge.
AI on Edge By Samuel Greengard
Communications of the ACM, September 2020, Vol. 63 No. 9, Pages 18-20
10.1145/3409977
A remarkable thing about artificial intelligence (AI) is how rapidly and dramatically it has crept into the mainstream of society. Automobiles, robots, smartphones, televisions, smart speakers, wearables, buildings, and industrial systems have all gained features and capabilities that would have once seemed futuristic. Today, they can see, they can listen, and they can sense. They can make decisions that approximate—and sometimes exceed—human thought, behavior, and actions.
Yet, for all the remarkable advancements, there's a pesky reality: smart devices could still be a whole lot more intelligent—and tackle far more difficult tasks. What's more, as the Internet of Things (IoT) takes shape, the need for low latency and ultra-low energy sensors with on-board processing is vital. Without this framework, "Systems must depend on distant clouds and data centers to process data. The full value of AI cannot be realized," says Mahadev Satyanarayanan, Carnegie Group Professor of Computer Science at Carnegie Mellon University.
Edge AI takes direct aim at these issues. "To truly and pervasively engage AI in the processes within our lives, there's a need to push AI computation away from the data center and toward the edge," says Naveen Verma, a professor of electrical engineering at Princeton University. This approach reduces latency by minimizing—and sometimes completely bypassing—the need for a distant datacenter. In many cases, computation takes place on the device itself. "Edge AI will enable new types of systems that can operate all around us at the beat of life and with data that is intimate and important to us," Verma explains.
The power of this framework lies in processing data exactly when and where it is needed. "Edge AI introduces new computational layers between the cloud and the user devices. It distributes application computations between these layers," says Lauri Lovén, a doctoral researcher and data scientist at the University of Oulu in Finland.
Pushing intelligence to the edge could also fundamentally alter data privacy. Specialized chips and cloudlets—essentially micro-clouds or ad hoc clouds that would function in a home, business, or vehicle—could control what information is sent from smart devices, such as TVs and digital speakers. .... " (Much More)
Cases: NASA and Others Using Knowledge Graphs
This is particularly interesting because NASA has a very broad use context for analytics, and thus for the underlying knowledge that drives them. The upcoming talk should be of interest; I plan to attend. Note that all of their sessions are recorded and easy to retrieve.
Don't miss the NASA session! (Tomorrow)
Hi Franz,
Let’s talk Knowledge Graphs! At this month’s Connections, a digital event series, we will be exploring all things graph technology. We’re taking a deep dive into the world of Knowledge Graphs and the potential contextual searches hold for your business.
Knowledge graphs relate structured and unstructured data (often disparate and spread across your organization) to help identify information and reveal important but hidden facts. They are also necessary for creating semantic AI applications that inherently thrive on contextual connections.
Join us on Tuesday, August 25 from 07:30-11:30 PT / 14:30-18:30 UTC for a full day of sessions with speakers from NASA, BMO Financial Group, and more!
Can’t make it for the full day? Don’t worry! All talks will be shared with registered attendees after the event. Save your spot now! http://neo4j.com
REGISTER FOR CONNECTIONS
https://message.neo4j.com/CRn00x0G10GR0NYv2w0EC01
Google Plans to Disrupt the College Degree
Most recently I have been in conversations about how teaching and accreditation can now effectively proceed. Google has an idea. Expect some pushback from universities. Will companies like Google become the arbiter of technical education?
Google Has a Plan to Disrupt the College Degree
Google's new certificate program takes only six months to complete, and will be a fraction of the cost of college.
By Justin Bariso @Justinjbariso, Author, EQ Applied
Google recently made a huge announcement that could change the future of work and higher education: It's launching a selection of professional courses that teach candidates how to perform in-demand jobs.
These courses, which the company is calling Google Career Certificates, teach foundational skills that can help job-seekers immediately find employment. However, instead of taking years to finish like a traditional university degree, these courses are designed to be completed in about six months. ... "
Followup article.
Fully Homomorphic Encryption
Despite my previous experience in crypto, this one was new to me: homomorphic encryption.
IBM completes successful field trials on Fully Homomorphic Encryption
FHE allows computation of still-encrypted data, without sharing the secrets.
Jim Salter, Arstechnica
We're already accustomed to data being encrypted while at rest or in flight—FHE offers the possibility of doing computations on it as well, without ever actually decrypting it.
Yesterday, Ars spoke with IBM Senior Research Scientist Flavio Bergamaschi about the company's recent successful field trials of Fully Homomorphic Encryption. We suspect many of you will have the same questions that we did—beginning with "what is Fully Homomorphic Encryption?"
FHE is a type of encryption that allows direct mathematical operations on the encrypted data. Upon decryption, the results will be correct. For example, you might encrypt 2, 3, and 7 and send the three encrypted values to a third party. If you then ask the third party to add the first and second values, then multiply the result by the third value and return the result to you, you can then decrypt that result—and get 35.
You don't ever have to share a key with the third party doing the computation; the data remains encrypted with a key the third party never received. So, while the third party performed the operations you asked it to, it never knew the values of either the inputs or the output. You can also ask the third party to perform mathematical or logical operations of the encrypted data with non-encrypted data—for example, in pseudocode, FHE_decrypt(FHE_encrypt(2) * 5) equals 10. .... "
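Fully homomorphic schemes are mathematically heavy, but the flavor is easy to demonstrate with a partially homomorphic one. The sketch below is my own toy implementation of the Paillier cryptosystem, which is additively homomorphic only, not FHE, and insecure with primes this small: multiplying ciphertexts adds the plaintexts, and raising a ciphertext to a plaintext power scales it, so (2 + 3) * 7 can be computed under encryption when the 7 is a known constant rather than a ciphertext.

import math, random

def keygen(p=10007, q=10009):
    # Toy primes for illustration; real keys use ~2048-bit primes.
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    n2, g = n * n, n + 1
    # mu = (L(g^lam mod n^2))^-1 mod n, with L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)         # needs Python 3.8+
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:                          # r must be coprime to n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

pub, priv = keygen()
n2 = pub[0] ** 2
c2, c3 = encrypt(pub, 2), encrypt(pub, 3)
c_sum = (c2 * c3) % n2            # ciphertext multiply == plaintext add
c_scaled = pow(c_sum, 7, n2)      # ciphertext power == plaintext scale
print(decrypt(priv, c_sum), decrypt(priv, c_scaled))   # 5 35

IBM's FHE schemes go further and also allow multiplying two ciphertexts together, which is the step that makes arbitrary computation on encrypted data possible.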
Sunday, August 23, 2020
Connected Cars Run on Open Source
Why and how this is being put together as a broader solution.
Why the connected car rides on open source in VentureBeat
Tom Canning
August 23, 2020 12:12 PM
Transportation
The automobile is one of the most exciting frontiers in our connected lives today. Infotainment systems, real-time maps, and advanced driver assistance systems are already commonplace in newer vehicles, but it is still relatively early days for the fully connected car. The exhilarating vision of the driving experience of the near future includes augmented reality dashboards, progressively more autonomous operations, and increased integration with the outside world, from interacting with smart home devices to automatically finding parking spots nearby. ... "
Foiling illicit cryptocurrency mining with artificial intelligence
Again pattern matching: finding patterns that are malicious.
Foiling illicit cryptocurrency mining with artificial intelligence
by James Riordon, Los Alamos National Laboratory in TechXplore
Los Alamos National Laboratory computer scientists have developed a new artificial intelligence (AI) system that may be able to identify malicious codes that hijack supercomputers to mine for cryptocurrency such as Bitcoin and Monero.
"Based on recent computer break-ins in Europe and elsewhere, this type of software watchdog will soon be crucial to prevent cryptocurrency miners from hacking into high-performance computing facilities and stealing precious computing resources," said Gopinath Chennupati, a researcher at Los Alamos National Laboratory and co-author of a new paper in the journal IEEE Access. "Our deep learning artificial intelligence model is designed to detect the abusive use of supercomputers specifically for the purpose of cryptocurrency mining." ... "
More information: Poornima Haridas et al, Code Characterization with Graph Convolutions and Capsule Networks, IEEE Access (2020). DOI: 10.1109/ACCESS.2020.3011909
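The paper's model works on program structure with graph convolutions and capsule networks; as a drastically simplified stand-in of my own, the sketch below flags a job whose instruction-frequency profile is too close, by cosine similarity, to a known miner profile. All profiles and numbers here are made up.

import numpy as np

OPS = ["xor", "add", "mul", "rotl", "load", "store"]
known_miner = np.array([0.30, 0.15, 0.20, 0.25, 0.05, 0.05])  # hash-heavy mix

def profile(op_counts):
    # Normalize raw opcode counts into a frequency vector over OPS.
    v = np.array([op_counts.get(op, 0) for op in OPS], dtype=float)
    return v / v.sum()

def looks_like_miner(op_counts, threshold=0.97):
    p = profile(op_counts)
    cos = p @ known_miner / (np.linalg.norm(p) * np.linalg.norm(known_miner))
    return cos >= threshold

job = {"xor": 2900, "add": 1600, "mul": 2100, "rotl": 2400,
       "load": 500, "store": 500}
print(looks_like_miner(job))   # True: the instruction mix resembles hashing code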
Robotics of the Small
A favorite topic; here is another:
This tiny robotic beetle travels for two hours without a battery
Liquid methanol powers RoBeetle’s artificial muscles.
Christine Fisher, @cfisherwrites
August 21, 2020
A team of researchers from the University of Southern California have created a miniscule autonomous robotic beetle, RoBeetle, that can travel for more than two hours without a battery. The 88-milligram, insect-inspired robot runs on liquid methanol, which powers its artificial muscles, and it can carry payloads 2.6 times its own body weight. ... "
What are Our Models Learning?
In some of my earliest work in statistics, we were informed of the "Clever Hans effect", which basically means your model is learning a pattern in the data that is unrelated to the answer you seek. It may work on the training and test sets, but the predictions rest on other evidence hidden in the system. This can happen when you have too much data, or when you are carrying along variables that are irrelevant. The model might work appropriately now, but later drift to other results. Dangerous in autonomous systems. In social data it can occur when humans hint at the results they want, which is why double-blind tests are constructed. Overall rather important as we depend more on ML; a toy sketch of the effect follows the excerpt below. ....
Unmasking Clever Hans predictors and assessing what machines really learn
Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek & Klaus-Robert Müller
Nature Communications volume 10, Article number: 1096 (2019)
Abstract
Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly intelligent behavior. Here we apply recent techniques for explaining decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted, to well-informed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem solving behaviors.
Furthermore, we propose our semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem that it was conceived for. Furthermore, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
Introduction
Artificial intelligence systems, based on machine learning (ML), are increasingly assisting our daily life. They enable industry and the sciences to convert a never ending stream of data—which per se is not informative—into information that may be helpful and actionable. ML has become a basis of many services and products that we use.
While it is broadly accepted that the nonlinear ML methods being used as predictors to maximize some prediction accuracy, are effectively (with few exceptions, such as shallow decision trees) black boxes; this intransparency regarding explanation and reasoning is preventing a wider usage of nonlinear prediction methods in the sciences (see Fig. 1a why understanding nonlinear learning machines is difficult). Due to this black-box character, a scientist may not be able to extract deep insights about what the nonlinear system has learned, despite the urge to unveil the underlying natural structures. In particular, the conclusion in many scientific fields has so far been to prefer linear models1,2,3,4 in order to rather gain insight (e.g. regression coefficients and correlations) even if this comes at the expense of predictivity. ... "
The article is also discussed here:
https://towardsdatascience.com/deep-learning-meet-clever-hans-3576144dc5a9
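A minimal way to see a Clever Hans predictor in action, and to catch it, is to leak an artifact feature into a synthetic dataset and then check permutation importance on held-out data. This is a simple stand-in for the paper's Spectral Relevance Analysis, not a reimplementation of it; all data and feature names below are synthetic.

```python
# Synthetic Clever Hans demo: a leaked "watermark" column perfectly encodes
# the label, so the model ignores the real signal. Permutation importance
# on held-out data exposes the shortcut.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
signal = rng.normal(size=(n, 5))                 # genuine but noisy features
y = (signal[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
watermark = y + 0.01 * rng.normal(size=n)        # artifact correlated with label
X = np.column_stack([signal, watermark])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))   # looks great...

imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for i, v in enumerate(imp.importances_mean):
    print(f"feature {i}: importance {v:.3f}")
# ...but nearly all importance sits on feature 5, the watermark. If that
# artifact is absent in deployment, accuracy collapses: Clever Hans.
```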
Software Suite Expedites Reproducible Computer Simulations
Broadly an excellent idea, here for molecular simulation and design. But in general it's useful to have locally useful and validated libraries of simulations that can be selected for application, especially if 'reproducible' is important. And typically it should be, if you care about risk. It is rarer than you think to have this in place.
Software Suite Expedites Reproducible Computer Simulations
Vanderbilt School of Engineering
July 8, 2020
Researchers at Vanderbilt University have developed a suite of open source software tools that expedites the process of reproducing computer simulations. It can be difficult to reproduce experiments and obtain the same results in some fields, mainly because details provided in peer-reviewed publications aren't sufficient to recreate the conditions of the original experiment. The Molecular Simulation and Design Framework (MoSDeF) aims to eliminate those problems by automating as many research steps as possible, with one component providing validated parameters and applying forcefields (mathematical models for how molecules interact with each other) automatically. All modules and workflows developed for MoSDeF build on the scientific Python stack, which simplifies the creation, atom-typing, and simulation of complex molecular models. Vanderbilt’s Clare McCabe said, “By using freely available tools designed for collaborative code development, such as GitHub and Slack, we are creating a community-developed effort.” ... "
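For a flavor of what such a scripted, reproducible setup looks like, here is a minimal sketch using the MoSDeF tools mBuild and Foyer. I am writing the API calls as I recall them, and they may differ across versions, so treat this as illustrative rather than definitive.

```python
# Minimal MoSDeF-style workflow sketch: build a box of molecules with mBuild
# and atom-type it with a Foyer forcefield. APIs may vary by version.
import mbuild as mb
from foyer import Forcefield

ethanol = mb.load("CCO", smiles=True)        # build a molecule from SMILES
box = mb.fill_box(compound=ethanol,          # replicate it into a 3 nm box
                  n_compounds=100,
                  box=[3.0, 3.0, 3.0])

ff = Forcefield(name="oplsaa")               # validated, versioned parameters
typed = ff.apply(box)                        # atom-type and parameterize

# 'typed' is a parameterized structure that can be written out for a
# simulation engine, making the whole setup scriptable and reproducible.
typed.save("ethanol_box.top", overwrite=True)
```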
Saturday, August 22, 2020
Digital Twins in Wargaming and Beyond
An example of 'Digital Twins' in wargaming. In the DoD we used many forms, where the twin was an enhanced image with underlying technical details to emulate its operation, displayed on a large-scale terminal: specifically agents, usually in observable environments. As the piece suggests, these need not be military examples; any operational, typically spatial, simulation with many operating agents and parameters qualifies. Operational disaster relief is another good example. Good general description below.
Covid-19 Crisis Accelerates U.K. Military's Push Into Virtual War Gaming By Financial Times
The U.K. Ministry of Defence is looking to fast-track new virtual reality technology from software developer Improbable that would create a digital replica of the country. The technology could be used to test resilience to future pandemics, natural disasters, and attacks by hostile states.
Known as a "single synthetic environment," the technology layers maps of geographical terrain and critical infrastructure with details of fuel, power, and water supplies, telecom networks, weather patterns, and more. This virtual "twin" uses artificial intelligence to simulate future scenarios and offers the ability to "war game" responses.
The Ministry of Defence hopes eventually to build simulations of likely conflict zones. "What we're aiming for in the longer term is . . . to enable governments to test ideas and test choices of action in a virtual world before implementing them in the real world," says Improbable's Chief Executive Joe Robinson.
From Financial Times
View Full Article – May Require Paid Subscription
AI versus Human Perception Performance
Interesting challenge, because we always emphasize that we need to be able to measure something to use and improve it, which in turn drives the design of the measurement system. My response is that you can build measurement systems for particular use contexts. The noted paper sets out to "compare deep neural networks and the human vision system," which is a very broad statement of the problem. I like the discussion here.
Why AI and human perception are too complex to be compared By Ben Dickson in Tnw
Human-level performance. Human-level accuracy. Those are terms you hear a lot from companies developing artificial intelligence systems, whether it’s facial recognition, object detection, or question answering. And to their credit, the recent years have seen many great products powered by AI algorithms, mostly thanks to advances in machine learning and deep learning.
But many of these comparisons only take into account the end-result of testing the deep learning algorithms on limited data sets. This approach can create false expectations about AI systems and yield dangerous results when they are entrusted with critical tasks.
In a recent study, a group of researchers from various German organizations and universities has highlighted the challenges of evaluating the performance of deep learning in processing visual data. In their paper, titled, “The Notorious Difficulty of Comparing Human and Machine Perception,” the researchers highlight the problems in current methods that compare deep neural networks and the human vision system. ... "
In their research, the scientists conducted a series of experiments that dig beneath the surface of deep learning results and compare them to the workings of the human visual system. Their findings are a reminder that we must be cautious when comparing AI to humans, even if it shows equal or better performance on the same task. ... "
Coordinating Complex Behaviors Between Hundreds of Robots
Back to the control of swarms. Intelligence is good, but is collective intelligence better? The process described here is very interesting.
Coordinating Complex Behaviors Among Hundreds of Robots
Duke University Pratt School of Engineering
Ken Kingery
July 1, 2020
Duke University researchers have proposed a new approach for coordinating complex tasks between hundreds of robots while satisfying logic-based rules. The method, called STyLuS* (large-Scale optimal Temporal Logic Synthesis), bypasses the traditional requirement of building incredibly large graphs of each robot's locations or nodes by producing smaller approximations with a tree structure. At each step of the process, the algorithm randomly chooses one node from the large graph, adds it to the tree, and rewires the existing paths between tree nodes to find more direct paths from start to finish. STyLuS* also selects the next node to add based on data about the task at hand, allowing the tree to quickly approximate a good solution to the problem. The algorithm solves problems exponentially faster: it answered the challenge of 10 robots searching through a 50-by-50 grid space in about 20 seconds, while state-of-the-art algorithms would take 30 minutes. ... "
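The description reads much like sampling-based motion planners of the RRT* family. As a rough illustration of the sample-add-rewire loop, not the actual STyLuS* algorithm, here is a sketch on a synthetic grid graph standing in for the huge product graph:

```python
# Sketch of the build-a-tree-by-sampling idea described above (in the
# spirit of RRT*-style rewiring, not the STyLuS* implementation itself):
# instead of searching the full graph, grow a tree by sampling nodes and
# rewiring toward cheaper paths. Graph and costs here are synthetic.
import random
import networkx as nx

random.seed(1)
big = nx.grid_2d_graph(20, 20)           # stand-in for the huge product graph
start, goal = (0, 0), (19, 19)

tree = {start: (None, 0.0)}              # node -> (parent, cost from start)

def edge_cost(u, v):
    return 1.0                           # uniform grid cost for the demo

for _ in range(5000):
    cand = random.choice(list(big.nodes))      # 1) sample a node
    nbrs = [n for n in big.neighbors(cand) if n in tree]
    if cand in tree or not nbrs:
        continue
    parent = min(nbrs, key=lambda n: tree[n][1] + edge_cost(n, cand))
    tree[cand] = (parent, tree[parent][1] + edge_cost(parent, cand))  # 2) add
    for n in nbrs:                             # 3) rewire: cheaper via cand?
        via = tree[cand][1] + edge_cost(cand, n)
        if via < tree[n][1]:
            tree[n] = (cand, via)
# (Costs of rewired nodes' descendants are not re-propagated here;
# a real planner would handle that bookkeeping.)
if goal in tree:
    print("found plan with cost", tree[goal][1])
```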
Friday, August 21, 2020
Considering, Designing the Quantum Internet
Most interesting upcoming ACM talk. An area we are now exploring. Register at the link.
Title: Quantum Networks: From a Physics Experiment to a Quantum Network System
Date: Tuesday, September 01, 2020 Duration: 1 hour
Time: 11:00 AM Eastern Daylight Time
Summary
The internet has had a revolutionary impact on our world. The vision of a quantum internet is to provide fundamentally new internet technology by enabling quantum communication between any two points on Earth. Such a quantum internet can —in synergy with the “classical” internet that we have today—connect quantum information processors in order to achieve unparalleled capabilities that are provably impossible by using only classical information.
At present, such technology is under development in physics labs around the globe, but no large-scale quantum network systems exist. In this talk, we will discuss some of the efforts to move from an ad-hoc physics experiment to a scalable quantum network system. We start by providing a gentle introduction to quantum networks for computer scientists, and briefly review the state of the art. We continue by presenting a network stack for quantum networks, and give an overview of a link layer protocol for producing quantum entanglement as an example.
We close by providing a series of pointers to learn more, as well as tools to download that allow play with simulated quantum networks without leaving your home.
SPEAKER
Stephanie Wehner
Delft University of Technology
Stephanie Wehner is Antoni van Leeuwenhoek Professor, and Roadmap Leader in Quantum Internet and Networked Computing at QuTech, Delft University of Technology. Her goal is to understand the world of small particles – the laws of quantum mechanics – in order to construct better networks and computers. She has written numerous scientific articles in both physics and computer science, and is one of the founders of QCRYPT, which has become the largest conference in quantum cryptography. At present, she serves as the coordinator of the European Quantum Internet Alliance. From 2010 to 2014, her research group was located at the Centre for Quantum Technologies, National University of Singapore. Previously, she was a postdoctoral scholar at the California Institute of Technology. In a former life, she worked as a professional hacker in industry.
MODERATOR
Travis Humble
Director, Quantum Computing Institute, Oak Ridge National Laboratory; Co-EiC, ACM Transactions on Quantum Computing
Travis Humble is a Distinguished Scientist at Oak Ridge National Laboratory and Director of the lab's Quantum Computing Institute.
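To make the link-layer idea in the talk summary concrete: physically, heralded entanglement generation succeeds only with some small probability per attempt, and the link layer's job is to turn that into a reliable request/response service. A toy model follows, with every parameter assumed rather than taken from the talk:

```python
# Toy model of a link-layer entanglement service: keep attempting heralded
# entanglement generation until success or timeout. Success probability and
# attempt budget are invented parameters, not numbers from the talk.
import random

def request_entanglement(p_success=0.01, max_attempts=2000,
                         rng=random.Random(7)):
    """Return the number of physical attempts used, or None on timeout."""
    for attempt in range(1, max_attempts + 1):
        if rng.random() < p_success:   # heralding signal: pair created
            return attempt
    return None                        # link layer reports failure upward

trials = [request_entanglement() for _ in range(1000)]
ok = [t for t in trials if t is not None]
print(f"delivered {len(ok)}/1000 requests, "
      f"mean attempts ~ {sum(ok)/len(ok):.0f} (expected ~1/p = 100)")
# Attempts-to-success is geometric, so the service must budget time per
# request and expose failures cleanly to the layers above it.
```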
IBM Hits Quantum Volume of 64 in Deployed System
How useful is this for real problems in context? I am looking for a write-up on that and will report here. The statement of the Quantum Volume metric below is useful. Note that Honeywell reached the same mark with only a 6-qubit system, versus IBM's 27 qubits.
IBM hits new quantum computing milestone
The company has achieved a Quantum Volume of 64 in one of its client-deployed systems, putting it on par with a Honeywell quantum computer.
By Stephanie Condon for Between the Lines, reported in ZDNet
' .... IBM on Thursday announced it's reached a new quantum computing milestone, hitting its highest Quantum Volume to date. Using a 27-qubit client-deployed system, IBM achieved a Quantum Volume of 64.
Quantum Volume is a metric that determines how powerful a quantum computer is. It measures the length and complexity of quantum circuits, the building blocks of quantum applications. Just two months ago, Honeywell similarly announced it had a quantum computer running client jobs with a Quantum Volume of 64. Honeywell reached the milestone with just a 6-qubit system.
IBM's previous Quantum Volume milestone, announced in January, was 32. The company said it reached a Quantum Volume of 64 through a series of new software and hardware techniques applied to a system already deployed within the IBM Q Network, a network of developers and industry professionals designed to collectively advance quantum computing. ... "
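For reference, the Quantum Volume protocol boils down to a pass/fail test: run random square circuits of width and depth n, and require the measured "heavy output" frequency to exceed two-thirds; QV is then 2 to the largest passing n. A sketch of that bookkeeping, with made-up measurements (real runs also require statistical confidence bounds, omitted here):

```python
# Bookkeeping for the Quantum Volume protocol: QV = 2**n for the largest
# square (width = depth = n) circuit size whose measured heavy-output
# probability clears the 2/3 threshold. Sample results are hypothetical.
HEAVY_OUTPUT_THRESHOLD = 2 / 3

def quantum_volume(results):
    """results: dict mapping circuit size n -> measured heavy-output prob.
    Real runs also demand statistical confidence; omitted here."""
    passing = [n for n, p in results.items() if p > HEAVY_OUTPUT_THRESHOLD]
    return 2 ** max(passing) if passing else 1

# Hypothetical device: passes up through n = 6, fails at n = 7.
measured = {2: 0.81, 3: 0.78, 4: 0.74, 5: 0.71, 6: 0.69, 7: 0.61}
print(quantum_volume(measured))  # 64, the IBM/Honeywell milestone
```

This also explains how a 6-qubit machine and a 27-qubit machine can share the same Quantum Volume: the metric is capped by whichever is smaller, qubit count or the depth the device can sustain with acceptable fidelity.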
Wal-Mart is Growing
Though I rarely use them online, their stores are more impressive these days. Apparently they have adjusted well to Covid.
Walmart keeps growing and growing and … in Retailwire
by George Anderson in Retailwire, with expert discussion.
Walmart came out with its second-quarter numbers and the retailer did not disappoint investors. The chain reported that same-store sales, excluding fuel, were up 9.3 percent during the quarter and online sales jumped 97 percent.
Walmart’s strong performance followed a first-quarter for which it reported a 10 percent gain in same-store sales and e-commerce growth of 74 percent.
Speaking on Walmart’s earnings call with analysts yesterday, CEO Doug McMillon said the retailer benefited from tailwinds, including the federal government’s stimulus payments and more people eating at home. The chain also saw sales increase as customers spent money on products to entertain themselves in their homes and also to fix them up.
He also noted that reduced store hours and out-of-stocks in some categories had a dampening effect on sales. Walmart recently announced that it is moving its closing time from 8:30 p.m. to 10:00 in 4,000 of its 4,700 stores. He also said that the chain experienced out-of-stocks in some states where COVID-19 cases were on the rise.
Mr. McMillon said that the retailer has completed the integration of its store and online merchant teams as it continues on its “path to transform into an omnichannel organization.” The company pointed to improvements in online margins as an indication of a more balanced product mix and operational efficiencies it has achieved. ... "
Scientists Build Ultra-High-Speed Terahertz Wireless Chip
More chips to support high-speed AI applications.
Nanyang Technological University (Singapore)
August 5, 2020
Scientists at Nanyang Technological University, Singapore (NTU Singapore) and Japan's Osaka University have constructed an ultra-high-speed terahertz (THz) wireless chip using photonic topological insulators (PTIs). This enabled an 11 gigabits-per-second (Gbps) data rate, which can support real-time streaming of 4K high-definition video. The PTIs avoid the material defects and transmission error rates of conventional waveguides by directing light waves along the surface and edges of the insulators. Light waves are "topologically protected" via a small silicon chip with rows of triangular holes, with smaller triangles oriented opposite to larger triangles. NTU's Ranjan Singh said, "THz technology ... can potentially boost intra-chip and inter-chip communication to support artificial intelligence and cloud-based technologies, such as interconnected self-driving cars, which will need to transmit data quickly to other nearby cars and infrastructure to navigate better and also to avoid accidents."
Search Algorithm Fairness Adjustments
In particular, looking at fairness in search result rankings.
Algorithm Improves Fairness of Search Results
Cornell Chronicle
Melanie Lefkowitz
August 17, 2020
Cornell University researchers have developed an algorithm to improve the fairness of online search rankings while retaining their utility or relevance. Unfairness stems from search algorithms prioritizing more popular items, which means that the higher a choice appears in the list, the more likely users are to click on and respond to it, reinforcing one item's popularity while others go unnoticed. When seeking the most relevant items, small variations can cause major exposure disparities, because most people select one of the first few listed items. Cornell's Thorsten Joachims said, "We came up with computational tools that let you specify fairness criteria, as well as the algorithm that will provably enforce them." The FairCo tool allocates approximately equal exposure to equally relevant choices and avoids preference for items that are already highly ranked; this can remedy the innate unfairness in current algorithms. .... "
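As I read it, the core mechanism is a proportional controller: each item's ranking score is its estimated relevance plus a correction term that grows when the item's accumulated exposure lags what its relevance merits. A simplified sketch, with the position-bias model and controller gain as my assumptions:

```python
# Simplified FairCo-style dynamic ranking: score = relevance + correction,
# where the correction grows for items whose accumulated exposure lags
# their relevance share. Position-bias model and LAMBDA are assumptions.
import math

relevance = {"A": 0.80, "B": 0.79, "C": 0.40}   # A and B nearly tied
exposure = {k: 0.0 for k in relevance}
LAMBDA = 0.1

def position_bias(rank):                         # examination probability
    return 1.0 / math.log2(rank + 1)             # standard DCG-style decay

for t in range(1, 1001):
    # Error: how far each item's exposure share lags its relevance share.
    rel_total = sum(relevance.values())
    exp_total = sum(exposure.values()) or 1.0
    err = {k: exposure[k] / exp_total - relevance[k] / rel_total
           for k in relevance}
    score = {k: relevance[k] - LAMBDA * t * err[k] for k in relevance}
    ranking = sorted(score, key=score.get, reverse=True)
    for r, k in enumerate(ranking, start=1):     # accumulate exposure
        exposure[k] += position_bias(r)

print({k: round(exposure[k], 1) for k in sorted(exposure)})
# A and B end up with nearly equal exposure instead of A taking the top
# slot every round; C, being far less relevant, correctly gets less.
```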
Cars Hands-Free Very Soon?
This suggests only lane-keeping approaches, though they seem to be seriously on the way. There also seems to be less regulatory interference.
Hands-Free Driving Could Be on U.K. Roads by Spring
BBC News
August 19, 2020
The U.K. government suggests hands-free driving could be on the country's roads by spring of 2021, with the Department for Transport (DfT) issuing a call for evidence into automated lane keeping systems (ALKS). The technology controls vehicle movements and can keep cars in lane for prolonged periods, with drivers ready to take over. The DfT said ALKS could be sanctioned to speeds of up to 70 miles per hour, and the government is seeking input from the motoring industry in order to decide how to safely deploy the technology. These experts would determine if ALKS-enabled cars should be designated automated, assigning the technology provider responsibility for safety rather than drivers while the system is engaged. Society for Motor Manufacturers and Traders CEO Mike Hawes said automated technologies could prevent 47,000 serious accidents over the next decade. ... "
Thursday, August 20, 2020
Scaling Quantum Chips
What are the implications of this as it matures? The piece below describes scaling up the process of making real quantum processors.
Scaling up the quantum chip
MIT engineers develop a hybrid process that connects photonics with “artificial atoms,” to produce the largest quantum chip of its type.
Becky Ham | MIT News correspondent
July 8, 2020
MIT researchers have developed a process to manufacture and integrate “artificial atoms,” created by atomic-scale defects in microscopically thin slices of diamond, with photonic circuitry, producing the largest quantum chip of its type.
The accomplishment “marks a turning point” in the field of scalable quantum processors, says Dirk Englund, an associate professor in MIT’s Department of Electrical Engineering and Computer Science. Millions of quantum processors will be needed to build quantum computers, and the new research demonstrates a viable way to scale up processor production, he and his colleagues note.
Unlike classical computers, which process and store information using bits represented by either 0s and 1s, quantum computers operate using quantum bits, or qubits, which can represent 0, 1, or both at the same time. This strange property allows quantum computers to simultaneously perform multiple calculations, solving problems that would be intractable for classical computers.
The qubits in the new chip are artificial atoms made from defects in diamond, which can be prodded with visible light and microwaves to emit photons that carry quantum information. The process, which Englund and his team describe today in Nature, is a hybrid approach, in which carefully selected “quantum micro chiplets” containing multiple diamond-based qubits are placed on an aluminum nitride photonic integrated circuit. .... "
Quiet Ornithopter Drones
Clearly, surveillance will soon take a different angle. The 'biomimicry' here draws on birds to some degree. The differences are interesting; they can point to changes that would be useful. Biomimicry for emergent tech is often covered here.
High Performance Ornithopter Drone Is Quiet, Efficient, and Safe
Flapping wings instead of propellers help this bird-inspired drone hold its own against quadrotors By Evan Ackerman in IEEE Spectrum
The vast majority of drones are rotary-wing systems (like quadrotors), and for good reason: They’re cheap, they’re easy, they scale up and down well, and we’re getting quite good at controlling them, even in very challenging environments. For most applications, though, drones lose out to birds and their flapping wings in almost every way—flapping wings are very efficient, enable astonishing agility, and are much safer, able to make compliant contact with surfaces rather than shredding them like a rotor system does. But flapping wings have their challenges too: Making flapping-wing robots is so much more difficult than just duct taping spinning motors to a frame that, with a few exceptions, we haven’t seen nearly as much improvement as we have in more conventional drones. .... "
Problems from Data Intensive Science
Quite interesting thoughts: basically, managing data for many needs and motivations, coming from many directions. And please consider the metadata too; it has brought us to our knees many times!
Thorny Problems in Data (-Intensive) Science
By Christine L. Borgman, Michael J. Scroggins, Irene V. Pasquetto, R. Stuart Geiger, Bernadette M. Boscoe, Peter T. Darch, Charlotte Cabasse-Mazel, Cheryl Thompson, Milena S. Golshan
Communications of the ACM, August 2020, Vol. 63 No. 8, Pages 30-32 10.1145/3408047
As science comes to depend ever more heavily on computational methods and complex data pipelines, many non-tenure track scientists find themselves precariously employed in positions grouped under the catch-all term "data science." Over the last decade, we have worked in a diverse array of scientific fields, specializations, and sectors, across the physical, life, and social sciences; professional fields such as medicine, business, and engineering; mathematics, statistics, and computer and information science; the digital humanities; and data-intensive citizen science and peer production projects inside and out of the academy.3,7,8,15 We have used ethnographic methods to observe and participate in scientific research, semi-structured interviews to understand the motivations of scientists, and document analysis to illustrate how science is assembled with data and code. Our research subjects range from principal investigators at the top of their fields to first-year graduate students trying to find their footing. Throughout, we have focused on the multiple challenges faced by scientists who, through inclination or circumstance, work as data scientists.
The "thorny problems" we identify are brambly institutional challenges associated with data in data-intensive science. While many of these problems are specific to academe, some may be shared by data scientists outside the university. These problems are not readily curable, hence we conclude with guidance to stakeholders in data-intensive research. ... ' (full article at the link)