Microsoft seems to be continuing its retreat from offering Cortana as a general assistant, but has implied it wants to continue it as a productivity aid for its business tools. It is removing Cortana from a number of well-known third-party devices and decreasing support for skill development. You would think they would keep the presence just to learn from it.
Microsoft will shut down the Cortana iOS and Android apps in 2021
It will also remove the digital assistant from Harman Kardon Invoke speakers.
Igor Bonifacic, @igorbonifacic in Engadget
We already knew Cortana’s days as a consumer-facing digital assistant were numbered after Microsoft said earlier this year it would remove the AI from its Android launcher app. But the company has now detailed additional cuts that users are likely to feel more keenly.
The changes won't happen all at once. To start, Microsoft will end support for third-party Cortana skills on September 7th, 2020. In early 2021, Microsoft then plans to discontinue the Cortana apps on iOS and Android, as well as remove the current Cortana functionality the first-generation Surface Headphones feature. Sometime in early 2021, Harman Kardon Invoke speakers will lose access to the digital assistant as well. ... "
Friday, July 31, 2020
Conversing Between Soldiers, Robots
Will be interesting to see how contextual such conversation can be. It would seem risk and embedded goals would also be important. Following this to see what more I can learn.
Army Research Enables Conversations Between Soldiers, Robots
U.S. Army Research Laboratory
July 27, 2020
Researchers from the U.S. Army Combat Capabilities Development Command's Army Research Laboratory (ARL) and the University of Southern California's Institute for Creative Technologies have developed the Joint Understanding and Dialogue Interface (JUDI) capability, enabling conversations between soldiers and autonomous systems. ARL's Matthew Marge said JUDI enables interactions in tactical operations in which verbal task instructions can be employed for command and control of a mobile robot, and allows such a robot to request clarification or provide status updates as tasks are completed. Said Marge, "JUDI's ability to leverage natural language will reduce the learning curve for soldiers who will need to control or team with robots, some of which may contribute different capabilities to a mission, like scouting or delivery of supplies." ...
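Though JUDI's internals aren't public, the clarification behavior described here can be sketched as a toy dialogue loop. All function names, intents, and responses below are hypothetical illustrations, not the actual JUDI interface:

```python
# Toy sketch of a clarification-style dialogue loop in the spirit of what the
# article describes: accept a verbal task instruction, and either confirm it
# or ask for the missing detail. All names here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    action: str               # e.g., "move", "scout"
    target: Optional[str]     # e.g., "the ridge"

KNOWN_ACTIONS = {"move", "scout", "report"}

def parse(utterance: str) -> Command:
    """Naive intent parser: first word is the action, the rest is the target."""
    words = utterance.lower().split()
    action = words[0] if words else ""
    target = " ".join(words[1:]) or None
    return Command(action, target)

def respond(utterance: str) -> str:
    """Confirm the command, or request clarification when something is missing."""
    cmd = parse(utterance)
    if cmd.action not in KNOWN_ACTIONS:
        return f"I don't know how to '{cmd.action}'. Can you rephrase?"
    if cmd.target is None:
        return f"Where should I {cmd.action}?"  # ask for the missing argument
    return f"Acknowledged: {cmd.action} -> {cmd.target}."
```

A real system would replace the one-line parser with a trained language-understanding model; the dialogue policy itself, deciding between confirming and clarifying, is the part the article emphasizes.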
Bad Habit or Addiction to Technology?
A thing to look at again .... ever since gaming started we have been examining it, but now it's everywhere, all the time. Examining addiction vs. negative behavior: I think with the integration of a stronger social component, amplified by social pressure, it's harder yet to distinguish.
Are We Addicted to Technology? By Logan Kugler, Communications of the ACM, August 2020, Vol. 63 No. 8, Pages 15-16
10.1145/3403966
It's easy to think the world is suffering from full-blown technology addiction.
We read daily headlines about how social media platforms threaten our mental health, our relationships, and even democratic society itself. We hear smartphone addiction is the latest scourge sweeping the nation's youth, and we even see tech leaders like Chris Hughes, who co-founded Facebook, publicly call for the break-up of the firm he created because of its addictive content and features.
It certainly seems like "technology addiction" is a real condition and that it is everywhere. But the truth is a little less black and white.
Technology addiction is a broad term that isn't always well defined. It can mean any type of negative behavior across video gaming, smartphone usage, and use of social media platforms like Facebook. It is medically unclear if these negative behaviors are actually addictive, and it is difficult to tell if these behaviors are due to the way the technology in question works or because we have a hard time controlling our own use of individual technologies.
Video game addiction was added by the World Health Organization (WHO) in 2018 to its International Classification of Diseases, which the organization describes as the international standard for disease reporting. The move was welcomed by some who see video game addiction as a real disease, but it was contested by others who argued that video game addictions—and other types of technology addiction—do not meet clinical standards of addiction.
While everybody seems to agree video gaming in excess can cause harm, there is less consensus on whether or not smartphones and consumer technology have negative effects on our behavior and, if so, how to classify these effects.
Bad Habit or Actual Addiction?
WHO says video game addiction occurs when gaming interferes with life, and the individual is unable to stop gaming despite this interference. It also says this severity of behavior must occur for a year or more to classify as an addiction.
Clearly, some people experience real physical and mental harm from overusing video games.
"For gamers who struggle with video game addiction, it's a real condition that impacts many areas of life, including school, employment, mental and physical health, and relationships," says Cam Adair, founder of Game Quitters, a video game addiction support group. Adair describes himself as a video game addict who was hooked for 10 years, playing up to 16 hours a day, until the habit caused problems in his life, including forcing him to drop out of school. Today, he speaks and writes about his recovery, and helps other video game addicts kick the habit. He sees validation for video game addiction as a harmful condition worth treating in the 75,000 people in 95 countries looking for help on Game Quitters every month.
Adair sees clear negative effects from excessive video gaming every day in the people he helps. Extreme video game addicts, he says, may neglect to eat, sleep, or to perform work or school duties. "The most common case I see is a college student, usually male, who is now beginning to fail school and can't seem to get themselves away from games," says Adair. ... "
Why Isn't AI used More?
Still narrowly defined; fear of bias claims and hype are creating a cautious reaction. Link it to other analytics.
AI Is All the Rage. So Why Aren’t More Businesses Using It?
By Wired via ACM
In late 2017, AB InBev, the Belgian giant behind Budweiser and other beers, began adding a little artificial intelligence to its brewing recipe. Using data collected from a brewery in Newark, NJ, the company developed an AI algorithm to predict potential problems with the filtration process used to remove impurities from beer.
Paul Silverman, who runs the New Jersey Beer Company, a small operation not far from the AB InBev brewery, says his team isn't even using computers, let alone artificial intelligence (AI). "We sit around tasting beer and thinking about what to make next," he says. "We're very un-computerized."
The divide between the two breweries highlights the pace at which AI is being adopted by U.S. companies. With so much hype around artificial intelligence, you might imagine that it's everywhere. In fact, a new report says fewer than 10 percent of companies—primarily larger ones—are using the technology.
The findings emerge from one of the broadest efforts to date to gauge the use of AI. The US Census Bureau surveyed 583,000 US businesses in late 2018 about their use of AI and other advanced technologies. The results were revealed in a research paper presented at a virtual conference held by the National Bureau of Economic Research on July 16. ... "
From Wired https://www.wired.com/story/ai-why-not-more-businesses-use/
P&G Had a Good Year
Would have liked to see much more detail; consult the WSJ.
P&G has had a very good year in the WSJ
07/30/2020
Procter & Gamble posted its single biggest yearly sales gain since 2006 as, around the globe, the pandemic kept consumers at home and focused on staying clean and safe. “On the whole, with health, hygiene and cleaning, consumers’ needs have changed forever,” P&G CFO Jon Moeller said. “Maybe not to the degree that’s happened recently. But it’s hard to imagine we’ll snap back to the old world.” ... "
WSJ quoted, and more there
Thursday, July 30, 2020
Is the Pandemic Breaking AI?
This quickly came to mind in an effort underway: if an event is truly rare, it has generated less data, and current AI methods work best with lots of data. Note the mention of CPG: are the buying patterns now so different that AI becomes worthless? Important thoughts here about the future of the application of AI.
How the Coronavirus Pandemic Is Breaking Artificial Intelligence and How to Fix It
By Gizmodo via ACM
Artificial intelligence algorithms are prone to becoming unreliable when rare events like the Covid-19 pandemic happen.
As covid-19 disrupted the world in March, online retail giant Amazon struggled to respond to the sudden shift caused by the pandemic. Household items like bottled water and toilet paper, which never ran out of stock, suddenly became in short supply. One- and two-day deliveries were delayed for several days. Though Amazon CEO Jeff Bezos would go on to make $24 billion during the pandemic, initially, the company struggled with adjusting its logistics, transportation, supply chain, purchasing, and third-party seller processes to prioritize stocking and delivering higher-priority items.
Under normal circumstances, Amazon's complicated logistics are mostly handled by artificial intelligence algorithms. Honed on billions of sales and deliveries, these systems accurately predict how much of each item will be sold, when to replenish stock at fulfillment centers, and how to bundle deliveries to minimize travel distances. But as the coronavirus pandemic crisis has changed our daily habits and life patterns, those predictions are no longer valid.
"In the CPG [consumer packaged goods] industry, the consumer buying patterns during this pandemic has shifted immensely," Rajeev Sharma, SVP and global head of enterprise AI solutions & cognitive engineering at AI consultancy firm Pactera Edge, told Gizmodo. "There is a tendency of panic buying of items in larger quantities and of different sizes and quantities. The [AI] models may have never seen such spikes in the past and hence would give less accurate outputs."
Among the many things the coronavirus outbreak has highlighted is how fragile our AI systems are. And as automation continues to become a bigger part of everything we do, we need new approaches to ensure our AI systems remain robust in the face of black swan events that cause widespread disruptions.
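A toy illustration of the failure mode the piece describes, with invented numbers: a forecaster fit to stable historical demand is wildly wrong the day a rare event shifts the distribution.

```python
# A demand forecaster trained on stable history fails on a panic-buying spike.
# All numbers here are invented for illustration.

def moving_average_forecast(history, window=7):
    """Predict next-day demand as the mean of the last `window` days."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Stable pre-pandemic demand: roughly 100 units/day
stable = [100, 98, 102, 101, 99, 100, 103]
pred = moving_average_forecast(stable)

normal_error = abs(100 - pred)   # a typical next day: error well under 1 unit
spike_error = abs(400 - pred)    # panic-buying day: error of roughly 300 units
```

The same structural problem hits far more sophisticated models: whatever the architecture, the training data simply contains no examples of the new regime.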
SeroLearn Demo
I see that SeroLearn is moving toward a version 2.0. You can sign up to see the update, and you can try their interactive demo online. It is in active use by the DoD. See: http://serolearn.com/
... Sign up to experience Sero! 2.0 concept mapping-based assessments for free. In the meantime, try our interactive demo! ...
Britannica Beyond
Brought to my attention: a new way to deliver the equity of Britannica? Click through to try it. Surprised at its emergence. I used to know several people who worked with Britannica.
Curiosity is at the core of Britannica’s mission. And there is no better way to foster curiosity than by asking questions. ...
Britannica is proud to announce the newest addition to the Britannica family: Britannica Beyond.
As a question and answer platform, Beyond helps you dig deeper and get answers to your most burning questions directly from the expert editors at Britannica and from other knowledgeable users.
From current events to age old mysteries, untangle the unknown and engage your curiosity at Britannica Beyond. ....
AI Safer from Hackers
(See details at the link)
A new way to train AI systems could keep them safer from hackers
Technology Review: Blogs: Mims's Bits, by Karen Hao
Artificial intelligence
" ... The research: Bo Li (named one of this year’s MIT Technology Review Innovators Under 35) and her colleagues at the University of Illinois at Urbana-Champaign are now proposing a new method for training such deep-learning systems https://arxiv.org/pdf/2002.11821.pdf to be more failproof and thus trustworthy in safety-critical scenarios. They pit the neural network responsible for image reconstruction against another neural network responsible for generating adversarial examples, in a style similar to GAN algorithms. Through iterative rounds, the adversarial network attempts to fool the reconstruction network into producing things that aren’t part of the ground truth, and the reconstruction network continuously tweaks itself to avoid being fooled, making it safer to deploy in the real world. ... "\
Wednesday, July 29, 2020
Ant Algorithm for Commercial Fleets
This kind of biological-behavior mimicry was experimented with in some yard applications, and was found to work better under some path variability.
Ant Algorithms Help Fleet Operators Halve Emissions
The Engineer (U.K.)
July 27, 2020
Researchers at Aston University in the U.K. have developed software that imitates how ants share knowledge, in an effort to help cities and towns reduce emissions and achieve clean air targets. The researchers found that ants can keep a record of the best solutions to problems and update their knowledge similarly to how computer algorithms do so. The researchers were able to improve these ant algorithms to reduce the number of decisions they make and apply that knowledge to city-scale fleet-routing problems. Said Aston's Darren Chitty, "Algorithms based on the foraging behavior of ants have long been used to solve vehicle routing problems, but now we have found how to scale these up to city-size fleets operating over several weeks in much less time than before. It means much larger fleet optimization problems can be tackled within reasonable timescales using software a user can put on their laptop."
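The foraging metaphor maps naturally to code: ants build routes by following pheromone trails, short routes get reinforced, and trails evaporate over time. Here is a minimal textbook ant-colony-optimization sketch for a tiny routing problem (textbook ACO, not Aston's actual software):

```python
# Textbook ant colony optimization on a tiny tour-routing problem. Shorter
# tours deposit more pheromone; evaporation forgets stale solutions.
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def aco_route(dist, n_ants=10, n_iters=50, evaporation=0.5, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, n))
            while unvisited:
                cur, cand = tour[-1], sorted(unvisited)
                # next stop chosen proportionally to pheromone / distance
                weights = [pheromone[cur][j] / dist[cur][j] for j in cand]
                nxt = rng.choices(cand, weights=weights)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            length = tour_length(tour, dist)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(n):  # deposit: shorter tours leave stronger trails
                a, b = tour[i], tour[(i + 1) % n]
                pheromone[a][b] += 1.0 / length
                pheromone[b][a] += 1.0 / length
        for i in range(n):      # evaporate old trails
            for j in range(n):
                pheromone[i][j] *= 1 - evaporation
    return best_tour, best_len
```

On four stops at the corners of a unit square, this reliably finds the perimeter tour of length 4. The scaling work the article describes is about cutting the number of such decisions so the same idea can handle city-sized fleets over multi-week horizons.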
Numbers, Mathematics and the Reality of Science
Why can numbers do such a good job of describing reality? Can they describe all of reality?
Recently republished: https://medium.com/@ruth.ym.ng/do-numbers-exist-251e9b61508
Do Numbers Exist? by Ruth Ng
November 2nd 2018
In 1960, Eugene Wigner began the closing paragraph of his paper The Unreasonable Effectiveness of Mathematics in the Natural Sciences with a beautiful summary of the problem philosophers face when it comes to the existence of numbers. He said:
“The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve.”
He’s talking about the sheer power, the disproportionate usefulness and beauty of mathematics. Its ability to seemingly describe reality in a way our ordinary language never could is uncanny.
The Collatz Conjecture is one such curiosity, as is the Fibonacci sequence. (If these sorts of things interest you, my favourite books on this kind of thing are Ian Stewart’s Incredible Numbers, Freiberger & Thomas’ Numericon and David Acheson’s 1089.) ...
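For readers who haven't met it, the Collatz conjecture is easy to state in code: halve even numbers, map odd n to 3n + 1, and the claim, verified numerically for a huge range of n but still unproven, is that every positive integer eventually reaches 1.

```python
# The Collatz rule: even numbers are halved, odd numbers map to 3n + 1.
# The conjecture asserts this always reaches 1.

def collatz_steps(n: int) -> int:
    """Count the steps for n to reach 1 under the Collatz rule."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# 27 is a famous small case: it climbs as high as 9,232 before falling to 1.
print(collatz_steps(27))  # 111 steps
```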
BMW Takes Android Auto Forward
Continue to be a student of in-car use of assistants; here, advances in Android Auto. Have been using Echo Auto for some time, and am disappointed in its lack of advance into other mobile assistant uses. What will be the key next applications?
BMW's update with wireless Android Auto is rolling out
You may need to visit a dealership to get it.
Richard Lawler, @Rjcc in Engadget
At the end of last year BMW announced plans to add wireless Android Auto support on several of its vehicles, and according to Android Police, the update is now available. BMW announced it would arrive in July for vehicles using version 7.0 of its operating system, and while they weren’t able to snag it over-the-air, prompting the dealer for an update during a visit got the correct software installed.
One wrinkle to be aware of is that this support works only via wireless, so not every Android phone can make it happen (Google’s official list is made up mostly of Nexus, Pixel and Samsung devices). That said, they report it worked well with only a few hitches, which is similar to Autoblog’s experience with improved CarPlay support on this version of the software. For owners curious about how it will all work out, BMW has put together a demo video (below), and it should be more widely available soon. .... "
Google Assistant vs Samsung Bixby
We were early testers of Bixby for general appliance assistance. I see less of it there now, but have not since experienced it broadly on the phone. An indication of coming changes in its broad use?
Google reportedly negotiating with Samsung to push Assistant over Bixby
Google is said to want more prominence for its own services like the Play Store
By Sam Byford, @345triangle
Google and Samsung are in discussions for a deal that would give the US tech giant’s services more prominence on Samsung phones at the expense of those from the Korean manufacturer, according to a report by Bloomberg. The deal would reportedly involve promoting the Google Assistant and the Play Store over Samsung’s own alternatives. ..."
CNet Looks at Alexa Event
I attended; good overall. Here's a general overview.
Amazon's Alexa event shows the future of the Echo's voice assistant
Amazon hasn't revealed any major new Alexa-powered hardware this year, but today's Alexa Live developer conference gives insights into its voice-centric priorities moving forward.
By David Priest
Last year's fully remote Alexa Live developer conference was good practice for this year, Daniel Rausch, Amazon's vice president of smart home, joked with me on the phone -- even though no one knew they were practicing at the time. It's late July in a year racked by pandemic, and although Amazon has not released a single major piece of smart home hardware, Rausch is excited.
"It's by far the largest set of developer-facing announcements about new features and new tools that we've ever [released] at once," said Rausch -- some of which he believes "represent a revolution" for a voice assistant now over five years old. So what exactly are these new features, and how are they going to impact you? Let's dive in. .... "
SAP to Take Qualtrics Public
Another large cloud competitor. Previously mentioned here. A key means of gathering customer data to drive emerging analytics?
CustomerThink: SAP Takes Qualtrics Public
The News
On July 26, 2020, not two years after announcing the acquisition of Qualtrics, SAP announced its intent to take Qualtrics public. The timeline is yet to be communicated.
Their announcement.
What the press release basically says is that SAP’s cloud growth, including Qualtrics, was a ‘great success’. SAP itself wants to remain in control by keeping a majority stake in Qualtrics after the spin-off, while Qualtrics founder Ryan Smith wants to be the ‘largest independent shareholder’.
SAP insists it is fully committed to the Qualtrics XM platform as a key element of its Intelligent Enterprise strategy, but with Qualtrics being a part of the SAP ecosystem instead of a part of SAP itself.
For your convenience the full press release is quoted here.
WALLDORF — SAP SE (NYSE: SAP) today announced its intent to take Qualtrics public through an initial public offering (IPO) in the United States.
Qualtrics is the market leader and creator of the Experience Management (XM) category, a large, fast-growing and rapidly evolving market. SAP intends to remain the majority owner of Qualtrics. SAP’s primary objective for the IPO is to fortify Qualtrics’ ability to capture its full market potential within Experience Management. This will help to increase Qualtrics’ autonomy and enable it to expand its footprint both within SAP’s customer base and beyond.
“SAP’s acquisition of Qualtrics has been a great success and has outperformed our expectations with 2019 cloud growth in excess of 40 percent, demonstrating very strong performance in the current setup,” SAP CEO Christian Klein said. “As Ryan Smith, Zig Serafin and I worked together, we decided that an IPO would provide the greatest opportunity for Qualtrics to grow the Experience Management category, serve its customers, explore its own acquisition strategy and continue building the best talent. SAP will remain Qualtrics’ largest and most important go-to-market and research and development (R&D) partner while giving Qualtrics greater independence to broaden its base by partnering and building out the entire experience management ecosystem.”
Qualtrics, which is part of SAP’s cloud portfolio, has operated with greater autonomy than other companies SAP had previously acquired. The founder and current management team of Qualtrics will continue to operate the company.
“When we launched the Experience Management category, our goal was always to help as many organizations as possible leverage the XM Platform as a system of action,” Qualtrics Founder Ryan Smith said. “SAP is an incredible partner with unprecedented global reach, and we couldn’t be more excited about continuing the partnership. This will allow us to continue building out the XM ecosystem across a broad array of partners.” ... '
Talk: Creating Generalizable AI
Looks to be a good piece. I am attending; a recording will be available after the talk.
August 11 Talk with Anima Anandkumar: "How to Create Generalizable AI"
Register now for the next free ACM TechTalk, "How to Create Generalizable AI," presented on Wednesday, August 11 at 2:00 PM ET/11:00 AM PT by Anima Anandkumar, Director of ML Research, NVIDIA; Bren Professor, California Institute of Technology. Michael Zeller, Head of AI Strategy & Solutions at Temasek and ACM SIGKDD Secretary/Treasurer, will moderate the questions and answers session following the talk.
Leave your comments and questions with our speaker now and any time before the live event on ACM's Discourse Page. And check out the page after the webcast for extended discussion with your peers in the computing community, as well as further resources on AI and Machine Learning.
(If you'd like to attend but can't make it to the virtual event, you still need to register to receive a recording of the TechTalk when it becomes available.)
Note: You can stream this and all ACM TechTalks on your mobile device, including smartphones and tablets.
Current deep-learning benchmarks focus on generalization on the same distribution as the training data. However, real-world applications require generalization to new, unseen scenarios, domains, and tasks. I'll present key ingredients that I believe are critical towards achieving this, including (1) compositional systems that have modular and interpretable components; (2) unsupervised learning to discover new concepts; (3) feedback mechanisms for robust inference; and (4) causal discovery and inference that capture underlying relationships and invariances. Domain knowledge and structure can help enable learning in these challenging settings. This talk is beginner-friendly and will give a high-level overview of these challenges.
Duration: 60 minutes (including audience Q&A)
Presenter:
Anima Anandkumar, Director of ML Research, NVIDIA; Bren Professor, California Institute of Technology
Anima Anandkumar is a Bren Professor at Caltech and Director of ML Research at NVIDIA. She was previously a Principal Scientist at Amazon Web Services. She has received several honors such as the Alfred P. Sloan Fellowship, NSF Career Award, Young Investigator Awards from DoD, and Faculty Fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum's Expert Network. She is passionate about designing principled AI algorithms and applying them in interdisciplinary applications. Her research focus is on unsupervised AI, optimization, and tensor methods.
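The abstract's opening point, that benchmarks test generalization on the same distribution as the training data while real applications demand generalization to unseen regimes, can be made concrete with a toy sketch (my own illustration, not from the talk): a flexible model fit on inputs near zero looks excellent on same-distribution test data, then fails badly on a shifted input distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training inputs clustered near 0; the model only ever sees this region.
x_train = rng.normal(0.0, 1.0, 200)
y_train = np.sin(x_train)

# A cubic polynomial fits sin(x) well over the training region...
coeffs = np.polyfit(x_train, y_train, deg=3)

def mse(x):
    """Mean squared error of the fitted polynomial against the true sin(x)."""
    return float(np.mean((np.polyval(coeffs, x) - np.sin(x)) ** 2))

err_iid = mse(rng.normal(0.0, 1.0, 500))  # same distribution: tiny error
err_ood = mse(rng.normal(8.0, 1.0, 500))  # unseen region: error explodes

print(f"in-distribution MSE:     {err_iid:.4f}")
print(f"out-of-distribution MSE: {err_ood:.1f}")
```

The polynomial extrapolates with its own growth rather than the bounded behavior of the true function, which is exactly the failure the talk's proposed ingredients (compositional structure, causal invariances, domain knowledge) aim to mitigate.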
Tuesday, July 28, 2020
CEO Moment: Leadership in a New Era
McKinsey's look at what the new leadership may look like.
The CEO moment: Leadership for a new era
July 21, 2020 | Article
By Carolyn Dewar, Scott Keller, Kevin Sneader, and Kurt Strovink
Article (12 pages)
COVID-19 has created a massive humanitarian challenge: millions ill and hundreds of thousands of lives lost; soaring unemployment rates in the world’s most robust economies; food banks stretched beyond capacity; governments straining to deliver critical services. The pandemic is also a challenge for businesses—and their CEOs—unlike any they have ever faced, forcing an abrupt dislocation of how employees work, how customers behave, how supply chains function, and even what ultimately constitutes business performance.
Confronting this unique moment, CEOs have shifted how they lead in expedient and ingenious ways. The changes may have been birthed of necessity, but they have great potential beyond this crisis. In this article, we explore four shifts in how CEOs are leading that are also better ways to lead a company: unlocking bolder (“10x”) aspirations, elevating their “to be” list to the same level as “to do” in their operating models, fully embracing stakeholder capitalism, and harnessing the full power of their CEO peer networks. If they become permanent, these shifts hold the potential to thoroughly recalibrate the organization and how it operates, the company’s performance potential, and its relationship to critical constituents.
Only CEOs can decide whether to continue leading in these new ways, and in so doing seize a once-in-a-generation opportunity to consciously evolve the very nature and impact of their role. Indeed, as we have written elsewhere, part of the role of the CEO is to serve as a chief calibrator—deciding the extent and degree of change needed. As part of this, CEOs must have a thesis of transformation that works in their company context. A good CEO is always scanning for signals and helping the organization deliver fine-tuned responses. A great CEO will see that this moment is a unique opportunity for self-calibration, with profound implications for the organization. ... "
Zagat Looks at the Future of Dining
The Small Business Lab looks at Zagat's future of dining study. Some instructive thoughts about where this is all moving.
"...Zagat, the restaurant review site favored by foodies, recently released a Future of Dining Study.
The study looks at the impact of COVID-19 on the eating habits of consumers and how they may evolve post-COVID. ... "
And Zagat's full study.
Do We Need a Theory of Everything?
Nicely put. The math does not need to be easy, or visual, or nice. It just needs to predict real things correctly.
Do We Need a Theory of Everything? in Nautilus
Posted By Sabine Hossenfelder
I get constantly asked if I could please comment on other people’s theories of everything. That could be Garrett Lisi’s E8 theory or Eric Weinstein’s geometric unity or Stephen Wolfram’s idea that the universe is but a big graph, and so on. Good, then. Let me tell you what I think about this. But I’m afraid it may not be what you wanted to hear.
Before we start, let me remind you what physicists mean by a “Theory of Everything.” For all we currently know, the universe and everything in it is held together by four fundamental interactions. That’s the electromagnetic force, the strong and the weak nuclear force, and gravity. All other forces that you are familiar with, say, the van der Waals force, or muscle force, or the force that’s pulling you down an infinite sequence of links on Wikipedia, these are all non-fundamental forces that derive from the four fundamental interactions. At least in principle.
This whole idea of a theory of everything is based on an unscientific premise.... "
Monday, July 27, 2020
Amazon Testing more Delivery with Scout
More testing underway.
Meet Scout: Amazon Is Taking Its Prime Delivery Robots to the South
USA Today
Dalvin Brown
July 22, 2020
Amazon has announced the deployment of its Scout autonomous delivery system to Atlanta, GA, and Franklin, TN, following year-long pilots elsewhere. The company in January 2019 launched Scout, an electric delivery robot, in the Seattle region before expanding to Irvine, CA, in August. The rovers can navigate around pets, pedestrians, and other objects on sidewalks; they are engineered to travel at a walking pace, and will initially be accompanied by an Amazon employee. The carriers help to reduce human-to-human contact during the coronavirus pandemic. Amazon Scout's Sean Scott said the service has helped the company fulfill growing customer demand during the crisis. The Atlanta and Franklin rollouts are intended to extend the service to diverse neighborhoods with different climates than those in which the robots currently operate. ... "
SAS: How AI Changes the Rules
As usual, SAS does a good job of discussing the impact of analytics of many kinds. Good non-technical depth. Requires free sign-up:
How AI Changes the Rules
New Imperatives for the Intelligent Organization
About this paper
Many leaders are excited about AI’s potential to profoundly transform organizations by making them more innovative and productive. But implementing AI will also lead to significant changes in how organizations are managed, according to our recent survey of more than 2,200 business leaders, managers and key contributors. Those survey respondents, representing organizations across the globe, expect that reaping the benefits of AI will require changes in workplace structures, technology strategies and technology governance.
AI will drive organizational change and ask more of top leaders. The majority of survey respondents expect that implementing AI will require more significant organizational change than other emerging technologies including cloud. AI demands more collaboration among people skilled in data management, data analytics, IT infrastructure, and systems development, as well as business and operational experts. This means that organizational leaders need to ensure that traditional silos don’t hinder advanced analytics efforts, and they must support the training required to build skills across their workforces.
AI will place new demands on the CIO and CTO. AI implementation will influence the choices CIOs and CTOs make in setting their broad technology agendas. They will need to prioritize developing foundational technology capabilities, from infrastructure and cybersecurity to data management and development processes — areas in which those with more advanced AI implementations are taking the lead compared with other respondents. CIOs will also need to manage the significant changes to software development and deployment processes that most respondents expect from AI. The survey also indicated that many CIOs will be charged with overseeing or supporting formal data governance efforts: CIOs and CTOs are more likely than other executives to be tasked with this.
AI will require an increased focus on risk management and ethics. The Global survey shows a broad awareness of the risks inherent in using AI, but few practitioners have taken action to create policies and processes to manage risks, including ethical, legal, reputational, and financial risks. Managing ethical risk is a particular area of opportunity. Those with more advanced AI practices are establishing processes and policies for data governance and risk management, including providing ways to explain how their algorithms deliver results. They point out that understanding how AI systems reach their conclusions is both an emerging best practice and a necessity, in order to ensure that the human intelligence that feeds and nurtures AI systems keeps pace with the machines’ advancements.
The report that follows explores these findings in depth. Read on to learn more about the changes that leaders must prepare for to successfully implement trusted AI. ... "
Toward Quantum Rainbows at Room Temperature
Starting to see some of these efforts link to my own work. Here is one of interest.
'Quantum Rainbow'—Photons of Switching Colors Allow Room-Temperature Quantum Computing
Purdue University News
July 20, 2020
Engineers at Purdue University have developed a quantum random walk method that could eventually allow computers to sift through data at incredibly fast speeds. A random walk involves an agent randomly moving to the right or left at each time interval, while a quantum agent can move to right and left simultaneously at each step. Purdue's Andrew Weiner said the new technique employs photons at specific colors or frequencies, which he described as "the quantum walk of the rainbow." The photons randomly change colors in a quantum manner during the walk, and this method can be conducted at room temperature because it uses photons rather than superconducting quantum bits. Performing experiments with integrated photonics and other elements used in lightwave or optical communication also reduces costs and adds compatibility with fiber-optics communications infrastructure. ... "
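The article's contrast between a classical agent that moves left or right at each step and a quantum agent that moves both ways at once can be simulated directly. Below is a minimal sketch (my own illustration, not Purdue's method, which uses photon frequencies) of a standard discrete-time coined quantum walk on a line: after many steps its spread grows linearly with time, while a classical random walk spreads only as the square root of time.

```python
import numpy as np

def quantum_walk(steps):
    """Discrete-time coined quantum walk on a line with a Hadamard coin.

    Returns the probability distribution over positions -steps..steps.
    """
    n = 2 * steps + 1
    # amp[position, coin]: complex amplitude for each (position, coin) pair
    amp = np.zeros((n, 2), dtype=complex)
    amp[steps, 0] = 1 / np.sqrt(2)        # symmetric initial coin state
    amp[steps, 1] = 1j / np.sqrt(2)
    hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        amp = amp @ hadamard.T            # the "coin flip": both outcomes at once
        nxt = np.zeros_like(amp)
        nxt[1:, 0] = amp[:-1, 0]          # coin state 0 steps right
        nxt[:-1, 1] = amp[1:, 1]          # coin state 1 steps left
        amp = nxt
    return (np.abs(amp) ** 2).sum(axis=1)

steps = 50
p = quantum_walk(steps)
positions = np.arange(-steps, steps + 1)
std = float(np.sqrt((p * positions**2).sum()))
print(f"quantum spread after {steps} steps: {std:.1f}")
print(f"classical diffusive spread:         {np.sqrt(steps):.1f}")
```

This ballistic (linear-in-time) spreading is the property that lets quantum walks explore a search space faster than classical diffusion; Purdue's contribution is realizing such a walk in photon color at room temperature rather than in superconducting qubits.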
Robots with Ultraviolet Light
Using ultraviolet light from robots to destroy viruses.
Heathrow Airport brings in robots to fight coronavirus in the BBC
Disinfection robots have been installed at Heathrow Airport as part of measures to help keep the passengers and staff safe from the coronavirus.
Previously used to tackle hospital acquired infections, the machines move through the airport terminals disinfecting high risk touch points like bathrooms and lifts. ... "
Google Pushing Advanced Wearables
Good to see Google pushing this research. I still consider this kind of focused wearable to have narrow uses until people accept it as part of their normal clothing.
Google is quietly experimenting with holographic glasses and smart tattoos
The search giant has been working on or funding a new generation of wearable technology.
Richard Nieva in CNET
A simple pair of sunglasses that projects holographic icons. A smartwatch that has a digital screen but analog hands. A temporary tattoo that, when applied to your skin, transforms your body into a living touchpad. A virtual reality controller that lets you pick up objects in digital worlds and feel their weight as you swing them around. Those are some of the projects Google has quietly been developing or funding, according to white papers and demo videos, in an effort to create the next generation of wearable technology devices.
The eyewear and smartwatch projects come from the search giant's Interaction Lab, an initiative aimed at intertwining digital and physical experiences. It's part of Google Research, an arm of the search giant with roots in academia that focuses on technical breakthroughs. The Interaction Lab was created within Google's hardware division in 2015, before it was spun out to join the company's research arm about two years ago, according to the resume of Alex Olwal, the lab's leader. Olwal, a senior Google researcher, previously worked at X, the company's self-described moonshot factory, and ATAP, Google's experimental hardware branch.
The goal of the Interaction Lab is to expand Google's "capabilities for rapid hardware prototyping of wearable concepts and interface technology," Olwal writes. Its initiatives appear to be more science experiment than product roadmap, with the likely goal of proving ideas rather than competing with the Apple Watch or Snapchat Spectacles. But taken together, they provide a glimpse at Google's ambitions for wearable tech.
The other projects were collaborations with researchers from universities around the world. At least two of them -- the VR controller and smart tattoos -- were partly funded through Google Faculty Research Awards, which support academic work related to computer science and engineering. The efforts highlight Google's close ties with the academic community, a bridge to the company's beginnings as a Stanford University grad school project by co-founders Larry Page and Sergey Brin that grew into a global behemoth with deep hooks into our lives. .... "
Google is quietly experimenting with holographic glasses and smart tattoos
The search giant has been working on or funding a new generation of wearable technology.
Richard Nieva in CNET
A simple pair of sunglasses that projects holographic icons. A smartwatch that has a digital screen but analog hands. A temporary tattoo that, when applied to your skin, transforms your body into a living touchpad. A virtual reality controller that lets you pick up objects in digital worlds and feel their weight as you swing them around. Those are some of the projects Google has quietly been developing or funding, according to white papers and demo videos, in an effort to create the next generation of wearable technology devices.
The eyewear and smartwatch projects come from the search giant's Interaction Lab, an initiative aimed at intertwining digital and physical experiences. It's part of Google Research, an arm of the search giant with roots in academia that focuses on technical breakthroughs. The Interaction Lab was created within Google's hardware division in 2015, before it was spun out to join the company's research arm about two years ago, according to the resume of Alex Olwal, the lab's leader. Olwal, a senior Google researcher, previously worked at X, the company's self-described moonshot factory, and ATAP, Google's experimental hardware branch.
The goal of the Interaction Lab is to expand Google's "capabilities for rapid hardware prototyping of wearable concepts and interface technology," Olwal writes. Its initiatives appear to be more science experiment than product roadmap, with the likely goal of proving ideas rather than competing with the Apple Watch or Snapchat Spectacles. But taken together, they provide a glimpse at Google's ambitions for wearable tech.
The other projects were collaborations with researchers from universities around the world. At least two of them -- the VR controller and smart tattoos -- were partly funded through Google Faculty Research Awards, which support academic work related to computer science and engineering. The efforts highlight Google's close ties with the academic community, a bridge to the company's beginnings as a Stanford University grad school project by co-founders Larry Page and Sergey Brin that grew into a global behemoth with deep hooks into our lives. .... "
AI and the Curious Consideration of Autonomous Intent
Context, autonomy, intent ... for autonomous AI. A considerable piece on this difficult problem; it does not come close to a technical solution.
On Whether AI Can Form ‘Intent’ Including In The Case Of Autonomous Cars
By Lance Eliot, the AI Trends Insider
These remarks all have something in common:
The devil made me do it
I didn’t mean to be mean to you
Something just came over me
I wanted to do it
You got what was coming to you
My motives were pure
What’s that all about?
You could say that those are all various ways in which someone might express their intent or intentions.
In some instances, the person is seemingly expressing their intent directly, while in other cases they appear to be avoiding being pinned down on their intentions and are trying to toss the intent onto the shoulders of someone or something else.
When we express our intent, there is no particular reason to necessarily believe that it is true per se.
A person can tell you their intentions and yet be lying through their teeth.
Or, a person can offer their intentions and genuinely believe that they are forthcoming in their indication, and yet it might be entirely fabricated and concocted as a kind of rationalization after-the-fact. ... "
Identifying Birds from Behind
Not banned for bias yet. So at least we can continue to fine-tune accuracy as needed and apply it to studies that help the avian world.
Birdwatching AI can recognise individual birds from behind
in NewScientist By Michael Le Page
Artificial intelligence has been trained to recognise individual birds, which is more than we humans are capable of. The system is being developed for biologists studying wild animals, but could be adapted so that people can identify individual birds in their surroundings.
André Ferreira at the Center for Functional and Evolutionary Ecology in Montpellier, France, started the project while studying how individual sociable weavers contribute to their colonies. This is normally done by putting coloured tags on their legs and sitting by nests to watch them, which is very time-consuming. Ferreira tried filming the colonies instead, but often the coloured tags weren’t visible in the footage, so he and his colleagues turned to AI. ... "
Sunday, July 26, 2020
Neural Search for Scaling and Efficiency
Useful look at a different, more precise kind of search. Neural search has the opportunity to learn over time and to be more semantically correct. It should also scale better to very large databases. Based on some research I am doing on the topic of efficient search.
What is Neural Search, and Why Should I Care?
Based on a paper, with a more complete technical view.
In Towards Data Science by Alex C-G (May require sign-in or incognito)
AI-powered search with less effort, more flexibility
Neural Search? What’s That?
In short, neural search is a new approach to retrieving information. Instead of telling a machine a set of rules to understand what data is what, neural search does the same thing with a pre-trained neural network. This means developers don’t have to write every little rule, saving time and headaches, while the system trains itself to get better as it goes along. One such company providing an open-source neural search framework is Jina (An Easier Way to Build Search in the Cloud) .
Background
Search is big business, and getting bigger every day. Just a few years ago, searching meant typing something into a text box (ah, those heady days of Yahoo! and Altavista). Now search encompasses text, voice, music, photos, videos, products, and so much more. Just before the turn of the millennium there were only 3.5 million Google searches a day. Today (according to the top result for search term 2020 google searches per day) that figure could be as high as 5 billion and rising, more than 1,000 times more. That’s not to mention all the billions of Wikipedia articles, Amazon products, and Spotify playlists searched by millions of people every day from their phones, computers, and virtual assistants..... "
(See full graphics at the link above)
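As a rough sketch of the ranking idea described above: instead of hand-written keyword rules, documents and queries are mapped to vectors and ranked by similarity. The `embed` function below is my toy stand-in (hashed character trigrams, not Jina's API or any real model); an actual neural search system would use a pretrained neural encoder in its place.

```python
import math
import zlib

def embed(text: str, dims: int = 64) -> list[float]:
    """Toy stand-in for a pretrained neural encoder: hash character
    trigrams into a fixed-size vector. Real neural search would use a
    learned model (e.g. a transformer encoder) here instead."""
    vec = [0.0] * dims
    t = f"  {text.lower()}  "
    for i in range(len(t) - 2):
        vec[zlib.crc32(t[i:i + 3].encode()) % dims] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: list[str]) -> list[str]:
    # Rank documents by embedding similarity instead of keyword rules.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)

docs = [
    "neural search ranks documents with embeddings",
    "classic keyword search uses hand-written rules",
    "spotify playlists for a rainy day",
]
print(search("searching with neural network embeddings", docs))
```

Swapping a better encoder into `embed` improves results without rewriting any retrieval rules, which is the point of the approach.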
Quantum Cryptography Explained
A good non-technical, but still challenging explanation of how key-based and then 'quantum cryptography' works. Also touches on some of the remaining challenges. In Physics Girl. Sent via Jeff Dyck.
Autonomous Ship Use Expanding
Continued look at autonomous maritime shipping to decrease the costs of large-scale supply chains.
Sea Machines raises $15 million for autonomous ship navigation
Kyle Wiggers
@Kyle_L_Wiggers in Venturebeat
Autonomous vessel software and systems provider Sea Machines Robotics today closed a $15 million funding round to accelerate deployment of its technologies in the unmanned naval boat and ship market. Sea Machines boldly claims this is one of the largest rounds for a tech company tackling marine and maritime use cases.
Self-steering vessels aren’t a new idea — but they are gaining steam. Earlier this year, IBM and Promare — a U.K.-based marine research and exploration charity — trialed a prototype of an AI-powered maritime navigation system ahead of a September 6th venture to send a ship across the Atlantic Ocean. In Norway, a crewless cargo ship called the Yara Birkeland is expected to go into commercial operation later in 2020. And Rolls-Royce previously demonstrated a fully autonomous passenger ferry in Finland and announced a partnership with Intel as part of a plan to bring self-guided cargo ships to seas by 2025. ... "
Swarms of Drones
We examined the idea of having swarms of drones perform subtasks in groups. It's been much kicked about, and recently implemented, especially for military applications.
AI helps drone swarms navigate through crowded, unfamiliar spaces
It could be key to self-driving cars, not to mention search and rescue.
By Jon Fingas in Engadget
Drone swarms frequently fly outside for a reason: it’s difficult for the robotic fliers to navigate in tight spaces without hitting each other. Caltech researchers may have a way for those drones to fly indoors, however. They’ve developed a machine learning algorithm, Global-to-Local Safe Autonomy Synthesis (GLAS), that lets swarms navigate crowded, unmapped environments. The system works by giving each drone a degree of independence that lets it adapt to a changing environment.
Instead of relying on existing maps or the routes of every other drone in the swarm, GLAS has each machine learning how to navigate a given space on its own even as it coordinates with others. This decentralized model both helps the drones improvise and makes scaling the swarm easier, as the computing is spread across many robots. ... "
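The decentralized idea can be sketched with a toy potential-field rule (my illustration, not the GLAS algorithm): each drone steers toward its own goal while being repelled by any neighbour inside its sensing radius, with no shared map and no central planner.

```python
import math

SENSE_RADIUS = 2.0  # a drone reacts only to neighbours it can "see"
STEP = 0.2          # distance moved per update

def step(pos, goal, neighbours):
    """One decentralised update: unit attraction toward the goal plus
    repulsion from each neighbour inside the sensing radius."""
    vx, vy = goal[0] - pos[0], goal[1] - pos[1]
    norm = math.hypot(vx, vy) or 1.0
    vx, vy = vx / norm, vy / norm
    for nx, ny in neighbours:
        dx, dy = pos[0] - nx, pos[1] - ny
        d = math.hypot(dx, dy)
        if 0 < d < SENSE_RADIUS:
            vx += dx / (d * d)  # push away, stronger when closer
            vy += dy / (d * d)
    norm = math.hypot(vx, vy) or 1.0
    return (pos[0] + STEP * vx / norm, pos[1] + STEP * vy / norm)

# Two drones in opposite "lanes"; each senses only the other.
a, b = (0.0, 0.0), (4.0, 0.5)
goal_a, goal_b = (4.0, 0.0), (0.0, 0.5)
for _ in range(80):
    a, b = step(a, goal_a, [b]), step(b, goal_b, [a])
print(a, b)
```

Each drone computes its own update from purely local information, so adding more drones adds more compute along with more vehicles, which is the scaling property the article describes.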
Should Workers Accept Pay Cuts for Working Remotely?
Excerpts and analysis of a survey; more details at the link.
Should workers accept pay cuts in exchange for working remotely? by Tom Ryan in Retailwire
A recent survey of 600 U.S. adults found 66 percent willing to take a pay cut for the flexibility of working remotely.
To what degree varied, however.
Fourteen percent would take a one to four percent cut;
Twenty-nine percent would take a five-to-14 percent cut;
Seventeen percent would take a 15-to-24 percent cut;
Seven percent would take a 25 percent or more cut;
Thirty-four percent would not take a lower salary for flexible remote work.
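For a rough sense of scale, the bands above imply an average accepted cut of about 8.5 percent across all respondents, if one assumes each band's midpoint (and, arbitrarily, 30 percent for the open-ended top band). Note the bands sum to 101 percent due to rounding in the source.

```python
# Survey bands from the excerpt: (share of respondents, assumed midpoint cut).
# The midpoints are my assumption; "25% or more" is arbitrarily set to 30%.
bands = [
    (0.14, 2.5),   # 1-4% cut
    (0.29, 9.5),   # 5-14% cut
    (0.17, 19.5),  # 15-24% cut
    (0.07, 30.0),  # 25%+ cut (assumed midpoint)
    (0.34, 0.0),   # no cut accepted
]
avg_cut = sum(share * cut for share, cut in bands)
print(f"Implied average accepted cut: {avg_cut:.1f}%")  # → 8.5%
```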
Saturday, July 25, 2020
Webinar: Graphs for Cyber Security
Brought to my attention. Addresses my particular favorite advice: Carefully define your intent, direction and process when planning to do something important and complex. Visual is good. Upcoming Webinar:
Webinar: Graphs for Cyber Security
On Thursday, August 6, 2020 07:00 PT | 10:00 ET | 15:00 BT | 16:00 CET
REGISTER NOW
Find out how graphs can support solving Cyber Security problems.
In the first part of this webinar, we will find out what makes graph databases so unique and powerful, and how we can use them to solve complex Cyber Security use cases, such as fake accounts, workloads hacks or application control.
In the second part of the webinar, we will demonstrate live how to build a suitable data model and which algorithms best to use to solve the respective problems. And we will provide tips and tricks to answer your questions.
Hope to see you there,
Sabine Seitz, Neo4j
What is a Graph Database?
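As a tiny illustration of the modeling idea behind the webinar (plain Python standing in for a graph database; the accounts and IPs are made up): treating logins as account-to-IP relationships makes a fake-account ring fall out as one heavily shared node.

```python
from collections import defaultdict

# Edges: (account) -[LOGGED_IN_FROM]-> (ip). In a graph database this would
# be a relationship; here a plain adjacency map stands in.
logins = [
    ("alice", "10.0.0.1"), ("bob", "10.0.0.2"),
    ("spam_1", "10.0.0.9"), ("spam_2", "10.0.0.9"),
    ("spam_3", "10.0.0.9"), ("carol", "10.0.0.3"),
]

accounts_by_ip = defaultdict(set)
for account, ip in logins:
    accounts_by_ip[ip].add(account)

# Flag IPs shared by 3+ accounts -- a classic fake-account-ring signal.
suspicious = {ip: accts for ip, accts in accounts_by_ip.items()
              if len(accts) >= 3}
print(suspicious)
```

In a real graph database the same question is one short pattern-matching query, and it keeps working as more relationship types (devices, payment methods) are layered in.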
Article Summary Experiment: Some Simple Economics of the Blockchain
I had mentioned this article previously; found it interesting and well written. Here is an experiment with its content. Below it is excerpted, with summary points, a short summary video, and a link to the full (10-page) text. What do you think?
Some Simple Economics of the Blockchain
By Christian Catalini, Joshua S. Gans
Communications of the ACM, July 2020, Vol. 63 No. 7, Pages 80-90
10.1145/3359552
GPT-3 and Crypto Assets
Just recently been talking about digital assets and how advances in crypto might influence them. Here is a piece from Coindesk that considers some of the issues. I am still thinking this over; let me know if you have comments. Full opinion piece at the link.
Crypto Needn’t Fear GPT-3. It Should Embrace It
Jul 22, 2020 at 17:47 UTC By Jesus Rodriguez in Coindesk
Jesus Rodriguez is the CEO of IntoTheBlock, a market intelligence platform for crypto assets. He has held leadership roles at major technology companies and hedge funds. He is an active investor, speaker, author and guest lecturer at Columbia University.
During the last few days, there has been an explosion of commentary in the crypto community about OpenAI’s new GPT-3 language generator model. Some of the comments express useful curiosity about GPT-3, while others are a bit to the extreme, asserting that the crypto community should be terrified about it.
The interest is somewhat surprising because the GPT models are not exactly new and they have been making headlines in the machine learning community for over a year now. The research behind the first GPT model was published in June 2018, followed by GPT-2 in February 2019 and most recently GPT-3 two months ago.
See also: What Is GPT-3 and Should We Be Terrified?
I think it is unlikely that GPT-3 by itself can have a major impact in the crypto ecosystem. However, the techniques behind GPT-3 represent the biggest advancement in deep learning in the last few years and, consequently, can become incredibly relevant to the analysis of crypto-assets. In this article, I would like to take a few minutes to dive into some of the concepts behind GPT-3 and contextualize it to the crypto world. ....
Blueprint for Tools to Manage a Pandemic
Like the process and requirements statement for a specific set of goals. Often not done rigorously enough.
Blueprint for the Perfect Coronavirus App
ETH Zurich (Switzerland)
Felix Wursten
July 20, 2020
Researchers at the Swiss Federal Institute of Technology in Zurich (ETH Zurich) have outlined the ethical and legal challenges of developing and implementing digital tools for managing the Covid-19 pandemic. The authors highlighted contact-tracing applications, programs for assessing an infection's presence based on symptoms, apps to check compliance of quarantine regulations, and flow models like those Google uses for mobility reports. ETH Zurich's Effy Vayena said rigorous scientific validation must ensure digital tools work as intended, and confirm their efficacy and reliability. Ethical issues include ensuring data collected by apps is not used for any other purpose without users' prior knowledge, and deploying tools for limited periods to deter their misuse for population surveillance. Vayena said, "The basic principles—respecting autonomy and privacy, promoting healthcare and solidarity, and preventing new infections and malicious behavior—are the same everywhere."
Friday, July 24, 2020
US Wants to Build an Unhackable Quantum Internet
What does unhackable mean here?
U.S. Hatches Plan to Build Quantum Internet That Might Be Unhackable
The Washington Post
Jeanne Whalen
July 23, 2020
U.S. officials and scientists yesterday unveiled a plan to construct a potentially hackproof quantum Internet to operate parallel to the world's existing networks. The Department of Energy (DoE) and its national laboratories will form the project's main support pillar, and DoE official Paul Dabbar suggested it could be funded using some of the $500 million to $700 million in annual federal quantum information technology investments. A quantum Internet relies on entangled photons to share information over long distances without physical links; the race to create one is a global competition. Researchers said attempts to observe or disrupt photons or quantum bits in a quantum Internet would automatically change their state and destroy the transmitted information. A quantum Internet also could interconnect various quantum systems and boost their computing power. ... '
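The tamper-evidence claim can be illustrated with a toy BB84-style simulation (a classical caricature of the protocol, not real quantum mechanics): an eavesdropper who must measure in a randomly guessed basis disturbs the states, so roughly a quarter of the bits the legitimate parties later compare come out wrong, revealing the intrusion.

```python
import random

random.seed(7)
N = 4000  # photons sent

def measure(bit, prep_basis, meas_basis):
    # Same basis: the bit is recovered faithfully.
    # Mismatched basis: the outcome is random (the state is disturbed).
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def run(eavesdrop: bool) -> float:
    """Return the error rate on the sifted key (matching-basis rounds)."""
    errors = sifted = 0
    for _ in range(N):
        alice_bit = random.randint(0, 1)
        alice_basis = random.randint(0, 1)
        chan_bit, chan_basis = alice_bit, alice_basis
        if eavesdrop:
            eve_basis = random.randint(0, 1)
            chan_bit = measure(chan_bit, chan_basis, eve_basis)
            chan_basis = eve_basis  # photon travels on in Eve's basis
        bob_basis = random.randint(0, 1)
        bob_bit = measure(chan_bit, chan_basis, bob_basis)
        if bob_basis == alice_basis:  # keep only matching-basis rounds
            sifted += 1
            errors += bob_bit != alice_bit
    return errors / sifted

qber_no_eve = run(eavesdrop=False)
qber_eve = run(eavesdrop=True)
print(qber_no_eve, qber_eve)  # ~0.0 without Eve, ~0.25 with Eve
```

"Unhackable" in these reports means exactly this: interception does not go unnoticed, not that the endpoints or software around the link are secure.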
Exploring Exploration
A very old proposition says that if only we could make a system as inquisitive as a child, able to learn and extend its knowledge, we could get to intelligent machines. An ultimate form of machine learning. We are not close to that, but the attempt continues. I like the term: Exploring Exploration. BAIR takes a look and explains the idea and what is being done. So how do children explore to learn? And can we use that as a model for machines? Extensive and not too technical report.
Exploring Exploration: Comparing Children with RL Agents in Unified Environments
Eliza Kosoy, Jasmine Collins and David Chan Jul 24, 2020
Despite recent advances in artificial intelligence (AI) research, human children are still by far the best learners we know of, learning impressive skills like language and high-level reasoning from very little data. Children’s learning is supported by highly efficient, hypothesis-driven exploration: in fact, they explore so well that many machine learning researchers have been inspired to put videos like the one below in their talks to motivate research into exploration methods. However, because applying results from studies in developmental psychology can be difficult, this video is often the extent to which such research actually connects with human cognition. ... "
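One simple formal version of "inquisitive" exploration is a count-based novelty bonus in a bandit setting (my illustration, not the paper's method): options that have been tried rarely look artificially attractive, so the agent keeps probing the unfamiliar instead of settling on the first thing that worked.

```python
import random

random.seed(0)
ARMS = [0.2, 0.5, 0.8]  # hidden success probability of each option

def run(bonus_weight: float, steps: int = 2000) -> int:
    """Pull arms greedily on (estimated value + novelty bonus);
    return the index of the most-pulled arm."""
    counts = [0] * len(ARMS)
    values = [0.0] * len(ARMS)
    for _ in range(steps):
        # Count-based bonus: rarely tried arms look artificially good.
        scores = [values[a] + bonus_weight / (counts[a] + 1) ** 0.5
                  for a in range(len(ARMS))]
        a = scores.index(max(scores))
        reward = 1.0 if random.random() < ARMS[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # running mean
    return counts.index(max(counts))

greedy = run(bonus_weight=0.0)   # never explores: sticks with the first arm
curious = run(bonus_weight=1.0)  # novelty bonus discovers the best arm
print(greedy, curious)
```

The purely greedy agent locks onto arm 0 and never learns the other arms exist in any meaningful sense; the bonus-driven agent tries everything and converges on the best option, which is the behaviour the child-inspired work is trying to capture at much larger scale.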
Intel Building Artificial Skin
Modeling chips more closely on biological neurons.
Researchers use Intel’s neuromorphic chip to build artificial skin By Maria Deutscher in SiliconAngle
Intel Corp. today revealed that researchers are using its neuromorphic chips to develop artificial skin for robots, in a project representing one of the first practical applications of the technology.
Intel, the leading maker of central processing units, is researching alternative chip architectures to help it maintain its long-term competitive advantage. Neuromorphic computing is one of the areas where the company is active. The term refers to an emerging class of chips that have transistors modeled after neurons to help them run artificial intelligence models faster. ... "
See also:
Neuromorphic Chips Take Shape By Samuel Greengard
Communications of the ACM, August 2020, Vol. 63 No. 8, Pages 9-11
10.1145/3403960 ...
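The neuron model such chips approximate can be sketched in a few lines: a leaky integrate-and-fire unit accumulates input, leaks charge over time, and emits a spike when a threshold is crossed. (A software sketch of the concept, not Intel's Loihi API.)

```python
def lif(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: returns a 0/1 spike train."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x      # integrate input, with leak
        if v >= threshold:
            spikes.append(1)  # fire
            v = 0.0           # reset membrane after the spike
        else:
            spikes.append(0)
    return spikes

print(lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Information is carried in the timing and rate of spikes rather than in dense numeric activations, which is why touch sensing (sparse, event-driven input) is a natural first application.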
Thursday, July 23, 2020
P&G Works with Consumer Data via AI
Some useful details here. Note different forms of data being used as an asset.
P&G Gets Personal with Consumers through Data, AI Tech Collaboration
By Alarice Rajagopal - in Consumergoods
The Procter & Gamble Company (P&G) has selected data analytics and AI technology from Google Cloud, to enable more personalized experiences for consumers. Through this new collaboration, P&G will now be able to leverage consumer and media data to innovate product experiences and enrich the shopping journey for existing and new consumers.
"We're always looking to ensure a great consumer experience across all our categories, from healthcare to beauty products and much more," says Vittorio Cretella, CIO, Procter & Gamble. "As a leader in analytics and AI, Google Cloud is a strategic partner helping us offer our consumers superior products and services that provide value in a secure and transparent way."
In its more than 180 years, P&G has been at the forefront of innovation. P&G is now modernizing and integrating consumer, brand, and media data using cloud technology to deliver the next generation of consumer goods. Examples of how P&G is leveraging data for better consumer experiences include: ... "
Israeli Military Launches Radical New Google Maps Alternative
New approaches to detailed street maps.
Israeli Military Launches Radical New Google Maps Alternative
Forbes
Zak Doffman
June 30, 2020
The Israeli military has adopted artificial intelligence (AI), multi-source data fusion, and augmented reality (AR) to weed out terrorists from civilians in urban areas. Similar to Google Street View, the military is using an AR overlay from the fusion of multiple sources of highly classified intelligence and open source data on the terrain and environment, along with AI running pattern analytics from previous combat experiences to gauge the hidden enemy's next move. The AR display, which is shown to soldiers on a smartphone or tablet or streamed directly into their binoculars or weapons sights, helps them understand why a location has been deemed hostile. Final targeting decisions are left to the soldiers on the ground. The AI tool is tasked with distilling terabytes of intelligence every day into useful and relevant data, and soldiers have just five to 10 seconds to decide on any action they take based on that data. ..."
Alexa-Live Sessions
Attended this; it was mostly either broadly general or deeply technical, and developer- and device-oriented. Like the direction, lots to do. The on-demand pieces are at the link below; go to the bottom of the page at the link to register and see/hear the sessions:
Discover the Latest in Voice Technology
During Alexa Live, our virtual developer event, we brought together the developer and device maker community to explore the latest advancements in voice technology and Alexa development tools. Whether you build Alexa skills, make Alexa devices, or lead a business that’s incorporating voice, Alexa Live offered content and resources to help you build delightful customer experiences. View the on-demand sessions and resources to catch up on the news and technical deep-dives we shared at Alexa Live 2020. .... '
Lego Data Talk
Lego is a favorite company for a number of reasons; I met with their management in the 90s. Here is an upcoming talk via RetailWire; I plan to participate.
LEGO: Brick by Brick, Built on the Foundation of Data
Expert Webinar on July 30th, 2020: 11am EST | 8am PST
More info and Register Now
Discover one of LEGO's fundamental bricks to commercial success: high-quality data.
In this webinar, hear from LEGO's Head of Experience, Torben Nielsen, on how the innovative brand worked with Loqate to enhance their user experiences, optimize business efficiencies, and improve the accuracy of 25% of their customer database. ... "
Image GPT
Been looking at GPT from OpenAI, and at their site found Image GPT. There are a number of links to technical papers in the pieces below:
OpenAI first described GPT-3 in a research paper published in May. But last week it began drip-feeding the software to selected people who requested access to a private beta. For now, OpenAI wants outside developers to help it explore what GPT-3 can do, but it plans to turn the tool into a commercial product later this year, offering businesses a paid-for subscription to the AI via the cloud. ...
Image GPT
We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples. By establishing a correlation between sample quality and image classification accuracy, we show that our best generative model also contains features competitive with top convolutional nets in the unsupervised setting.
Introduction
Unsupervised and self-supervised learning,1 or learning without human-labeled data, is a longstanding challenge of machine learning. Recently, it has seen incredible success in language, as transformer2 models like BERT,3 GPT-2,4 RoBERTa,5 T5,6 and other variants78910 have achieved top performance on a wide array of language tasks. However, the same broad class of models has not been successful in producing strong features for image classification.11 Our work aims to understand and bridge this gap.
Transformer models like BERT and GPT-2 are domain agnostic, meaning that they can be directly applied to 1-D sequences of any form. When we train GPT-2 on images unrolled into long sequences of pixels, which we call iGPT, we find that the model appears to understand 2-D image characteristics such as object appearance and category. This is evidenced by the diverse range of coherent image samples it generates, even without the guidance of human provided labels. As further proof, features from the model achieve state-of-the-art performance on a number of classification datasets and near state-of-the-art unsupervised accuracy[1] ... "
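The core idea above, unrolling a 2-D image into a 1-D pixel sequence so a next-token model can be applied unchanged, can be sketched in a few lines. This is a minimal illustration under my own assumptions (the function names are hypothetical, not OpenAI's code), showing only the data preparation step, not the transformer itself:

```python
import numpy as np

def unroll_image(image):
    """Flatten an H x W image into a 1-D pixel sequence,
    row by row, the way an iGPT-style model consumes it."""
    return image.reshape(-1)

def next_pixel_targets(seq):
    """Autoregressive setup: the input at each position is the
    sequence so far; the target is the next pixel value."""
    return seq[:-1], seq[1:]

# Toy 2x2 grayscale image
img = np.array([[10, 20], [30, 40]], dtype=np.uint8)
seq = unroll_image(img)                    # [10, 20, 30, 40]
inputs, targets = next_pixel_targets(seq)
print(inputs.tolist(), targets.tolist())   # [10, 20, 30] [20, 30, 40]
```

Once the image is in this form, the same cross-entropy objective used for text applies directly, which is what makes the approach "domain agnostic."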
Responsible Innovation
Google reports on what it believes responsible innovation looks like. A considerable piece, lots of words.
An update on our work on AI and responsible innovation
Kent Walker SVP, Global Affairs
Jeff Dean Google Senior Fellow and SVP, Google Research and Health
Published Jul 9, 2020
AI is a powerful tool that will have a significant impact on society for many years to come, from improving sustainability around the globe to advancing the accuracy of disease screenings. As a leader in AI, we’ve always prioritized the importance of understanding its societal implications and developing it in a way that gets it right for everyone.
That’s why we first published our AI Principles two years ago and why we continue to provide regular updates on our work. As our CEO Sundar Pichai said in January, developing AI responsibly and with social benefit in mind can help avoid significant challenges and increase the potential to improve billions of lives.
The world has changed a lot since January, and in many ways our Principles have become even more important to the work of our researchers and product teams. As we develop AI we are committed to testing safety, measuring social benefits, and building strong privacy protections into products. Our Principles give us a clear framework for the kinds of AI applications we will not design or deploy, like those that violate human rights or enable surveillance that violates international norms. For example, we were the first major company to have decided, several years ago, not to make general-purpose facial recognition commercially available. ... "
Meshing and Simulation
Meshes are a key determinant of simulation quality, especially for design problems. Good discussion.
Better simulation meshes well for design software (and more)
New work on 2D and 3D meshing aims to address challenges with some of today’s state-of-the-art methods. By Adam Conner-Simons | MIT CSAIL
The digital age has spurred the rise of entire industries aimed at simulating our world and the objects in it. Simulation is what helps movies have realistic effects, automakers test cars virtually, and scientists analyze geophysical data.
To simulate physical systems in 3D, researchers often program computers to divide objects into sets of smaller elements, a procedure known as “meshing.” Most meshing approaches tile 2D objects with patterns of triangles or quadrilaterals (quads), and tile 3D objects with patterns of triangular pyramids (tetrahedra) or bent cubes (hexahedra, or “hexes”).
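The tiling described above is easy to see in 2-D. The following sketch (my own illustration, not the MIT tooling) builds a quad mesh on the unit square and splits each quad into two triangles, the two most common 2-D element types:

```python
def quad_grid(nx, ny):
    """Vertices and quad elements for an nx-by-ny grid on the unit square."""
    verts = [(i / nx, j / ny) for j in range(ny + 1) for i in range(nx + 1)]
    quads = []
    for j in range(ny):
        for i in range(nx):
            v0 = j * (nx + 1) + i           # bottom-left corner of this cell
            quads.append((v0, v0 + 1, v0 + nx + 2, v0 + nx + 1))
    return verts, quads

def quads_to_tris(quads):
    """Split each quad into two triangles along one diagonal."""
    tris = []
    for a, b, c, d in quads:
        tris.extend([(a, b, c), (a, c, d)])
    return tris

verts, quads = quad_grid(2, 2)
tris = quads_to_tris(quads)
print(len(verts), len(quads), len(tris))   # 9 4 8
```

The hard research questions arise in the 3-D analogue: gluing hexahedra together with shared faces while allowing them to bend, which is where the topology issues discussed below come in.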
While much progress has been made in the fields of computational geometry and geometry processing, scientists surprisingly still don’t fully understand the math of stacking together cubes when they are allowed to bend or stretch a bit. Many questions remain about the patterns that can be formed by gluing cube-shaped elements together, which relates to an area of math called topology.
New work out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aims to explore several of these questions. Researchers have published a series of papers that address shortcomings of existing meshing tools by seeking out mathematical structure in the problem. In collaboration with scientists at the University of Bern and the University of Texas at Austin, their work shows how areas of math like algebraic geometry, topology, and differential geometry could improve physical simulations used in computer-aided design (CAD), architecture, gaming, and other sectors.
“Simulation tools that are being deployed ‘in the wild’ don’t always fail gracefully,” says MIT Associate Professor Justin Solomon, senior author on the three new meshing-related papers. “If one thing is wrong with the mesh, the simulation might not agree with real-world physics, and you might have to throw the whole thing out.” .... '
In one paper (https://diglib.eg.org/handle/10.1111/cgf14074), a team led by MIT undergraduate Zoë Marschner developed an algorithm to repair issues that can often trip up existing approaches for hex meshing, specifically. ..."
Wednesday, July 22, 2020
How Good is Language Generator GPT-3?
Trying to understand the full range and usefulness of this, will schedule a test. I did take a look at GPT-2 for a client. Much more at the link ...
OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless
The AI is the largest language model ever created and can generate amazing human-like text on demand but won't bring us closer to true intelligence.
by Will Douglas Heaven in Technology Review
July 20, 2020
“Playing with GPT-3 feels like seeing the future,” Arram Sabeti, a San Francisco–based developer and artist, tweeted last week. That pretty much sums up the response on social media in the last few days to OpenAI’s latest language-generating AI. ... "
Linkedin Plans to Lay off
Bellwether activity, especially when you would expect them to thrive in an era of increased job hunting and HR activity.
In A Scary Sign Of The Times, LinkedIn Plans To Lay Off 960 People By Jack Kelly in Forbes
In a sign of the times, with a little bit of irony, LinkedIn—the go-to social media site for white-collar professionals—announced that it is laying off 960 workers.
In an open memo from LinkedIn CEO Ryan Roslansky to his employees, published on the platform, he wrote, “LinkedIn is not immune to the effects of the global pandemic. Our Talent Solutions business continues to be impacted as fewer companies, including ours, need to hire at the same volume they did previously.” ... '
3D Latex Printing
Note the use of an embedded 3D printing system, another example of added sensor applications. Potential healthcare applications.
Innovations Lead to 3D-Printed Latex Rubber Breakthrough
Virginia Tech News
July 13, 2020
Researchers at Virginia Polytechnic Institute and State University (Virginia Tech) have created a process to three-dimensionally (3D) print latex rubber, which could pave the way for printing elastic materials in complex geometric shapes for applications such as soft robotics, medical devices, and shock absorbers. The researchers chemically modified liquid latexes to make them printable, and built a custom 3D printer with an embedded computer vision system. Because liquid latex is extremely fragile and difficult to alter, the researchers built a scaffold around the latex particles to keep the structure in place, allowing for the addition of photoinitiators and other compounds to enable 3D printing with ultraviolet (UV) light. A camera embedded in the printer allows the machine to see the interaction of UV light on the latex resin surface and automatically adjusts printing parameters to reduce resin scattering. ... "
Tuesday, July 21, 2020
NVIDIA and U of Florida
An alma mater of mine makes an interesting AI move with NVIDIA.
UF announces $70 million artificial intelligence partnership with NVIDIA
Artist's rendering of University of Florida's new AI supercomputer based on NVIDIA DGX SuperPOD architecture.
“With AI holding the potential to revolutionize education and research – and indeed every sector of society – the University of Florida will work to ensure diversity, equity, and inclusion are at the center of this transformational change. We’re thrilled UF is demonstrating such national leadership and placing community engagement and training a 21st Century workforce at the heart of its mission.”
Peter McPherson, President, Association of Public and Land-grant Universities
The University of Florida today announced a public-private partnership with NVIDIA that will catapult UF’s research strength to address some of the world’s most formidable challenges, create unprecedented access to AI training and tools for underrepresented communities, and build momentum for transforming the future of the workforce.
The initiative is anchored by a $50 million gift -- $25 million from UF alumnus Chris Malachowsky and $25 million in hardware, software, training and services from NVIDIA, the Silicon Valley-based technology company he cofounded and a world leader in AI and accelerated computing.
Along with an additional $20 million investment from UF, the initiative will create an AI-centric data center that houses the world’s fastest AI supercomputer in higher education. Working closely with NVIDIA, UF will boost the capabilities of its existing supercomputer, HiPerGator, with the recently announced NVIDIA DGX SuperPOD™ architecture. This will give faculty and students within and beyond UF the tools to apply AI across a multitude of areas to improve lives, bolster industry, and create economic growth across the state.
“This incredible gift from Chris and NVIDIA will propel the state of Florida to new heights as it strives to be an economic powerhouse, an unrivaled leader in job creation and an international model of 21st-century know-how,” Florida Gov. Ron DeSantis said. “Over the coming years, tens of thousands of University of Florida graduates with this unique AI-oriented background will create their futures and ours, transforming our workforce and virtually every field and every industry here in Florida and around the world.” .. ... '
Amazon Scout Delivery Robots Expand
Been awaiting this, looking forward to seeing it operational in the suburbs, on the roads and sidewalks, among people. My camera is ready.
Amazon is testing its Scout delivery robots in Georgia and Tennessee
It’s already testing the bots in Washington and California.
Christine Fisher, @cfisherwrites in Engadget
If you live in Atlanta, Georgia or Franklin, Tennessee, your next Amazon order might arrive in one of the company’s Scout delivery robots. Amazon began testing its cooler-sized delivery bots in Snohomish County, Washington last year. They’ve been making deliveries in the Irvine area of California, and this week they popped up in Atlanta and Franklin.
Only a handful of Amazon Scout devices will operate in each city. They’ll be accompanied by a human, travel at walking speed and make deliveries Monday through Friday, during daylight hours. Customers will place their Amazon orders as usual, and there won’t be any additional cost for Scout deliveries.
Scout has successfully navigated around objects on the sidewalk -- from dogs to refrigerators left for pickup and surfboards. We still don’t know how it will verify who is opening its storage hatch or how it will unload packages if no one is there to collect them. For now, its human assistant will take care of that. .. "
Ethereum is 5, and is Talking
Been impressed by the direction of blockchain player Ethereum, which has been ambitious and plans to publish a newsletter as part of its milestone. I ask: Where has it come from, and where is it going? Where will it take us beyond money? When will the smart contract rise? (They promise a post on that.) What are the implications for cryptography, distributed consensus, data identity, and secure shared intelligence? Am following. You can sign up for the limited newsletter at the link.
Ethereum Turns Five Next Week and We’re Producing a Special Series
By Elaine Ramirez from Coindesk
Five years ago, a wildly ambitious project went live. Its creators envisioned a “world computer” that would transform not just money but a vast range of social interactions, pushing the boundaries of what could be done with cryptography and distributed consensus. Ethereum had arrived.
From its technical aspirations to unicorn memes, Ethereum is a culture on its own. It has spawned inventions – from digital cats to yield farming – previously unimagined and now faces a major overhaul – Eth 2.0 – to keep up with the market’s demands.
CoinDesk is marking the milestone with Ethereum at Five: a cross-platform series featuring special coverage, a limited-run newsletter and live-streamed discussions on Twitter. New issues and sessions launch daily from July 27-31. ....
Ethereum at Five Newsletter
Each morning during the event, our editorial team will publish a newsletter that covers the waterfront from Ethereum’s culture and lifestyle to innovations like decentralized apps, DAOs, decentralized finance and enterprise solutions, and closes out by asking the big questions about the hotly anticipated Ethereum 2.0.
Don’t know what any of that means? Led by editors Marc Hochstein, Elaine Ramirez, Christie Harkin and Zack Seward, our limited-run newsletter will be packed with educational content that eases the newbie into Ethereum, while offering the thoughtful deep-dives loyal CoinDesk readers know and love.
CoinDesk reporters and analysts Ian Allison, Leigh Cuen, Brady Dale, Nate DiCamillo, Will Foxley, Christine Kim and Hoa Nguyen will guide readers through the five years of Ethereum’s evolution, including Ethereum’s efforts to upend the banking industry, its challenges in scalability and enterprise adoption, its Woodstock moment and its status as a global lifestyle brand.
Meanwhile, we’ll provide bite-sized lessons to ease readers into the concepts of smart contracts, enterprise blockchain, yield farming and more and offer recommended reading from the CoinDesk vault. ... "
Microsoft Studies Remote Working
Good to see this being studied in the new context.
Microsoft Analyzed Data on Its Newly Remote Workforce
By Harvard Business Review
July 20, 2020
Microsoft employees found new touchpoints ranging from group lunches to happy hours with themes such as "pajama day" and "meet my pet." ...
Microsoft began a study of remote work four months ago when the pandemic prompted work-from-home practices. "We wanted to study how flexible and adaptable [work] might or might not be, how collaboration and networks morph in remote settings, what agility looks like in different spaces," say members of the company's Workplace Insights, Workplace Analytics, and workplace intelligence teams.
The experiment measured how work patterns across the groups were changing, and included anonymous sentiment surveys. "We looked weekly at areas such as work-life balance and collaboration by analyzing aggregated, de-identified email, calendar, and IM metadata; comparing it with metadata from a prior time period; and inviting colleagues to share their thoughts and feelings," the researchers say.
Among their findings:
Workdays are lengthening. Employees said they were carving out pockets of personal time to care for children, grab some fresh air or exercise, and walk the dog. To accommodate these breaks, people were likely signing into work earlier and signing off later.
Meetings are getting shorter. The total time for meetings each week increased by 10% overall, and individual meetings actually shrank in duration. Workers had 22% more meetings of 30 minutes or less and 11% fewer meetings of more than one hour. ... "
Via the HBR and ACM
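The comparison the Microsoft researchers describe, contrasting aggregated meeting metadata from a remote period against a baseline period, can be sketched in a few lines. The duration lists and helper below are illustrative stand-ins, not Microsoft's actual pipeline or data.

```python
# Illustrative sketch (not Microsoft's pipeline): compare meeting-length
# distributions between a baseline week and a remote-work week.
# The duration lists below are made-up minutes.

def share(durations, predicate):
    """Fraction of meetings whose duration satisfies the predicate."""
    return sum(1 for d in durations if predicate(d)) / len(durations)

before = [20, 45, 75, 90, 30]              # baseline week, minutes per meeting
after = [15, 25, 30, 30, 45, 20, 70, 55]   # remote week, minutes per meeting

# Change in the share of short (<= 30 min) and long (> 1 hour) meetings,
# plus the relative change in total meeting time.
short_change = share(after, lambda d: d <= 30) - share(before, lambda d: d <= 30)
long_change = share(after, lambda d: d > 60) - share(before, lambda d: d > 60)
total_change = sum(after) / sum(before) - 1
```

On this toy data, short meetings gain share, long meetings lose share, and total meeting time rises, matching the direction of the reported findings.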
KFC Testing 3D BioPrinted Chicken
One of those things that didn't even seem possible not too long ago. Now being tested by a major player.
KFC to 3D Print Chicken Using Lab-Grown 'Meat of the Future'
The fast food chain wants to develop the world's first laboratory-produced chicken nuggets.
Stephanie Mlot
"Our experiment in testing 3D bioprinting technology to create chicken products can also help address several looming global problems," Raisa Polyakova, general manager of KFC Russia, said in a statement. "We are glad to contribute to its development and are working to make it available to thousands of people in Russia and, if possible, around the world."
A final product should be ready for testing this fall in Moscow, where folks are working on additive bioprinting technology that uses chicken cells and plant material to reproduce the taste and texture of meat, "almost" without involving animals. Biomeat has the same microelements of the original product without any additives (typically used in the production, processing, treatment, packaging, transportation, or storage), making it cleaner and more ethical, considering the process does not harm animals. ... "
Robots Given Human Like Perception of Environment
In general, there is a need for a general understanding of context, starting with the physical environment.
MIT researchers have developed a representation of spatial perception for robots that is modeled after the way humans perceive and navigate the world. The key component of the team’s new model is Kimera, an open-source library that the team previously developed to simultaneously construct a 3D geometric model of an environment. Kimera builds a dense 3D semantic mesh of an environment and can track humans in the environment. The figure shows a multi-frame action sequence of a human moving in the scene. (Videos at the link) Paper: https://roboticsconference.org/program/papers/79/
“Alexa, go to the kitchen and fetch me a snack”
New model aims to give robots human-like perception of their physical environments.
Jennifer Chu | MIT News Office
Wouldn’t we all appreciate a little help around the house, especially if that help came in the form of a smart, adaptable, uncomplaining robot? Sure, there are the one-trick Roombas of the appliance world. But MIT engineers are envisioning robots more like home helpers, able to follow high-level, Alexa-type commands, such as “Go to the kitchen and fetch me a coffee cup.” ...
To carry out such high-level tasks, researchers believe robots will have to be able to perceive their physical environment as humans do.
“In order to make any decision in the world, you need to have a mental model of the environment around you,” says Luca Carlone, assistant professor of aeronautics and astronautics at MIT. “This is something so effortless for humans. But for robots it’s a painfully hard problem, where it’s about transforming pixel values that they see through a camera, into an understanding of the world.”
Now Carlone and his students have developed a representation of spatial perception for robots that is modeled after the way humans perceive and navigate the world.
The new model, which they call 3D Dynamic Scene Graphs, enables a robot to quickly generate a 3D map of its surroundings that also includes objects and their semantic labels (a chair versus a table, for instance), as well as people, rooms, walls, and other structures that the robot is likely seeing in its environment.
The model also allows the robot to extract relevant information from the 3D map, to query the location of objects and rooms, or the movement of people in its path. ... "
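As a rough illustration of the idea described above (not the actual Kimera or 3D Dynamic Scene Graphs code, which is far richer), a layered scene graph can be modeled as rooms containing labeled objects with positions, supporting the kind of location queries the article mentions. All names here are hypothetical.

```python
# Toy layered scene graph, loosely inspired by the 3D Dynamic Scene Graphs
# idea: rooms contain labeled objects, and the graph answers location queries.
# This is an illustrative sketch, not the actual Kimera library.

class SceneGraph:
    def __init__(self):
        self.rooms = {}       # room name -> set of object ids
        self.labels = {}      # object id -> semantic label ("coffee cup", ...)
        self.positions = {}   # object id -> (x, y, z) in meters

    def add_object(self, obj_id, label, room, position):
        self.rooms.setdefault(room, set()).add(obj_id)
        self.labels[obj_id] = label
        self.positions[obj_id] = position

    def locate(self, obj_id):
        """Return (room, position) for an object, or None if unknown."""
        for room, objects in self.rooms.items():
            if obj_id in objects:
                return room, self.positions[obj_id]
        return None

graph = SceneGraph()
graph.add_object("cup1", "coffee cup", "kitchen", (2.0, 1.5, 0.9))
```

A "fetch me a coffee cup" command would then reduce to a semantic-label lookup followed by a `locate` query for navigation.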
Offline Learning
Some good points made about using particular machine learning methods offline. Notably, online data collection can be dangerous in safety-critical settings, which makes offline learning attractive.
D4RL: Building Better Benchmarks for Offline Reinforcement Learning
By Justin Fu, Berkeley AI Research (BAIR)
In the last decade, one of the biggest drivers for success in machine learning has arguably been the rise of high-capacity models such as neural networks along with large datasets such as ImageNet to produce accurate models. While we have seen deep neural networks being applied to success in reinforcement learning (RL) in domains such as robotics, poker, board games, and team-based video games, a significant barrier to getting these methods working on real-world problems is the difficulty of large-scale online data collection.
Not only is online data collection time-consuming and expensive, it can also be dangerous in safety-critical domains such as driving or healthcare. For example, it would be unreasonable to allow reinforcement learning agents to explore, make mistakes, and learn while controlling an autonomous vehicle or treating patients in a hospital. This makes learning from pre-collected experience enticing, and we are fortunate that in many of these domains, there already exist large datasets for applications such as self-driving cars, healthcare, or robotics. Therefore, the ability for RL algorithms to learn offline from these datasets (a setting referred to as offline or batch RL) has an enormous potential impact in shaping the way we build machine learning systems for the future. ... "
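The offline setting described above can be illustrated with a tabular sketch: the agent never touches an environment, only replays a fixed log of (state, action, reward, next-state) transitions. This is a toy illustration of the setting, not the D4RL benchmark code; real offline RL methods also have to handle distribution shift for actions absent from the dataset.

```python
# Toy offline (batch) RL: tabular Q-learning over a fixed log of transitions.
# The agent never interacts with an environment; it only replays the dataset.
from collections import defaultdict

def offline_q_learning(dataset, alpha=0.5, gamma=0.9, epochs=50):
    """dataset: list of (state, action, reward, next_state); next_state None = terminal."""
    q = defaultdict(float)
    actions = {a for _, a, _, _ in dataset}
    for _ in range(epochs):
        for s, a, r, s2 in dataset:
            best_next = 0.0 if s2 is None else max(q[(s2, b)] for b in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return q

# A two-step logged trajectory: s0 -go-> s1 (reward 0), s1 -go-> terminal (reward 1).
log = [("s0", "go", 0.0, "s1"), ("s1", "go", 1.0, None)]
q = offline_q_learning(log)
```

Because the dataset is fixed, the same experience can be replayed many times; the catch, as the post notes, is that the learned policy may prefer actions the log never actually tried.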
Monday, July 20, 2020
Assistant IoT Regulation
Specifics of the IoT aspects would seem to go beyond these assistants, depending on how broadly the IoT is defined.
Siri, Alexa Targeted as E.U. Probes Internet of Things
By Bloomberg July 20, 2020
The European Commission is mounting an antitrust investigation into the Internet of Things, targeting voice assistants such as Apple's Siri and Amazon's Alexa.
The probe concerns how Silicon Valley uses data to dominate growing markets, and European Union (E.U.) Competition Commissioner Margrethe Vestager said voice assistants are central because they control how users interact with things.
Such products give technology firms access to sensitive data about consumers at a time when they continue to collect information, worrying regulators this could quash competition.
The E.U. said companies favoring their own products or setting strict terms on industry standards could entrench dominant digital ecosystems and gatekeepers. For example, voice requests to buy products could circumvent competitors by immediately directing purchases to a single shopping website, like Amazon.
From Bloomberg
View Full Article ...
Automatically Generating Reinforcement Learning Algorithms
It can be expected that many current machine learning techniques will move toward automation. The papers mentioned are worth looking at.
DeepMind’s AI automatically generates reinforcement learning algorithms
Kyle Wiggers in VentureBeat
In a study published on the preprint server Arxiv.org, DeepMind researchers describe a reinforcement learning algorithm-generating technique that discovers what to predict and how to learn it by interacting with environments. They claim the generated algorithms perform well on a range of challenging Atari video games, achieving “non-trivial” performance indicative of the technique’s generalizability.
Reinforcement learning algorithms — algorithms that enable software agents to learn in environments by trial and error using feedback — update an agent’s parameters according to one of several rules. These rules are usually discovered through years of research, and automating their discovery from data could lead to more efficient algorithms, or algorithms better adapted to specific environments. ... "
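One way to see what "discovering an update rule" means: treat the rule as an ordinary function that the rest of the training loop calls, so that a meta-learner could search over the space of such functions. The hand-written TD(0) rule below is just one point in that space; this is an illustrative sketch, not DeepMind's method.

```python
# An RL "update rule" written as a plain, swappable function. The DeepMind
# work searches over the space of such rules automatically; here we simply
# hand-code TD(0) to show the rule is a pluggable component.

def td0_rule(v, s, r, s2, gamma=0.9):
    """Return the increment direction for the value estimate of state s."""
    target = r + gamma * v.get(s2, 0.0)   # bootstrap from the next state
    return target - v.get(s, 0.0)

def apply_rule(rule, trajectory, alpha=0.1, sweeps=200):
    """Repeatedly apply a given update rule over logged (s, r, s') steps."""
    v = {}
    for _ in range(sweeps):
        for s, r, s2 in trajectory:
            v[s] = v.get(s, 0.0) + alpha * rule(v, s, r, s2)
    return v

# Two-state chain: "a" -> "b" (reward 0), "b" -> terminal (reward 1).
values = apply_rule(td0_rule, [("a", 0.0, "b"), ("b", 1.0, None)])
```

Swapping `td0_rule` for a different function changes the learning algorithm without touching the loop, which is the structure that makes automated rule discovery conceivable.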
Pandemic Shopping Habits
An interesting peek into corporate behavior.
Our pandemic shopping habits are here to stay. Brands are racing to adapt By Hanna Ziady CNN Business
Three days a week at 7:00 am, senior Procter & Gamble executives check in with each other about their customers: what they're buying, how their needs are changing and whether the company's products are hitting the mark. .. "
The Industrial Internet Consortium
Was just reminded of the Industrial Internet Consortium. Has some interesting documents within.
THE INDUSTRIAL INTERNET CONSORTIUM: A GLOBAL NOT-FOR-PROFIT PARTNERSHIP OF INDUSTRY, GOVERNMENT AND ACADEMIA
The Industrial Internet Consortium was founded in March 2014 to bring together the organizations and technologies necessary to accelerate the growth of the industrial internet by identifying, assembling, testing and promoting best practices. Members work collaboratively to speed the commercial use of advanced technologies. Membership includes small and large technology innovators, vertical market leaders, researchers, universities and government organizations.
Through multiple activities and programs, the Industrial Internet Consortium helps technology users, vendors, system integrators and researchers achieve tangible results as they seek to digitally transform across the enterprise. The resources of the Industrial Internet Consortium – developed collaboratively over the years by industry experts from around the globe and across all industries – give organizations the guidance needed to strategically apply digital technologies and achieve digital transformation..... "
A Look at Transfer Learning
Good generalized look at the concept of transfer learning.
Everything you need to know about transfer learning in AI in TNW
Today, artificial intelligence programs can recognize faces and objects in photos and videos, transcribe audio in real-time, detect cancer in x-ray scans years in advance, and compete with humans in some of the most complicated games.
Until a few years ago, all these challenges were either thought insurmountable, decades away, or were being solved with sub-optimal results. But advances in neural networks and deep learning, a branch of AI that has become very popular in the past few years, has helped computers solve these and many other complicated problems.
Unfortunately, when created from scratch, deep learning models require access to vast amounts of data and compute resources. This is a luxury that many can’t afford. Moreover, it takes a long time to train deep learning models to perform tasks, which is not suitable for use cases that have a short time budget.
Fortunately, transfer learning, the discipline of using the knowledge gained from one trained AI model to another, can help solve these problems.
The cost of training deep learning models
Deep learning is a subset of machine learning, the science of developing AI through training examples. The concepts and science behind deep learning and neural networks are as old as the term “artificial intelligence” itself. But until recent years, they had been largely dismissed by the AI community for being inefficient.
The availability of vast amounts of data and compute resources in the past few years have pushed neural networks into the limelight and made it possible to develop deep learning algorithms that can solve real world problems.
To train a deep learning model, you basically must feed a neural network with lots of annotated examples. These examples can be things such as labeled images of objects or mammogram scans of patients with their eventual outcomes. The neural network will carefully analyze and compare the images and develop mathematical models that represent the recurring patterns between images of a similar category.
[Read: Weird AI illustrates why algorithms still need people]
There already exist several large open-source datasets such as ImageNet, a database of more than 14 million images labeled in 22,000 categories, and MNIST, a dataset of 60,000 handwritten digits. AI engineers can use these sources to train their deep learning models.
However, training deep learning models also requires access to very strong computing resources. Developers usually use clusters of CPUs, GPUs or specialized hardware such as Google’s Tensor Processing Units (TPUs) to train neural networks in a time-efficient way. The costs of purchasing or renting such resources can be beyond the budget of individual developers or small organizations. Also, for many problems, there aren’t enough examples to train robust AI models.
Transfer learning makes deep learning training much less demanding
Say an AI engineer wants to create an image classifier neural network to solve a specific problem. Instead of gathering thousands and millions of images, the engineer can use one of the publicly available datasets such as ImageNet and enhance it with domain-specific photos.
But the AI engineer must still pay a hefty sum to rent the compute resources necessary to run those millions of images through the neural network. This is where transfer learning comes into play. Transfer learning is the process of creating new AI models by fine-tuning previously trained neural networks. ... "
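A minimal way to see the fine-tuning idea: keep a "backbone" fixed and train only a small head on its outputs. The toy backbone below is a made-up stand-in for a real pretrained network (such as an ImageNet model); every name here is hypothetical.

```python
import math

# Transfer-learning sketch: a frozen "pretrained" backbone plus a small
# trainable logistic head. Only the head's few parameters are updated.

def pretrained_features(x):
    """Frozen backbone: maps a raw input to a fixed 2-d feature vector."""
    return [x, x * x]

def train_head(data, lr=0.2, epochs=500):
    """Fit a logistic-regression head on the frozen features (the fine-tuning step)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_features(x)
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                          # d(log-loss)/dz
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

# Tiny labeled set: negative class for x < 0, positive for x > 0.
w, b = train_head([(-1.0, 0), (-2.0, 0), (1.0, 1), (2.0, 1)])
```

Because the backbone never changes, only the head's handful of parameters are trained, which is why fine-tuning needs far less data and compute than training from scratch.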