The broad idea has been around for a while; we tried it for tracking a manufacturing process, determining the 'feel' of output from handling it. A kind of test that could indicate whether maintenance or mixing parameters needed to be updated. I could see some agricultural applications as well. And if you can gather data from the gloves, you might be able to look for patterns that predict other issues, for example dryness of plants. Combine it with visual data and find other patterns of interest? At the time, though, the approaches were not discerning enough. This seems closer.
Smart Glove Works Out What You’re Holding from Its Weight, Shape
New Scientist
By Chelsea Whyte in ACM
May 29, 2019
Researchers at the Massachusetts Institute of Technology (MIT) have created a smart glove that allows a neural network to learn the shape and weight of an object, a development that could be applied to robots in factories or homes, and could even provide insights about how the human grip works. The researchers attached a force-sensitive film to the palms and fingers of a knitted glove and stitched a network of 64 conductive silver threads into it. When pressure is applied to the 548 points where the threads intersect, the electrical resistance of the film beneath decreases, allowing the glove to detect the weight and shape of an object the wearer is holding, as well as the pressure created as the hand moves. Said MIT researcher Subramanian Sundaram, "It can tell whether you’re holding an object with a long edge, like a chalkboard eraser, as opposed to something more spherical like a tennis ball." ... "
More technical details:
Sensor-Packed Glove Learns Signatures of the Human Grasp
By MIT News
STAG scalable tactile glove
The "scalable tactile glove" (STAG) is equipped with 548 sensors that capture pressure signals as humans interact with objects.
Wearing a sensor-packed glove while handling a variety of objects, MIT researchers have compiled a massive dataset that enables an AI system to recognize objects through touch alone. The information could be leveraged to help robots identify and manipulate objects, and may aid in prosthetics design.
The researchers developed a low-cost knitted glove, called "scalable tactile glove" (STAG), equipped with about 550 tiny sensors across nearly the entire hand. Each sensor captures pressure signals as humans interact with objects in various ways. A neural network processes the signals to "learn" a dataset of pressure-signal patterns related to specific objects. Then, the system uses that dataset to classify the objects and predict their weights by feel alone, with no visual input needed. .... "
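The pattern-matching idea is simple to sketch. Below is a minimal, hypothetical illustration in Python: synthetic pressure maps stand in for the glove's 548-sensor readings, and a nearest-centroid comparison stands in for the trained neural network. The map size, the contact-patch shapes, and the labels are all invented for illustration; the actual STAG work trains a convolutional network on real sensor data.

```python
import numpy as np

# Hypothetical sketch: classify a 32x32 tactile pressure map by comparing it
# to stored per-object mean maps (nearest centroid). This only illustrates
# the core idea of matching pressure-signal patterns to object classes.

def make_map(rows, cols, shape=(32, 32)):
    """Build a synthetic pressure map with contact at the given cells."""
    m = np.zeros(shape)
    m[rows, cols] = 1.0
    return m

# Synthetic "training" data: an elongated contact patch (eraser-like)
# versus a compact blob (ball-like).
eraser = make_map(slice(14, 18), slice(4, 28))   # long edge
ball = make_map(slice(10, 22), slice(10, 22))    # roughly round blob

centroids = {"eraser": eraser, "ball": ball}

def classify(pressure_map):
    """Return the label whose centroid map is closest in Euclidean distance."""
    return min(centroids, key=lambda k: np.linalg.norm(pressure_map - centroids[k]))

# A new, slightly noisy elongated grasp should match the eraser pattern.
reading = eraser + 0.05 * np.random.default_rng(0).standard_normal(eraser.shape)
print(classify(reading))  # → eraser
```

In the real system the centroid lookup is replaced by a learned model, which is what lets it generalize across grasps of the same object.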
Friday, May 31, 2019
Public Decentralized Applications Not Broadly Used
The idea of a Dapp (decentralized application) is often bundled with that of 'smart contracts', and deploying a Dapp on a blockchain is often mentioned. The piece below refers to 'public' blockchains and the creation of Dapps as arbitrary applications. I am currently examining the concept and how it might support AI algorithms and applications, so I don't read the piece below as broadly negative. Most commercial systems are not public.
CryptoKitties, Dice Games Fail to Lure Users to Dapps
The Wall Street Journal, Paul Vigna
Attempts to entice people to use decentralized apps (Dapps) as alternatives to Google's Android or Apple's iOS apps have fallen short so far. Dapps cannot be censored by governments or controlled by corporations or other online gatekeepers, but users must first download and become familiar with a wholly new blockchain-based operating system. Developers have created about 2,700 Dapps with sufficient data to be tracked, yet just three Dapps have more than 10,000 daily active users, according to the State of the DApps website. The CryptoKitties Dapp, once a fad, lets users create and trade animated cats using cryptocurrency, but the fad has faded and the Dapp has only a few hundred daily users today ... "
Interview with Turing Award recipients on Neural Nets
We read some of the earliest work of Hinton and were inspired by the direction. Also a lesson about hype, and about what can emerge from the fringe of typical research. But this is still not complete enough to create strong AI; lots more to do. So this interview is interesting, just not long or detailed enough.
Reaching New Heights with Artificial Neural Networks
By Leah Hoffmann
Communications of the ACM, June 2019, Vol. 62 No. 6, Pages 96-ff
10.1145/3324011
2018 Turing Award recipients Yoshua Bengio, Geoffrey Hinton, and Yann LeCun
Once treated by the field with skepticism (if not outright derision), the artificial neural networks that 2018 ACM A.M. Turing Award recipients Geoffrey Hinton, Yann LeCun, and Yoshua Bengio spent their careers developing are today an integral component of everything from search to content filtering. So what of the now-red-hot field of deep learning and artificial intelligence (AI)? Here, the three researchers share what they find exciting, and which challenges remain.
There's so much more noise now about artificial intelligence than there was when you began your careers—some of it well-informed, some not. What do you wish people would stop asking you?
GEOFFREY HINTON: "Is this just a bubble?" In the old days, people in AI made grand claims, and they sometimes turned out to be just a bubble. But neural nets go way beyond promises. The technology actually works. Furthermore, it scales. It automatically gets better when you give it more data and a faster computer, without anybody having to write more lines of code. ... "
Advances in Automated Machine Learning
Automated machine learning is inevitable. How good it will be, and how much human oversight needs to be applied to ensure confidence in the results, are the important questions. This article is a good overview of work underway. It's not only about searching for the right model in the context of the goals at hand; it's also about maintaining the interaction between data and solutions. Not unlike the use of any kind of analytics optimization, which has been studied for years.
Cracking open the black box of automated machine learning
Interactive tool lets users see and control how automated model searches work.
By Rob Matheson | MIT News Office
Researchers from MIT and elsewhere have developed an interactive tool that, for the first time, lets users see and control how automated machine-learning systems work. The aim is to build confidence in these systems and find ways to improve them.
Designing a machine-learning model for a certain task — such as image classification, disease diagnoses, and stock market prediction — is an arduous, time-consuming process. Experts first choose from among many different algorithms to build the model around. Then, they manually tweak “hyperparameters” — which determine the model’s overall structure — before the model starts training.
Recently developed automated machine-learning (AutoML) systems iteratively test and modify algorithms and those hyperparameters, and select the best-suited models. But the systems operate as “black boxes,” meaning their selection techniques are hidden from users. Therefore, users may not trust the results and can find it difficult to tailor the systems to their search needs.
In a paper presented at the ACM CHI Conference on Human Factors in Computing Systems, researchers from MIT, the Hong Kong University of Science and Technology (HKUST), and Zhejiang University describe a tool that puts the analyses and control of AutoML methods into users’ hands. Called ATMSeer, the tool takes as input an AutoML system, a dataset, and some information about a user’s task. Then, it visualizes the search process in a user-friendly interface, which presents in-depth information on the models’ performance.
“We let users pick and see how the AutoML systems works,” says co-author Kalyan Veeramachaneni, a principal research scientist in the MIT Laboratory for Information and Decision Systems (LIDS), who leads the Data to AI group. “You might simply choose the top-performing model, or you might have other considerations or use domain expertise to guide the system to search for some models over others.”
In case studies with science graduate students, who were AutoML novices, the researchers found about 85 percent of participants who used ATMSeer were confident in the models selected by the system. Nearly all participants said using the tool made them comfortable enough to use AutoML systems in the future. ... "
Also discusses Auto-Tuned Models (ATMs) ... "
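The search loop such tools make visible is easy to sketch. The fragment below is a hypothetical, deliberately tiny version of the AutoML pattern the article describes: iterate over (algorithm, hyperparameter) choices, score each trial, and record every trial so a user can inspect the whole search rather than only the final winner. The search space and the scoring function are invented stand-ins, not ATMSeer's actual interface.

```python
import random

# Hypothetical AutoML-style search: sample a model family and hyperparameters,
# score each candidate, and keep a full trial history for inspection.
SEARCH_SPACE = {
    "knn": {"k": [1, 3, 5, 7]},
    "tree": {"max_depth": [2, 4, 8]},
}

def score(algo, params):
    """Stand-in for cross-validated accuracy; deterministic fake numbers."""
    base = {"knn": 0.80, "tree": 0.75}[algo]
    bonus = 0.01 * sum(params.values())  # pretend larger settings help a bit
    return round(base + bonus, 3)

def automl_search(n_trials=10, seed=0):
    rng = random.Random(seed)
    history = []  # every trial is recorded, not hidden
    for _ in range(n_trials):
        algo = rng.choice(list(SEARCH_SPACE))
        params = {p: rng.choice(vals) for p, vals in SEARCH_SPACE[algo].items()}
        history.append({"algo": algo, "params": params, "score": score(algo, params)})
    best = max(history, key=lambda t: t["score"])
    return best, history

best, history = automl_search()
print(best["algo"], best["score"], len(history))
```

Exposing `history` rather than only `best` is exactly the transparency point: the user can see which regions of the search space were tried and steer the next round accordingly.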
Wal-Mart IRL Store With Virtual Shelves in the Store
This picture, via the ACM, appeared in the Chicago Tribune. It shows a scene in Wal-Mart's new smart store, called their IRL.
It appears to be a virtual shelf placed on an end cap to demonstrate or test the placement of product on the shelf. The quality of the product images looks quite low, but I imagine that could easily be improved. We did similar things to test products on the shelf using images, cameras, and consumer behavior in real store environments. (Thread to be updated)
Healthcare Uses of AI
Not sure I would call these the 'top' uses; perhaps the common uses of AI in healthcare. Somewhat limited in scope, but still some useful ideas to get people started on where the technology applies. I have been asked to comment on the topic and am using this, along with other resources, as an outline.
Top AI algorithms for Healthcare
Posted by Max Ved in DSC
The benefits of AI for healthcare have been extensively discussed in recent years, up to the possibility of replacing human physicians with AI in the future.
Both such discussions and the current AI-driven projects reveal that Artificial Intelligence can be used in healthcare in several ways:
AI can learn features from a large volume of healthcare data, and then use the obtained insights to assist clinical practice in treatment design or risk assessment;
AI system can extract useful information from a large patient population to assist making real-time inferences for health risk alert and health outcome prediction;
AI can do repetitive jobs, such as analyzing tests, X-Rays, CT scans or data entry;
AI systems can help to reduce diagnostic and therapeutic errors that are inevitable in the human clinical practice;
AI can assist physicians by providing up-to-date medical information from journals, textbooks and clinical practices to inform proper patient care;
AI can manage medical records and analyze both performance of an individual institution and the whole healthcare system;
AI can help develop precision medicine and new drugs based on the faster processing of mutations and links to disease;
AI can provide digital consultations and health monitoring services — to the extent of being “digital nurses” or “health bots”. ... "
Small Racing Drone
I have always been interested in the possibilities of the very small drone. This one is aimed at racing, and not meant to swarm to tasks with other drones, but it is notably fast and autonomous. Small means motion prediction and avoidance need to be done well.
TU Delft scientists have created the world's smallest autonomous racing drone. The main challenge in creating the drone lies in the use of only a single, small camera and in the highly restricted amount of processing. The main innovation is the design of robust, yet extremely efficient algorithms for motion prediction and computer vision.
Drone racing by human pilots is becoming a major e-sport. In its wake, autonomous drone racing has become a major challenge for artificial intelligence and control. Over the years, the speed of autonomous race drones has been gradually improving, with some of the fastest drones in recent competitions now moving at 2 meters per second. Most of the autonomous racing drones are equipped with high-performance processors, with multiple, high-quality cameras, and sometimes even with laser scanners. This allows these drones to use state-of-the-art solutions to visual perception, like building maps of the environment or tracking accurately how the drone is moving over time. However, it also makes the drones relatively heavy and expensive. .... "
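The "extremely efficient" end of motion prediction can be sketched very simply. Below is a hypothetical, bare-bones version of the kind of lightweight state estimation a compute-constrained drone might run: extrapolate position with a constant-velocity model between vision updates, then blend in each new (noisy) camera measurement. The TU Delft work uses its own algorithms; this only illustrates the predict-then-correct idea, and all numbers are made up.

```python
# Constant-velocity motion prediction with a simple complementary-filter
# correction step; a stand-in for the Kalman filters real drones often use.

def predict(pos, vel, dt):
    """Extrapolate position assuming constant velocity over dt seconds."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def update(pred_pos, measured_pos, gain=0.3):
    """Blend a new vision measurement into the prediction."""
    return tuple(p + gain * (m - p) for p, m in zip(pred_pos, measured_pos))

pos, vel = (0.0, 0.0), (2.0, 0.0)   # moving at 2 m/s along x, as in the article
pos = predict(pos, vel, dt=0.5)      # → (1.0, 0.0)
pos = update(pos, measured_pos=(1.2, 0.1))
print(pos)
```

The appeal for a tiny drone is that this costs a handful of multiplies per step, so a slow camera pipeline can be bridged without heavy processors.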
AI Revolution has not arrived Yet?
Late to this, but there are always new thoughts, depending on the definition of the term and its goals. Not if it's general intelligence; but if it is assistance, or narrower and useful problem solving, it's moving rapidly.
Artificial Intelligence — The Revolution Hasn’t Happened Yet  By Michael Jordan
Artificial Intelligence (AI) is the mantra of the current era. The phrase is intoned by technologists, academicians, journalists and venture capitalists alike. As with many phrases that cross over from technical academic fields into general circulation, there is significant misunderstanding accompanying the use of the phrase. But this is not the classical case of the public not understanding the scientists — here the scientists are often as befuddled as the public. The idea that our era is somehow seeing the emergence of an intelligence in silicon that rivals our own entertains all of us — enthralling us and frightening us in equal measure. And, unfortunately, it distracts us. .... "
and Response:
Comments on Michael Jordan’s Essay
“Artificial Intelligence: The revolution hasn’t happened yet”
Emmanuel Candes, John Duchi, Chiara Sabatti
Stanford University
We praise Jordan for bringing much needed clarity about the current status of Artificial Intelligence(AI)—what it currently is and what it is not—as well as explaining the current challenges lying ahead and outlining what is missing and remains to be done. Jordan makes several claims supported by a list of talking points that we hope will reach a wide audience; ideally, that audience will include academic, university, and governmental leaders, at a time where significant resources are being allocated to AI for research and education.
The importance of clarity
Jordan makes the point of being precise about the history of the term AI, and distinguishes several activities taking place under the AI umbrella term. Is it all right to use AI as a label for all of these different activities? Jordan seems to think it is not and we agree. To begin with, words are not simple aseptic names; they matter, and they convey meaning (as any branding expert knows). To quote Heidegger: “Man acts as though he were the shaper and master of language, while in fact language remains the master of man.” In this instance, we believe that mislabeling generates confusion, which has consequences for research and educational programming.
Thursday, May 30, 2019
David Brin Interviewed on Resiliency
We connected with David Brin a number of times through IFTF. He is a well-known futurist, physicist, and sci-fi author, and also a consultant to NASA and NIAC. A thoughtful piece I am reading. Resilience is the ability to bounce back from disaster. Of course disaster has a number of levels of severity, and yours may not be apocalyptic. Broadly, we found resiliency considerations, measures, and systematic methods worthwhile regardless.
Unfortunately no free link here to full text, but worth the cost or looking it up on CACM.
An Interview with David Brin on Resiliency
By Peter J. Denning, David Brin
Communications of the ACM, June 2019, Vol. 62 No. 6, Pages 28-31
10.1145/3325287
Many people today are concerned about critical infrastructures such as the electrical network, water supplies, telephones, transportation, and the Internet. These nerve and bloodlines for society depend on reliable computing, communications, and electrical supply. What would happen if a massive cyber attack or an electromagnetic pulse, or other failure mode took down the electric grid in a way that requires many months or even years for repair? What about a natural disaster such as hurricane, wildfire, or earthquake that disabled all cellphone communications for an extended period?
David Brin, physicist and author, has been worrying about these issues for a long time and consults regularly with companies and federal agencies. He says there are many relatively straightforward measures that might greatly increase our resiliency—our ability to bounce back from disaster. I spoke with him about this. ...
(Abstract)
Recorded Future Bought by Insight Partners
Recorded Future is a company we worked with in its early years, and have reported on here since then. A good company. Their recent movement toward 'threat intelligence' analysis is of interest.
The security acquisitions continue: Insight Partners buys Recorded Future for $780M By Maria Deutscher in SiliconAngle
Insight Partners does most of its investing through growth-stage funding rounds, but the venture capital giant occasionally makes bigger bets as well. The firm today announced that it has acquired threat intelligence startup Recorded Future Inc. for a hefty $780 million.
The transaction is the third nine-figure acquisition that the cybersecurity industry has seen this week. On Wednesday, Palo Alto Networks Inc. bought container protection specialist Twistlock Inc. for $410 million along with a second, smaller startup called PureSec. A day earlier, FireEye Inc. picked up network monitoring provider Verodin Inc. in a $250 million deal. ... "
Wolfram: Mining the Computational Universe
An intriguing half-hour talk. A response to the broad entry of AI, or a claim to a new architecture of computation? I just recently reexamined the anniversary of Wolfram Alpha here.
In the Edge:
Mining the Computational Universe
A Talk By Stephen Wolfram
I've spent several decades creating a computational language that aims to give a precise symbolic representation for computational thinking, suitable for use by both humans and machines. I'm interested in figuring out what can happen when a substantial fraction of humans can communicate in computational language as well as human language. It's clear that the introduction of both human spoken language and human written language had important effects on the development of civilization. What will now happen (for both humans and AI) when computational language spreads?
STEPHEN WOLFRAM is a scientist, inventor, and the founder and CEO of Wolfram Research. He is the creator of the symbolic computation program Mathematica and its programming language, Wolfram Language, as well as the knowledge engine Wolfram|Alpha. He is also the author of A New Kind of Science.
Mining the Computational Universe
STEPHEN WOLFRAM: I thought I would talk about my current thinking about computation and our interaction with it. The first question is, how common is computation? People have the general view that to make something do computation requires a lot of effort, and you have to build microprocessors and things like this. One of the things that I discovered a long time ago is that it’s very easy to get sophisticated computation.
I’ve studied cellular automata, studied Turing machines and other kinds of things—as soon as you have a system whose behavior is not obviously simple, you end up getting something that is as sophisticated computationally as it can be. This is something that is not an obvious fact. I call it the principle of computational equivalence. At some level, it’s a thing for which one can get progressive evidence. You just start looking at very simple systems, whether they’re cellular automata or Turing machines, and you say, "Does the system do sophisticated computation or not?" The surprising discovery is that as soon as what it’s doing is not something that you can obviously decode, then one can see, in particular cases at least, that it is capable of doing as sophisticated computation as anything. For example, it means it’s a universal computer. .... "
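The elementary cellular automata Wolfram refers to are trivial to simulate, which is part of his point: sophisticated computation from almost no machinery. Below is rule 110, the one-dimensional rule he singles out as computationally universal. Each cell's next state depends only on itself and its two neighbors, looked up from the rule number's binary expansion; the grid size and starting cell here are arbitrary choices for the demo.

```python
# Elementary cellular automaton: the 8 possible 3-cell neighborhoods index
# into the bits of the rule number (110 = 0b01101110) to give the next state.

def step(cells, rule=110):
    """Apply one update of an elementary CA (wrap-around boundary)."""
    n = len(cells)
    out = []
    for i in range(n):
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neighborhood) & 1)
    return out

# Start from a single live cell and run four generations.
cells = [0] * 11
cells[5] = 1
for _ in range(4):
    cells = step(cells)
print(cells)  # → [0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
```

Swapping the `rule` argument explores all 256 elementary rules, which is essentially the "mining" exercise Wolfram describes: run simple programs and look for the ones whose behavior is not obviously simple.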
In the Edge:
Mining the Computational Universe
A Talk By Stephen Wolfram
I've spent several decades creating a computational language that aims to give a precise symbolic representation for computational thinking, suitable for use by both humans and machines. I'm interested in figuring out what can happen when a substantial fraction of humans can communicate in computational language as well as human language. It's clear that the introduction of both human spoken language and human written language had important effects on the development of civilization. What will now happen (for both humans and AI) when computational language spreads?
STEPHEN WOLFRAM is a scientist, inventor, and the founder and CEO of Wolfram Research. He is the creator of the symbolic computation program Mathematica and its programming language, Wolfram Language, as well as the knowledge engine Wolfram|Alpha. He is also the author of A New Kind of Science.
Mining the Computational Universe
STEPHEN WOLFRAM: I thought I would talk about my current thinking about computation and our interaction with it. The first question is, how common is computation? People have the general view that to make something do computation requires a lot of effort, and you have to build microprocessors and things like this. One of the things that I discovered a long time ago is that it’s very easy to get sophisticated computation.
I’ve studied cellular automata, studied Turing machines and other kinds of things—as soon as you have a system whose behavior is not obviously simple, you end up getting something that is as sophisticated computationally as it can be. This is something that is not an obvious fact. I call it the principle of computational equivalence. At some level, it’s a thing for which one can get progressive evidence. You just start looking at very simple systems, whether they’re cellular automata or Turing machines, and you say, "Does the system do sophisticated computation or not?" The surprising discovery is that as soon as what it’s doing is not something that you can obviously decode, then one can see, in particular cases at least, that it is capable of doing as sophisticated computation as anything. For example, it means it’s a universal computer. .... "
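Wolfram's claim is easy to see in miniature. The sketch below, a toy illustration rather than anything from the talk, implements an elementary one-dimensional cellular automaton; rule 110, despite its trivially simple update table, is known to be computationally universal. Function names are mine.

```python
# Elementary cellular automaton: each cell updates from its 3-cell
# neighborhood, with the 8 possible neighborhoods mapped to new values
# by the bits of the rule number (110 here).

def step(cells, rule=110):
    """Apply an elementary cellular-automaton rule to one row of cells."""
    n = len(cells)
    out = []
    for i in range(n):
        # Neighborhood: left, self, right (wrapping at the edges).
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right
        out.append((rule >> idx) & 1)
    return out

def run(width=31, steps=15, rule=110):
    """Evolve a single live cell and return every row of the history."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        history.append(step(history[-1], rule))
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Printing the history shows the intricate, non-repeating texture that makes such rules "not obviously simple" in Wolfram's sense.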
Automated Order Taking
Decreasing errors, wait times, learning patterns to anticipate demand.
Automated Order Takers May Reshape Future of Drive-Through Restaurants
Medill Reports By Yixuan Xie
Three artificial intelligence (AI) companies are developing AI-powered voice assistants to improve order-taking at drive-through restaurants. Valyant AI has piloted one assistant to take breakfast orders at an eatery in Colorado, which experienced a 10% to 25% reduction in average wait time. Valyant AI CEO Rod Carpenter said, "While our AI is carrying on a conversation with the customer, the employees are listening to the exchange and actually preparing the food." Meanwhile, Encounter AI's assistant is designed to improve order accuracy, so food allergies and other potential problems are not overlooked; Encounter AI's Derrick Johnson said the AI's accuracy is continuously improving via machine learning. Meanwhile, the software firm Clinc hopes to augment the voice control capabilities of drive-through windows with its own AI, which learns from the different ways people order by analyzing sentence structure. .... "
Facial Recognition Comes to US Schools
Not unexpected; it is already happening in the UK and China. But I am intrigued to see how much pushback will occur. Here it will be used primarily to detect sex offenders, unauthorized visitors, specifically banned people, and so on, as opposed to tracking students.
Facial recognition is coming to US schools, starting in New York
The first school district in the US to pilot face recognition will switch it on next week. ... "
In Engadget By Mariella Moon, @mariella_moon ...
Amazon and Small Suppliers
I am not sure this makes much difference for most small retailers; they have already made the transition, lived with their particular context, or failed. In our own experience with a small retail operation within Amazon, we saw little support from them beyond basic infrastructure.
Amazon to set small suppliers adrift by George Anderson with further expert retail input.
For more than a decade, Amazon.com has publicly pronounced that it is not a destroyer of small businesses but a creator of growth opportunities for those that take advantage of the reach offered by its platform. Fifty-three percent of Amazon’s online sales are made by third parties, after all, and nearly three quarters of those selling directly to consumers on the site have between one and five employees. Many other small businesses sell products on a wholesale basis to Amazon. And so goes the rationalization that Amazon is small business friendly.
New reporting by Bloomberg, however, suggests Amazon may soon seem a less hospitable place for small third-party sellers as the e-tailer cozies up to larger retailers (Best Buy, Chico’s, Party City, etc.) and consumer brands such as Nike choose the path of coopetition to drive greater direct sales to consumers. Amazon is also shifting its percentage of products sourced from small suppliers to larger entities such as LEGO, Procter & Gamble and Sony as it focuses on competing directly with rivals selling popular name brand goods. .... "
GigaOm Interviews about AI
Byron does a great job of interviewing emerging experts. I have been a follower for a long time; join up.
Voices in AI – Episode 88: A Conversation with Ron Green By Byron Reese
Episode 88 of Voices in AI features Byron speaking with Ron Green of KUNGFU.AI about how companies integrate AI and machine learning into their business models.
Listen to this episode or read the full transcript at www.VoicesinAI.com
Transcript Excerpt:
Byron Reese: This is Voices in AI brought to you by GigaOm and I’m Byron Reese. Today my guest is Ron Green. Ron is the CTO over at KUNGFU.AI. He holds a BA in Computer Science from the University of Texas at Austin, and he holds a Master of Science from the University of Sussex in Evolutionary and Adaptive Systems. His company, KUNGFU.AI is a professional services company that helps companies start and accelerate artificial intelligence projects. I asked him [to be] on the show today because I wanted to do an episode that was a little more ‘hands-on’ about how an enterprise today can apply this technology to their business. Welcome to the show, Ron. .... "
Business Cards and Big Data
This made me think of other 'lost' sources of data.
Here’s How Big Data And Business Card Marketing Go Together
Big data and business card marketing are a match made in heaven, and contrary to popular belief, business card marketing is here to stay.
Many people believe that digital media is rapidly replacing traditional forms of branding. They believe that advances in big data have made business cards, brochures and direct mail marketing obsolete.
Nothing could be further from the truth. We previously published an article on the state of direct mail marketing. We showed that marketers are actually using big data to improve the performance of their direct mail marketing campaigns. ...
We can draw a similar conclusion about the relevance of business cards in 2019. Online marketing did not make business cards go out of style. Data Floq made this point clear in a post they made in 2016. ... "
By Diana Hope
Wednesday, May 29, 2019
Jacquard Devices
Notes on new ways to place wearable, immersive computing. Well beyond the loom.
Jacquard Device Turns Houseplants into Keyboards in Ideaconnection
Designer Ivan Poupyrev discusses his Jacquard device, which can turn everyday objects into computers. ...
And Levi's considers the Jacquard wearable
As a company of firsts, Levi’s has spent over 150 years innovating fashionable, functional clothing. The Jacquard vision centers around expanding the functionality of the clothes people already wear and love. And that’s exactly what we did with Levi’s in creating the Commuter Trucker Jacket with Jacquard by Google woven in. ... "
Is this anywhere today?
DSC Plain Language Statistics Series
The always useful DSC series on 'plain language statistics', and much more. The post links to other editions in the series. Sign up for their newsletter; it is essential for beginners and experts alike.
32 Statistical Concepts Explained in Simple English - Part 11
Posted by Vincent Granville
This resource is part of a series on specific topics related to data science: regression, clustering, neural networks, deep learning, decision trees, ensembles, correlation, Python, R, Tensorflow, SVM, data reduction, feature selection, experimental design, cross-validation, model fitting, and many more. To keep receiving these articles, sign up on DSC. ..... "
Smart Home Gets Cleaner
Quite a step forward, if it delivers. It's as if they have done the easy stuff and are now working on harder problems for the smart home, with devices cooperating to get the job done.
iRobot’s new cleaning robots can team up to vacuum and mop your house
A job shared is a job halved
By Jon Porter@JonPorty in TheVerge
iRobot has a pair of new cleaning robots, the Roomba s9+ and Braava Jet m6, that can work in tandem to vacuum, mop, and dust your house. You coordinate the process from the iRobot app, which will automatically tell the mopping and dusting $499 (€699) Braava Jet m6 to clean your wood or stone floors after the $1,299 (€1,499) Roomba s9+ has vacuumed. .... "
Designing Your Future
Design, benchmark, and then deliver your own future.
Foresight Mindset™
The Art & Science Of Designing Your Future ...
Future Benchmarking [VIDEO] by Mario Herger
What is Future Benchmarking and why is it important for organizations for product and service development? Imagine your competitor introduced a new product or service and you are trying to catch up. You take your competitor’s product, analyze it, and try to make your own specs and schedule a timeline. But if you don’t take into account that until you launch your own product your competitor has made improvements to their own product or service, your own offering will lag behind. ... "
Repairing a Satellite with Deep Learning AI in Space
By predicting lost data from other existing sources using deep learning. Note the alternative uses of the approach, say in the case of solar storms. Considerable complexity with varying goals.
IBM helped NASA fix one of its satellites using cutting-edge deep learning A.I.
How do you fix a satellite that’s floating 22,000 miles above the Earth’s surface?
That’s a question that NASA had to answer when it ran into problems with one of its crucial satellites. The satellite in question was the Solar Dynamics Observatory (SDO), which launched in 2010 with the important goal of studying the Sun and the effects of solar activity on Earth. This is important for all sorts of reasons — not least because solar storms can knock out GPS satellites, shut down electrical grids, and scramble communications.
Unfortunately, one of the SDO’s three instruments, responsible for measuring ultraviolet light, stopped working due to a fault. This data is essential to satellite operators, since it can affect the flight path of orbiting satellites. Not properly compensating for atmospheric changes due to ultraviolet light may cause satellites to fall out of orbit and burn up or crash.
It was deemed too costly to repair the $850 million satellite in space. As a result, NASA called in experts from IBM, SETI, Nimbix, Lockheed Martin, and its own Frontier Development Lab to see if they could solve the problem from Earth using cutting-edge artificial intelligence. The request? Could they figure out how to use data from the SDO’s remaining two instruments — its atmospheric imaging assembly and helioseismic and magnetic imager — to work out the missing ultraviolet radiation measurements. The answer: Apparently, yes.
“One of the biggest challenges was to find the optimal A.I. framework and model for the problem at hand — namely, virtually ‘resurrecting’ the failed SDO instrument so that we could once again get the data that instrument would have produced if it was still working,” Graham Mackintosh, A.I. advisor to SETI and NASA, told Digital Trends. “The team automated the task of modifying, testing, and recording the results of almost 1,000 different versions of the deep learning model before settling on the final approach they determined to be optimal.” .... "
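The recovery trick, reduced to its core: using historical data from the period when all three instruments worked, learn a mapping from the two surviving channels to the failed one, then use that mapping going forward. The SDO team used deep networks tested across nearly 1,000 model variants; the sketch below substitutes a linear least-squares fit on synthetic data purely to illustrate the idea, and all variable names and numbers are made up.

```python
# Toy "virtual instrument": fit the missing UV channel as a function of
# the two surviving channels on historical data, then predict it.
import numpy as np

rng = np.random.default_rng(0)

# Historical period: all three channels observed (synthetic stand-ins).
n = 500
imager = rng.normal(size=n)          # stand-in for the imaging assembly
magnetogram = rng.normal(size=n)     # stand-in for the magnetic imager
uv = 2.0 * imager - 0.5 * magnetogram + 0.1 * rng.normal(size=n)

# Fit uv ~ w1*imager + w2*magnetogram + b by least squares.
X = np.column_stack([imager, magnetogram, np.ones(n)])
w, *_ = np.linalg.lstsq(X, uv, rcond=None)

def predict_uv(imager_now, magnetogram_now):
    """Estimate the missing UV measurement from the surviving channels."""
    return w[0] * imager_now + w[1] * magnetogram_now + w[2]
```

The real problem is far less linear, which is why a deep network was needed, but the structure of the solution, train on the overlap, predict the gap, is the same.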
Tuesday, May 28, 2019
Google Lens Translation Filters Roll
I have much enjoyed Google Lens; now if only it could be used for learning-ready data acquisition. Nice idea, and it will also be available in iOS Photos. Translation is a great idea, but I also want direct identification examples, for instance of weeds or plants. I have already worked with those; it is not perfect, but it is a good example of what you could imagine. Suppose you could scan someone and get a DNA analysis? Not close yet, but a possibility?
Google Lens dining and translation filters roll out this week
AR can decipher a sign or point out hot menu items.
By Jon Fingas, @jonfingas in Engadget
Google is acting quickly on its plans to bring clever new filters to Lens. The search firm is starting to roll out its promised Dining and Translate filters to Lens on Android and iOS, giving you some potential time savers. Translate is likely to be the most practical if you're a traveler -- aim your camera at text and Lens can overlay a translation in the language of your choice. The Dining filter, meanwhile, can highlight popular dishes on a menu (complete with photos and feedback) as well as use your receipt to calculate bill splits and tips. ... "
EchoLocation as Biometric Activity Data
A reminder that any kind of data that results from interaction with the environment can be probed for patterns, and can thus provide learnable details.
This AI Uses Echo Location to Identify What You are Doing in Wired by Sophia Chen
GUO XINHUA WANTS to teach computers to echolocate. He and his colleagues have built a device, about the size of a thin laptop, that emits sound at frequencies 10 times higher than the shrillest note a piccolo can sustain. The pitches it produces are inaudible to the human ear. When Guo’s team aims the device at a person and fires an ultrasonic pitch, the gadget listens for the echo using its hundreds of embedded microphones. Then, employing artificial intelligence techniques, his team tries to decipher what the person is doing from the reflected sound alone.
The technology is still in its infancy, but they’ve achieved some promising initial results. Based at the Wuhan University of Technology, in China, Guo’s team has tested its microphone array on four different college students and found that they can identify whether the person is sitting, standing, walking, or falling, with complete accuracy, they report in a paper published today in Applied Physics Letters. While they still need to test that the technique works on more people, and that it can identify a broader range of behaviors, this demonstration hints at a new technology for surveilling human behavior. ... "
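The underlying recognition step can be sketched very simply: each activity produces a characteristic echo profile, and a classifier matches a new echo to the closest known pattern. The toy below uses made-up "echo signature" vectors and a nearest-centroid rule standing in for the team's neural network; none of the numbers come from the paper.

```python
# Toy echolocation classifier: match a reflected-sound feature vector
# to the nearest known activity prototype (L2 distance).
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical echo signatures for three activities.
prototypes = {
    "standing": np.array([1.0, 0.2, 0.1, 0.0]),
    "sitting":  np.array([0.4, 0.9, 0.3, 0.1]),
    "fallen":   np.array([0.1, 0.2, 0.3, 1.0]),
}

def classify(echo):
    """Return the activity whose prototype echo is nearest."""
    return min(prototypes, key=lambda k: np.linalg.norm(echo - prototypes[k]))

# A noisy echo of someone sitting should still classify correctly.
noisy = prototypes["sitting"] + 0.05 * rng.normal(size=4)
```

The real system learns the prototypes (and far richer features) from hundreds of microphones, but the classify-by-similarity structure is the same.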
Google Goes Mobile First
Trying to understand the full implications of this. Will it necessarily lead to better results for the customer or not? Assume this was determined with test examples chosen by Google. What if a site does not have a mobile version? Are competitive scenarios assumed? Considered? I am not sure I could predict the outcome of the change. Google's view.
From TechCrunch:
At the end of 2018, Google said mobile-first indexing — that is, using a website’s mobile version to index its pages — was being used for over half the web pages in Google search results. Today, Google announced that mobile-first indexing will now be the default for all new web domains as of July 1, 2019.
That means that when a new website is registered it will be crawled by Google’s smartphone Googlebot, and its mobile-friendly content will be used to index its pages, as well as to understand the site’s structured data and to show snippets from the site in Google’s search results, when relevant.
The mobile-first indexing initiative has come a long way since Google first announced its plans back in 2016. In December 2017, Google began to roll out mobile-first indexing to a small handful of sites, but didn’t specify which ones were in this early test group. Last March, mobile-indexing began to roll out on a broader scale. By year-end, half the pages on the web were indexed by Google’s smartphone Googlebot. ... "
Simulation for Training
Simulation was a favorite method for analyzing alternatives in the enterprise. Of course, every simulation also created new data; now that data can be used for finding operational patterns and examples in the real world. This would work well for systems like robots, where operational constraints are strictly defined. But even when we did not have that kind of restriction, we could simulate within ranges, which led to larger combinatorial problems. It is a nice way to think about these problems, and the results are typically quite transparent.
NVIDIA Brings Robot Simulation Closer to Reality by Making Humans Redundant Learning in simulation no longer takes human expertise to make it useful in the real world By Evan Ackerman
We all know how annoying real robots are. They’re expensive, they’re finicky, and teaching them to do anything useful takes an enormous amount of time and effort. One way of making robot learning slightly more bearable is to program robots to teach themselves things, which is not as fast as having a human instructor in the loop, but can be much more efficient because that human can be off doing something else more productive instead. Google industrialized this process by running a bunch of robots in parallel, which sped things up enormously, but you’re still constrained by those pesky physical arms.
The way to really scale up robot learning is to do as much of it as you can in simulation instead. You can use as many virtual robots running in virtual environments testing virtual scenarios as you have the computing power to handle, and then push the fast forward button so that they’re learning faster than real time. Since no simulation is perfect, it’ll take some careful tweaking to get it to actually be useful and reliable in reality, and that means that humans have to get back involved in the process. Ugh.
A team of NVIDIA researchers, working at the company’s new robotics lab in Seattle, is taking a crack at eliminating this final human-dependent step in a paper that they’re presenting at ICRA today. There’s still some tuning that has to happen to match simulation with reality, but now, it’s tuning that happens completely autonomously, meaning that the gap between simulation and reality can be closed without any human involvement at all. .... "
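The tuning step described above can be sketched in miniature: treat an unknown physical parameter (friction, here) as a variable, and adjust it automatically until the simulator's prediction matches a real observation, with no human in the loop. This is a toy stand-in for NVIDIA's method, not their algorithm; the simulator, the "real" measurement, and the grid search are all illustrative.

```python
# Toy sim-to-real tuning: grid-search a simulator's friction parameter
# so its predicted slide distance matches a measured one.

def simulate_slide(v0, friction, dt=0.01, steps=100):
    """Distance traveled by a block decelerated by constant friction."""
    x, v = 0.0, v0
    for _ in range(steps):
        v = max(0.0, v - friction * dt)
        x += v * dt
    return x

# Pretend this measurement came from the physical robot's sensors.
real_distance = simulate_slide(1.0, friction=0.35)

# Autonomous tuning: pick the friction value whose simulated outcome
# best matches the real measurement.
error, tuned_friction = min(
    (abs(simulate_slide(1.0, f) - real_distance), f)
    for f in [i * 0.01 for i in range(1, 100)]
)
```

The actual work closes the loop with learned models and far higher-dimensional parameter spaces, but the principle, optimize the simulator against reality rather than hand-tune it, is the same.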
Semi-Supervised Learning
Brought to my attention; the explanation below is from DataRobot, with much more at the link.
Semi-Supervised Machine Learning
What is Semi-Supervised Machine Learning?
Semi-supervised machine learning is a combination of supervised and unsupervised machine learning methods.
With more common supervised machine learning methods, you train a machine learning algorithm on a “labeled” dataset in which each record includes the outcome information. This allows the algorithm to deduce patterns and identify relationships between your target variable and the rest of the dataset based on information it already has. In contrast, unsupervised machine learning algorithms learn from a dataset without the outcome variable. In semi-supervised learning, an algorithm learns from a dataset that includes both labeled and unlabeled data, usually mostly unlabeled. ... "
See also: https://en.wikipedia.org/wiki/Semi-supervised_learning
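A minimal sketch of one common semi-supervised approach, self-training: fit a model on the few labeled points, pseudo-label the unlabeled points it is confident about, and refit. The synthetic dataset, the nearest-centroid model, and the confidence threshold below are all illustrative assumptions, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated clusters of 100 points each.
X = np.vstack([rng.normal(-2.0, 0.5, (100, 2)),
               rng.normal(2.0, 0.5, (100, 2))])
y_true = np.array([0] * 100 + [1] * 100)

# Pretend we could only afford to label four examples; -1 means "unlabeled".
y = np.full(200, -1)
y[[0, 1, 100, 101]] = y_true[[0, 1, 100, 101]]

def self_train(X, y, threshold=1.0, rounds=10):
    """Nearest-centroid self-training: repeatedly pseudo-label the
    unlabeled points lying close to a class centroid, then recompute
    the centroids from the enlarged labeled set."""
    y = y.copy()
    for _ in range(rounds):
        centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        pred = dists.argmin(axis=1)
        confident = dists.min(axis=1) < threshold
        newly = (y == -1) & confident
        if not newly.any():
            break
        y[newly] = pred[newly]
    return y

labels = self_train(X, y)
print((labels != -1).sum(), "of 200 points labeled")
```

Starting from just four labels, the model ends up confidently labeling most of the dataset, which is the essential promise of semi-supervised learning.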
Managing Drone Traffic
Likely to continue to become more important.
Managing Drone Traffic in Cities
NASA's first-of-kind tests look to manage drones in cities
by Scott Sonner
In this May 21, 2019 photo, a drone flies over downtown Reno, Nev., before landing on the Cal-Neva casino parking garage, as part of a NASA simulation to test emerging technology that someday will be used to manage travel of hundreds of thousands of commercial, unmanned aerial vehicles (UAVs) delivering packages. It marked the first time such tests have been conducted in an urban setting.
NASA has launched the final stage of a four-year effort to develop a national traffic management system for drones, testing them in cities for the first time beyond the operator's line of sight as businesses look in the future to unleash the unmanned devices in droves above busy streets and buildings.
Multiple drones took to the air at the same time above downtown Reno this week in a series of simulations testing emerging technology that someday will be used to manage hundreds of thousands of small unmanned commercial aircraft delivering packages, pizzas and medical supplies. ... "
Sony Launches IoT Chip
Low power, 60-mile range, but just in Japan for now. Seems it would be ideal for smart city applications. Most notably, a new means to gather data and deliver applications remotely.
Sony built an IoT chip with a 60 mile range
It will use Sony's proprietary low-power wide area (LPWA) network launching this fall. By Steve Dent, @stevetdent in Engadget
Sony is quietly launching a chip that could change how e-bikes, cars, street lamps and all kinds of other connected devices can relay information. The module, when installed on any IoT object, will allow it to send data to Sony's proprietary low-power wide area (LPWA) ELTRES network launching this fall. It can transmit up to about 60 miles and work in noisy urban environments on objects moving at high speeds, opening up a lot of new applications in security, monitoring, tracking and more. ... "
Monday, May 27, 2019
Pervasive Intelligence Signals
Everywhere, and thus most importantly in context, on the edge. It is getting there.
Pervasive intelligence
Smart machines everywhere
By David Schatsky, Jonathan Camhi, Aniket Dongre Deloitte
Everything is getting smarter, as new AI technology empowers an ever-widening range of devices to learn from experiences, adapt to changing situations, and predict outcomes. Companies are already exploring opportunities.
Advances in artificial intelligence (AI) software and hardware are giving rise to a multitude of smart devices that can recognize and react to sights, sounds, and other patterns—and do not require a persistent connection to the cloud. These smart devices, from robots to cameras to medical devices, could well unlock greater efficiency and effectiveness at organizations that adopt them. That’s only part of the story. In some industries, they may also change how profits are divided.
Signals
AI software providers are tailoring their AI models and algorithms for deployment on machines and devices outside the data center.
Chip manufacturers are increasingly embedding support for AI directly into devices.
AI chips are being developed that can perform complex computations but consume minute amounts of power, in some cases measured in microwatts.
Machines with embedded AI are beginning to appear in many industries, including logistics, manufacturing, agriculture, transportation, and health care.
Annual shipments of devices with embedded AI are projected to increase from 79 million last year to 1.2 billion in 2023.
Advanced hardware is propelling AI out of the data center: .....
BBC Reports on Shareholder Vote Amazon on Rekognition
Shareholders vs government regulation. Intriguing details from the UK, where advanced camera surveillance is ubiquitous. Advanced automation of these systems will quickly follow.
Amazon defeated Rekognition revolt by a large margin
By Leo Kelion, BBC Technology desk editor
An attempted shareholder revolt over Amazon's sale of facial recognition technology to the police mustered less than 3% of votes cast at the firm's annual general meeting.
The tally was revealed in a corporate filing.
The tech firm had said it was aware of civil rights concerns but had not received any reports of law enforcement clients misusing its Rekognition tool.
Even so, the system is set for further scrutiny.
Last week, Republican and Democrat politicians on the House Oversight Committee raised concerns about the speed at which Amazon's facial recognition facility and others like it were being deployed. .... "
Key Look at the Meaning and Value of Blockchain
A favorite commenter of mine talks about the Blockchain. Who we worked with at IBM. With some great additional references. Recommended read and reference.
Irving Wladawsky-Berger
A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.
Blockchain - the Networked Ecosystem is the Business
A few weeks ago I attended two back-to-back blockchain events in Toronto: the Blockchain Research Institute All-Member Summit followed by the inaugural Blockchain Revolution Global conference. Both events included a number of excellent talks and panels. One of the presentations I particularly enjoyed was Scaling Blockchain for the Enterprise: Emerging Business Models, by IBM’s Andrew Martin and Smitha Soman. Their presentation was based on their recently published report Building your blockchain advantage.
Since it first came to light a decade ago as the public, distributed ledger for the Bitcoin cryptocurrency, people have struggled to understand what blockchain is all about and what it’s truly good for. This is not unusual for potentially transformative technologies in their early stages, as was the case with the Internet, and is still the case with AI. The key question is whether blockchain has the potential to become a truly transformative technology over time. With few exceptions, the answer is positive.
The essence of blockchain, notes the report, is that the unit of competition is the networked ecosystem, no longer a single enterprise. “As blockchain adoption continues to gather momentum, organizations must approach their blockchain strategies with the same rigor and commitment as any other new and transformative strategies. They can’t just fall back on prototypes alone. They need to build a robust business case for blockchain that includes a fair incentive model to attract all the partners required for the success of their networks.”
Based on the analysis of over 25 blockchain networks in various stages of production across multiple industries and geographies, the report recommends that a company’s blockchain journey should evolve along three distinct stages: in search of value (establishing a minimum value ecosystem); getting to scale (creating value for an overall industry); and designing for new markets (creating entirely new markets and business models). Let me briefly discuss each of these stages..... "
Automation ROI
Perspectives on value measurement, when and how.
Measuring Automation ROI by Deloitte
Maximizing your mileage...
Taking a strategic, holistic approach to automation ROI during the planning phases helps to build a more robust business case, demonstrating how enterprise automation can drive competitive advantage and how you’ll save money. ... "
Sunday, May 26, 2019
Beware of Giant Underwater Volcanoes and Seismic Hums
The planet is alive.
Geologists Discover Largest Underwater Volcano, Explain Weird Hum Heard Around the World By Laura Geggel in Livescience
A strange seismic event off the coast of Africa has led scientists to a mighty finding: the discovery of the largest underwater volcanic eruption ever recorded.
The eruption also may explain a weird seismic event recorded in November 2018 just off the island of Mayotte, located between Madagascar and Mozambique in the Indian Ocean. Researchers described that event as a seismic hum that circled the world, but no one could figure out what sparked it. .... "
What do the Amazon Star Product Ratings Mean? Can we get Transparency?
Well, they are just an average of all the individual ratings, no? That approach would be very manipulable, and you see right away that they place more weight on people who have bought the product. It turns out it's much more than that: machine learning, plus a lot more that is proprietary and unrevealed. So, not transparent, and in a complex context, which makes you wonder what transparency means for this example of machine learning. Even if we were given the exact set of algorithms used, what use would they be? It depends also on the data used to train the algorithms, and on how we as customers plan to use the ratings. So can we really be transparent in AI, any more than a human can be transparent? The quote below made me think this; good article:
What do Amazon's Star Ratings Really Mean? In Wired By Louise Matsakis
" ..... Starting in 2015, Amazon began weighting stars using a proprietary machine-learning model. Some reviews now count more than others in the total average, based on factors like how recent they are and whether they come from “verified” purchasers (meaning Amazon could confirm the reviewer actually bought the item they claimed to love or hate). David Bryant, an Amazon seller who also blogs about the company, believes Amazon may also take into consideration factors like the age of the reviewer’s account and the average star rating they usually leave. “There appears to be some discount applied to reviewers who predominantly leave negative reviews,” he says. .... "
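To make the idea of a weighted star average concrete, here is a hypothetical sketch. Amazon's actual model is proprietary machine learning; the decay function, the doubled weight for verified purchases, and the sample reviews below are invented purely for illustration.

```python
from datetime import date

def weighted_rating(reviews, today):
    """Each review is (stars, review_date, verified). Recent and
    verified-purchase reviews count more toward the average.
    These weights are hypothetical, not Amazon's."""
    total, weight_sum = 0.0, 0.0
    for stars, when, verified in reviews:
        age_days = (today - when).days
        w = 1.0 / (1.0 + age_days / 365.0)  # weight decays with review age
        if verified:
            w *= 2.0                         # verified reviews count double
        total += w * stars
        weight_sum += w
    return round(total / weight_sum, 1)

reviews = [
    (5, date(2019, 5, 1), True),    # recent, verified
    (1, date(2016, 5, 1), False),   # old, unverified
    (4, date(2019, 1, 1), True),
]
print(weighted_rating(reviews, date(2019, 6, 1)))
```

Note how the old unverified one-star review barely moves the result, so the weighted rating lands well above the plain mean of 3.3. That is exactly why the displayed stars can differ from a naive average of the visible reviews.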
Voice Control for all Your Gadgets
Hands free is good though perhaps not necessary. Gesture, which we also looked at, has emerged much less.
One Day your Voice will Control all your Gadgets, and they will Control you in Technology Review
Everything you own in the future will be controlled by your voice. That’s what this year’s CES, the world’s largest annual gadget bonanza, has made abundantly clear.
Google and Amazon have been in fierce competition to put their assistants into your TV, your car, and even your bathroom. It all came to a head this week in Las Vegas, where the full line-up of voice-enabled products underscored the scope of each company’s ambitions.
Maybe it seems like a wasteful side effect of capitalism that you can now ask Alexa to lift your toilet cover (or maybe not—you do you), but there’s more to the ubiquity of voice interfaces than a never-ending series of hardware companies jumping on the bandwagon.
It’s tied to an idea that leading AI expert Kai-Fu Lee calls OMO, online-merge-of-offline. OMO, as he describes it, refers to combining our digital and physical worlds in such a way that every object in our surrounding environment will become an interaction point for the internet—as well as a sensor that collects data about our lives. This will power what he dubs the “third wave” of AI: our algorithms, finally given a comprehensive view of all our behaviors, will be able to hyper-personalize our experiences, whether in the grocery store or the classroom. .... "
Memory Lane by Accenture
Nice idea. I have some colleagues who are working on related problems; will bring this up.
Accenture Interactive creates 'Memory Lane' AI project to tackle elderly loneliness By Imogen Watson in theDrum
Accenture Interactive has created a project for the Swedish energy company, Stockholm Exergi, that uses artificial intelligence (AI) to tackle elderly loneliness.
Following medical research into elderly health, Accenture Interactive discovered that loneliness accelerated health problems including depression and early-stage dementia in the elderly.
To combat this, it created a project titled 'Memory Lane' that uses a voice assistant combined with conversational artificial intelligence to capture stories for future generations.
Using Google Voice Assistant, 'Memory Lane' invites someone who is lonely to tell their life story. Once captured, the discussion is then instantly converted into both a physical book and a podcast. ... " .... '
Saturday, May 25, 2019
Facial Recognition in Retail
An expert discussion of the use vs risks of facial recognition in retail.
Do the benefits of using facial recognition in retail outweigh the risks? by Mark Ryski in Retailwire
Facial recognition technology has many practical applications, including law enforcement, airport security and retail loss prevention to name a few. And while these use cases seem reasonable to most people, not everyone is enamored with facial recognition technology or how it’s being used.
Thus far there has been very little legislation regarding the use of facial recognition, but that’s changing. On May 14, the city of San Francisco passed the Stop Secret Surveillance Ordinance that bans city agencies including law enforcement from utilizing facial recognition technologies. The legislation doesn’t apply to businesses, but one has to wonder if this is only a matter of time.
In March, a bipartisan bill was introduced in the U.S. Senate to strengthen consumer protections by prohibiting companies that use facial recognition technology from collecting and resharing data for identifying or tracking consumers without their consent. Illinois made it illegal to collect biometric data without consent in 2008. ... "
Kinds and Channels of Chat for Customers
Good thoughts about what customers need and the variety of chat channels.
Live chat and customer anxiety By Niamh Reed in Customerthink
Customers come in all moods and mindsets. Some are confident, some are angry, some are calm, some are anxious. And when it comes to customer service, some contact channels are better suited to different moods than others.
Customer anxiety, in particular, is tricky to manage. Service practices that work for most customers may not create good experiences for anxious ones. One of the best channels for catering to customer anxiety is live chat software.
Here’s why. .... "
Google Aims Assistant at Conversation
Have been impressed recently by Google's advances in assistance.
Google's AI Assistant aims to transcend the smart speaker in Techexplore by Rachel Lerman
... At the time, Amazon had been selling its Echo smart speaker, powered by its Alexa voice assistant, for more than a year. Apple's Siri was already five years old and familiar to most iPhone users. Google's main entry in the field up to that point was Google Now, a phone-bound app that took voice commands but didn't answer back.
Now the Google Assistant—known primarily as the voice of the Google Home smart speaker—is increasingly central to Google's new products. And even though it remains commercially overshadowed by Alexa, it keeps pushing the boundaries of what artificial intelligence can accomplish in everyday settings. ... "
Finding Manufacturing Defects with AI
Descriptive, largely non-technical article that explains the potential.
How to Detect Manufacturing Defects Using AI in Nanalyze
While most consumers don’t often see the process of manufacturing their products, they are most certainly aware of when there is a defect in the product they receive. This kind of experience can range from disappointing to downright disastrous – neither of which is ideal for the company producing the product. Unfortunately, the reality of modern manufacturing is that unexpected defects are more frequent than most realize, and most companies that produce physical products don’t know about critical defects until it is already too late: the product is in their customers’ hands.
A sour patch of Amazon reviews alone can tank the life of a product that ships with a small but severe issue. Incidentally, many companies have actually turned to Amazon reviews to find issues that went under the radar in their manufacturing facilities. While this is an interesting trend in its own right, most companies don’t want things to come to this point. There needs to be a more reliable way to prepare for unexpected yet major issues in a product – be it a discrepancy in sourced parts, a design flaw, or something we can’t even imagine. From there, how does a company keep that data close at hand and continually iterate on known and unknown problems? That is where Instrumental comes in.
How Instrumental Helps Find Defects .... "
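As a generic illustration of the potential the article describes (not Instrumental's actual vision-based method), defect detection can be as simple as flagging units whose measurements sit far outside the production baseline. The measurements, threshold, and injected defect below are all made up.

```python
import numpy as np

rng = np.random.default_rng(2)
# Baseline: 500 known-good units, each with 3 in-line measurements.
baseline = rng.normal(5.0, 0.1, (500, 3))
mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)

def flag_defects(units, z_threshold=4.0):
    """Return indices of units with any measurement more than
    z_threshold standard deviations from the baseline mean."""
    z = np.abs((units - mu) / sigma)
    return np.flatnonzero((z > z_threshold).any(axis=1))

# A new production batch with one defective unit injected at index 7.
batch = rng.normal(5.0, 0.1, (100, 3))
batch[7, 1] = 6.0  # measurement far off-spec
flagged = flag_defects(batch)
print(flagged)
```

Real systems like the one described learn far richer signatures from images, but the principle is the same: characterize "normal," then surface the units that deviate before they reach customers.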
Friday, May 24, 2019
Getting Driverless up to the Doorstep
Most interesting is that the computing power for the 'robot' will come from the vehicle. Good for now, but I wonder if advances will make this unnecessary. The wheel-less, stork-like Digit is different too; it's the first time I have seen such a walking device proposed. It will make the suburbs a weirder place, with storks patrolling the front yard. Can I ask it to do some light chores when it stops by?
Ford’s Way to Finish Driverless Deliveries: Package-Carrying Robots
Bloomberg By Keith Naughton
Researchers at Ford are working to incorporate into the company’s driverless delivery system an android capable of carrying a 40-pound load. The Digit android is intended to address what self-driving researchers refer to as "the last 50-foot problem," or how to get a package from an autonomous delivery vehicle to the customer's doorstep. Ford's decision to use a robot with two legs instead of wheels came with help from researchers at the University of Michigan, who emphasized the inherent attractiveness of a bipedal robot. In addition, the Digit system gets most of its computing power from Ford's self-driving vehicle, allowing the robot to have a more lightweight design. Ford would like to deploy Digit delivery robots as soon as 2021, the same year it plans to introduce its autonomous vehicle fleets. ... '
Making AI More Human by Adding a Dilemma
Wondering if the Prisoner's Dilemma is the right thing to use here, but I agree that showing typical human emotions can bring sympathy to a machine as it can to humans. Context dependent though, and the PD is a contrived context. Not a big enough sample either. Also, it's interesting that these are specifically 'avatars', or human-like agents. Would it be different if it was not an avatar? I like that the test is being done, though. Perhaps one way of certifying AI agents.
Making AI More Human
University of Waterloo News
A study by researchers at the University of Waterloo in Canada found that adding appropriate emotions to artificial intelligence (AI) avatars would make humans more accepting of them. Waterloo's Moojan Ghafurian, Neil Budnarain, and Jesse Hoey employed the classic Prisoner's Dilemma game, substituting one of two human "prisoners" with a virtual AI developed at the University of Colorado, Boulder. The researchers utilized virtual agents that evoked either neutrality, appropriate emotions, or random emotions. Participants cooperated 20 out of 25 times with the AI that exhibited human-like emotions, 16 out of 25 times for the agent with random emotions, and 17 out of 25 times for the emotionless agent. Ghafurian said, "Showing proper emotions can significantly improve the perception of humanness and how much people enjoy interacting with the technology." ... .'
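The cooperation counts reported above reduce to simple rates, and their effect can be illustrated with the standard Prisoner's Dilemma payoffs. A quick sketch; the payoff values are the textbook ones (T > R > P > S), not parameters from the Waterloo study:

```python
# Cooperation counts from the Waterloo study (out of 25 interactions each).
counts = {"appropriate_emotions": 20, "random_emotions": 16, "no_emotions": 17}
rates = {k: v / 25 for k, v in counts.items()}
print(rates)
# {'appropriate_emotions': 0.8, 'random_emotions': 0.64, 'no_emotions': 0.68}

# Standard one-shot Prisoner's Dilemma payoffs (assumed, not from the study).
PAYOFF = {  # (my_move, their_move) -> my_payoff
    ("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1,
}

def expected_payoff(p_coop_me, p_coop_them):
    """Expected payoff when each side cooperates independently with given probability."""
    return sum(
        PAYOFF[(m, t)]
        * (p_coop_me if m == "C" else 1 - p_coop_me)
        * (p_coop_them if t == "C" else 1 - p_coop_them)
        for m in "CD" for t in "CD"
    )

# Higher mutual cooperation raises the expected payoff for each player.
print(expected_payoff(0.8, 0.8) > expected_payoff(0.64, 0.64))  # True
```

The point of the arithmetic: if an emotional avatar lifts cooperation from roughly 0.64 to 0.8, both parties come out ahead under any standard PD payoff matrix.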
Bosch is Thinking IoT Smart Contracts
An example of smart contract use by big players, here with IoT transactions. Now what happens when the code goes beyond counting coins?
Why Bosch is jumping on the Ethereum blockchain From deCryptmedia
German engineering giant Bosch wants to connect millions of cars, machines, buildings and things—and get them to pay each other. Smart contracts may be the answer. ..... " By Adriana Hamacher
-> Also see Bosch's well-done page on the workings of blockchains and smart contracts, which gives some indication of what they are doing there.
Real Estate Investment Support via Blockchain
Brought to my attention to support a potential project, examining the motivation for blockchains, their architecture, and integrated smart contract use.
For Real Estate, Blockchain Could Unshackle Investment
A special interest group within the Enterprise Ethereum Alliance is detailing opportunities and offering examples of how blockchain can create new real estate markets. .... " By Lucas Mearian, Senior Reporter, Computerworld .... "
And a PDF on Real Estate Use cases by using tokenization of investments and agreements:
.... Blockchain as an Enabling Technology
One of the benefits of blockchain technology, and security tokens in particular, is that it offers a way to buy and sell properties in more granular pieces. A property, for example, can be divided into individual investment units each identified and embodied via a security token (via the ERC 20 or ERC 721 specifications or a variant thereof). These tokens will identify ownership, provide a mechanism for transactional processing, and serve as the property identifier to allow for trading on regulated secondary markets. .... "
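Fractional ownership via tokens, as described above, is at its core a ledger mapping holders to unit counts. A toy Python model of an ERC-20-style divisible property follows; the names and unit counts are purely illustrative, and a real token would live on-chain with signed transactions:

```python
class PropertyToken:
    """Toy ERC-20-style ledger: a property divided into transferable units."""

    def __init__(self, total_units, issuer):
        self.total_units = total_units
        self.balances = {issuer: total_units}  # issuer starts with all units

    def transfer(self, sender, recipient, units):
        """Move units between holders, refusing overdrafts."""
        if self.balances.get(sender, 0) < units:
            raise ValueError("insufficient units")
        self.balances[sender] -= units
        self.balances[recipient] = self.balances.get(recipient, 0) + units

building = PropertyToken(total_units=1_000, issuer="fund")
building.transfer("fund", "investor_a", 250)   # investor_a now owns 25%
building.transfer("fund", "investor_b", 100)   # investor_b now owns 10%
print(building.balances)
# {'fund': 650, 'investor_a': 250, 'investor_b': 100}
```

The granularity claim in the excerpt falls out directly: any unit count can change hands, so a property can be sold in arbitrarily small pieces.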
And on the value of smart contracts (p. 9):
"... By utilizing smart contracts, the whole agreement can be automated and payments can be sent and received instantly. A smart contract (deployed on a decentralized blockchain network) can make it possible to write, authenticate, and audit agreements in realtime. This can be done on a global scale and without the need for intermediaries, thus keeping the value between the main parties involved in the deal. Within the smart contract (which is typically publicly available for anyone) the instructions and dependencies are clearly defined so payment can only be executed as long as these conditions are fulfilled. This gives greater transparency to the parties involved and theoretically reduce the number of disputes. Smart contract processing also has the potential to reduce the risk of fraud, as digital identity verification will be a step in the process and only allowed parties can interact with them using their private keys. Every node within a blockchain network is continually validating all transactions in the blockchain thereby reducing the likelihood of a fraudulent transaction. ... "
Deriving Scent from DNA?
Way back when we were looking at scent and flavor in consumer products, we talked to scientists about the possibility of extracting these measures from the DNA of component natural plants and flowers. We were told it was not possible. It seems we are still not there today, but some interesting advances have been made toward doing this. There are some clear commercial possibilities, as well as a better understanding of our natural world.
Science has brought back the scent of a long-dead flower in Engadget By Nick Summers, @nisummers
Well, sort of.
We've lost some parts of our natural world. Swathes of plants and animals have been consumed by evolution, shifting climates or the often-damaging expansion of humankind.
For a moment, though, London's iconic Barbican center will let you smell a fragment of our lost history. In the corner of a new AI exhibit, a cuboid hood dangles from the ceiling. Inside are four nozzles that slowly release carefully-chosen fragrances into the air around you.
Bark. Pine. Mint. I'm no smell expert, but these are the words that sprang to mind as I slowly inhaled the odors.
The artificial blend is, for now, our best guess at what Hibiscadelphus wilderianus, a tree that once stood on the Hawaiian island of Maui, used to smell like. A small rock sits to the right of the nozzles, hinting at the ancient lava fields where the last specimen was plucked from in 1912. It's a modest visual aid, which is why some virtual environments are included in a short documentary that plays on a loop nearby. Taken as a whole, the installation is powerful enough to drown out the rest of the exhibit and, for a brief moment, transport you to another time and place entirely.
The project was a multi-year collaboration between, among others, Ginkgo Bioworks, a company that specializes in made-to-order microbes, the International Flavors & Fragrances Inc. (IFF), Sissel Tolaas, a prolific smell researcher, and Dr. Alexandra Daisy Ginsberg, a multidisciplinary artist and synthetic biology researcher. As Scientific American explains, it all started when Jason Kelly, the CEO of Ginkgo Bioworks, heard about Scent Trek, an initiative by flavor and fragrance giant Givaudan to capture the molecules around exotic flowers and fruits. ... "
Samsung Puppets the Human Face
With lots of interesting video examples at the link. From Samsung's AI group. Fascinating and also a bit scary. When will we not need actors at all? This is the first time I had seen the term 'puppeting'.
Oh no, Samsung’s AI lab can create a video of you from a single still photo
The company recently showed off AI that allows it to “puppet” someone’s face onto another person’s body–using only a single photo for reference. By Mark Wilson in Fast Company
Two years ago, a new, freely distributed AI software called “Deepfakes” enabled the public to melt reality by placing anyone’s head on someone else’s body in any video. Deepfakes is powerful, scary, and just labor intensive enough that our world hasn’t imploded yet. The AI’s biggest challenge is that for the tech to work convincingly, you have to collect hundreds of videos and images to create a digital mold of the person you want to impersonate.
But what if creating a digital clone didn’t require all this work? What if you could fake someone from a single photo? That’s the promise of new research out of Samsung’s AI lab. Starting with just one photo, Samsung’s latest AI technique can animate the 2D image into a convincing, full motion video. They’ve animated Britney Spears, Neil Patrick Harris, Marilyn Monroe, even the Mona Lisa herself. ... "
New IOT Security Features
As more vulnerable devices are out there, they need to be better protected.
Rice U. Researchers Unveil IoT Security Feature
Rice University By Jade Boyd
Researchers at Rice University have developed physically unclonable function (PUF) technology, which is 10 times more reliable than current methods of producing unclonable digital fingerprints for Internet of Things (IoT) devices. PUF uses a microchip's physical imperfections to produce unique security keys that can be used to authenticate devices linked to the IoT. The system generates two unique fingerprints for each PUF, known as the "zero-overhead" method. This method uses the same PUF components to make both keys and does not require extra area and latency because of a design feature that also allows the PUF to be about 15 times more energy-efficient than previously developed versions. Said Rice University researcher Kaiyuan Yang, "In our design, the PUF module is always on, but it takes very little power, even less than a conventional system in sleep mode."
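The PUF idea — deriving keys from fixed physical imperfections — can be simulated in miniature. In this sketch each "chip" gets a stable random imperfection vector, and two distinct fingerprints are derived from the same cells. This is a toy stand-in for the zero-overhead scheme, not the actual Rice circuit; the reversal trick for the second key is my own illustrative choice:

```python
import hashlib
import random

def make_chip(chip_id, n_cells=548):
    """Simulate per-chip manufacturing imperfections as stable random bits."""
    rng = random.Random(chip_id)  # stands in for physical randomness
    return [rng.getrandbits(1) for _ in range(n_cells)]

def fingerprints(cells):
    """Derive two distinct keys from the same PUF cells ('zero overhead'):
    one from the raw bits, one from the bits in reverse order."""
    bits = "".join(map(str, cells))
    key_a = hashlib.sha256(bits.encode()).hexdigest()
    key_b = hashlib.sha256(bits[::-1].encode()).hexdigest()
    return key_a, key_b

chip1, chip2 = make_chip("chip-1"), make_chip("chip-2")
a1, b1 = fingerprints(chip1)
a2, b2 = fingerprints(chip2)
print(a1 == fingerprints(chip1)[0])  # True: same chip, reproducible key
print(a1 != a2)                      # True: different chips, different keys
print(a1 != b1)                      # True: two distinct keys per chip
```

The properties being demonstrated are the ones that matter for authentication: keys are reproducible on the same device, unique across devices, and two keys come from one set of physical components.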
Alexa Reports Your Blood Sugar Level
An example of smart home data reporting, though I'm not sure how they have gotten past regulations in the space.
Alexa, What's my Blood Sugar Level? in Wired
Amazon may be known as the “everything store,” but the company’s tendrils extend far beyond ecommerce. On Thursday, Amazon said Alexa-enabled devices can now handle customers’ sensitive medical data, and it teased the release of a new kit that would allow approved outside developers to build Alexa skills that access users’ private health information, paving the way for the voice assistant to play a bigger role in health care.
With the announcement came the release of new skills giving Alexa the ability to relay and store blood sugar measurements from internet-connected monitoring devices, help schedule doctors’ appointments, pass on post-op instructions from hospitals, and provide prescription delivery updates by securely accessing customers’ private medical information. .... "
Thursday, May 23, 2019
Google Duo Group Video
Notable that this is available for both iOS and Android. Will take a look. Ability to share docs and screens for collaborative work? Integration with Google Drive? Support for other business augmentation and functions?
Google Duo's Group Video Calls Roll out to Everyone in Engadget
You can chat with up to seven of your friends at the same time.
Google is making Duo more useful as it's rolling out group video calls to everyone on Android and iOS. You can have up to eight people on a call at once (a far lower limit than FaceTime's 32 and Skype's 50). Group calls gradually went live in some markets this month, but now they'll be available for everyone. ... "
Digital Signage
An area we spent lots of time on in innovation labs, check out the 'digital signage' tag below for more.
About this event and signup,
The latest interactive, future-proof tech puts digital signage at the forefront of in-store customer experience.
Digital signage has gone from non-responsive one-way messaging in the 80’s and 90’s to a broad array of platforms today. It offers retailers innumerable opportunities to engage shoppers, direct foot traffic and energize their in-store sales with targeted offers and promotions from this data-rich source.
Join us for this educational webinar to learn the latest in marketing reach and shopper engagement through IoT-fueled digital signage — including actionable consumer insights from digital displays, display-to-mobile advertising and personalized marketing via displays. You’ll learn how to capture more visitor time, spend and mindshare with these onsite customer experience tools and will hear inspiring retailer case studies.
June 19, 2019, Wednesday, 12:00pm to 1:00pm, Eastern Daylight Time
About the presenters:
Featured Presenter Jeth Harbinson, IoT Segment Lead, Retail Arrow Electronics
Featured Presenter, Ajay Kapoor, CEO, TouchSource
BrainTrust Panelist, Adrian Weidmann, Principal, StoreStream Metrics, LLC
Moderator, Al McClain, CEO, Co-founder, RetailWire
Use of AI in Retail
Useful to see how big retail is thinking about this.
BBQ Guys and Lowe’s discuss best practices for implementing AI tech by Guest contributor Bryan Wassel, Associate Editor, Retail TouchPoints
... Through a special arrangement, presented here for discussion is a summary of a current article from the Retail TouchPoints website. ...
Fine-tuning data science solutions to optimize results has been, relatively speaking, the easy part. Preparing people throughout the retail organization to take advantage of the new insights is the more complicated task, IT executives indicated on a panel at the 2019 Retail Innovation Conference.
“Executives like to believe that 99 percent of your time is spent on building the algorithms involved — but actually that’s the smallest part,” said Doug Jennings, VP of data and analytics at Lowe’s.
Teams across the organization must be educated on how these solutions will affect their jobs and have reasonable expectations about how much things will change. “We have to show some sort of roadmap of where we want to go,” said Jason Stutes, director of analytics & design at BBQ Guys.
One key ingredient is making a dashboard that is able to go through insights piece by piece, enabling marketers to understand the popularity of items beyond just how many were sold. A carefully built machine learning tool helps Lowe’s pull apart historical sales at a very granular level to see just what shoppers are looking for in any given category. Taking into account activities at nearby competing retailers can be invaluable. ... "
Lifelong Learning with Nets?
A considerable challenge. True, we can get more data over time to improve our learning, but my guess is that the network architectures will change as well. How will that change how we use data to produce solutions? Yes, animals and humans continue to learn, but they also seem to have a limit to what they can learn based on the structure of knowledge and how it is presented. Below is an abstract; there are a number of technical link references in the article itself.
Lifelong Learning in Artificial Neural Networks By Gary Anthes Communications of the ACM, June 2019, Vol. 62 No. 6, Pages 13-15 10.1145/3323685
Over the past decade, artificial intelligence (AI) based on machine learning has reached break-through levels of performance, often approaching and sometimes exceeding the abilities of human experts. Examples include image recognition, language translation, and performance in the game of Go.
These applications employ large artificial neural networks, in which nodes are linked by millions of weighted interconnections. They mimic the structure and workings of living brains, except in one key respect—they don't learn over time, as animals do. Once designed, programmed, and trained by developers, they do not adapt to new data or new tasks without being retrained, often a very time-consuming task.
Real-time adaptability by AI systems has become a hot topic in research. For example, computer scientists at Uber Technologies last year published a paper that describes a method for introducing "plasticity" in neural networks. In several test applications, including image recognition and maze exploration, the researchers showed that previously trained neural networks could adapt to new situations quickly and efficiently without undergoing additional training.
"The usual method with neural networks is to train them slowly, with many examples; in the millions or hundreds of millions," says Thomas Miconi, the lead author of the Uber paper and a computational neuroscientist at Uber. "But that's not the way we work. We learn fast, often from a single exposure, to a new situation or stimulus. With synaptic plasticity, the connections in our brains change automatically, allowing us to form memories very quickly. .... "
Designing Robots with Personality
We always attribute a bit of personality to our devices, but what amount and kind is useful for the best results, with minimal unintended consequences?
Character Engineer: Designing robots with a touch of personality.
So, Mark Palatucci EAS’00 wants to put a robot in every home.
That might sound familiar. After all, you may even already have one. But Palatucci, a cofounder of the San Francisco-based robotics company Anki, isn’t thinking about task-oriented automatons or self-directed vacuum cleaners. He’s not even thinking about smart speakers. He’s designing robots with “character”—enough to spark an emotional connection with their owners.
“People are much more willing to put a character in their home than they are just some smart cylinder or smart speaker that doesn’t have any emotion or character built around it,” he says. “It creates a sense of trust that a lot of other products don’t necessarily have.”
And if that trust leads to more engagement with the robot—whether it’s playing games with a robot called Cozmo, or getting Vector, another model, to take a picture when your hands are full—all the better.
Anki’s aim in building robots is to enable people to “build relationships with technology that feel a little more human.” Palatucci, who earned a computer science and engineering degree at Penn, is the company’s head of cloud artificial intelligence and machine learning. Their products have been getting notice. .... "
Evaluating the Use-ability of VR
Useful to see a usability measure proposed here.
How Usable Is VR?
University of Göttingen
Patrick Harms at the University of Göttingen in Germany has designed an automated process for evaluating the usability of virtual reality (VR). Harms' process, which can detect many issues with user friendliness and usability in the virtual environment, begins by recording testers' individual activities and movement, producing "activity lists." A program (MAUSI-VR) mines those lists for typical user behavioral patterns, then assesses this behavior as it relates to defined irregularities. Said Harms, "This makes it possible...to determine how well users of a specific VR are guided by it and whether they usually have to perform ergonomically inconvenient procedures during its operation." ... "
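The pipeline described above (record activity lists, mine recurring behavior patterns, check them against defined irregularities) can be sketched roughly in Python. Everything here is an illustrative assumption, not MAUSI-VR's actual method: the action labels, the `INCONVENIENT` set, and the simple n-gram counting used as a stand-in for pattern mining.

```python
from collections import Counter

# Hypothetical recorded activity lists, one per tester: each entry is an
# action label captured during a VR session.
activity_lists = [
    ["grab", "turn_head", "reach_behind", "release"],
    ["grab", "reach_behind", "release", "grab"],
    ["grab", "turn_head", "reach_behind", "release"],
]

# Actions we (hypothetically) define as ergonomically inconvenient.
INCONVENIENT = {"reach_behind", "crouch_long"}

def frequent_patterns(lists, n=2, min_count=2):
    """Count n-grams of consecutive actions across all testers and keep
    those seen at least min_count times (a stand-in for pattern mining)."""
    counts = Counter()
    for acts in lists:
        for i in range(len(acts) - n + 1):
            counts[tuple(acts[i:i + n])] += 1
    return {p: c for p, c in counts.items() if c >= min_count}

patterns = frequent_patterns(activity_lists)
# Flag frequent patterns that include an inconvenient action.
flagged = [p for p in patterns if INCONVENIENT.intersection(p)]
print(flagged)
```

The idea is only that typical behavior (frequent patterns) can be checked automatically against a list of defined irregularities; a real tool would mine far richer movement data.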
Wednesday, May 22, 2019
Following Locations on Google Maps
Clever idea. Let places advertise to you via alerts. It monetizes the map with change alerts for a place: events, offers, and so on. Now I just want Google to work on repairing and maintaining its other clever services, like location sharing. I don't think they can monetize that, except as an attraction for their maps, and it's been broken for a long time. A bad sign for the longer range.
Now on iOS: Follow your favorite places on Google Maps in the Google Blog
Starting this week, you can stay up to date on your favorite places right from the Google Maps app on iOS. Simply search for a place—whether it’s a new restaurant that just opened up in your neighborhood or that must-try bakery across town—and tap the Follow button. You’ll then be able to see important updates from these places in your For you tab so you can quickly learn about upcoming events, offers and more. ... "
Talk on Project Debater
Invitation to the ISSIP Cognitive Systems Institute Group Webinar
Please join us for the next ISSIP CSIG Speaker Series "IBM Project Debater"
Noam Slonim, IBM, When: Thursday, May 23, 10:30am - US Eastern
Zoom Detail Below
Background:
Dr. Noam Slonim is a Distinguished Engineer at IBM Research AI. Noam completed his PhD at the Interdisciplinary Center for Neural Computation at the Hebrew University in 2002. After a few years as an Associate Research Scholar at the Genomics Institute at Princeton University, Noam joined IBM Research in 2007. In 2011, he proposed Project Debater as the next Grand Challenge for IBM Research, and he has served as the Principal Investigator of Project Debater ever since.
Task Description:
Project Debater is the first AI system developed to compete in a full, live debate with a human debater. The project, an IBM Grand Challenge, is designed to build coherent, convincing speeches on its own, as well as provide rebuttals to the opponent's main arguments. In February 2019, Project Debater competed against Harish Natarajan, who holds the world record for most debate victories, in an event held in San Francisco and broadcast live worldwide. In this talk I will tell the story of Project Debater, from conception to a climactic final event, describe its underlying technology, and discuss how it can be leveraged for advancing decision making and critical thinking.
Date and Time : May 23 2019 - 10:30am US Eastern
http://cognitive-science.info/community/weekly-update/ the slides and talk will be posted here
Please retweet - https://twitter.com/sumalaika/status/1130745555855007744
Join LinkedIn Group https://www.linkedin.com/groups/6729452
Crafting Intelligible Intelligence
Good overview of the challenge. We can now solve small parts, but making those solutions work together in context and in practice remains a craft. Include your decision makers and people with knowledge of the data and business practices. Involve them early and often. Consider context and consequences. Mold process scripts and have conversations about how they work. Keep testing and be ready to adapt.
Introductory video:
The Challenge of Crafting Intelligible Intelligence By Daniel S. Weld, Gagan Bansal
Communications of the ACM, June 2019, Vol. 62 No. 6, Pages 70-79
10.1145/3282486
Artificial Intelligence (AI) systems have reached or exceeded human performance for many circumscribed tasks. As a result, they are increasingly deployed in mission-critical roles, such as credit scoring, predicting if a bail candidate will commit another crime, selecting the news we read on social networks, and self-driving cars. Unlike other mission-critical software, extraordinarily complex AI systems are difficult to test: AI decisions are context specific and often based on thousands or millions of factors. Typically, AI behaviors are generated by searching vast action spaces or learned by the opaque optimization of mammoth neural networks operating over prodigious amounts of training data. Almost by definition, no clear-cut method can accomplish these AI tasks. ... "
Design Thinking
Overview of the techniques involved at the link.
5 Innovative Ways To Design Thinking in CustomerThink By Swarnendu De
Businesses are now embracing the concept of design thinking, as its practitioners have proven track records of success. Although “Design Thinking” dates back to 1969, most companies are still struggling with the concept today.
Those who succeed, of course, follow best practices. Getting the design right brings substantial rewards. Companies that closely follow all the major stages of the design thinking process stand out with their “Wow” designs.
The need for design is growing, as consumers now have higher expectations. They want direct access to marketplaces and the best, most distinctive products and services. It has become extremely difficult for entrepreneurs to stand out from the crowd.
So, What Is Design Thinking All About? : ... "
USPS Tests Self Driving Trucks
Not home delivery yet, but trucking.
USPS starts testing self-driving trucks
The US Postal Service has begun a two-week test using self-driving trucks to transport mail between Phoenix and Dallas. The startup that developed the vehicles, TuSimple, has raised $178 million from investors including China's Sina and the US-based chipmaker Nvidia. ... "
Patient Adherence Measures in Medical Regimens
I had not heard the term adherence used before, but here it is defined as a combination of compliance and persistence. We typically used compliance by itself to bundle both, but I like treating them as separate measures.
Improving Patient Adherence through Data-driven Insights By Jason Hichborn, Sari Kaganoff, Nisha Subramanian, and Ziv Yaar
When patients fail to follow prescribed medical regimens, outcomes suffer. A McKinsey study points to areas pharmaceutical companies can address to combat this long-standing industry issue. ...
' .... Persistence. How long patients take a drug before either switching to a new drug or stopping treatment entirely. This is measured by how many patients continue to fill their prescriptions.
Compliance. How closely patients follow the prescribed treatment plan. This is measured by how many persistent patients fill their prescribed doses on schedule, based on the approved product label. We have considered patients to be compliant if at least 80 percent of doses, according to approved product label, were filled within the study period.
Adherence. Combined view of compliance and persistence, measured by the share of all patients, who fill their prescribed doses on schedule, based on the approved product label. Similar to compliance, we have considered patients to be adherent if at least 80 percent of doses, according to approved product label, were filled within the study period. ... "
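As a rough illustration of how the three measures relate, here is a minimal sketch in Python. The fill records, the field layout, and the `rates` helper are all hypothetical; only the 80% threshold and the definitions themselves come from the excerpt above.

```python
# Hypothetical per-patient fill records for one study period:
# (doses_prescribed, doses_filled_on_schedule, still_on_therapy)
patients = [
    (12, 11, True),   # persistent and >= 80% filled on schedule
    (12, 8,  True),   # persistent but only ~67% filled
    (12, 10, False),  # ~83% filled, but discontinued therapy
]

THRESHOLD = 0.80  # the 80% fill threshold used in the excerpt

def rates(patients):
    n = len(patients)
    # Persistence: share of patients who continue to fill prescriptions.
    persistent = [p for p in patients if p[2]]
    # Compliance: measured among persistent patients only.
    compliant = [p for p in persistent if p[1] / p[0] >= THRESHOLD]
    # Adherence: combined view over ALL patients, requiring both
    # persistence and the threshold share of doses filled on schedule.
    adherent = [p for p in patients if p[2] and p[1] / p[0] >= THRESHOLD]
    return {
        "persistence": len(persistent) / n,
        "compliance": len(compliant) / len(persistent) if persistent else 0.0,
        "adherence": len(adherent) / n,
    }

print(rates(patients))
```

The sketch makes the distinction in the excerpt concrete: a patient can be compliant-looking by dose share yet not adherent because they discontinued, which is why adherence is reported over all patients rather than only persistent ones.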