Simple but well stated. How do you test? What's important? Useful tips follow. See also 'Chaos Engineering' at the tag below, now also in common use.
4 keys to a top-down testing strategy
By Hans Buwalda, Chief Technology Officer, LogiGear
It's difficult enough to keep up with advances in software testing. Add to that the growing number of misunderstandings between test teams and leaders about how, where, and when to test, and it becomes even more challenging to achieve on-time, quality releases.
If you involve senior leaders in the testing process from the beginning, you can overcome many of these problems. While those leaders won't be doing the manual work, they should understand at a basic level what it takes to test and deliver great software.
From there, it's possible to establish leadership from the top by setting clear expectations around quality and by providing the proper support for team members. This support includes education, training, tools, and more.
As firms look to adopt testing in continuous delivery, they are questioning whether test plans still have a place. They do. Modern test plans can provide a vehicle for creating and communicating the test strategy and approach. They give teams an opportunity to communicate assumptions and approaches, including exclusions such as why a factor wasn't tested.
Here are some key testing tips to keep in mind: ....
Wednesday, July 31, 2019
Why Digital Transformations Fail
Currently reading this; the author is a former colleague who is on top of this problem for the complex enterprise. Advanced methods emerging today, like AI, need data, and thus need the enterprise to be operating digitally to make that data available. This book is a great start.
Why Digital Transformations Fail: The Surprising Disciplines of How to Take Off and Stay Ahead Hardcover – July 23, 2019 by Tony Saldanha (Author), Robert A. McDonald (Foreword)
5.0 out of 5 stars 4 customer reviews
Former Procter & Gamble Vice President for IT and Shared Services, Tony Saldanha gives you the keys to a successful digital transformation: a proven five-stage model and a disciplined process for executing it.
Digital transformation is more important than ever now that we're in the Fourth Industrial Revolution, where the lines between the physical, digital, and biological worlds are becoming ever more blurred. But fully 70 percent of digital transformations fail.
Why? Tony Saldanha, a globally awarded industry thought-leader who led operations around the world and major digital changes at Procter & Gamble, discovered it's not due to innovation or technological problems. Rather, the devil is in the details: a lack of clear goals and a disciplined process for achieving them. In this book, Saldanha lays out a five-stage process for moving from digitally automating processes here and there to making digital technology the very backbone of your company. For each of these five stages, Saldanha describes two associated disciplines vital to the success of that stage and a checklist of questions to keep you on track.
You want to disrupt before you are disrupted--be the next Netflix, not the next Blockbuster. Using dozens of case studies and his own considerable experience, Saldanha shows how digital transformation can be made routinely successful, and instead of representing an existential threat, it will become the opportunity of a lifetime. .... "
Think to Type Getting Closer
Though I remember this being described as close years ago. Introducing anticipatory completion may help, but it will also bring some of the usual errors.
Facebook is inching closer to a think-to-type computer system
There's a long way to go to 100 word-per-minute mental typing, though. By Steve Dent, @stevetdent in Engadget
Elon Musk isn't the only one who wants us to communicate via brainwaves. Facebook also has ambitious plans to interface with computers using wearables and one day let us type rapidly with our brains. Now, neuroscientists from the University of California, San Francisco (backed by Facebook's Reality Labs) have demonstrated a system that can translate speech into text in real time using brain activity only. While impressive, it shows that the technology still has a long ways to go.
Brain-computer interface systems already exist, but require users to mentally select a letter at a time on a virtual keyboard, a process that tends to be very slow. The UCSF researchers, however, tried to use context to help the machines translate entire words and phrases. .... "
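The context-driven idea in the excerpt can be illustrated with a toy decoder. This is purely a hypothetical sketch, not the UCSF model: the vocabulary, the per-letter "evidence" distributions, and the scoring function are all invented for illustration. It shows why scoring whole candidate words lets context resolve letters that are ambiguous on their own, instead of forcing the user to select one letter at a time.

```python
# Hypothetical sketch (not UCSF's actual model): score whole candidate
# words against noisy per-letter evidence, letting vocabulary context
# resolve letters that are ambiguous in isolation.
import math

# Toy "brain signal": a distribution over letters at each position.
# The second letter is ambiguous between 'a' and 'e'.
evidence = [
    {"h": 0.9, "n": 0.1},
    {"a": 0.5, "e": 0.5},
    {"l": 0.8, "t": 0.2},
    {"p": 0.7, "d": 0.3},
]

vocabulary = {"help": 0.6, "hand": 0.2, "heat": 0.2}  # prior word frequencies

def word_score(word, evidence, prior):
    """Log-probability of a word: per-letter evidence plus a language prior."""
    if len(word) != len(evidence):
        return float("-inf")
    score = math.log(prior)
    for letter, dist in zip(word, evidence):
        score += math.log(dist.get(letter, 1e-6))  # tiny floor for unseen letters
    return score

def decode(evidence, vocabulary):
    """Pick the vocabulary word that best explains the evidence."""
    return max(vocabulary, key=lambda w: word_score(w, evidence, vocabulary[w]))

print(decode(evidence, vocabulary))
```

Letter-by-letter, position two is a coin flip; over whole words, only "help" is consistent with the remaining positions, so context settles it.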
Amazon Managed Blockchain
Received from AWS; see also the FAQ.
Amazon Managed Blockchain
Easily create and manage scalable blockchain networks
Amazon Managed Blockchain is a fully managed service that makes it easy to create and manage scalable blockchain networks using the popular open source frameworks Hyperledger Fabric and Ethereum*.
Blockchain makes it possible to build applications where multiple parties can execute transactions without the need for a trusted, central authority. Today, building a scalable blockchain network with existing technologies is complex to set up and hard to manage. To create a blockchain network, each network member needs to manually provision hardware, install software, create and manage certificates for access control, and configure networking components. Once the blockchain network is running, you need to continuously monitor the infrastructure and adapt to changes, such as an increase in transaction requests, or new members joining or leaving the network.
Amazon Managed Blockchain is a fully managed service that allows you to set up and manage a scalable blockchain network with just a few clicks. Amazon Managed Blockchain eliminates the overhead required to create the network, and automatically scales to meet the demands of thousands of applications running millions of transactions. Once your network is up and running, Managed Blockchain makes it easy to manage and maintain your blockchain network. It manages your certificates and lets you easily invite new members to join the network.
Get started with Hyperledger Fabric using Amazon Managed Blockchain here.
For applications that need an immutable and verifiable ledger database, visit Amazon QLDB here.
*Hyperledger Fabric available today. Ethereum coming soon. ...
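As a rough idea of what "a few clicks" corresponds to programmatically, here is a hedged sketch of the request shape for creating a Hyperledger Fabric network with the AWS SDK for Python (boto3). The network name, member name, framework version, and voting thresholds are illustrative assumptions, and the actual API call is left commented out so the snippet runs without AWS credentials.

```python
# Hedged sketch: parameter shape for boto3's 'managedblockchain'
# create_network call for a starter Fabric network. Names and values
# below are illustrative assumptions, not a recommended configuration.
def fabric_network_request(network_name, member_name, admin_user, admin_password):
    """Assemble a create_network request for a starter Fabric network."""
    return {
        "Name": network_name,
        "Framework": "HYPERLEDGER_FABRIC",
        "FrameworkVersion": "1.2",
        "FrameworkConfiguration": {"Fabric": {"Edition": "STARTER"}},
        # Members vote on proposals, such as invitations to new members.
        "VotingPolicy": {
            "ApprovalThresholdPolicy": {
                "ThresholdPercentage": 50,
                "ProposalDurationInHours": 24,
                "ThresholdComparator": "GREATER_THAN",
            }
        },
        # The first member of the network, with its Fabric admin identity.
        "MemberConfiguration": {
            "Name": member_name,
            "FrameworkConfiguration": {
                "Fabric": {"AdminUsername": admin_user, "AdminPassword": admin_password}
            },
        },
    }

request = fabric_network_request("SupplyChainNet", "FirstMember", "admin", "Password123!")
# import boto3
# boto3.client("managedblockchain").create_network(**request)
```

The voting policy is the part that replaces a central authority: changes to membership are approved by vote rather than by one operator.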
Polly Makes for Better Text to Voice
Just heard a demonstration of Amazon's new Polly voice delivery system. Very much better than the current 'reading' capability. Will be trying it with some of my test text files. Key also is the ability to retrain to new voice styles.
Amazon’s text-to-speech service Polly gets a newscaster-style voice. By Mike Wheatley in SiliconANGLE
Amazon Web Services Inc. is taking on Google LLC in human voice replication, adding two new features today to Amazon Polly, a cloud-based service that transforms text into lifelike speech and is used to create applications that can talk.
The first of the new features is called Neural Text-To-Speech, which Amazon says delivers “significant improvements” in speech quality by boosting the “naturalness” and “expressiveness” of synthesized voices.
One of the great things about Neural Text-To-Speech is that it’s able to learn new speaking styles with just a few hours of training, thanks to a new artificial intelligence model that Amazon wrote about in a research paper last year. That model works by combining large amounts of standard, neutral speech with just a few hours of additional voice data in the target speaking style. New supplementary data can be added as desired to create various additional speaking styles. .... "
More about AWS Polly.
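To make the above concrete, here is a hedged boto3 sketch of requesting Polly's neural engine, with the newscaster speaking style expressed through an SSML domain tag. The voice choice and sample text are assumptions, and the network call is commented out so the example runs offline.

```python
# Hedged sketch: request Polly's neural engine and the newscaster style
# via SSML with boto3. Voice and text are illustrative assumptions.
def newscaster_request(text, voice="Matthew"):
    """Build a synthesize_speech request using the neural engine."""
    ssml = f'<speak><amazon:domain name="news">{text}</amazon:domain></speak>'
    return {
        "Engine": "neural",        # rather than the default "standard" engine
        "OutputFormat": "mp3",
        "VoiceId": voice,
        "TextType": "ssml",        # needed so the domain tag is interpreted
        "Text": ssml,
    }

request = newscaster_request("Amazon Polly adds a newscaster-style voice.")
# import boto3
# audio = boto3.client("polly").synthesize_speech(**request)["AudioStream"].read()
```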
Microsoft Moves to Teams, Drops Skype for Business Online
Have now been using Teams since its inception. Works well for team operations, chats, and goal-focused collaboration. Skype works but has always been shaky; its advantage is its huge global enrollment. How will either of these fare versus Slack? Right now Teams is free. But converting between any of these will be cumbersome for the enterprise.
Microsoft will drop Skype for Business Online on July 31, 2021
As part of its Skype for Business Online to Teams transition, Microsoft will enable Skype consumer users to communicate with Teams ones using chat and calling starting in Q1 2020.
By Mary Jo Foley for All About Microsoft .... '
Toward a General Theory of Neural Nets
Good, thoughtful piece in Quanta Mag. It does not mean we have solved the general problem, nor does it mean we have even successfully used the model of biological neurons completely for AI problems. I do like the steam engine analogy. We know, somewhat, how to use them to solve some problems. But there is still a long way to go.
Foundations Built for a General Theory of Neural Networks By Kevin Hartnett, Senior Writer
Neural networks can be as unpredictable as they are powerful. Now mathematicians are beginning to reveal how a neural network’s form will influence its function.
When we design a skyscraper we expect it will perform to specification: that the tower will support so much weight and be able to withstand an earthquake of a certain strength.
But with one of the most important technologies of the modern world, we’re effectively building blind. We play with different designs, tinker with different setups, but until we take it out for a test run, we don’t really know what it can do or where it will fail.
This technology is the neural network, which underpins today’s most advanced artificial intelligence systems. Increasingly, neural networks are moving into the core areas of society: They determine what we learn of the world through our social media feeds, they help doctors diagnose illnesses, and they even influence whether a person convicted of a crime will spend time in jail.
Yet “the best approximation to what we know is that we know almost nothing about how neural networks actually work and what a really insightful theory would be,” said Boris Hanin, a mathematician at Texas A&M University and a visiting scientist at Facebook AI Research who studies neural networks.
He likens the situation to the development of another revolutionary technology: the steam engine. At first, steam engines weren’t good for much more than pumping water. Then they powered trains, which is maybe the level of sophistication neural networks have reached. Then scientists and mathematicians developed a theory of thermodynamics, which let them understand exactly what was going on inside engines of any kind. Eventually, that knowledge took us to the moon.
“First you had great engineering, and you had some great trains, then you needed some theoretical understanding to go to rocket ships,” Hanin said. .... "
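The form-influences-function point can be made concrete with a toy example of my own, not taken from the article: a one-hidden-layer ReLU network on a one-dimensional input is piecewise linear, and its width caps the number of "kinks" (region boundaries) the function can have.

```python
# Toy illustration (not from the article): a one-hidden-layer ReLU net
# on a 1-D input is piecewise linear, and its width bounds how many
# kinks it can express -- one small sense in which a network's form
# limits its function.
import random

random.seed(0)

def relu_net(x, weights):
    """f(x) = sum_i v_i * relu(w_i * x + b_i)"""
    return sum(v * max(0.0, w * x + b) for w, b, v in weights)

def count_kinks(weights):
    """Each hidden unit contributes at most one breakpoint, at w*x + b = 0."""
    kinks = {round(-b / w, 9) for w, b, v in weights if w != 0}
    return len(kinks)

width = 5
weights = [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
           for _ in range(width)]

assert count_kinks(weights) <= width  # form (width) bounds function complexity
print(count_kinks(weights), "kinks for width", width)
```

Hanin's actual results concern far subtler questions of depth versus width, but the toy shows the flavor: architecture choices place hard limits on what functions can be expressed.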
Tuesday, July 30, 2019
Merck Drone Medicine Delivery in Puerto Rico
Drones move forward for advanced delivery tasks in disasters. Some intriguing details are included.
Merck takes part in test of medicine-delivery drones in Puerto Rico
Merck, Volans-I, Softbox and AT&T are working with Direct Relief to test the drone system, which will deliver medicines to remote areas of the island. By Alaric Dearment in MedCityNews
As the devastation wrought by Hurricane Maria in Puerto Rico becomes clearer, one of the largest drugmakers in the country is participating in a test of a drone system designed to deliver medicines to areas affected by disasters.
Merck & Co. – together with Softbox Systems, Volans-I, AT&T and the nonprofit group Direct Relief – began piloting the test of emergency medical supply deliveries on the island last week. The drones carry medicines to which people often lose access in disasters and include temperature-controlled units that can carry products requiring refrigeration. Volans-i manufactures the long-range drones, while Softbox makes the packaging system for transporting cold-chain medications. .... "
IEEE Guide to Robotics
I note that IEEE has a guide to robotics online, which shows a number of types of robotic implementations, some of which I had never seen. It is not complete by any means; I see there is no category for microrobotics. But it has a number of examples that are presented in outline and reviewed. Worth a scan.
Sponsored by the IEEE Robotics and Automation Society, which has more technical articles and coverage of the topic. Also worth looking at.
Deciding When to Trust
Have recently been involved with the concept of 'smart contracts' and trustable agreements. How might this idea be included in the construction of such things, things that we can trust in a neuroscience sense? Is trust just a way to accurately forecast an agent's future behavior?
How Do Our Brains Decide When to Trust? By Paul J. Zak, in the HBR
Trust is the enabler of global business — without it, most market transactions would be impossible. It is also a hallmark of high-performing organizations. Employees in high-trust companies are more productive, are more satisfied with their jobs, put in greater discretionary effort, are less likely to search for new jobs, and even are healthier than those working in low-trust companies. Businesses that build trust among their customers are rewarded with greater loyalty and higher sales. And negotiators who build trust with each other are more likely to find value-creating deals.
Despite the primacy of trust in commerce, its neurobiological underpinnings were not well understood until recently. Over the past 20 years, research has revealed why we trust strangers, which leadership behaviors lead to the breakdown of trust, and how insights from neuroscience can help colleagues build trust with each other — and help boost a company’s bottom line.
THE BIOLOGY OF TRUST
Human brains have two neurological idiosyncrasies that allow us to trust and collaborate with people outside our immediate social group (something no other animal is capable of doing). The first involves our hypertrophied cortex, the brain’s outer surface, where insight, planning, and abstract thought largely occur. Parts of the cortex let us do an amazing trick: transport ourselves into someone else’s mind. Called theory of mind by psychologists, it’s essentially our ability to think, “If I were her, I would do this.” It lets us forecast others’ actions so that we can coordinate our behavior with theirs. ..... " (Details at the link)
Stuntronics by Disney Imagineering
Beware stunt doubles. We toured Disney way back, and saw very early prototypes of this idea. Both stunt doubles and actors in general should be aware of a future like this.
Stuntronics in IEEE Robotics
Stuntronics are animatronic stunt doubles. They combine advanced robotic technology with the exploration of untethered dynamic movement to perform aerial flips, twists, and poses with repeatability and precision. ....
CREATOR
Walt Disney Imagineering video at the link. ...."
Autonomous Aircraft Landing
I thought this was commonly possible, but I note here it is being done without dependence on ground-based antennas, so the plane, as I understand it, is completely autonomous. So will this be the future of aviation?
German Scientists Pull Off Autonomous Aircraft Landing
ScienceAlert
Peter Dockrill
Today, many commercial planes and other large jets rely on an Instrument Landing System (ILS), which uses radio signals and on-board autopilot programs to guide landing aircraft on their final approach. C2Land, developed by researchers at the Technical University of Munich (TUM) in Germany, is similar to ILS, but does not require any ground-based antennas. The C2Land system uses GPS for flight control, in conjunction with a computer vision-augmented navigation system for landing. The technology uses an optical positioning system at altitudes below 200 feet and on the ground after touchdown, as an additional source of positioning information. Said TUM flight system dynamics researcher Martin Kügler, "Automatic landing is essential, especially in the context of the future role of aviation."
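As a purely hypothetical sketch of the fusion idea described above (not TUM's actual C2Land algorithm), one can blend a GPS position estimate with a camera-derived one, trusting the optical estimate only below the 200-foot threshold the excerpt mentions. The 0.8 weighting and the example coordinates are assumptions for illustration.

```python
# Hypothetical sketch (not TUM's C2Land algorithm): blend a GPS position
# estimate with a camera-derived one, using the optical estimate only
# below the 200-foot altitude where it is described as active.
def fused_position(gps_pos, vision_pos, altitude_ft):
    """Weighted blend of two (x, y) position estimates, in arbitrary units."""
    if vision_pos is None or altitude_ft > 200:
        return gps_pos                      # vision unavailable: GPS only
    w_vision = 0.8                          # assumed weighting, for illustration
    return tuple(w_vision * v + (1 - w_vision) * g
                 for g, v in zip(gps_pos, vision_pos))

# On final approach at 150 ft, the vision fix pulls the estimate toward
# the runway centerline that the camera can actually see.
print(fused_position(gps_pos=(10.0, 2.0), vision_pos=(10.0, 0.0), altitude_ft=150))
```

A real system would weight the sources by their estimated error covariances (as in a Kalman filter) rather than by a fixed constant; the fixed weight just keeps the idea visible.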
Cisco On IOT in New Orleans Preventing Crime
Used to spend quite a bit of time in New Orleans. Never thought of it as particularly low in crime, based on what I saw in the papers. I'd like to see that change. The city aims to do better yet with IoT, as Cisco outlines in its blog. How much this will be criticized over its bias implications remains to be seen.
Keeping the City of New Orleans safe with Cisco IoT
While the Big Easy has long been famous for letting the good times roll, New Orleans is increasingly recognized for good times AND less crime. In fact, since the 2017 launch of its Real-Time Crime Center (RTCC), the City has become a leader in using Cisco IoT solutions to support public safety.
Core to the City’s IoT solutions are hundreds of IP cameras installed on nearly every corner. These cameras – powered securely by a wireless network of ruggedized Cisco ISR with the Cisco IR829 integrated service routers (ISR) – stream live video to the RTCC. In a matter of minutes, the RTCC processes and shares relevant footage with officers in the field. It also gathers and analyzes video footage as part of longer-term investigations of criminal activity and quality-of-life concerns.
“The Cisco IR829 has supported everything we’ve asked it to do. It has been extremely reliable and very robust in every aspect of every solution we’ve thrown at it.” Richard Couget, Network Manager, City of New Orleans
Imagine it: Rather than showing up at a scene with little or no background information, police arrive armed with critical insights. In the past, it might take an officer 20 minutes to interview both parties involved in a car accident. With accident footage in tow, the officer can handle the situation and return to the field far more quickly.
In other cases, the City’s IP cameras catch crimes in progress. Within minutes, the RTCC sends footage to nearby walking-beat police officers who can quickly identify and arrest the culprits. And whether responding within minutes or days after a crime has occurred, having concrete evidence in hand enables greater police efficiency when interviewing witnesses and suspects. .... "
Labels: Cisco, Crime, IOT, New Orleans, RTCC, Smart City
Amazon to Retrain Third of Workforce
Internal retraining makes much sense given the cost of acquisition and HR.
Amazon to Retrain a Third of Its U.S. Workforce
The Wall Street Journal
Chip Cutter
Amazon will spend up to $700 million to retrain 100,000 of its U.S. workers by 2025, one of the biggest corporate retraining initiatives on record. The company said it will expand its existing training programs and launch some new ones to help employees move into more advanced jobs. The training is voluntary, and most of the programs will be free to employees. Some of the programs include more advanced training, such as its Machine Learning University, which will be open to thousands of current software engineers with computer science backgrounds. Peter Cappelli of the University of Pennsylvania’s Wharton School said Amazon’s retraining programs will likely help the company recruit and retain workers, so “It’s not altruistic. There’s some hard-nosed business-decision-making behind this.” .... '
Monday, July 29, 2019
AI Enhanced Editing of Sports Coverage
Watched some of the recent Wimbledon coverage, where IBM frequently pointed out that Watson was choosing, editing, and delivering the film clips based on measures like audience applause, and then writing copy drawn from some 20 million clips. It didn't impress me much, but then I have never been responsible for real-time editing of many, many sources of tape. So it seems this may soon become the standard thing.
IBM’s Wimbledon-watching A.I. is poised to revolutionize sports broadcasts in DigitalTrends.
Among the most lauded essays ever written about the game of tennis is David Foster Wallace’s 2006 article “Roger Federer as Religious Experience.” Originally appearing in the New York Times, the approximately 6,000-word tribute to one of the world’s most supremely talented players reads, as its title makes clear, more like a divine celebration than a piece of sportswriting.
Wallace (and he was certainly not the first writer to do this) gushed about high-level sporting achievements as though they were more than just superb technique; as if they were, somehow, a transcendent portal to godliness. Ordinary mortals like you and I could comprehend what was happening, but only barely. In order to truly appreciate Federer’s athletic feats, we needed a member of the priesthood — a talented youth player like Wallace had been — who could make it intelligible to us.
Why mention Wallace’s almost decade-and-a-half-old essay on a tech site? Because IBM recently unveiled the latest iteration of its impressive A.I. technology — and it’s learned to appreciate tennis on a whole new level. Well, sort of. .... "
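The clip-selection idea mentioned above — ranking highlights by measured crowd reaction — can be sketched very simply. This is my own toy illustration, not IBM's actual method; the field names and weighting are invented:

```python
# Toy sketch: rank highlight clips by an applause-based score.
# All data and the weighting formula are hypothetical illustrations.

clips = [
    {"id": "rally_1",  "applause_db": 72.5, "duration_s": 18},
    {"id": "ace_2",    "applause_db": 81.0, "duration_s": 6},
    {"id": "volley_3", "applause_db": 77.3, "duration_s": 11},
]

def highlight_score(clip):
    # Louder applause scores higher; a small penalty for longer clips
    # (an arbitrary weighting, chosen just for the example).
    return clip["applause_db"] - 0.2 * clip["duration_s"]

ranked = sorted(clips, key=highlight_score, reverse=True)
print([c["id"] for c in ranked])  # ['ace_2', 'volley_3', 'rally_1']
```

A real system would derive the reaction score from audio analysis of the broadcast feed rather than a stored number, but the ranking step itself is this simple.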
DARPA and Zero Knowledge Proofs
It does make sense that this kind of capability would be generally useful; it is a key kind of cybersecurity.
Generating zero-knowledge proofs for defense capabilities by DARPA
There are times when the highest levels of privacy and security are required to protect a piece of information, but there is still a need to prove the information's existence and accuracy. For the Department of Defense (DoD), the proof could be the verification of a relevant capability. How can one verify this capability without revealing any sensitive details about it? In the commercial world, this struggle manifests itself across banking transactions, cybersecurity threat disclosure, and beyond. One approach to addressing this challenge in cryptography is with zero-knowledge proofs. A zero-knowledge proof is a method where one party can prove to another party that they know a certain fact without revealing any sensitive information needed to demonstrate that the fact is true.
"A zero-knowledge proof involves a statement of fact and the underlying proof of its accuracy," said Dr. Josh Baron, program manager in DARPA's Information Innovation Office (I2O). "The holder of the fact does not want to reveal the underlying information to convince its audience that the fact is accurate. Take, for example, a bank withdrawal. You may want a system that allows you to make a withdrawal without also having to share your bank balance. The system would need some way of verifying that there are sufficient funds to draw from without having to know the exact amount of money sitting within your account."
In recent years, there has been a marked increase in the efficiency and real-world use of zero-knowledge proofs. Most of these uses have been within the cryptocurrency domain where there is a need to provide certain verifiable data without revealing personal or other sensitive information. While useful in this context, the zero-knowledge proofs created are specialized for this task. They prioritize communication and verification efficiency but do not necessarily scale for transactions that are more complex. For highly complex proof statements like those that the DoD may wish to employ, novel and more efficient approaches are needed. .... '
Definitions: https://en.wikipedia.org/wiki/Zero-knowledge_proof
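The bank-balance example above follows the classic pattern of an interactive proof. As a minimal sketch of the idea (my own illustration, not DARPA's technology), here is a toy Schnorr-style protocol in Python that proves knowledge of a discrete logarithm x without revealing it. The tiny parameters are for demonstration only; real systems use large cryptographic groups:

```python
import random

# Toy Schnorr-style zero-knowledge proof of knowledge of a discrete log.
# Public: prime p, generator g, and y = g^x mod p.  Secret: x.
# The prover convinces the verifier it knows x without revealing it.

p, g = 467, 2          # tiny demo parameters; insecure at this size
x = 153                # prover's secret
y = pow(g, x, p)       # public value

def prove_round():
    q = p - 1                      # exponent modulus (Fermat's little theorem)
    r = random.randrange(q)        # prover's random nonce
    t = pow(g, r, p)               # commitment sent to verifier
    c = random.randrange(q)        # verifier's random challenge
    s = (r + c * x) % q            # response; r masks x, so x stays hidden
    # Verifier checks g^s == t * y^c (mod p), which holds iff the
    # prover's response is consistent with knowing x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# Repeated rounds give the verifier high confidence without ever seeing x.
assert all(prove_round() for _ in range(20))
print("verified without revealing x")
```

The check works because g^s = g^(r + c·x) = t · y^c (mod p); each round reveals only values masked by fresh randomness, which is the "zero-knowledge" property in miniature.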
Definition of a Smart Contract
Good, non technical view of the smart contract, more details at the link:
Definition of a Smart Contract
What's a smart contract (and how does it work)? By Lucas Mearian in Computerworld
Smart contracts are potentially one of the most useful tools associated with blockchain, and they can enable the transfer of everything from bitcoin and fiat currency to goods transported around the world. Here's what they do and why they're likely to gain traction. .... "
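As a rough illustration of the idea (ordinary Python, not actual blockchain code), a smart contract is essentially value plus rules encoded so that settlement executes automatically once agreed conditions hold. A toy escrow sketch, with all names hypothetical:

```python
# Toy illustration of the smart-contract idea: funds locked under code
# that releases them only when the agreed conditions are met.
# Not real blockchain code; names and logic are invented for the example.

class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.delivered = False
        self.balances = {buyer: amount, seller: 0}

    def fund(self):
        # Buyer deposits; the funds are now governed by the contract's rules.
        self.balances[self.buyer] -= self.amount
        self.funded = True

    def confirm_delivery(self):
        self.delivered = True
        self.settle()

    def settle(self):
        # Payment executes automatically once both conditions are met --
        # no intermediary decides whether to release the funds.
        if self.funded and self.delivered:
            self.balances[self.seller] += self.amount

c = EscrowContract("alice", "bob", 100)
c.fund()
c.confirm_delivery()
print(c.balances)  # {'alice': 0, 'bob': 100}
```

On a real blockchain the contract code and balances live on the shared ledger, so no single party can alter the rules after the fact; that tamper-resistance is what the toy class cannot show.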
Free Digital Transformations Webinar
Upcoming webinar by colleague Tony Saldanha, an expert on this topic who has just published a book on it. I will be attending and will report on the talk; more information below:
" ... Don't Let Your Digital Transformation Project Fail
Change the way you work – today!
Join our Free Webinar to find out: August 7th, 10AM ET
-Why Digital Transformation often fails
-The 5 stages you and your business need to go through to ensure success
-What it means for your people and for the future of work at your company
-What do you have to change today?
Our Featured Expert is an Advisor to Fortune 100 Companies Tony Saldanha, President of Transformant, former VP Procter & Gamble, Global Business Services, Next Gen. Services
Host: Przemek Berendt CEO and Co-founder of Talent Alpha
Tony Saldanha is a globally recognised expert and thought-leader in Global Business Services (GBS) and Information Technology. He ran Procter & Gamble's famed multi-billion dollar GBS and IT operations in every region across the world during a 27-year career. Tony has over three decades of international business expertise in the US, Europe, and Asia. He was named to Computerworld’s Premier 100 IT Professionals list in 2013. Tony's experiences include GBS design and operations, CIO positions, acquisitions and divestitures, outsourcing, disruptive innovation and the creation of new business models. Tony is currently President of Transformant, a consulting organisation that advises over 20 Fortune 100 companies around the world in digital transformation and global business services. He is also a founder of two Blockchain and AI companies and an adviser to venture capital companies. His book titled Why Digital Transformations Fail has been released this month.
Register for Free! Here: https://hello.talent-alpha.com/summer-camp-3-tony-saldanha-webinar#
August 7th (10 AM EST US / 3PM UK / 4 PM CET - 45 mins.)
Talent Alpha Inc. is committed to protecting and respecting your privacy, and we’ll only use your personal information to administer your account and to provide the products and services you requested from us. From time to time, we would like to contact you about our products and services, as well as other content that may be of interest to you. .... "
" ... Don't Let Your Digital Transformation Project Fail
Change the way you work – today!
Join our Free Webinar to find out: August 7th, 10AM ET
-Why Digital Transformation often fails
-The 5 stages you and your business need to go through to ensure success
-What it means for your people and for the future of work at your company
-What do you have to change today?
Our Featured Expert is an Advisor to Fortune 100 Companies Tony Saldanha, President of Transformant, former VP Procter & Gamble, Global Business Services, Next Gen. Services
Host: Przemek Berendt CEO and Co-founder of Talent Alpha
Tony Saldanha is a globally recognised expert and thought-leader in Global Business Services (GBS) and Information Technology. He ran Procter & Gamble's famed multi-billion dollar GBS and IT operations in every region across the world during a 27-year career. Tony has over three decades of international business expertise in the US, Europe, and Asia. He was named to Computerworld’s Premier 100 IT Professionals list in 2013. Tony's experiences include GBS design and operations, CIO positions, acquisitions and divestitures, outsourcing, disruptive innovation and the creation of new business models. Tony is currently President of Transformant, a consulting organisation that advises over 20 Fortune 100 companies around the world in digital transformation and global business services. He is also a founder of two Blockchain and AI companies and an adviser to venture capital companies. His book titled Why Digital Transformations Fail has been released this month.
Register for Free! Here: https://hello.talent-alpha.com/summer-camp-3-tony-saldanha-webinar#
August 7th (10 AM EST US / 3PM UK / 4 PM CET - 45 mins.)
Talent Alpha Inc. is committed to protecting and respecting your privacy, and we’ll only use your personal information to administer your account and to provide the products and services you requested from us. From time to time, we would like to contact you about our products and services, as well as other content that may be of interest to you. .... "
We See Shapes, DL Sees Textures
Interesting observation here; what are its ultimate implications for AI?
Where We See Shapes, AI Sees Textures in QuantaMagazine
To researchers’ surprise, deep learning vision algorithms often fail at classifying images because they mostly take cues from textures, not shapes.
To make deep learning algorithms use shapes to identify objects, as humans do, researchers trained the systems with images that had been “painted” with irrelevant textures. The systems’ performance improved, a result that may hold clues about the evolution of our own vision.
Jordana Cepelewicz Staff Writer
When you look at a photograph of a cat, chances are that you can recognize the pictured animal whether it’s ginger or striped — or whether the image is black and white, speckled, worn or faded. You can probably also spot the pet when it’s shown curled up behind a pillow or leaping onto a countertop in a blur of motion. You have naturally learned to identify a cat in almost any situation. In contrast, machine vision systems powered by deep neural networks can sometimes even outperform humans at recognizing a cat under fixed conditions, but images that are even a little novel, noisy or grainy can throw off those systems completely.
A research team in Germany has now discovered an unexpected reason why: While humans pay attention to the shapes of pictured objects, deep learning computer vision algorithms routinely latch on to the objects’ textures instead.
This finding, presented at the International Conference on Learning Representations in May, highlights the sharp contrast between how humans and machines “think,” and illustrates how misleading our intuitions can be about what makes artificial intelligences tick. It may also hint at why our own vision evolved the way it did. ...
Deep learning algorithms work by, say, presenting a neural network with thousands of images that either contain or do not contain cats. The system finds patterns in that data, which it then uses to decide how best to label an image it has never seen before. The network’s architecture is modeled loosely on that of the human visual system, in that its connected layers let it extract increasingly abstract features from the image. But the system makes the associations that lead it to the right answer through a black-box process that humans can only try to interpret after the fact. “We’ve been trying to figure out what leads to the success of these deep learning computer vision algorithms, and what leads to their brittleness,” said Thomas Dietterich, a computer scientist at Oregon State University who was not involved in the new study. ... "
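The texture-bias phenomenon can be conveyed with a toy example. In the synthetic sketch below (my own construction, not the researchers' method), texture perfectly predicts the label in the training data, so even a trivially simple classifier latches onto texture and misclassifies a "cue-conflict" input whose shape and texture disagree:

```python
import random

# Toy illustration of texture bias. "smooth" texture -> low pixel variance,
# "noisy" texture -> high variance. The shape argument is deliberately
# never encoded, underscoring that this classifier can only use texture.
# Entirely synthetic; just to convey the idea from the article.

random.seed(0)

def make_image(shape, texture, n=256):
    spread = 0.05 if texture == "smooth" else 0.5
    return [random.gauss(0.5, spread) for _ in range(n)]

def variance(img):
    m = sum(img) / len(img)
    return sum((v - m) ** 2 for v in img) / len(img)

# Training set: every "cat" is smooth-textured, every "dog" is noisy.
train = [("cat", make_image("cat", "smooth")) for _ in range(50)] + \
        [("dog", make_image("dog", "noisy")) for _ in range(50)]

# A variance threshold learned from training separates the classes
# perfectly -- purely by texture, with shape never consulted.
smooth_vars = [variance(img) for lbl, img in train if lbl == "cat"]
noisy_vars = [variance(img) for lbl, img in train if lbl == "dog"]
threshold = (max(smooth_vars) + min(noisy_vars)) / 2

def classify(img):
    return "cat" if variance(img) < threshold else "dog"

# Cue conflict: a "cat"-shaped image rendered with "dog" texture is
# classified by its texture, just as the biased networks in the study were.
print(classify(make_image("cat", "noisy")))  # 'dog'
```

The researchers' fix, retraining on images "painted" with irrelevant textures, amounts to breaking exactly this spurious texture-label correlation so the model is forced to use shape.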
Apple to Buy Most of Intel's 5G Modem Business
An interesting move: an attempt by Apple to establish itself in 5G and compete with Qualcomm?
Apple’s spending $1 billion to buy most of Intel’s 5G modem business
Apple iPhone in Technology Review
The technology and people it’s acquiring will reinforce its push into 5G wireless services
The news: The colossus of Cupertino is buying most of Intel’s modem activities. The deal will see Intel exit a business where it was a relative minnow compared with rivals like Qualcomm, whose modems are still used in most models of Apple’s iPhone. Intel had announced its intention to sell earlier this year. .... "
Sunday, July 28, 2019
Supercomputers + Graphic Processing for Parameter Simulation
Another example of the use of simulation, and of much faster computing, to check proposed solutions. We used just such methods in the enterprise, though well before we had the computing power mentioned here. It's like traversing a solution space via a partially automated game to find the parameters that yield the best, or even the 'right', answers.
Supercomputers Use Graphics Processors to Solve Longstanding Turbulence Question
Imperial College London
Hayley Dunning
Researchers at Imperial College London in the U.K. have solved a longstanding question in turbulence—the seemingly random changes in velocity and pressure that occur when a fluid flows fast enough—using supercomputers running simulations on graphics processors originally developed for gaming. The researchers found a solution that allows them to check empirical models of turbulence against the "correct" answer, to determine how well they are describing what actually happens, or if that needs adjusting. The supercomputer-created simulations allowed the researchers to find the exact parameters describing how turbulence dissipates in the flow, and determined various requirements that empirical turbulence models must satisfy. .... "
IBM Gives Cancer AI to Open Source
I like the potential of clearly beneficial approaches being shared this way. What outcomes have come from this work to date?
IBM Gives Cancer-Killing Drug AI Project to the Open Source Community
ZDNet
Charlie Osborne
IBM has released to the open source community three artificial intelligence (AI) projects designed to address the challenge of curing cancer. The projects, led by researchers at IBM's Computational Systems Biology Group in Switzerland, involve developing AI and machine learning approaches to help accelerate the understanding of the leading drivers and molecular mechanisms of different cancers. The first project, PaccMann, is working to develop an algorithm that can automatically analyze chemical compounds and predict which are most likely to overcome cancer strains. The second project, "Interaction Network infErence from vectoR representATions of words" (INtERAcT), aims to develop a tool that can automatically extract information from the thousands of papers published every year on cancer research. The third project, "pathway-induced multiple kernel learning," focuses on an algorithm that uses datasets describing what is currently known about molecular interactions to predict the prognosis of cancer patients. .... "
Robotic Lenses
In New Scientist, complete article requires subscription:
A robotic lens can be controlled by simply looking around or blinking By Leah Crane
Blink twice to zoom in. A new soft lens can be controlled by your eye movements, pivoting left and right as you look around and zooming in and out when you blink.
The human eyeball is electric – there is a steady electrical potential between its front and back, even when your eyes are closed or in total darkness. When you move your eyes to look around or blink, the motion of the electrical potential can be measured. Shengqiang Cai at the University of California San Diego and his colleagues used these signals, …
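The control scheme described — measuring shifts in the eye's electrical potential and treating a double blink as a zoom command — can be sketched with a simple threshold detector. This is a hypothetical illustration; the thresholds, function names, and signal shape are all invented:

```python
# Hypothetical sketch of blink-based control: detect blinks in an
# electrooculography (EOG)-like voltage trace by finding upward
# threshold crossings, and treat two blinks in quick succession
# as the "zoom" gesture. Values are invented for illustration.

def detect_blinks(samples, threshold=0.5):
    """Return sample indices where the signal crosses the threshold upward."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] < threshold <= samples[i]]

def is_double_blink(blink_indices, max_gap=10):
    # Two blinks within max_gap samples of each other trigger the gesture.
    return any(b - a <= max_gap
               for a, b in zip(blink_indices, blink_indices[1:]))

trace = [0.1, 0.2, 0.9, 0.2, 0.1, 0.8, 0.1, 0.0]  # two spikes, close together
blinks = detect_blinks(trace)
print(blinks)                   # [2, 5]
print(is_double_blink(blinks))  # True
```

A real implementation would also need filtering to separate blinks from ordinary eye movements, since both perturb the measured potential.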
Defining, Measuring Cleanliness
I worked on this early on in grocery.
How do consumers define cleanliness in grocery stores? by Tom Ryan in Retailwire
Cleanliness in grocery stores means much more than “clean up on aisle five.” According to a recent survey of Consumer Reports members, cleanliness standards in supermarkets, warehouse clubs and other grocery stores also includes bright lighting, shiny floors, gleaming glass and counters, and well-tended displays.
As part of the report, Cleaning Services Group (CSG), a janitorial and building services contractor that supports hospitals and retailers, provided its take on what qualities convince consumers that a supermarket is clean:
Spotless entries: Keeping sidewalks outside venues free of coffee stains, cigarette butts, gum residue and other signs of grit. Stores dedicated to cleanliness regularly power wash their sidewalks.
Sanitizers: Hand sanitizers in the vestibule and germ-prone areas such as the meat section reflect a concern for cleanliness. Some stores offer sanitary wipes for cart handles.
Gleaming floors: Polished concrete in its natural, light gray color is replacing tan and brown colored floors that look messy as their colors fade. Dedicated retailers wash and buff their floors daily.
Restrooms: In newer stores, restrooms can be found by the entrance and also near the fresh-prepared food dining area. Newer designs have bright lighting, multiple stalls, air fresheners and better accessibility. Dedicated stores inspect restrooms several times an hour. .... "
On the Further Emergence of the Very Small
Good piece on the emergence of very small devices aimed at tasks for many purposes.
Micro machines: How the next big thing in robotics is actually quite small in Digitaltrends
A micro bristle bot beside a penny. Max Planck Institute’s Physical Intelligence Department ...
Half a century after Neil Armstrong memorably uttered the words “one giant leap for mankind,” technological innovation has gotten smaller. Yes, we still thrill to enormous, sky-scraping buildings and the gravity-defying power of rockets, but many of the biggest advances take place on a scale that’s unimaginably tiny next to those of yesteryear. New generations of mobile devices — be they laptops, smartphones and smart watches — shave mere millimeters off the thickness of their already thin predecessors; making already small and portable devices even smaller and more portable. CRISPR/cas9 technology allows scientists to edit single genes; potentially eradicating deadly diseases as a result. New nanometer-scale processes allow chip designers to squeeze ever more transistors onto the surface of integrated circuits; doubling computing power every 12-18 months in the process.
The world of robotics is no different. Think that robots like Boston Dynamics’ canine-inspired Spot robot or humanoid Atlas robot are at the top of the innovation pile, simply because they’re the most visible? Not so fast! On the tinier end of the spectrum, the advances may not be quite so apparent — but, at their scale, they may be even more exciting.
Welcome to the world of microscale robots, a genre of robotics that’s less stop-and-stare attention-grabbing than its metallic big brothers and sisters, but potentially every bit as transformative. These robots could be useful for a broad range of applications, from carrying out microscale or nanoscale surgical feats to exploring other planets. ....
Saturday, July 27, 2019
Sensor Based Skin for Prosthetic Touch
New advances in touch. We experimented with off-the-shelf devices to let consumers touch-engage with a product, so this might be a way to quantitatively test the experience.
A sensor-filled “skin” could give prosthetic hands a better sense of touch
The “electronic skin,” inspired by the nervous system, can sense temperature, pressure, or humidity. It could be used to give prosthetic limbs a more complex sense of touch.
Humans are amazing: Your body is a sensing machine, thanks to the roughly 45 miles of nerves inside your body that connect your skin, brain, and muscles. A team from the National University of Singapore has now used that nervous system as inspiration to create a "skin" for robots that, one day, could improve their ability to detect and understand their environment.
How it works: Sheets of silicon were covered with 240 sensors that can pick up contact, pressure, temperature, and humidity. These are able to simultaneously transmit all this data to a single decoder, and should still work when the system is scaled up to 10,000 sensors, according to Benjamin Tee, the coauthor of the study, which was published in Science Robotics today.
What’s new: Flexible robotic “skin” has been tested in previous studies, but this system is the first to enable many sensors to feed back to a single receiver, allowing it to act as a whole system rather than a bunch of individual electrodes, Tee said. Crucially, it still works even if the individual receptors are damaged, making it more resilient than previous iterations. .... "
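The key idea above — many receptors feeding one shared decoder, which keeps working when individual receptors fail — can be sketched in a few lines. This is purely illustrative (nothing here is from the paper); the names and data are my own:

```python
# Purely illustrative sketch: many receptors push events onto one
# shared stream, and the single decoder still works when some
# receptors are damaged -- it simply sees fewer events.

def read_skin(receptors, decoder):
    """Each working receptor emits a (sensor_id, reading) event;
    the single decoder folds the shared stream into a touch map."""
    events = [(sid, fn()) for sid, fn in receptors.items() if fn is not None]
    return decoder(events)

def decoder(events):
    # Trivial decoder: latest reading per sensor.
    return {sid: reading for sid, reading in events}

# Sensor 2 is "damaged" (emits nothing); the system degrades gracefully.
receptors = {0: lambda: 0.2, 1: lambda: 0.9, 2: None}
result = read_skin(receptors, decoder)
print(result)  # {0: 0.2, 1: 0.9}
```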
Integrating Machine Learning and Graph Algorithms
Just attended the Neo4j webinar for Improving Machine Learning Predictions Using Graph Algorithms. Nicely informative. Click below to watch a recording of the webinar, view the slides, and learn more about the topic with our additional resources.
Watch the Webinar Recording https://youtu.be/LWw94LVhfLk View the Webinar Slides
Please feel free to download a free copy of Amy and Mark's book mentioned in the webinar - Graph Algorithms: Practical Examples in Apache Spark and Neo4j:
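A common pattern from the webinar's theme — using graph algorithms to improve machine-learning predictions — is to compute a graph measure (centrality, community, etc.) and append it to each node's feature vector before training. A minimal sketch, with illustrative names and data of my own:

```python
# Minimal sketch: derive a graph feature (here, degree centrality)
# and append it to each node's base features before model training.
# Edges and feature values are illustrative, not from the webinar.

edges = [("a", "b"), ("b", "c"), ("b", "d")]
base_features = {"a": [1.0], "b": [0.5], "c": [0.2], "d": [0.9]}

# Count degree per node from the edge list.
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

# Enrich each feature vector with the graph-derived feature.
enriched = {n: f + [degree.get(n, 0)] for n, f in base_features.items()}
print(enriched["b"])  # [0.5, 3]
```

At scale this is what graph libraries like Neo4j's algorithm procedures compute for you (PageRank, betweenness, Louvain communities) before handing features to a model.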
Regulations Needed for Digital Ledgers
Regulations for Blockchain systems and contracts
Columbia DataScience (@DSI_Columbia)
How is blockchain technology affecting the securities market and what regulations are needed to govern digital ledgers? This Q&A with a Columbia law professor explains it: bit.ly/2KXa5wR
@CUSEAS @Columbia_BIZ @ColumbiaLaw ....
Friday, July 26, 2019
Powering Drones for Days with Ultralight Photovoltaics
Something like this might fill the skies. The efficiency increase seems small, but perhaps is key to the task at hand.
Drones will fly for days with new photovoltaic engine by Linda Vu
UC Berkeley researchers just broke another record in photovoltaic efficiency, an achievement that could lead to an ultralight engine that can power drones for days.
For the past 15 years, the efficiency of converting heat into electricity with thermophotovoltaics has been stalled at 23 percent. But a groundbreaking physical insight has allowed researchers to raise this efficiency to 29 percent. Using a novel design, the researchers are now aiming to reach 50 percent efficiency in the near future by applying well-established scientific concepts.
This breakthrough has big implications for technologies that currently rely on heavy batteries for power. Thermophotovoltaics are an ultralight alternative power source that could allow drones and other unmanned aerial vehicles to operate continuously for days. It could also be used to power deep space probes for centuries and eventually an entire house with a generator the size of an envelope.
Their work was described in a paper published this week in Proceedings of the National Academy of Sciences. .... "
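To see why a few points of efficiency matter, note that for a fixed store of heat energy, usable electrical energy — and hence flight time — scales linearly with conversion efficiency. Back-of-the-envelope arithmetic (my own numbers, purely illustrative, not from the article):

```python
# Illustrative arithmetic: flight time scales linearly with
# thermophotovoltaic conversion efficiency for a fixed heat store.

fuel_energy_wh = 1000.0   # assumed heat energy on board (illustrative)
power_draw_w = 50.0       # assumed drone power draw (illustrative)

for eff in (0.23, 0.29, 0.50):
    hours = fuel_energy_wh * eff / power_draw_w
    print(f"{eff:.0%} efficient -> {hours:.1f} h of flight")
```

Under these assumptions, the jump from 23% to 29% buys more than an extra hour aloft, and 50% would roughly double endurance again.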
Smart Assistive Driving Tech Allows Control of Multiple Trucks
New look at how to make supply chains more efficient.
EMERGING TECH
Smart assistive driving tech allows one truck driver to control multiple trucks By Luke Dormehl in DigitalTrends @lukedormehl
Few people argue that self-driving vehicles aren’t on the horizon as far as future transport technologies go. But before we reach that point, there are other ways that smart assistive driving technology can be used — without humans necessarily having to be removed from the process altogether.
That’s where connected vehicle company Peloton Technology’s new tech enters the picture. Unveiled at the recent Automated Vehicle Symposium 2019 in Orlando, Peloton’s vision for the future of Level 4 automation technology allows one (human) driver to control multiple trucks at the same time.
By utilizing vehicle-to-vehicle communications and radar-based active braking systems, combined with sophisticated vehicle control algorithms, the L4 Automated Following system lets one A.I.-driven truck follow another human-driven vehicle. Doing so can not only allow one driver to transport more goods, but also leads to greater fuel economy and safety. Accelerating in one truck will cause the other to follow, while braking works the same way. All of this happens almost instantaneously. .... "
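The "follow" behavior described above can be sketched as a toy control loop: the follower receives the leader's acceleration command over the V2V link and applies it nearly instantaneously, so the gap stays constant. This is my own illustration, not Peloton's algorithm:

```python
# Toy sketch (not Peloton's system): the follower mirrors the
# leader's acceleration via a V2V link, preserving the gap.

def step(state, accel, dt=0.1):
    """Advance (position, velocity) one timestep under acceleration."""
    pos, vel = state
    return (pos + vel * dt, vel + accel * dt)

leader = (20.0, 15.0)    # 20 m ahead, both trucks at 15 m/s
follower = (0.0, 15.0)

for accel in [1.0, 1.0, -0.5, 0.0]:     # leader's command sequence
    leader = step(leader, accel)
    follower = step(follower, accel)    # mirrored over the V2V link

gap = leader[0] - follower[0]
print(f"gap after maneuver: {gap:.2f} m")  # stays 20.00 m
```

A real system adds radar-based gap measurement and active braking as a feedback correction on top of this feedforward mirroring, since link latency and vehicle differences would otherwise let the gap drift.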
Google Gives 100K Google Minis to the Paralyzed
Google is giving out free devices for the paralyzed. A great effort, and also a way to illustrate the particular value of assistants for people with paralysis and other accessibility applications. Read the testimony below to see its value:
Google Nest: For individuals with paralysis, Google Nest gives help at home in Google Blog.
Editor’s note: Today's post comes from Garrison Redd, who shares how his Google Home Mini helped him regain independence, and how it can improve the lives of people living with paralysis.
It’s been nearly 20 years since my life changed—that’s two decades of learning to navigate life in a wheelchair. There are many obstacles for people living with paralysis, so I have to find creative ways to get things done. While I’m more independent than most, there have been times when I couldn’t join my friends for a drink because the bar had steep steps. Or I’ve been on a date where there wasn’t space between tables so everyone had to get up and cause a commotion.
But some of the greatest challenges and hurdles I face are at home. When you’re paralyzed, your home goes from being a place of comfort and security to a reminder of what you’ve lost. Light switches and thermostats are usually too high up on the wall and, if my phone falls on the floor, I may not be able to call a friend or family member if I need help. These may seem like simple annoyances but, to members of the paralysis community, they reinforce the lack of control and limitations we often face. .... "
Captioning Audible Books
The idea below is a kind of captioning of the audio from a book being read to you. It gives you two streams, audio and visual, for a given book. I would find this useful; sometimes one or the other works better, and it seems useful for accessibility as well. Sometimes I want to 'reread' a section of text I may not have understood audibly, and that's usually easier to scan visually than to hear again. It's also good for things that are best shown rather than described, like pictures, charts, or equations. I had thought a number of times that captions would be useful. A recent book on Leonardo kept pointing me to a PDF for painting illustrations; those could have been made accessible in captions.
But it seems some publishers believe it's giving too much away:
Publishers are pissed about Amazon’s upcoming Audible Captions feature
Some are asking for their books to be withheld from the feature
By Andrew Liptak @AndrewLiptak in TheVerge ...
Gut vs Brain
Perhaps a radical view, but things other than our brain do think for us; should we use them as models for intelligence as well?
Questioning the Cranial Paradigm
A Talk By Caroline A. Jones [6.19.19]
Part of the definition of intelligence is always this representation model. . . . I’m pushing this idea of distribution—homeostatic surfing on worldly engagements that the body is always not only a part of but enabled by and symbiotic on. Also, the idea of adaptation as not necessarily defined by the consciousness that we like to fetishize. Are there other forms of consciousness? Here’s where the gut-brain axis comes in. Are there forms that we describe as visceral gut feelings that are a form of human consciousness that we’re getting through this immune brain?
Questioning the Cranial Paradigm:
Caroline Jones: I want us to think about the gut-brain axis and the powerful analog system of our immune brain, also thought of as a mobile brain. The cranial paradigm is what I’m here to question and offer you questions about. Mainframe is a kind of discourse that haunts the field that we’re talking about, and the cranium comes with that metaphor that we all live by.
What do we mean when we say the word "intelligence"? The immune system is the fascinating, distributed, mobile, circulating system that learns and teaches at the level of the cell. It has memory, some of which lasts our entire life, some of which has to be refreshed every twenty years, every twelve years, a booster shot every six years. This is a very fascinating component of our body’s intelligence that, as far as we know, is not conscious, but even that has to be questioned and studied. .... "
Google Assistant Rolls Out in Waze
Continued better integration of Google methods within their Assistant.
The Google Assistant is now available in Waze
Austin Chang
Director, Google Assistant
Think about the last time you were stuck in traffic—the minutes you spent staring at a long line of red taillights probably didn't feel productive. The Assistant can already help with navigation in Google Maps, so it’s easier to search for places along your route or add a new stop while you’re on the go. And starting to roll out today in the U.S., you can get help from the Assistant in Waze on Android phones in English. .... "
Odds vs Probability
A good thought. Since I am from an engineering background, I hear 'probability' rather than 'odds'.
Are you mixing up odds with probability? in TowardDataScience
Odds and probability are different, and too many people make decisions without knowing that.
Keith McNulty
In day-to-day life people use the words ‘odds’ and ‘probability’ interchangeably. They are both terms that imply an estimate of chance. I also see these terms used interchangeably in the workplace. People can say that the ‘odds are twice as high’, and they can understand that to mean ‘the probability is double’. Well, that’s wrong!
Odds and probability are related concepts but very different in scale and meaning. When mixed up in the wrong contexts this can lead to mistaken estimates of chance, which can then lead to erroneous decision making.
In this article, I want to illustrate what those differences are and how, in confusing the two, you can really affect analysis and research.
What is the difference between probability and odds? .... "
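The distinction is easy to state in code. This small sketch (my own, not from the article) shows the conversion in both directions, and why "twice the odds" is not "twice the probability":

```python
def prob_to_odds(p):
    """Convert a probability p in [0, 1) to odds (p : 1-p)."""
    return p / (1.0 - p)

def odds_to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1.0 + odds)

# A probability of 0.5 is even odds (1.0), but doubling the odds
# to 2.0 gives p = 2/3, not p = 1.0 -- the scales are different.
print(prob_to_odds(0.5))   # 1.0
print(odds_to_prob(2.0))   # 0.666...
```

Note the asymmetry: probability is bounded in [0, 1], while odds run from 0 to infinity, which is exactly why doubling one is not doubling the other.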
Thursday, July 25, 2019
Pricing Smart Products
Towards business models of smart products.
5 Questions to Consider When Pricing Smart Products
Nicolaj Siggelkow, Christian Terwiesch in HBR
Imagine you are the CEO of an oral care company and you have been selling a product called the Power Brush 2000, a good electric toothbrush. Your revenue model was likely focused on profiting from selling the toothbrush with some creative pricing coming from the replacement heads (a typical “razor-razorblade” model perhaps with a subscription plan à la Dollar Shave Club). Now, your R&D group has a new product ready for launch, a toothbrush for the 21st century. The toothbrush is smart (it has built-in sensors and AI to detect plaques and cavities) and is connected via Bluetooth to the Internet. So, let’s call it the Smart Connect XL3000. Your job now is to price the Smart Connect XL3000, or, more broadly speaking, to articulate a revenue model.
The revenue model is one of the most important elements of a firm’s strategy. It defines the ways in which a firm gets compensated for the value that its products or services generate. In the old days, revenue models primarily consisted of picking a “good” price. Connected, smart devices are changing this paradigm.
Companies that pursue what we call a connected strategy (i.e., those that transform their connection to customers from episodic interactions to a more frequent and data-driven relationship) have a bigger set of revenue models to choose from. In other words, the price can now depend on factors that previously could not be used to influence the pricing decision. To think systematically about revenue models and to spot opportunities for improvement, we find it helpful to ask the following five questions .... '
Alexa Introduces third Startup Class
Interesting list of developers follows:
Amazon’s Alexa Accelerator introduces its third startup class By Kyle Wiggers
A year to the day after Amazon revealed the companies selected to participate in the second annual Alexa Accelerator, a 13-week program that grants 10 startups access to Amazon employees and mentors from the Seattle AI community and Techstars incubator network, the tech giant and Techstars today announced the third cohort.
This time around, Amazon and Techstars sought early-stage firms in health care, fitness and wellness, enterprise collaboration and productivity, property tech, and machine learning services verticals. (The first cohort honed in on games and interactive experiences, while the second cohort was largely focused on more practical applications, such as water conservation and accessibility.) Over the course of roughly six months, they narrowed down the list of applicants to nine companies that address challenges in retail, management, education, gaming, and a raft of related segments. ...
“The 2019 Alexa Accelerator, powered by Techstars, offers another glimpse into how Alexa can make customers’ lives easier, more productive, and more entertaining,” Amazon said in a press release. “[These] early-stage startups [will receive] the support they need to grow their network, gain traction, incorporate Alexa, and engage with investors.” .... "
(List follows)
Alexa Prize Primary Tools
Sciences used in the Socialbot Grand Challenge.
AI Tools Let Alexa Prize Participants Focus on Science By Anu Venkatesh
March 4 marks the kickoff of the third Alexa Prize Socialbot Grand Challenge, in which university teams build socialbots capable of conversing on a wide range of topics and make them available to millions of Alexa customers through the invitation “Alexa, let’s chat”. Student teams can begin applying to the competition on March 4, and in the subsequent six weeks, the Alexa Prize team will make a series of roadshow appearances at tech hubs in the U.S. and Europe to meet with students and answer questions about the program.
As we gear up for the third Alexa Prize Socialbot Grand Challenge, the Alexa science blog is reviewing some of the technical accomplishments from the second, which were reported in a paper released in late 2018. This post examines contributions by Amazon’s Alexa Prize team; a second post will examine innovations from the participating university teams.
To ensure that Alexa Prize contestants can concentrate on dialogue systems — the core technology of socialbots — Amazon scientists and engineers built a set of machine learning modules that handle fundamental conversational tasks and a development environment that lets contestants easily mix and match existing modules with those of their own design.
The Amazon team provided contestants with five primary tools:
An automatic-speech-recognition system, tailored to the broader vocabulary of “open-domain” conversations;
Contextual topic and dialogue act models, which identify topics of conversation and types of utterance, such as requests for information, clarifications, and instructions;
A sensitive-content detector;
A conversation evaluator model, which estimates how coherent and engaging responses generated by the contestants’ dialogue systems are; and CoBot, a development environment that integrates tools from the Alexa Skills Kit, services from Amazon Web Services, and the Alexa Prize team’s models and automatically handles socialbot deployment. .... "
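The five tools above compose into a socialbot pipeline: transcribed speech flows through act classification and a sensitive-content check before a response is generated. Here is a stub sketch of that flow — every function is an illustrative stand-in of my own, not Amazon's APIs:

```python
# Stub pipeline in the spirit of the tools listed above
# (all functions are illustrative stand-ins, not Amazon's APIs).

def classify_dialogue_act(utterance):
    """Toy dialogue-act model: questions are requests for info."""
    return "request_info" if utterance.endswith("?") else "statement"

def is_sensitive(utterance):
    """Toy sensitive-content detector with a tiny blocklist."""
    return any(w in utterance.lower() for w in ("medical advice",))

def respond(utterance, act):
    """Toy response generator keyed on the dialogue act."""
    return "Let me look that up." if act == "request_info" else "Tell me more."

def socialbot(utterance):
    if is_sensitive(utterance):
        return "I'd rather not discuss that."
    act = classify_dialogue_act(utterance)
    return respond(utterance, act)

print(socialbot("What's a good sci-fi movie?"))  # Let me look that up.
```

In the real competition, each stage is a trained model and CoBot handles the wiring and deployment; the contestants' science lives mostly in the response-generation stage.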
Hierarchical Clustering Example
Here is a good technical, but readily understandable, example of hierarchical clustering, with code. We used similar methods in the enterprise, but not for crime-data clustering; we used routines in SAS.
US Arrests: Hierarchical Clustering using DIANA and AGNES
Posted by Neeraj in DSC:
Data Science and Machine Learning are furtive; they go unnoticed but are present in all ways possible and everywhere. They contribute significantly in all the fields they are applied to and leave us with evidence we can rely on to take data-driven directions. Today, a very interesting area where we are going to see an example of Data Science and Machine Learning is ‘Crimes’. We are going to focus on the types of crimes taking place across 50 states in the USA and cluster them. We cluster them for the following reasons:
To understand state-wise crime demographics
To make laws applicable in states depending upon the type of crime taking place most often
Getting police and forces ready by the type of crime done in respective states
Predicting the crimes that may happen and thus taking measures in advance
The above are the few applications which can be considered but they are not exhaustive, depending upon the data reports produced by the algorithms there can be more such applications which can be deployed. Our concern today is to understand how we can deploy Hierarchical clustering in two ways i.e. DIANA and AGNES for the USA Arrests data. .... "
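AGNES works bottom-up: start with every point as its own cluster and repeatedly merge the two closest clusters (DIANA is the top-down mirror image, splitting one cluster until each point stands alone). A minimal pure-Python sketch of AGNES with single linkage, using toy two-group data standing in for the USArrests data; for real work you would use `scipy.cluster.hierarchy` (AGNES-style) or R's `cluster::diana`:

```python
# Minimal pure-Python AGNES (bottom-up agglomerative clustering)
# with single linkage. Toy data stands in for the USArrests data.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def agnes(points, k):
    """Merge the two closest clusters until only k clusters remain."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: distance between clusters is the
                # minimum distance between any pair of their members.
                d = min(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

# Two obvious groups: low-rate and high-rate "states".
data = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
print(agnes(data, 2))  # two clusters of three point indices each
```

Cutting at k=2 recovers the two groups; on the real data you would standardize the per-capita rates first, since the offense types are on different scales.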
Labels:
Analytics,
Clustering,
Crime,
DSC,
SAS,
Statistics
Voice Powered Gaming on Alexa
I have never done voice-powered gaming; how well does it work? Another channel that makes you think about when and why voice-first can be best, be engaging, or even work at all. We immediately thought about hands-free use.
Alexa, tell me about the rise of voice-powered gaming
Games played with voice commands are catching on, and now Amazon is betting on the nascent industry. by Tanya Basu in Technology Review
Imagine you’re on a voyage in deep space when you’re suddenly awakened from a cryo-slumber to discover your ship is under siege from ... something. Still partly in stasis, you have only one chance to save yourself and your crew. You need to steer the ship to safety—with only your voice.
That’s the concept behind Vortex, a game from voice-first Portuguese studio Doppio, released earlier this year. The company is set to release another game on Amazon this fall. .... "
Where is Cortana Going?
Long-time follower of the concept of an 'assistant'. Have played with Cortana in a number of forms. The Verge does a good update below on where it is and where it may be going today. It's all about augmentation, now in a number of contexts, including the home and office. I believe they could do more with stronger office connections.
Cortana isn’t dead, but it’s no longer an Alexa competitor By Tom Warren @tomwarren in TheVerge
Cortana started off life as a digital assistant for Windows Phone, before making its way to Windows 10, iOS, and Android. With Windows Phone dead and very few people using Cortana on a PC, Microsoft has made the difficult decision to give up competing with Alexa and Google Assistant. Microsoft CEO Satya Nadella revealed earlier this year that the company no longer sees Cortana as a competitor to those other digital assistants, and that it’s embracing the idea of having rivals on its platform. We’re now starting to see how that will work, and what it means for the future of Cortana. .... "
Business Models for IOT
A general kind of thought, but there are specific ideas, by type of technology, that are useful to think about.
The Internet of Things Needs a Business Model. Here It Is
by Michael Blanding in HBS Working Knowledge
Companies have struggled to find the right opportunities for selling the Internet of Things. Rajiv Lal says that’s all about to change.
The Internet of Things (IoT) has been near the top of the technology-hype lists for years. In 2018, Gartner’s Hype Cycle for Emerging Technologies ranked IoT platforms as cresting the “peak of inflated expectations” stage and ready to tumble into the dreaded “trough of disillusionment,” like a barrel careening over Niagara Falls.
It’s not that IoT has flopped; far from it. Everything in our homes seems to be connected. IoT devices enable networks that make possible smart speakers, smart TVs, smart thermostats, and even smart refrigerators that can help automate our lives. The consumer market, however, is only the beginning of what is possible through the IoT.
“What you see in the consumer domain is interesting, but it’s not where the economic action is,” says Rajiv Lal, the Stanley Roth Sr. Professor of Retailing at Harvard Business School. “Most of the IoT applications that matter are in the business-to-business space.”
Indeed, the kinds of innovation possible in the B2B world seem limitless. By placing sensors on machinery and connecting them to the internet, companies can capture real-time data on their assets and processes, exploit efficiencies, proactively identify problems, and develop new workflows.
So why, then, do most IoT ventures fail? Despite the industry's potential value of $11 trillion, according to McKinsey, more than 75 percent of businesses don’t make it off the ground, Lal says. That’s because most companies don’t know how to harness the potential of IoT effectively.
“We can put sensors on something and get great data out of it, but then the question becomes what are we going to do with that data—and how are we going to make money off of it?” .... "
Wednesday, July 24, 2019
GM Delays Self Driving Car
I have now been asked this question a number of times: when?
GM won't deliver self-driving cars by the end of the year after all
Its self-driving car subsidiary Cruise says more testing is needed.
By Christine Fisher, @cfisherwrites in Engadget ...
How Much Data do we Generate?
Usually not a fan of infographics, but this one makes the case well; click through for it. Sounds impressive, but experience tells me it's still not all the data you will need.
Domo’s Latest ‘Data Never Sleeps’ Infographic – Just How Much Data Are We Generating Now?
Staff report in Datanami
Domo – a cloud software company based in Utah – has released the seventh annual iteration of its popular “Data Never Sleeps” infographic. The infographic, which was introduced in 2013, highlights the scale of global data by presenting the amount of data generated every minute on popular apps and social media platforms like Instagram, Twitter, Twitch, and Tinder.
The numbers – as always – are staggering. The infographic shows that almost 700,000 aggregate hours of Netflix are watched every minute – an increase, Domo says, of 614 percent over last year’s report. Similarly, the report saw a 21 percent increase in use of Tinder and a 12 percent increase in photos shared per minute on Instagram. .... "
Hybrid Google Cloud and Blockchain Smart Contract Apps
In the Google Cloud Blog, they suggest we can create cloud-blockchain hybrids and integrate them with smart contracts. I want to see some real-life examples of this; pass them along. The excerpts below are interesting:
Further commentary in Gartner: https://blogs.gartner.com/avivah-litan/2019/07/23/google-in-blockchain/
Building hybrid blockchain/cloud applications with Ethereum and Google Cloud By Allen Day ... Developer Advocate, Google Cloud
Adoption of blockchain protocols and technologies can be accelerated by integrating with modern internet resources and public cloud services. In this blog post, we describe a few applications of making internet-hosted data available inside an immutable public blockchain: placing BigQuery data available on-chain using a Chainlink oracle smart contract. Possible applications are innumerable, but we've focused this post on a few that we think are of high and immediate utility: prediction marketplaces, futures contracts, and transaction privacy.
Hybrid cloud-blockchain applications
Blockchains focus on mathematical effort to create a shared consensus. Ideas quickly sprang up to extend this model to allow party-to-party agreements, i.e. contracts. This concept of smart contracts was first described in a 1997 article by computer scientist Nick Szabo. An early example of inscribing agreements into blocks was popularized by efforts such as Colored Coins on the Bitcoin blockchain.
Smart contracts are embedded into the source of truth of the blockchain, and are therefore effectively immutable after they’re a few blocks deep. This provides a mechanism to allow participants to commit crypto-economic resources to an agreement with a counterparty, and to trust that contract terms will be enforced automatically and without requiring third party execution or arbitration, if desired.
But none of this addresses a fundamental issue: where to get the variables with which the contract is evaluated. If the data are not derived from recently added on-chain data, a trusted source of external data is required. Such a source is called an oracle.
In previous work, we made public blockchain data freely available in BigQuery through the Google Cloud Public Datasets Program for eight different cryptocurrencies. In this article, we'll refer to that work as Google's crypto public datasets. You can find more details and samples of these datasets in the GCP Marketplace. This dataset resource has resulted in a number of GCP customers developing business processes based on automated analysis of the indexed blockchain data, such as SaaS profit sharing, mitigating service abuse by characterizing network participants, and using static analysis techniques to detect software vulnerabilities and malware. However, these applications share a common attribute: they're all using the crypto public datasets as an input to an off-chain business process. ....
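The oracle idea above can be illustrated without any blockchain stack at all. The sketch below is a pure-Python toy, not Ethereum or Chainlink code; every class and name is hypothetical. It shows only the structural point: the contract can read nothing but on-chain state, so an off-chain oracle must write external data in before settlement is possible.

```python
# Toy illustration of the oracle pattern: a "contract" can only read state
# already on its chain, so an off-chain "oracle" (e.g. a process relaying a
# BigQuery result) must write external data in before the contract's terms
# can be evaluated. All names here are invented for illustration.

class ToyChain:
    def __init__(self):
        self.state = {}              # stand-in for on-chain key/value storage

    def oracle_write(self, key, value):
        # An off-chain process pushes external data on-chain.
        self.state[key] = value

class FuturesContract:
    """Pays the buyer if the observed price exceeds the strike."""
    def __init__(self, chain, strike):
        self.chain, self.strike = chain, strike

    def settle(self):
        price = self.chain.state.get("price")
        if price is None:
            raise RuntimeError("oracle has not reported yet")
        return "pay_buyer" if price > self.strike else "pay_seller"

chain = ToyChain()
contract = FuturesContract(chain, strike=100)
chain.oracle_write("price", 125)     # oracle reports the off-chain observation
print(contract.settle())             # → pay_buyer
```

In a real deployment the oracle write would itself be an on-chain transaction (Chainlink's role in the article), which is what makes the reported value auditable by both parties.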
Technology of Choice and Defining Selections
Though I have never worked in the apparel space, I have worked in spaces where consumers make many choices in context, and where companies aim to insert new choices to maximize engagement while strengthening demand through marketing influence.
I happened on this article in Stitch Fix, an online personal styling service in the United States, talking about their use of technology.
Some fascinating things here, both decision-oriented and mathematically defined. Algorithms of choice. Useful beyond the realm of apparel selection? I think so:
In their Multithreaded Blog:
WELCOME TO Stitch Fix (If you read the whole thing at the link below, the math is covered.)
We are reinventing the retail industry through innovative technology.
Simulacra and Selection: Styling at Stitch Fix ...
Modern retailers aid and influence customer decisions, using techniques like recommender systems and market basket analysis to deliver personalized and contextual item suggestions. While such methods typically just augment a traditional browsing experience, Stitch Fix goes a step further by exclusively delivering curated selections of items, via algorithmically-assisted stylist recommendations1.
For most of the history of Stitch Fix, stylists have worked with a styling platform that functions in a fairly straightforward way. For simplicity and concreteness, imagine an e-commerce platform with various filters, wherein stylists are able to browse for clothing items and add them to a cart. The role of our styling algorithm in this system is to rank the items based on information the client provides us, and the role of the stylist is to select an assortment of items that a client will love.
I’m going to gloss over all the details of the styling algorithm except for one important and nuanced point: it is trained to estimate the probability that a particular client will like a given item if a stylist decides to send it. This is a natural and useful framing: we observe what happens to items that stylists select, but not the counterfactual for items they don’t select. However, this selection bias comes with some occasionally perplexing caveats.
One observation of long-standing Stitch Fix lore is the “shorts in winter” problem: the styling algorithm tends to assign a high score to shorts during the dead of winter. Do we somehow think that everyone wants to stock up on shorts despite the chilly weather? Of course not: this only means that shorts are likely to be successful if a stylist chooses to send them, which they won’t do without a very good reason—e.g., a client requesting them for a tropical cruise. This is an amusing example but the problem is broader: stylists need to spend a decent fraction of their time browsing through items that, for one reason or another, are clearly a poor match. .... "
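The "shorts in winter" effect is just conditional probability: the model learns P(liked | sent), not P(liked), because outcomes are only observed for items stylists chose to send. A tiny simulation (all numbers made up for illustration, not Stitch Fix's actual figures) makes the gap concrete.

```python
# Toy demo of the "shorts in winter" selection bias: the observed like rate
# for shorts is high because stylists only send them when a client has a
# good reason to want them (e.g. a tropical cruise). Numbers are invented.
import random
random.seed(0)

def observed_like_rate(n=100_000):
    sent, sent_and_liked = 0, 0
    for _ in range(n):
        requested = random.random() < 0.02   # rare: client asked for shorts
        if requested:                        # stylists send ONLY when requested
            sent += 1
            if random.random() < 0.9:        # requested items are usually liked
                sent_and_liked += 1
    return sent_and_liked / sent             # P(liked | sent), not P(liked)

print(f"P(liked | sent) ~ {observed_like_rate():.2f}")
```

So a model scoring shorts highly in January is faithfully reporting the conditional it was trained on; the bias enters when that score is read as unconditional demand.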
Ecology of Machine Intelligence. When?
Classic question, when will machines have our general intelligence? What will this coexistence then look like? And what is the resulting 'Ecology'? In the Edge:
Ecology of Intelligence A Talk By Frank Wilczek
I don't think a singularity is imminent, although there has been quite a bit of talk about it. I don't think the prospect of artificial intelligence outstripping human intelligence is imminent because the engineering substrate just isn’t there, and I don't see the immediate prospects of getting there. I haven’t said much about quantum computing, other people will, but if you’re waiting for quantum computing to create a singularity, you’re misguided. That crossover, fortunately, will take decades, if not centuries.
There’s this tremendous drive for intelligence, but there will be a long period of coexistence in which there will be an ecology of intelligence. Humans will become enhanced in different ways and relatively trivial ways with smartphones and access to the Internet, but also the integration will become more intimate as time goes on. Younger people who interact with these devices from childhood will be cyborgs from the very beginning. They will think in different ways than current adults do. .... "
FRANK WILCZEK is the Herman Feshbach Professor of Physics at MIT, recipient of the 2004 Nobel Prize in physics, and author of A Beautiful Question: Finding Nature’s Deep Design. ... "
Known Unknowns: Uncertainty in AI
Uncertainty is ultimately always an issue. Consider it early. Test it often.
Known Unknowns: Designing Uncertainty Into the AI-Powered System
from ODSC - Open Data Science
Uncertainty may be a fearful state for many people, but for data scientists and developers training the next wave of AI, uncertainty may be a good thing. Designing uncertainty directly into the system could help AI focus on what experts need to leverage state of the art AI and use it to inform our world. ... "
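One simple way to "design uncertainty in" is to surface it as a number the system can act on. A minimal sketch, assuming a classifier that emits softmax probabilities: the entropy of the output distribution is high when the model is guessing, and that signal can route the case to a human expert.

```python
# Entropy of a classifier's output distribution as an uncertainty signal:
# a peaked distribution means "confident"; a flat one means "I don't know",
# which can trigger a hand-off to a human reviewer.
import math

def entropy(probs):
    """Shannon entropy in nats of a discrete probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = [0.97, 0.02, 0.01]   # model is nearly sure of class 0
uncertain = [0.34, 0.33, 0.33]   # model is effectively guessing

print(f"confident: {entropy(confident):.3f} nats")
print(f"uncertain: {entropy(uncertain):.3f} nats")
```

A deployment would then compare the entropy against a threshold tuned on validation data; cases above it go to the expert, the rest are handled automatically.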
Explaining Simplistic Forecasts to Management
Recall this kind of thing coming up many times. We addressed it by installing a clickable pop-up to anticipate the question and make the case for the forecast. Agreed, it's about selling the forecast, in context, to management. Good thoughts on the issue below from SAS:
How do I explain a flat-line forecast to senior management? By Charlie Chase SAS
How do you explain flat-line forecasts to senior management? Or, do you just make manual overrides to adjust the forecast?
When there is no detectable trend or seasonality associated with your demand history, or something has disrupted the trend and/or seasonality, simple time series methods (i.e. naïve and simple exponential smoothing) will often generate a flat-line forecast reflecting the current demand level. Because a flat-line is often an unlikely reflection of the future, delivering a flat-line forecast to management may require explanation. And sometimes, explaining is not enough.
Today, we have large scale automatic hierarchical statistical forecasting systems to automatically build statistical models up/down a business hierarchy for hundreds of thousands, and in some cases millions, of data series. As you add more historical data, and causal factors (i.e., price, promotions, advertising, in-store merchandizing, economic data and others), the system re-diagnoses this information and rebuilds (tweaks) the models automatically. They also automatically identify and correct for outliers and other anomalies in the demand history.
The ability to use stacked neural network (NN) plus time series models have proven to be the best forecasting method according to the recent M4 competition. Stacked NN + time series ensemble models are just another statistical method that can be used along with traditional methods (e.g. naïve, exponential smoothing, ARIMA, ARIMAX, dynamic regression, unobserved components models, weighted combined models and others).
We all know how hard it is to beat a naïve model over time. As a result, naïve models are now the benchmark for evaluating forecasts. If your forecast can’t beat a naïve model, then why are you spending so much time developing and adjusting (manual overrides) statistical forecasts?
Subsequently, we all know that not all products are forecastable using statistical methods because of sparse data, randomness, lack of historical demand data, and no access to causal information. However, it’s not just a matter of forecast accuracy, but also whether you can sell the forecast to senior management. ..... "
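The flat-line itself is easy to demonstrate. A minimal hand-rolled sketch of simple exponential smoothing (SES): with no trend or seasonal term, the method carries only a single smoothed level, so every future period gets that same value.

```python
# Why simple methods flat-line: simple exponential smoothing maintains only
# a level (no trend, no seasonality), so the forecast for every horizon is
# the last smoothed level repeated. The naive method flat-lines the same
# way, using the last observation instead of a smoothed level.

def ses_forecast(history, alpha=0.3, horizon=6):
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level   # update the level only
    return [level] * horizon                      # -> a flat line

demand = [102, 98, 105, 99, 101, 103, 97, 100]
print(ses_forecast(demand))
```

Management sees a horizontal line near 100 for every future month, which is exactly what the article says then needs explaining: the model is reporting the current demand level, not predicting that nothing will ever change.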
Component Identification, Learning
Quite an interesting project. A way to further connect data to parts and then wholes? A kind of basic semantic representation? I can think of a number of ways this could produce much value.
Intel Does the Hard Work so Robots Can Operate Your Microwave
Tech Crunch
By Darrell Etherington
Intel artificial intelligence (AI) researchers, in partnership with the University of California, San Diego and Stanford University, have compiled a large dataset of three-dimensional objects featuring highly detailed, hierarchically structured and fully annotated information. The PartNet dataset organizes objects into segmented components, in a manner applicable to building AI learning models, for identifying and manipulating actual objects. PartNet lists more than 570,000 components across more than 26,000 objects, with parts common to objects across categories labeled as corresponding to one another. This enables AIs taught to recognize a part on one object variant, to identify it on another. ... "
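To make the idea concrete, here is a sketch of the kind of hierarchical, labeled part structure the article describes: objects decompose into named parts, and a shared label lets recognition of a part transfer across object variants. The structure and names below are illustrative only, not the actual PartNet schema.

```python
# Illustrative hierarchical part annotation, in the spirit of PartNet:
# an object is a tree of labeled parts, and a label like "handle" that
# recurs across objects is what lets a model trained on one variant
# identify the same part on another.
from dataclasses import dataclass, field

@dataclass
class Part:
    label: str
    children: list = field(default_factory=list)

    def find(self, label):
        """Depth-first search for a part label anywhere in the hierarchy."""
        if self.label == label:
            return self
        for child in self.children:
            hit = child.find(label)
            if hit is not None:
                return hit
        return None

microwave = Part("microwave", [
    Part("body", [Part("control_panel", [Part("button")])]),
    Part("door", [Part("handle"), Part("window")]),
])

print(microwave.find("handle").label)   # same label could annotate a fridge door
```

The cross-category correspondence the article mentions amounts to these labels being shared: a "handle" found on a microwave door and one on a cabinet door carry the same annotation, so a manipulation policy learned for one can be tried on the other.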
Tuesday, July 23, 2019
Dialogue Mapping Talk
Will be attending.
CSIG (Cognitive Systems Institute Group) Talk - July 25, 2019 -10:30-11am US Eastern
Talk Title: Dialogue Mapping with IBIS for More Productive Meetings Speaker: Paul Fernhout, Software Developer
Abstract: Tens of billions of US dollars a year are wasted on unproductive unfun meetings. Worse, even "productive" meetings sometimes fail to consider a diversity of opinions and so produce suboptimal decisions (e.g. Fukushima Daiichi's seawall height). How can Cognitive Systems help people make better decisions in meetings more quickly? How can we help people with strong disagreements collaborate in mapping landscape of possibilities in a fun way? One option is to visualize the thinking going on in a meeting using Dialog Mapping(TM) developed by Jeff Conklin and associates, which visualizes discussions using the Issue-Based Information Systems (IBIS) grammar consisting of Issues/Questions, Options/Answers, and supporting Pros & Cons. This talk will explain more about Dialogue Mapping and (hopefully) provide a live demonstration.
Bio: Paul Fernhout is passionate about helping people collaborate to make better decisions more quickly using computers. He has worked as a software developer on decision-support projects for a wide variety of organizations ranging from non-profits to multi-nationals to governments, as well as on independent FOSS projects with his wife related to educational simulations, evolutionary design tools, information organizers, and Participative Narrative Inquiry. He has also written about technology and social change.
Zoom meeting Link: https://zoom.us/j/7371462221; Zoom Call in: (415) 762-9988 or (646) 568-7788 Meeting id 7371462221
Zoom International Numbers: https://zoom.us/zoomconference
http://cognitive-science.info/community/weekly-update/ for recordings & slides, and for any date & time changes
Join Group: https://www.linkedin.com/groups/6729452/ (CognitiveSystemesInstitute) to receive notifications. Thu, July 25, 10:30am US Eastern https://zoom.us/j/7371462221
More Details Here : http://cognitive-science.info/community/weekly-update/
Via Karolyn Schalk, Susan Malaika
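The IBIS grammar the talk describes (Issues/Questions, Options/Answers, supporting Pros & Cons) is simple enough to sketch in a few lines. This toy model is illustrative only, not taken from any real Dialogue Mapping tool:

```python
# Toy IBIS node types: an Issue raises Options, each carrying
# pro/con Arguments. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Argument:
    text: str
    supports: bool                          # True = pro, False = con

@dataclass
class Option:
    text: str
    arguments: list = field(default_factory=list)

@dataclass
class Issue:
    question: str
    options: list = field(default_factory=list)

    def render(self):
        """Render the dialogue map as an indented outline."""
        lines = ["? " + self.question]
        for opt in self.options:
            lines.append("  * " + opt.text)
            for arg in opt.arguments:
                mark = "+" if arg.supports else "-"
                lines.append("    " + mark + " " + arg.text)
        return "\n".join(lines)

issue = Issue("How high should the seawall be?")
issue.options.append(Option("10 m", [
    Argument("Survives worst recorded tsunami", True),
    Argument("Higher construction cost", False),
]))
print(issue.render())
```

Rendering the map as an outline like this is what lets a room of people with strong disagreements see every option and its trade-offs at once.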
On Innovation Labs
We had multiple major innovation labs. I helped design, implement, and run them. Have lots of opinions about them. I agree with what is stated below, and more so. It's got to be much more than just theater.
Why Innovation Labs Fail, and How to Ensure Yours Doesn’t By Simone Bhan Ahuja in the HBR
What do Walmart, Facebook, and Lockheed Martin have in common? They all recently unveiled lavish new innovation labs. These kinds of labs go by different names — accelerators, business incubators, research hubs — and my research suggests their numbers are growing. Over half of financial services firms have started their own creative spaces, and you’d be hard-pressed to find a health care company or retailer without at least one innovation lab, whether it’s a conference room with sticky notes or a 20,000-square-foot incubator space, like the one launched by Starbucks in November of last year.
That’s all great news, generally speaking. Innovation labs are a safe place for organizations to run experiments and iterate on projects, and they’re an important investment for firms that have rigid approaches or that work in highly regulated industries. But do they actually add value and generate growth? According to a report from Capgemini, the vast majority of innovation labs — up to 90%, one expert says — fail to deliver on their promise.
From doing extensive research for my book Disrupt-It-Yourself and advisory work with large corporations in various sectors, I’ve found that there are three reasons many labs come up short. Here’s what companies should watch out for.
Lack of Alignment with the Business
Legendary innovation spaces like Xerox PARC and Bell Labs can evoke images of extreme secrecy and complete isolation from the core business. That sort of separation can be important, especially in companies where bureaucracy tends to neutralize new ideas. But separation alone is seldom the problem.
The problem tends to be that the innovation center doesn’t have a clear strategy that’s aligned with the company’s — or doesn’t have one at all. Many labs install kegs and offer kombucha on tap to get the creative gears turning, and then begin to ideate with only a limited idea of their goals. Some of the innovation teams I’ve met recently seem unsure if they are charged with serving the core business or with disrupting it. This lack of strategy is a common symptom of “innovation theater”: Boards and C-suite leaders unveil labs that are mostly for show, so they can check the box of having a team dedicated to innovation — and especially to disruption. Yet the curtain comes down quickly, either because ideas from these labs are disconnected from real customer needs or because no one is on the hook to carry the ideas through to implementation. ... "
Clothing Brand uses IOTA Blockchain
IOTA uses a novel kind of consensus model based on a directed acyclic graph (DAG) rather than a conventional linear blockchain. Have been following it for a while for its potential in smart contract applications.
Clothing brand Alyx turns to Iota’s blockchain to track garment authenticity
Kyt Dotson in Silicon Angle
Clothing designer Matthew Williams’ luxury fashion brand Alyx announced today that it will use Iota Foundation’s blockchain distributed ledger technology to track the production of clothing from raw materials to the final product.
The system, which will be used to instill consumer trust, combines the efforts of materials science company Avery Dennison and supply chain visibility firm Evrythng along with Alyx and Iota.
Using a quick response (QR) code printed on a tag attached to the clothing item, a customer can use an app to follow a shirt or dress from “track to rack”: where the materials were sourced, where the textiles were manufactured, what factory the garment was sewn in, and finally the retail store it was shipped to.
This, luxury brands have begun to believe, will fill in a missing rung when it comes to brand trust. By allowing consumers to better understand the origins of their clothing, brands hope that they can instill a better sense of genuine quality in the minds of customers.
“Blockchain and distributed ledger technology is the future for effective brand protection,” said Matthew Williams, the British fashion designer behind the Alyx label. “By supplying product information, supply chain traceability and transparent dialogue with the consumer, the brand’s authenticity is globally secured.” ... '
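For context on the DAG consensus mentioned in the commentary above: IOTA's ledger (the "Tangle") is a graph in which each new transaction approves earlier ones, rather than a linear chain of blocks. A heavily simplified toy sketch, with the simplifications noted in comments:

```python
# Toy Tangle: a DAG where each transaction approves up to two earlier
# transactions. Real IOTA biases the choice toward unapproved "tips"
# via a weighted random walk; this sketch picks uniformly at random.
import random

random.seed(42)  # reproducible toy run

class Tangle:
    def __init__(self):
        self.approves = {"genesis": []}     # tx id -> ids of txs it approves
        self.approved = set()               # txs approved at least once

    def add(self, tx_id):
        earlier = sorted(self.approves)
        chosen = random.sample(earlier, min(2, len(earlier)))
        self.approves[tx_id] = chosen
        self.approved.update(chosen)

    def tips(self):
        """Transactions nobody has approved yet."""
        return set(self.approves) - self.approved

tangle = Tangle()
for i in range(5):
    tangle.add(f"tx{i}")
print("tips:", sorted(tangle.tips()))
```

Because approval is attached to every new transaction instead of being done by miners in blocks, this structure is what makes IOTA attractive for high-volume supply chain records like the garment provenance trail described above.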
Labels: Alyx, Apparel, Authenticity, Avery Dennison, Brands, Fashion, IOTA
AI Drug Hunting for Pharma
Note the mention of simulations as a means of determining the effectiveness of prospective drugs.
AI Drug Hunters Could Give Big Pharma a Run for Its Money
Bloomberg
By Robert Langreth
July 15, 2019
Using the latest neural-network algorithms, DeepMind, the artificial intelligence (AI) arm of Alphabet, beat seasoned biologists at 50 top labs from around the world in predicting the shapes of proteins. The company's win at the CASP13 meeting in Mexico in December has serious implications, as a tool able to accurately model protein structures could speed up the development of new drugs. Although DeepMind's simulation was unable to produce the atomic-level resolution necessary for drug discovery, its victory points to the potential for practical application of AI in one of the most expensive and failure-prone parts of the pharmaceutical business. AI could be used, for example, to scan millions of high-resolution cellular images to identify therapies researchers might otherwise have missed. In the short term, experts say AI-based simulations likely will be used to determine whether prospective drugs will be effective before proceeding to a full clinical trial. .... "
Machine Learning vs AI
Good set of examples for Marketing and related terminology ....
The Difference Between AI and Machine Learning and How Marketers Use Them to Increase ROI by Alain Stephan in Business2com
A recent survey of 300 B2B marketers, performed by EverString and Heinz Marketing, found that less than one-fifth of respondents truly understand the difference between artificial intelligence, machine learning, and predictive modeling.
If this confusion is keeping marketers from adopting AI, it’s unfortunate. AI is such a powerful asset to marketers—it’s helping brands and agencies analyze marketing data with unprecedented precision, providing them with the insights to make smarter optimizations that drive more revenue at lower costs. ... "
Secure Cloud Architecture for Smart Cities
Having seen how municipalities are being attacked by malware, this is becoming essential.
A Secure Cloud Architecture for Smart Cities
Government Computer News
By Stephanie Kanowitz
July 11, 2019
Syracuse University researchers have issued a new blueprint designed to help smart cities and communities create a hybrid cloud architecture that upholds confidentiality, access control, least privileges, and security of personally identifiable information. The Smart City and Community Challenge cloud privacy security rights inclusive architecture action cluster developed the framework, which is designed to back up critical systems in the event of attacks. The architecture employs a three-tiered data/risk classification scheme, with workflows applied to data depending on its classification. Officials then assign probability, impact, and overall ratings to each risk, and install mitigation controls. The researchers first tested the architecture by applying it to a network of city-owned smart streetlights in Syracuse, NY; other projects under consideration for the architecture include catch-basin monitoring and water-metering projects, in addition to others involving the ethics of artificial intelligence, facial recognition, and machine learning. ... "
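The probability/impact rating step described above can be sketched as a simple scoring function. The thresholds, score scale, and tier names here are illustrative assumptions, not taken from the Syracuse framework:

```python
# Sketch of risk rating: each risk gets a 1-5 probability score and a
# 1-5 impact score; their product decides the overall tier, which in
# turn drives the mitigation controls. Thresholds are illustrative.
def overall_rating(probability, impact):
    """Combine 1-5 probability and impact scores into an overall tier."""
    score = probability * impact
    if score >= 15:
        return "high"      # e.g. PII exposure: encrypt, restrict access
    if score >= 6:
        return "moderate"  # e.g. sensor outage: monitor, fail over
    return "low"           # e.g. public data: standard controls

for p, i in [(5, 4), (2, 3), (1, 2)]:
    print(f"probability={p} impact={i} -> {overall_rating(p, i)}")
```

In the architecture described above, the tier a risk lands in would determine which workflow and which backup/mitigation controls apply to the affected data.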
Monday, July 22, 2019
Wal-Mart Integrates Online and In-Store Teams
The direction of Walmart here seems to be first a kind of re-training, and also a means of understanding customers in multiple contexts. Should be powerful to the extent that it works.
Walmart shakes things up, further integrating online and physical store teams In Retailwire by Tom Ryan with further expert comment.
In a move to better integrate its physical stores with its online enterprise, Walmart is combining both its supply chain and finance teams that work with its e-commerce site and stores.
“Our customers want one, seamless Walmart experience,” wrote CEO Doug McMillon in an employee memo obtained by numerous news outlets. “Earning more of our customers’ business in food and consumables is foundational to our strategy, and, at the same time, we will expand our ability to serve them with general merchandise in stores and through our broad e-commerce assortment as we continue to invest and build our e-commerce business.”
Greg Smith, current EVP of the U.S. supply chain, will head the new combined supply chain team. Nate Faust, currently leading e-commerce fulfillment, will transition to a new role.
Walmart U.S. CFO Michael Dastugue will oversee the combined finance team. Jeff Shotts, current e-commerce CFO, will lead Walmart’s U.S. marketplace business. Steve Schmitt, currently Sam’s Club CFO, will become the new U.S. e-commerce CFO, reporting to Mr. Dastugue. .... "
Database Archive with Map Interface
Free spatio-temporal database archive. The world map interface is interesting: you can play with the data on the map, then download it as needed. You can also add your own data to the map.
Dataset Archive Helps Researchers Quickly Find a Needle in a Haystack
University of California, Riverside
Holly Ober
July 17, 2019
Researchers at the University of California, Riverside (UCR) have developed the UCR Spatio-temporal Active Repository (UCR STAR), a free archive of large spatio-temporal datasets available through an interactive exploratory interface. UCR STAR's interface is similar to that of Google Maps, as users can zoom in and out and pan around to get a quick overview of the data distribution, coverage, and accuracy. Important details are displayed once a dataset is selected, and the subset download feature allows users to quickly download the data for a given geographical region. Said UCR’s Ahmed Eldawy, “The map interface visualizes the data, so you can see if it’s a good fit. It’s like a catalog for datasets.” ... '
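The subset-download idea, fetching only the records that fall inside a chosen map region, amounts to a bounding-box filter over spatio-temporal records. The records and coordinates below are made up for illustration:

```python
# Sketch of a geographic subset filter: keep only records whose
# longitude/latitude fall inside a bounding box. Data is illustrative.
def in_bbox(record, min_lon, min_lat, max_lon, max_lat):
    """True if the record's point lies inside the bounding box."""
    lon, lat = record["lon"], record["lat"]
    return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat

records = [
    {"id": 1, "lon": -117.4, "lat": 33.9},   # near Riverside, CA
    {"id": 2, "lon": 2.35,  "lat": 48.86},   # Paris
]

# Rough bounding box around Southern California
subset = [r for r in records if in_bbox(r, -120.0, 32.0, -114.0, 36.0)]
print([r["id"] for r in subset])  # prints [1]
```

An interactive map interface like UCR STAR's effectively lets the user draw this bounding box visually before the server runs the equivalent filter.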
Very Tiny, Vibration Powered Robotics
Power of the very small.
Vibration-Powered Robots Are the Size of the World's Smallest Ant
Georgia Tech Research Horizons
By John Toon
Researchers at the Georgia Institute of Technology (Georgia Tech) have developed tiny three-dimensional (3D)-printed robots that move by harnessing vibration from piezoelectric actuators, ultrasound sources, or tiny speakers. The bots respond to different vibration frequencies, depending on their configurations, allowing users to control individual devices by adjusting the vibration. The bots are about two millimeters long, about the size of the world's smallest ant, and can cover four times their own length in one second. The researchers built a “playground” in which multiple micro-bots can move around as the researchers learn more about what they can do. Said Georgia Tech's Azadeh Ansari, “We are working to make the technology robust, and we have a lot of potential applications in mind. We are working at the intersection of mechanics, electronics, biology and physics. It’s a very rich area, and there’s a lot of room for multidisciplinary concepts.” .... "
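The frequency-selective control described above, where each bot responds only to vibration near its own resonant frequency, can be sketched as follows. All frequencies, bandwidths, and step sizes are illustrative assumptions:

```python
# Sketch of frequency-addressable micro-bots: each bot moves only when
# the drive frequency falls within its response band, so sweeping the
# vibration source addresses bots individually. Numbers are illustrative.
class MicroBot:
    def __init__(self, name, resonant_hz, bandwidth_hz=500):
        self.name = name
        self.resonant_hz = resonant_hz
        self.bandwidth_hz = bandwidth_hz
        self.position_mm = 0.0

    def excite(self, freq_hz, step_mm=1.0):
        """Step forward only if driven near this bot's resonance."""
        if abs(freq_hz - self.resonant_hz) <= self.bandwidth_hz:
            self.position_mm += step_mm

bots = [MicroBot("A", 10_000), MicroBot("B", 15_000)]
for bot in bots:
    bot.excite(10_200)          # drive near bot A's resonance only

print({b.name: b.position_mm for b in bots})
```

This is the same principle that lets one vibration source in the "playground" steer many bots: each configuration tunes a bot to a different band, so the drive frequency selects which one moves.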