An interesting example of how personal data is used and handled in automated systems.
Confidence in automated systems from Fraunhofer
Research News / 3.2.2020
When it comes to cars that drive themselves, most people are still hesitant. There are similar reservations with respect to onboard sensors gathering data on a driver’s current state of health. As part of the SECREDAS project, a research consortium including the Fraunhofer Institute for Experimental Software Engineering IESE is investigating the safety, security and privacy of these systems. The aim is to boost confidence in such technology.
A new system controls whether, and under what circumstances, personal data is allowed to be transferred to a specific destination.
© Fraunhofer IESE
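The system described in the caption decides whether, and under what circumstances, personal data may be sent to a particular destination. As a rough illustration, such a decision can be modeled as a policy check over the data category, the destination, and the driver's consent. The following is a minimal sketch under assumed rules; the names (`TransferPolicy`, `Request`) and the whitelist are illustrative assumptions, not the SECREDAS implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    data_category: str    # e.g. "health", "location"
    destination: str      # e.g. "emergency_service", "advertiser"
    driver_consented: bool

class TransferPolicy:
    # Assumed whitelist of (category, destination) pairs that may ever be allowed.
    ALLOWED = {
        ("health", "emergency_service"),
        ("location", "emergency_service"),
    }

    def permits(self, req: Request) -> bool:
        # A transfer is permitted only if the pair is whitelisted
        # AND the driver has given consent.
        return ((req.data_category, req.destination) in self.ALLOWED
                and req.driver_consented)

policy = TransferPolicy()
print(policy.permits(Request("health", "emergency_service", True)))   # True
print(policy.permits(Request("health", "advertiser", True)))          # False
```

In a real deployment the rules would of course be far richer (purposes, retention, legal bases), but the shape of the decision, a guarded check before any data leaves the vehicle, stays the same.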
There is still some way to go before people can be persuaded to embrace a new technology like self-driving cars. When it comes to making decisions in road traffic, we tend to place greater trust in human drivers than in software. The aim of the consortium behind the SECREDAS project is to boost confidence in such connected, automated systems and in their ability to meet safety and data-privacy requirements, whether in the field of mobility or of medicine. SECREDAS – which stands for “Product Security for Cross Domain Reliable Dependable Automated Systems” – brings together 69 partners from 16 European countries, including the Fraunhofer Institute for Experimental Software Engineering IESE. The project also seeks to ensure that European OEMs remain competitive in this field. It has total funding of 51.6 million euros, with the EU contributing around 15 million euros to this sum.
Increasing the safety of self-driving cars
The control of autonomous vehicles lies to an ever greater extent in the hands of neural networks. These are used to assess everyday road-traffic situations: Is the traffic light red? Is another vehicle about to cross the road ahead? The problem with neural networks, however, is that it remains unclear just how they arrive at such decisions. “We’re therefore developing a safety supervisor. This monitors, in real time, the decisions taken by the neural network. If necessary, it can intervene on the basis of this assessment,” says Mohammed Naveed Akram from Fraunhofer IESE. “The safety supervisor uses classical algorithms, which focus on key parameters rather than assessing the overall situation – that’s what the neural networks do. Our work for the SECREDAS project is mainly about identifying suitable metrics for this purpose, but we are also looking at how best to take appropriate countermeasures in order to avert danger.”
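The architecture described above pairs an opaque neural network with a transparent rule-based monitor that can override it. A minimal sketch of that pattern is shown below; the function names, sensor fields, and the distance threshold are illustrative assumptions, since the article does not publish the SECREDAS internals.

```python
MIN_SAFE_DISTANCE_M = 10.0  # assumed safety threshold, for illustration only

def neural_net_decision(sensor_frame: dict) -> str:
    # Stand-in for the opaque neural network: here it always proposes "proceed".
    return "proceed"

def supervise(sensor_frame: dict) -> str:
    """Safety supervisor: classical checks on key parameters that can
    override the neural network's decision in real time."""
    decision = neural_net_decision(sensor_frame)
    if sensor_frame["traffic_light"] == "red":
        return "brake"  # override: red light, regardless of the NN's output
    if sensor_frame["obstacle_distance_m"] < MIN_SAFE_DISTANCE_M:
        return "brake"  # override: obstacle too close
    return decision     # all checks passed: keep the NN's decision

print(supervise({"traffic_light": "green", "obstacle_distance_m": 50.0}))  # proceed
print(supervise({"traffic_light": "red", "obstacle_distance_m": 50.0}))    # brake
```

The point of the design is that the supervisor checks only a few interpretable parameters, so its intervention logic can be verified even though the network it monitors cannot.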