
Saturday, June 05, 2021

Thinking Federated AI

A good general idea, further detailed at the link.

Changing How Data is Used, by Samuel Greengard, commissioned by CACM Staff, June 1, 2021

The ability to examine data in deep and revealing ways has fundamentally changed the world. Yet, despite enormous advances in analytics and machine learning, researchers, businesses, and governments remain hamstrung by a basic but nagging problem: a lot of data is sensitive and sharing it presents security and privacy risks.

Although numerous technologies, such as data tokenization, k-anonymity, homomorphic encryption and Trusted Execution Environments (TEE) have helped tame the beast, they do not address the fundamental problem that sensitive data is crossing undesired boundaries. Consequently, many organizations hesitate to share data that could yield remarkable insights. The problem is especially acute when organizations have trade secrets, customer data, or personally identifiable information (PII) embedded in data.

An emerging framework called Federated AI (artificial intelligence) Learning could change all of this, while ushering in broader changes in the way data is used and owned. The technology has major repercussions for machine learning, security, and privacy. "It represents the future of secure machine learning," says Eliano Marques, vice president of data and AI at data security firm Protegrity.

Rethinking the Model

Federated AI takes the conventional idea of machine learning and turns it on its head. Instead of multiple groups sending data to a central cloud where the machine learning takes place, the algorithm travels to the computing device. All training is performed on the client or device, and when the algorithm determines that it is finished, it exits the device, taking the results with it. "There is no data sharing," says Jigar Mody, head of AI services at Oracle. ...
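The round-trip described above can be sketched in a few lines: a server sends the current model to each client, each client trains on data that never leaves it, and the server averages the returned model parameters (the federated averaging idea). This is a minimal toy sketch in plain Python; the function names (`local_train`, `fed_avg_round`) and the one-weight linear model are illustrative assumptions, not part of any real federated learning library.

```python
# Toy sketch of a federated averaging round. Each client fits y = w*x by
# gradient descent on its own private data; only the trained weight `w`
# is returned to the server, never the data itself.

def local_train(weight, data, lr=0.01, epochs=5):
    """One client's local training step; `data` stays on the client."""
    w = weight
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w  # only the updated parameter leaves the device

def fed_avg_round(global_w, client_datasets):
    """Server sends global_w to every client, then averages the updates."""
    updates = [local_train(global_w, d) for d in client_datasets]
    return sum(updates) / len(updates)

# Two clients whose private datasets both happen to follow y = 3x.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0), (4.0, 12.0)],
]

w = 0.0
for _ in range(20):
    w = fed_avg_round(w, clients)
# After 20 rounds the shared model converges toward w = 3,
# even though the server never saw either client's data.
```

Real systems (e.g. Google's FedAvg work) add secure aggregation, client sampling, and weighting by dataset size, but the core data-stays-local pattern is the same.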
