“Decentralized AI: How Federated Learning is changing the security game”
Summary
Federated Learning (FL) is a machine learning approach in which many devices and users learn a task collaboratively without sharing raw data: each device keeps the data it generates locally and sends only model parameter updates, which are aggregated into a broader global model.
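The aggregation step described above is commonly done with federated averaging (FedAvg). The sketch below is illustrative, not a production implementation: each simulated client takes a gradient step of linear regression on its own private data, and the server averages the resulting weights. All names and the toy data are assumptions for demonstration.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    preds = X @ weights
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, client_datasets):
    """Server averages the locally trained weights (FedAvg-style)."""
    client_weights = [local_update(global_weights, X, y)
                      for X, y in client_datasets]
    return np.mean(client_weights, axis=0)

# Toy setup: three clients, each holding private data generated locally.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, clients)
# w converges toward true_w, yet no client ever shared its raw (X, y).
```

Note that only `w` and the per-client weight vectors cross the network; the raw feature matrices never leave the clients, which is the privacy property FL is built around.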
Created to allay privacy concerns around machine learning, FL is especially pertinent to healthcare, mobile and edge computing, and smart city technology, as it prioritises privacy, low latency and distributed learning.
However, it also creates new security concerns, notably attacks on distributed model updates, adversarial data poisoning and model inversion attacks.
When testing FL systems for security vulnerabilities, pen testers should look for gradient leakage, simulate data poisoning attacks, attempt to intercept and modify model updates, and probe for backdoor injection.
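One of those tests, a model-poisoning simulation, can be sketched in a few lines. This is an assumed toy scenario, not a real FL framework: with plain (unprotected) averaging, a single malicious client that scales its update can pull the aggregated model to an attacker-chosen target, despite a large honest majority.

```python
import numpy as np

# Nine honest clients submit roughly similar updates.
honest_updates = [
    np.array([1.0, 1.0]) + 0.01 * np.random.default_rng(i).normal(size=2)
    for i in range(9)
]

# The attacker wants the global model dragged to this target.
attacker_target = np.array([-5.0, -5.0])

# With 10 clients and plain averaging, the attacker crafts an update
# scaled so the mean lands exactly on the target.
malicious_update = 10 * attacker_target - sum(honest_updates)

poisoned_avg = np.mean(honest_updates + [malicious_update], axis=0)
# poisoned_avg sits at the attacker's target despite 9 honest clients.
```

This is exactly why a pen test should check whether the aggregator accepts arbitrarily large updates: unbounded contributions make a single compromised participant decisive.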
Best practices include using secure aggregation, participant identification and data integrity checks, anomaly detection and adversarial testing tools, and regular security audits and pen testing.
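As a minimal sketch of one of the anomaly detection defences listed above (the function name and threshold are illustrative assumptions, not a standard API), the aggregator below drops client updates whose norm is far above the median before averaging, which neutralises the kind of scaled malicious update described earlier:

```python
import numpy as np

def robust_aggregate(updates, factor=3.0):
    """Average client updates, dropping those with anomalously large norms.

    Illustrative defence: updates whose L2 norm exceeds `factor` times
    the median norm are treated as anomalous and excluded.
    """
    norms = np.array([np.linalg.norm(u) for u in updates])
    threshold = factor * np.median(norms)
    kept = [u for u, n in zip(updates, norms) if n <= threshold]
    return np.mean(kept, axis=0)

honest = [np.array([1.0, 1.0])] * 9
malicious = [np.array([-100.0, -100.0])]  # oversized poisoned update
agg = robust_aggregate(honest + malicious)
# The malicious update is filtered out; agg stays near the honest value.
```

Norm-based filtering is only a baseline; production systems typically combine it with secure aggregation, participant authentication and the auditing practices named above.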