Federated learning and differential privacy promise to protect user data by training AI models without centralizing sensitive information, but research reveals significant vulnerabilities: gradient inversion attacks can reconstruct training data from model updates, and Byzantine attacks can poison shared models.
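To make the gradient inversion risk concrete, here is a minimal sketch (all names and the model are illustrative, not from any specific attack paper): for a linear model with squared loss, the per-sample gradients sent in a federated update algebraically reveal the training input, since the weight gradient is a scaled copy of the input and the bias gradient supplies the scale.

```python
import numpy as np

# Illustrative model: y = w.x + b with squared loss L = (y - t)^2.
# For a single sample, dL/dw = 2*(y - t)*x and dL/db = 2*(y - t),
# so an observer of the update can recover x = (dL/dw) / (dL/db).

rng = np.random.default_rng(0)
x_true = rng.normal(size=5)   # the "private" training example
t = 1.0                       # its label
w = rng.normal(size=5)        # current shared model weights
b = 0.1

# Client computes its local gradients and ships them to the server
err = (w @ x_true + b) - t
grad_w = 2 * err * x_true
grad_b = 2 * err

# Server (or an eavesdropper) reconstructs the input from the update alone
x_reconstructed = grad_w / grad_b
print(np.allclose(x_reconstructed, x_true))  # True: exact recovery
```

Real attacks target deep networks, where recovery is an optimization problem rather than a closed-form division, but the leakage principle is the same: model updates are a function of the private data.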