UMINF 24.10

Navigating Data Privacy and Utility: A Strategic Perspective

Privacy in machine learning should not be treated as an afterthought; rather, it must serve as the foundation upon which machine learning systems are designed. In this thesis, in addition to centralized machine learning, we also consider distributed environments for training machine learning models, in particular federated learning. Federated learning lets multiple clients or organizations train a machine learning model collaboratively without moving their data: each client participating in the federation shares only the model parameters learned by training on its local data. Even though this setup keeps the data local, sensitive information can still leak through the model updates. For instance, attackers could use the shared parameter updates to infer details about the data held by individual clients. So, while federated learning is designed to protect privacy, it still faces challenges in keeping data secure throughout the training process.
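To make the setup concrete, here is a minimal sketch of one round of federated averaging (FedAvg), the classic federated learning algorithm, using logistic regression as a stand-in for each client's local learner. All names below are illustrative, not code from the thesis; the point is that raw data never leaves a client, while the shared parameter updates are exactly the signal an attacker might inspect.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few epochs of gradient descent for
    logistic regression (a stand-in for any parametric local model)."""
    w = w.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)         # logistic-loss gradient
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One communication round: each client trains on its own data and
    shares only its updated parameters; the server never sees raw data."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Server-side aggregation: average weighted by local dataset size.
    return np.average(updates, axis=0, weights=sizes)
```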

Originally, federated learning was introduced in the context of deep learning models. This thesis, however, focuses on federated learning for decision trees. Decision trees are intuitive and interpretable models, which makes them popular in a wide range of applications, especially where the explainability of a model's decisions matters. At the same time, decision trees are vulnerable to inference attacks, particularly when the structure of the tree is exposed. To mitigate these vulnerabilities, a key contribution of this thesis is the development of novel federated learning algorithms that incorporate privacy-preserving techniques, such as k-anonymity and differential privacy, into the construction of decision trees; a sketch of one such mechanism is given below. In doing so, we seek to protect user privacy without significantly compromising the performance of the model.

Machine learning models learn patterns from data, and in the process they may leak sensitive information. Each step of the machine learning pipeline presents its own vulnerabilities, making it essential to assess and quantify the privacy risks involved. One focus of this thesis is therefore the quantification of privacy by devising a data reconstruction attack tailored to Principal Component Analysis (PCA), a widely used dimensionality reduction technique; the linear-algebraic core of such an attack is also sketched below. Various protection mechanisms are then evaluated on how effectively they preserve privacy against such reconstruction attacks while maintaining the utility of the model.

Beyond federated learning, this thesis also addresses the privacy concerns associated with synthetic datasets generated by models such as generative networks. Specifically, we perform an Attribute Inference Attack on synthetic datasets and quantify privacy by calculating the Inference Accuracy, a metric that reflects how successfully an attacker can estimate the sensitive attributes of target individuals.
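Returning to the decision-tree setting referenced above: a standard way to make tree construction differentially private, sketched here purely for illustration (the thesis's own algorithms may differ), is to select each split with the exponential mechanism instead of greedily taking the best-scoring attribute.

```python
import numpy as np

def dp_choose_split(scores, epsilon, sensitivity=1.0, rng=None):
    """Exponential mechanism: sample a candidate split with probability
    proportional to exp(epsilon * score / (2 * sensitivity)) rather than
    deterministically taking the best split. `scores` maps each candidate
    split to its utility (e.g., information gain)."""
    rng = rng or np.random.default_rng()
    candidates = list(scores)
    u = np.array([scores[c] for c in candidates], dtype=float)
    logits = epsilon * u / (2.0 * sensitivity)
    probs = np.exp(logits - logits.max())   # shift max for stability
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# With small epsilon the choice is close to uniform (more private);
# with large epsilon it concentrates on the highest-gain split.
split = dp_choose_split({"age": 0.30, "zip": 0.28, "income": 0.05}, epsilon=1.0)
```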

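The reconstruction attack developed in the thesis is more elaborate, but the linear-algebraic core can be sketched in a few lines, under the assumption (ours, for illustration) that the data holder releases the projections onto the top-k principal components together with the fitted components: because the component matrix has orthonormal rows, an approximate inverse mapping is immediate.

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA: center the data and keep the top-k principal directions
    (rows of the k x d component matrix W)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def reconstruct(Z, mean, W):
    """Approximate the original records from released projections Z:
    x_hat = z @ W + mean is the least-squares reconstruction, since the
    rows of W are orthonormal."""
    return Z @ W + mean

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))          # stand-in for the private data
mean, W = pca_fit(X, k=3)
Z = (X - mean) @ W.T                    # the released low-dimensional view
X_hat = reconstruct(Z, mean, W)         # attacker's approximation of X
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```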
Overall, this thesis contributes to the development of privacy-preserving algorithms for decision trees in federated learning and introduces methods to quantify privacy in machine learning systems. Its findings also lay the groundwork for further research at the intersection of privacy and machine learning.

Additionally, this thesis addresses the critical concern of privacy when publishing synthetic datasets. We tailored an Attribute Inference Attack (AIA) to assess the privacy of synthetic data generated by various algorithms. The approach ensures that synthetic data retains the utility of the original data without replicating it, thereby keeping users safe.
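As a rough sketch of how such an attack and the Inference Accuracy metric can be operationalized (the nearest-neighbour attacker and all names below are assumptions for illustration, not the thesis's exact procedure): the attacker fits a model on the released synthetic data to predict the sensitive attribute from the remaining attributes, and Inference Accuracy is the fraction of real target individuals whose sensitive attribute is guessed correctly.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def inference_accuracy(synth_X, synth_s, target_X, target_s, k=5):
    """Attribute Inference Attack: train on the *synthetic* release to
    predict the sensitive attribute s from the remaining attributes X,
    then measure how often the guesses match the real targets."""
    attacker = KNeighborsClassifier(n_neighbors=k).fit(synth_X, synth_s)
    guesses = attacker.predict(target_X)
    return np.mean(guesses == target_s)   # Inference Accuracy in [0, 1]
```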

In summary, by striking a balance between privacy and utility, this work advances privacy-preserving machine learning techniques and supports the development of more trustworthy applications that protect user privacy without significantly compromising model performance.

Keywords

Data Reconstruction Attacks, Decision Trees, Differential Privacy, Federated Learning, Generative Networks, Machine Learning, Principal Component Analysis, Privacy, Synthetic Datasets, k-anonymity

Authors

Saloni Kwatra
