UMINF 20.11

Privacy-Guardian: The Vital Need in Machine Learning with Big Data

Social Network Sites (SNS) such as Facebook and Twitter play a significant role in our lives. On the one hand, they connect people who would otherwise never be connected, and many recent breakthroughs in AI, such as facial recognition [Kow+18], were achieved thanks to the vast amount of data available on the Internet via SNS (hereafter, big data). On the other hand, many people avoid SNS to protect their privacy [Sti+13]. Machine Learning (ML), the core of AI, was not designed with privacy in mind. For instance, Support Vector Machines (SVMs), one of the most popular supervised ML algorithms, solve a quadratic optimization problem whose solution is expressed in terms of selected training records (support vectors); the data of the people involved in training is therefore published as part of the SVM model. Similarly, many other ML applications (e.g., ClearView) compromise the privacy of the individuals represented in the data, especially as the big data era accelerates data federation. In the context of machine learning with big data, it is therefore important to (1) protect sensitive information (privacy protection) while (2) preserving the quality of the algorithms' output (data utility). Motivated by this vital need for privacy in machine learning with big data, this thesis studies: (1) how to construct information infrastructures for data federation with privacy guarantees in the big data era; and (2) how to protect privacy while learning ML models with a good trade-off between data utility and privacy. For the first point, we propose frameworks empowered by privacy-aware algorithms. For the second, we propose neural architectures that capture the sensitivity of user data, from which the algorithms themselves decide how much to learn from user data in order to protect privacy while achieving good performance on downstream tasks.
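The SVM leakage noted above can be observed directly in a trained model: the fitted classifier stores verbatim copies of some training records as its support vectors, so publishing the model publishes those records. A minimal sketch, assuming scikit-learn and toy data (not the thesis's own experimental setup):

```python
import numpy as np
from sklearn.svm import SVC

# Toy data: each row stands in for one individual's (sensitive) record.
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0],
              [2.0, 2.0], [3.0, 3.0], [2.0, 3.0], [3.0, 2.0]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

clf = SVC(kernel="linear").fit(X, y)

# The fitted model keeps exact copies of some training records:
# releasing the model therefore releases those individuals' data.
leaked = clf.support_vectors_

# Every support vector is an exact row of the original training set.
assert all(any(np.array_equal(sv, row) for row in X) for sv in leaked)
```

Kernel SVMs behave the same way; only model forms that do not retain raw records (or that are trained with a privacy-preserving objective) avoid this direct leakage.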
The current outcomes of the thesis are: (1) a privacy-guaranteed data-federation infrastructure for analysis of sensitive data; (2) privacy utilities for privacy-concern analysis; and (3) privacy-aware algorithms for learning on personal data. For each outcome, extensive experimental studies were conducted on real-life social network datasets to evaluate aspects of the proposed approaches. The insights and outcomes of this thesis can be used by both academia and industry to provide privacy-guaranteed data analysis and learning on big data containing personal information. They also have the potential to facilitate further research on privacy-aware learning and its evaluation methods.
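A standard building block behind such privacy-aware algorithms is differential privacy (one of the thesis keywords). The Laplace mechanism below illustrates the utility-privacy trade-off governed by the privacy budget epsilon: smaller epsilon means stronger privacy but noisier (less useful) answers. This is a generic textbook sketch, not the thesis's actual mechanism:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with Laplace noise calibrated so the release
    is epsilon-differentially private for a query with the given
    sensitivity (the max change one individual can cause)."""
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via inverse-CDF of a uniform draw.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# A counting query has sensitivity 1: one person changes the count by 1.
count = 120  # hypothetical cohort size
strong_privacy = laplace_mechanism(count, sensitivity=1.0, epsilon=0.1)
weak_privacy = laplace_mechanism(count, sensitivity=1.0, epsilon=10.0)
```

With epsilon = 0.1 the released count is typically off by tens, while epsilon = 10 yields answers close to the true count: the trade-off between privacy protection and data utility in concrete form.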


Privacy-aware machine learning, differential privacy, dp-embeddings


Xuan-Son Vu
