ABSTRACT
When users share information about themselves on online platforms, they knowingly or unknowingly allow the companies behind those platforms to use this data for various purposes, including selling it to advertisers and using it to enrich their predictive models. If a user later changes their mind about allowing such use, removing the influence of the collected data becomes a strenuous task for the company, especially once the data has been used to train machine learning models. Recent legislation by governing bodies such as the European Union grants people the right to decide how data about them may be used, including the right to have their data and its resulting influence completely removed from a company's databases and machine learning models. Doing this at scale requires new machine unlearning solutions. In this paper, we survey some of the early machine unlearning strategies that have been proposed.
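To make the core difficulty concrete: naively honoring a deletion request means retraining the entire model without the deleted point. One proposed family of strategies sidesteps this by partitioning the training set into disjoint shards, training one sub-model per shard, and aggregating their predictions; deleting a point then only requires retraining the single shard that contained it. The sketch below illustrates this idea in the spirit of the sharded approach of Bourtoule et al.; the class names and the toy nearest-centroid "model" are illustrative assumptions, not any paper's exact method.

```python
from collections import Counter

def train_shard(points):
    """Toy per-shard 'model': the centroid of each class in the shard."""
    sums, counts = {}, {}
    for x, label in points:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict_shard(model, x):
    # Nearest-centroid prediction for a single shard model.
    return min(model, key=lambda label: abs(model[label] - x))

class ShardedEnsemble:
    """Shard the data, train one model per shard, vote at inference."""

    def __init__(self, data, n_shards):
        # Disjoint shards via strided slicing; each point lives in one shard.
        self.shards = [data[i::n_shards] for i in range(n_shards)]
        self.models = [train_shard(shard) for shard in self.shards]

    def predict(self, x):
        votes = Counter(predict_shard(m, x) for m in self.models)
        return votes.most_common(1)[0][0]

    def unlearn(self, point):
        # Deleting a point only retrains the shard that contained it,
        # not the whole ensemble -- the key cost saving of this strategy.
        for i, shard in enumerate(self.shards):
            if point in shard:
                shard.remove(point)
                self.models[i] = train_shard(shard)
                return
```

The trade-off is between deletion cost and accuracy: more shards make each retraining cheaper but give each sub-model less data to learn from.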