The last handful of years has seen immense evolution and adoption of Machine Learning (ML) across multifarious online realms, from shaping our shopping lists to determining whom we connect with or follow on a social network. The COVID-19 pandemic has further accelerated the growth and consumption of these digital services across the globe. Regardless of our fascination with or loathing of it, ML-powered services and products heavily influence our decisions and pervade our lives. The scale of impact of these intelligent systems means that the confidentiality of both the data and the underlying algorithms is highly critical. A slight lapse in the design of these systems can lead to disastrous outcomes: cyber-attacks, reverse engineering, and leakage of sensitive data such as personal conversations, financial transactions, and medical history. With the imperative of retaining the confidentiality of data, maintaining the privacy of proprietary designs, and staying compliant with the latest regulations and policies, Privacy-Preserving Machine Learning ensures trust among all stakeholders. This chapter analyzes the contemporary interpretation of Privacy-Preserving Machine Learning and the significance it holds in myriad settings. We cover prevalent types of privacy attacks on ML systems, including membership, input, parameter, and property inference. Next, the exposition examines privacy-enhancing techniques such as differential privacy, federated learning, and synthetic data, along with modern advancements like dataset condensation. Furthermore, we discuss tools to quantify and effectively measure privacy risks in statistical and Machine Learning algorithms. Finally, we review recent policy developments regulating data protection and privacy worldwide and how they are shaping the industry.
In closing, we leave readers with thoughts on future directions for developing better and smarter techniques that maximize data utility and privacy in tandem. The goal is to ignite public dialogue on the privacy impacts, ethical consequences, fairness concerns, and real-world harms of non-privacy-compliant ML systems.