Many efforts have been made in the area of PPDP to ensure the privacy of individuals. One such model is k-anonymity, which ensures that each equivalence class (EC) contains a minimum of k records. However, it pays little attention to the sensitivity of attributes, so privacy may still be compromised in many cases. K-anonymity has been the foundation of much research to date, and its key concept has been applied in several other privacy models. To avoid undesirable and unlawful privacy effects on sequence data when designing a technological framework, the authors of [9] introduced privacy-by-design, without hampering knowledge discovery through data mining. They applied the k-anonymity framework to sequence data, and their notion of k-anonymity for sequence datasets provides protection against attacks. Fung et al. [
10] extended the k-anonymization algorithm to cluster analysis. They achieve privacy by partitioning the original data into clusters, with class labels encoding the cluster information; k-anonymization is then achieved over the clusters. Lefevre et al. [
11] proposed an algorithm called Incognito, which is a collection of bottom-up generalization algorithms. With this method, the authors generate all possible k-anonymous full-domain generalizations. Generalization, the technique most commonly used to provide privacy, is the process of substituting child values with their more general parent values. Wang et al. [
12] proposed bottom-up generalization to address the efficiency issue in k-anonymization. Besides privacy, data utility is also important [13]. Jordi et al. showed that the data utility of published datasets can be improved by microaggregation-based k-anonymity. Machanavajjhala et al. [
14] proposed l-diversity, which suggests that an EC should contain l different "well-represented" values of the sensitive attribute (SA). However, it does not consider any differences in sensitivity among the sensitive values. E-learning is a new way of taking courses from home over the Internet. Mohd et al. [
15] proposed a new model to ensure trust in online e-learning activities. The authors used identity management (IM) to protect the privacy of learners. IM ensures the protection of personal information with some degree of participant anonymity or pseudonymity. Further, because participants can hold multiple identities or adopt new pseudonymous personas, a reliable and trustworthy mechanism for reputation transfer (RT) from one person to another is required. Such a reputation-transfer model must preserve privacy while preventing linkability of learners' identities and personas. The authors presented a privacy-preserving reputation management (RM) system that allows secure transfer of reputation. Emiliano et al. [
16] proposed Hummingbird, which protects tweet contents, followers' interests, and hashtags from attackers through a centralized server. The concept of privacy preservation has spread to other areas such as Wireless Sensor Networks (WSNs), where privacy is a main concern. The wireless medium itself is untrusted, and anonymous nodes can also connect to it. In [
17], the authors proposed a novel secure method, a three-factor user authentication scheme for distributed WSNs. To avoid collisions in communication in privacy-preserving data mining, Larr et al. [7] proposed anonymous ID assignment, in which ID numbers are iteratively assigned to the nodes. Drushina et al. [
18] proposed a network-coding method for privacy in networks that removes the statistical dependence between incoming and outgoing messages, so that tracing is not possible. Bayardo et al. [
19] proposed an algorithm that prunes non-optimal anonymous tables using a set-enumeration tree, where each node represents a k-anonymous solution. Valeria et al. [
20] proposed an algorithm for a machine-learning operation called ridge regression. The algorithm takes a large number of data points as input and finds the best-fit linear curve through these points. Xiaokui et al. [
21] proposed a privacy-preserving data-leak detection (DLD) solution in which a special set of sensitive-data digests is used in detection. The advantage of their method is that it enables the data owner to safely delegate the detection operation to a semi-honest provider without revealing the sensitive data to the provider. Huang et al. [
22] proposed a novel privacy model called (v, l)-anonymity, which mainly concentrates on vulnerabilities in sensitivity. It complements existing privacy models and provides a different form of privacy. The authors also propose a new method of assigning sensitivity levels to sensitive values: they define a sensitivity classification and present a measure, called the levels of sensitive values (LSV) measure, to calculate the sensitivity levels. This model can also work efficiently with multiple sensitive attributes. Qinghai et al. [
23] proposed a privacy-preserving data publishing method, named MNSACM, that publishes microdata with multiple numerical sensitive attributes using the ideas of clustering and Multi-Sensitive Bucketization (MSB). Sweeney [
24] experimented with k-anonymity to identify various attacks by considering multi-level databases. Bredereck et al. [25] adopted a data-driven approach to the design of algorithms for k-anonymity and related problems. Tsai et al. [26] reviewed studies on data analytics, from traditional data analysis to recent big data analysis. Zhang et al. [27] surveyed big data processing systems, including batch, stream, graph, and machine-learning processing, and also discussed possible future research directions.
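The two foundational conditions discussed above, k-anonymity (every EC holds at least k records) and distinct l-diversity (every EC holds at least l distinct SA values), can be sketched as follows. This is a minimal illustration, not code from any of the cited works; the toy table, its attribute layout, and the use of the distinct-values variant of l-diversity are assumptions made here.

```python
from collections import defaultdict

def equivalence_classes(records, qi_indices):
    """Group records into equivalence classes (ECs) by their quasi-identifier values."""
    ecs = defaultdict(list)
    for rec in records:
        key = tuple(rec[i] for i in qi_indices)
        ecs[key].append(rec)
    return ecs

def is_k_anonymous(records, qi_indices, k):
    """k-anonymity: every EC must contain at least k records."""
    return all(len(ec) >= k
               for ec in equivalence_classes(records, qi_indices).values())

def is_l_diverse(records, qi_indices, sa_index, l):
    """Distinct l-diversity: every EC must contain at least l distinct SA values."""
    return all(len({rec[sa_index] for rec in ec}) >= l
               for ec in equivalence_classes(records, qi_indices).values())

# Hypothetical microdata: (age range, ZIP prefix, disease);
# the first two columns are quasi-identifiers, the third is the SA.
table = [
    ("20-30", "476**", "Flu"),
    ("20-30", "476**", "Cancer"),
    ("20-30", "476**", "Flu"),
    ("30-40", "479**", "HIV"),
    ("30-40", "479**", "HIV"),
    ("30-40", "479**", "Cancer"),
]

print(is_k_anonymous(table, qi_indices=(0, 1), k=3))            # both ECs have 3 records
print(is_l_diverse(table, qi_indices=(0, 1), sa_index=2, l=2))  # each EC has 2 distinct SA values
```

Note how the table can be 3-anonymous yet fail stricter diversity requirements: raising l to 3 makes the check fail, which is precisely the sensitivity gap in plain k-anonymity that l-diversity and the later models above try to close.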