Research gap
-
In contrast to [37], our survey is not limited to proof of retrievability (PoR): it covers all forms of verification, including provable data possession (PDP), proof of retrievability (PoR), and proof of ownership (PoW). We also cover the different key management techniques used in cloud storage to improve its security.
Contribution
-
Identification of possible attacks on storage-level services that may arise on physical cloud storage, together with explored mitigation solutions
-
A summary of the characteristics of data integrity strategies (auditing soundness, phases, classification, etc.) to help understand and analyse security loopholes
-
A comparative literature review covering characteristics, motivation, limitations, accuracy, methods, and probable attacks
-
A discussion of design-goal and security-level issues in data integrity strategies, covering dynamic performance efficiency, key management techniques for achieving security features, analysis of server attacks, etc.
-
Identification of security issues in data integrity strategies and their mitigation solutions
-
A discussion of future directions for new data integrity schemes in cloud computing.
Issues of physical cloud storage
-
Incapability of the CSP: Managing large cloud storage may cause data loss when the CSP has insufficient computational capacity, sometimes cannot meet users' requirements, or lacks a user-friendly data serialization standard with easily readable and editable syntax as the life cycle of the cloud environment changes [66].
-
Loss of control over cloud data: In a distributed cloud environment, loss of control may give unauthorized users the chance to manipulate the valuable data of valid users [67].
-
Lack of scalability of physical cloud storage: Scalability means that all hardware resources are merged to provide more resources to the distributed cloud system. This same property can be exploited for illegitimate access to, and modification of, cloud storage and physical data centers [68].
-
Unfair resource allocation strategy: In a public cloud, resources are generally allocated from a shared pool, which may not suit cloud users who do not want to leave any footprint of their work distribution/data transmission with a public cloud-hosted software component; unfair allocation can later degrade the fetching of the original data [69].
-
Lack of performance monitoring of cloud storage: Monitoring data is generally stored in a shared pool in a public cloud, which may not be acceptable to cloud users who do not want to leave any footprint of their work distribution/data transmission with a public cloud-hosted software component [70].
-
Malicious cloud storage provider: Lack of transparency and of access control policies are the basic indicators of a cloud service provider being a malicious storage provider. When these two are missing, it is quite easy to disclose cloud users' confidential data to others for business profit [72].
-
Data pooling: Resource pooling is an important aspect of cloud computing, but because of it, data recovery policies and data confidentiality schemes can be broken [73].
-
Data lock-in: Cloud storage providers do not share a standard format for storing data. Cloud users therefore face a lock-in problem when switching data from one provider to another as resource requirements change dynamically [39].
Potential Attacks | Storage Issues | Threats | Mitigation Solution with references | Applied Methods |
---|---|---|---|---|
DoS | No prediction format for the time/storage required to store/process data in cloud storage, data threat | Vulnerable service takes the place of the original service | Kerberos protocol, Attribute-Based Proxy Signature, Improved Dynamic Immune Algorithm (IDIA) | |
Phishing | Lack of storage monitoring, unaccredited access to physical cloud storage | Data confidentiality disclosure | Proposed phishing detection technique [51] | Hybrid classifier approach with hyper-parameter classifier tuning |
Brute force attack/online dictionary attack | Unaccredited access to physical cloud storage | Data confidentiality disclosure, violation of data authenticity | Proposed data obfuscation scheme [52] | Least Significant Bit (LSB) substitution method |
MITC attack | Improper security against internal and external malicious attacks | Abnormality in service availability | Proposed strong authentication technique [53] | Chaotic maps and fuzzy extractors |
Port scanning | Improper security against internal and external malicious attacks | Abnormality in service availability | Firewall policies [54] | Distributed firewalls/controllers |
Identity theft | Unaccredited access to physical cloud storage, untrusted cloud storage, data threat | SLA violation, security policy violation | Key-based semantic secure Bloom filter (KSSBF), compact password-authenticated key exchange protocol (CompactPAKE), OTP, Evolutionary System Model based Privacy Preserving (EMPPC) | |
Risk spoofing | Incapability of CSP's monitoring, untrusted cloud storage, data lock-in | Lack of internal security, logging violation | Monitoring of secure data policies [59] | Symmetric Searchable Encryption (SSE) or Attribute-Based Encryption (ABE) |
Data loss/leakage | Incapability of CSP, lack of continuous storage monitoring, lack of scalability | Malicious insider, malicious cloud storage provider | Data encryption method, public data auditing technique | |
Shared technology issue | Unfair resource allocation strategy, no standard data storing format, shared technology issue | VMs become vulnerable due to loss of control over the hypervisor | Virtual machine monitoring scheme [65] | Xen, KVM |
Key management techniques with regards to storage level in cloud
-
Hierarchical Key Technique: Some research articles [77] provide a secret-sharing and key-hierarchy derivation technique combined with the user's password to enhance key security, protecting the key and preventing an attacker from using it to recover the data.
-
Private Key Update Technique: This identity-based encryption technique [78] updates the private keys of the non-revoked group users instead of the authenticators of the revoked user when the authenticators are not updated, and it does away with the complex certificate administration found in standard PKI systems.
-
Key Separation Technique: This cryptographic method aids in maintaining the privacy of shared sensitive data while offering consumers effective and efficient storage services [79].
-
Attribute-based Encryption Key Technique: Instead of disclosing decryption keys, this method achieves the conventional notion of semantic security for data confidentiality, whereas existing methods only establish a weaker security notion [80, 81]. It is used to share data with users confidentially.
-
Multiple Key Technique: This k-NN query-based method improves security by having the data owner (DO) and each query user maintain separate keys that are never shared [82]. Meanwhile, the DO uses his own key to encrypt and decrypt the outsourced data.
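The hierarchical key idea above can be sketched in a few lines. This is a generic illustration, not the exact construction of [77]: a master key is derived from the user's password with PBKDF2, and per-purpose child keys are derived from it with HMAC, so compromising one child key reveals neither the master key nor its siblings. Function names, labels, and the iteration count are illustrative assumptions.

```python
import hashlib
import hmac
import os

def derive_master_key(password: str, salt: bytes) -> bytes:
    # Hierarchy root: stretch the user's password into a master key.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def derive_subkey(master_key: bytes, label: str) -> bytes:
    # Child key: HMAC is one-way, so a leaked child key does not expose
    # the master key or any sibling key derived under a different label.
    return hmac.new(master_key, label.encode(), hashlib.sha256).digest()

salt = os.urandom(16)
mk = derive_master_key("user-password", salt)
file_key = derive_subkey(mk, "file:report.docx")   # hypothetical label
audit_key = derive_subkey(mk, "purpose:auditing")  # hypothetical label
assert file_key != audit_key and len(file_key) == 32
```

The same derivation can be repeated at each level to build an arbitrarily deep key hierarchy.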
Potential attacks in storage level service
-
DoS/DDoS: The ultimate purpose of this attack is to make the original services unavailable to users and to overload the system by flooding a single cloud server with spam requests. Under the high workload, the performance of the cloud server slumps and users lose access to their cloud services.
-
Phishing: Attackers steal important information in the form of a user's credentials, such as name and password, after redirecting the user to a fraudulent webpage disguised as the original page.
-
Brute force attack/online dictionary attack: This is a type of cryptographic attack. Using an exhaustive key search, malicious attackers can violate the privacy policy of a data integrity scheme in cloud storage.
-
MITC: A man-in-the-cloud attack lets attackers execute arbitrary code on a victim machine by installing their own synchronization token in place of the victim's original one; once the victim machine synchronizes with the attacker's machine via this token, the attackers gain control over the target machine.
-
Port scanning: Attackers perform port scans to identify open ports or exposed server locations, analyze the security level of the storage, and break into the target system.
-
Identity theft: Using password recovery methods, attackers can obtain legitimate users' account information, causing the loss of the users' account credentials.
-
Risk spoofing: Resource workload balancing is a useful managerial aspect of cloud storage, but through this aspect of cloud computing attackers can steal cloud users' credential data, spread malware code on host machines, and create internal security issues.
-
Data loss/leakage: Data can be lost or manipulated during transmission by external adversaries, through the incapability of the cloud service provider, by unauthorized users in the same cloud environment, or by internal malicious attackers.
-
Shared technology issue: Cloud service providers use hypervisors to run multiple guest OSes concurrently on a host computer. By compromising a weak hypervisor and taking control of all virtual machines, attackers can create vulnerabilities such as data loss, insider and outsider attacks, loss of control over machines, and service disruption.
Phases of data integrity technique
-
Data processing phase: In this phase, data files are processed in many ways: the file is divided into blocks [60], encryption is applied to the blocks [90], a message digest is generated [87], random masking numbers are generated [88], keys are generated and signatures applied to the encrypted blocks [93], etc.; finally, the encrypted or obfuscated data is outsourced to cloud storage.
-
Acknowledgement phase: This phase is optional but valuable, because a CSP might conceal a data-loss event, or discard data accidentally, in order to maintain its image [88]. Most research works skip this step to minimize the computational overhead of acknowledgement verification.
-
Integrity verification phase: In this phase, the DO/TPA sends a challenge message to the CSP, and the CSP responds with metadata or proof information to the TPA/DO for data integrity verification. If verification is done by the TPA, the audit result is sent to the DO.
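The phases above can be sketched end to end. This is a minimal illustration of the generic challenge-response pattern, not any specific cited scheme: in the data processing phase each block receives an index-bound HMAC tag, and in the verification phase the verifier challenges a few random indices (the optional acknowledgement phase is omitted, as most works skip it). The block size and all names are illustrative assumptions.

```python
import hashlib
import hmac
import secrets

BLOCK = 64  # illustrative block size in bytes; real schemes use far larger blocks

def process(data: bytes, key: bytes):
    # Data processing phase: split the file into blocks and tag each block
    # with a keyed digest bound to its index.
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    tags = [hmac.new(key, idx.to_bytes(4, "big") + blk, hashlib.sha256).digest()
            for idx, blk in enumerate(blocks)]
    return blocks, tags  # blocks go to the CSP; tags stay with the DO/TPA

def challenge(n_blocks: int, sample_size: int):
    # Verification phase, step 1 (DO/TPA): randomly choose block indices.
    return sorted(secrets.SystemRandom().sample(range(n_blocks), sample_size))

def prove(blocks, chal):
    # Verification phase, step 2 (CSP): return the challenged blocks as proof.
    return [blocks[i] for i in chal]

def verify(proof, chal, tags, key) -> bool:
    # Verification phase, step 3 (DO/TPA): recompute and compare each tag.
    return all(hmac.compare_digest(
                   hmac.new(key, i.to_bytes(4, "big") + blk, hashlib.sha256).digest(),
                   tags[i])
               for i, blk in zip(chal, proof))

key = secrets.token_bytes(32)
blocks, tags = process(b"outsourced file contents" * 40, key)
chal = challenge(len(blocks), 3)
assert verify(prove(blocks, chal), chal, tags, key)
```

Note that this sketch makes the CSP return whole blocks; the blockless and homomorphic-tag schemes surveyed below avoid exactly that transfer cost.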
Ref. | Technical Methods | Data Processing: Initial Phase | Data Processing: Key & Signature Generation | Data Processing: Encryption | Acknowledgement Phase | Auditing: Using TPA | Auditing: Using Data Owner/Client | Auditing: Challenge Phase | Auditing: Proof Verification Phase
---|---|---|---|---|---|---|---|---|---
[85] | Confidentiality Preserving Auditing | Yes | Yes | Yes | No | Yes | No | Yes | Yes |
[60] | Ensuring of confidentiality and integrity data | Yes | No | Yes | No | Yes | No | No | No |
[86] | Privacy preserving integrity checking model | Yes | Yes | Yes | No | Yes | No | Yes | Yes |
[61] | Verifying Data Integrity | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
[87] | Data auditing mitigating with data privacy and data integrity | Yes | No | Yes | No | Yes | No | Yes | Yes |
[88] | Public Verification of Data Integrity | Yes | Yes | Yes | No | Yes | No | Yes | Yes |
[89] | Ternary Hash Tree Based Integrity Verification | Yes | Yes | Yes | No | Yes | No | Yes | Yes |
[84] | Third-party auditing for cloud service providers | Yes | Yes | No | No | Yes | No | Yes | Yes |
[90] | Identity-Based Integrity Auditing and Data Sharing | Yes | Yes | Yes | No | Yes | No | Yes | Yes |
[91] | A Secure Data Dynamics and Public Auditing | Yes | No | Yes | No | Yes | No | Yes | Yes |
[83] | Oruta: privacy-preserving public auditing | Yes | Yes | No | No | Yes | No | Yes | Yes |
[92] | Dynamic Auditing Protocol | Yes | Yes | No | No | Yes | No | Yes | Yes |
[93] | Dynamic Data Integrity Auditing Method | Yes | Yes | No | No | Yes | No | Yes | Yes |
[94] | Algebraic Signatures-Based Data Integrity Auditing | Yes | Yes | No | Yes | Yes | No | Yes | Yes |
[95] | Efficient public verification on the integrity | Yes | Yes | Yes | No | Yes | No | Yes | Yes |
[96] | Attribute-Based Cloud Data Integrity Auditing | Yes | Yes | No | No | Yes | No | Yes | Yes |
[78] | Efficient User Revocation in Identity-Based Cloud Storage Auditing | Yes | Yes | No | No | Yes | No | Yes | Yes |
[97] | Secure and Efficient Data Integrity Verification Scheme | Yes | Yes | No | No | Yes | No | Yes | Yes |
Classification of data integrity strategy
-
File level verification: This is a deterministic verification approach. Data integrity verification is generally done by either the TPA or the client. The client submits an encoded file to the storage server, and for data integrity verification a verifier checks the encoded file using a challenge key and a secret key chosen by the client [103].
-
Block level verification: This is also a deterministic verification approach. First, a file is divided into blocks, the blocks are encrypted, a message digest is generated, and the encrypted blocks are sent to the CSP. Later, the CSP sends a response message to the TPA for verification, and the TPA verifies all blocks by comparing a newly generated message digest with the old message digest generated by the client [87].
-
Random block level verification: This is a probabilistic verification approach. A file is divided into blocks; then one signature (or a combination of two) is generated for all blocks from hash [86], BLS [88], HLA [124], random masking [88], or ZSS [97], and both blocks and signatures are submitted to cloud storage. Later, the TPA generates a challenge message for randomly selected blocks to be checked for data integrity and sends it to the CSP. The CSP then sends a proof message to the TPA, which verifies the randomly selected blocks by generating new signatures and comparing the old and new signatures of those blocks [61, 86].
-
Metadata verification: In this deterministic approach, the cloud user first generates a secret key and uses it to prepare metadata of the entire file via HMAC-MD5 authentication. The encrypted file is then sent to the CSP and the metadata to the TPA, which later uses the metadata for integrity verification [85].
-
Static data: Static data stored in cloud storage does not need to be modified. In [105], a basic RDPC scheme is proposed for verifying the integrity of static data. In remote cloud data storage most attention has gone to static files, but in practical scenarios letting the TPA possess the original data file creates security problems. In [106], the RSASS scheme is introduced for static data verification by applying a secure hash signature (SHA-1) to file blocks.
-
Dynamic data: Data owners face no restriction on applying update, insert, and delete operations, an unlimited number of times, to outsourced data stored in remote cloud storage. In [111], a PDP scheme based on a ranked skip list is introduced to support fully dynamic operations on data, overcoming the limit on the number of insert and query operations described in [118]. In [117], a dynamic data graph is used to handle conflicts arising from the dynamic nature of large graph-data applications.
-
Proof of ownership verification: The proof of ownership (PoW) scheme is introduced into data integrity schemes to prove to the server that the original data owner actually owns the data, and to prevent valid but malicious users in the same cloud environment from gaining unauthorized access to the owner's outsourced data. The PoW scheme is combined with data deduplication to reduce the security issues raised by a malicious user's illegal attempts to access unauthorized data [27]. Three types of PoW scheme (s-PoW, s-PoW1, s-PoW2) are defined in [29]; they achieve satisfactory computation and I/O efficiency on the user side but significantly increase the I/O burden on the remote cloud, a problem overcome in [28] by establishing a balance between server-side and user-side efficiency.
-
Provable data possession: The provable data possession (PDP) scheme statistically guarantees the correctness of integrity verification of cloud data on untrusted cloud servers without downloading the data, and restricts data-leakage attacks on cloud storage. In [104], aspects of the PDP technique are described from a variety of system design perspectives, such as computation efficiency, robust verification, and lightweight, constant communication cost. In [112], certificateless PDP is proposed for public cloud storage to address the key escrow and key management problems of general public key cryptography and to solve the security problem of [113, 120], where verifiers were able to extract users' original data during integrity verification.
-
Proof of retrievability verification: Proof of retrievability (PoR) ensures data intactness in remote cloud storage. PoR and PDP perform similar functions, with the difference that a PoR scheme can recover faulty outsourced data, whereas PDP only supports data integrity and availability for clients [108]. In [109], the IPOR scheme is introduced, which ensures 100% retrieval probability for corrupted blocks of the original data file. The DIPOR scheme also supports retrieval of partial health records along with data update operations [115].
-
Auditing verification: Verification of the cloud data outsourced by the data owner is known as the audit verification process. Data integrity schemes support two types of verification: private auditing (verification is done between the CSP and the data owner, i.e. the cloud user) and public auditing (the cloud user hires a TPA to reduce computational and communication overhead on the owner's side, and verification is done between the CSP and the TPA) [122]. Privacy-preserving public auditing [83, 122], certificateless public auditing [125], optimized public auditing [123], bitcoin-based public auditing [88], the S-audit scheme [108], shared data auditing [83], dynamic data public auditing [126], non-privacy-preserving public auditing [127], and digital-signature-based public auditing (BLS, hash table, RSA, etc.) [88, 119, 128] are some types of public auditing schemes. A private auditing scheme was first proposed in [110], called the SW method, and further reviewed in later research works [87, 116].
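The appeal of the probabilistic (random block level) approach rests on a simple sampling argument: if d of n blocks are corrupted and the verifier challenges c blocks chosen uniformly without replacement, corruption goes undetected only when every challenged block happens to be intact. A small sketch of this standard calculation, not taken from any particular cited scheme:

```python
from math import comb

def detection_probability(n: int, d: int, c: int) -> float:
    # Probability that a challenge of c random blocks (sampled without
    # replacement from n total blocks) hits at least one of d corrupted ones.
    return 1 - comb(n - d, c) / comb(n, c)

# Example: with 1% of 10,000 blocks corrupted, challenging ~300 blocks
# already detects the corruption with probability above 0.95.
p = detection_probability(10_000, 100, 300)
```

This is why random block level verification keeps the challenge size (and hence communication cost) nearly constant regardless of file size, at the cost of deterministic certainty.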
Ref. | File Level (Deterministic) | Entire Block Level (Deterministic) | Metadata (Deterministic) | Random Block Level (Probabilistic) | Static Data | Dynamic Data | PDP | PoW | PoR | Public Auditing | Private Auditing
---|---|---|---|---|---|---|---|---|---|---|---
[116] | | | | | | | | | | |
[120] | | | | | | | | | | |
[121] | | | | | | | | | | |
[117] | | | | | | | | | | |
[123] | | | | | | | | | | |
Characteristics of data integrity technique
-
Auditing soundness: The only way to pass the TPA's verification test is for the CSP to store the data owner's entire outsourced data in cloud storage [90].
-
Error localization at block level: It helps to find out error blocks of a file in cloud storage during verification time [89].
-
Data Correctness: It helps to rectify error data block with available replica block’s information in cloud storage [89].
-
Storage Correctness: The CSP may prepare a report claiming that all data is entirely stored in cloud storage even if the data is partially tampered with or lost. Therefore, the system needs to guarantee to data owners that their outsourced data is the same as what was previously stored [129].
-
Robustness: In a probabilistic data integrity strategy, errors in smaller-sized data should be identified and rectified [39].
-
Unforgeability: Only authenticated users can generate a valid signature/metadata on shared data [129].
-
Data dynamics support: It allows data owners to insert, edit, and delete data in cloud storage while maintaining the same level of integrity verification support as before [89].
-
Dependability: Data should remain available while all the file blocks are being managed [89].
-
Replica auditability: It lets the TPA examine, on the data owner's demand, the replicas of the data file stored in cloud storage [89].
-
Auditing correctness: It ensures that the CSP's response message can pass the TPA's verification trial only when the CSP properly stores the outsourced data in cloud storage [97].
-
Efficient user revocation: Revoked users can no longer upload data to cloud storage and are no longer authorized users [78].
-
Batch auditing: In public auditing schemes, the batch auditing method lets the TPA instantly perform multiple auditing tasks from different cloud users [95].
-
Data confidentiality: The TPA cannot acquire the actual data during data integrity verification [90].
-
Boundless verification: Data owners never impose on the TPA any fixed limit on the number of data integrity verification interactions [88].
-
Efficiency: The size of the test metadata and the test time on multi-owner outsourced data in cloud computing are both independent of the number of data owners [95].
-
Private key correctness: A private key can pass the cloud user's verification test only if the Private Key Generator (PKG) sends the right private key to the cloud user [90].
-
Blockless verification: The TPA need not download entire blocks from cloud storage for verification [95].
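Error localization at the block level is commonly realized with per-block hashes organized in a Merkle tree: leaf hashes pinpoint the faulty blocks, while the root commits to the whole file. The following is a generic sketch of that idea, not the construction of [89]; function names are illustrative.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(blocks):
    # Build parent levels from the leaf hashes until one root remains.
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def locate_errors(stored_blocks, reference_leaf_hashes):
    # Error localization: a per-block hash mismatch pinpoints the bad blocks,
    # so only those blocks need rectification from replicas.
    return [i for i, b in enumerate(stored_blocks)
            if h(b) != reference_leaf_hashes[i]]
```

A verifier holding only the root can additionally check any single block via its logarithmic-size authentication path, which is what enables blockless-style checks in tree-based schemes.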
Challenges of data integrity technique in cloud environment
-
How will outsourced data be kept safe on a remote server, and how will it be protected from any loss, damage, or alteration in cloud storage?
-
How will cloud data be secured if a malicious user is present inside the cloud?
-
In which location of the shared storage will the outsourced data be stored?
-
Will access to the cloud data be limited to authorized users only, with complete audit verification availability?
-
Cloud services accessed globally are hampered by many malicious attacks even when the integrity of databases, networks, etc. is properly maintained.
-
Data availability and integrity problems occur if the CSP makes unauthorized changes to the data.
-
Segregation of data among cloud users in shared storage is another data integrity problem. Therefore, SLA-based patch management policies, standard validation techniques against unauthorized use, and adequate security parameters need to be included in data integrity techniques [131].
-
The TPA can spoil the image of the CSP by generating wrong integrity verification messages.
-
The TPA can exploit confidential information, with the help of malicious attackers, through repeated verification interaction messages with cloud storage.
Types of Security Issues | Symptoms | Affects | Solution with references |
---|---|---|---|
Risk to integrity of data | Unauthorized access, data segregation problems, lack of database maintenance | Disruption of cloud storage service, lack of data integrity |
Dishonest TPA | Tampering with the original file, generation of wrong audit messages, spoiling the CSP's reputation | Lack of data confidentiality, lack of data integrity |
Dishonest CSP | Data leakage, data modification, loss of data | Loss of CSP's reputation, data unavailability, lack of data integrity |
Forgery attack | Forged audit message, forged proof message | Violation of data integrity policy, loss of CSP's reputation |
Malicious insider attack | Data leakage, data modification, data loss | Violation of data integrity policy |
Desired design challenges of data integrity strategy
Ref. | Data Owner | Cloud Service Provider | Third Party Auditor
---|---|---|---
[88] | Not Considerable | \(\log_2 c + 160\) | \((s + 1)p\)
[89] | Not Considerable | \(2j\lvert k\rvert+\lvert r\rvert\) | \(2\lvert hash\rvert+2j\lvert k\rvert+360\)
[90] | Not Considerable | \(\lvert p\rvert+\lvert q\rvert\) | \(c(\lvert n\rvert+\lvert p\rvert)\)
[133] | Not Considerable | \(\log_2 c + (c + 1)\log_2 p\) | \((s + 1)\log_2 p\)
[78] | Not Considerable | \(n\lvert p\rvert+n\lvert q\rvert\) | \((c + 1)\lvert q\rvert+\lvert p\rvert+c\lvert id\rvert\)
[87] | Not Considerable | \(j\lvert k\rvert\) | \(j\lvert Hash\rvert\)
[105] | Not Considerable | \(j\lvert k\rvert\) | Not Applicable (Private Auditing)
[134] | Not Considerable | \(c\lvert s\rvert+\lvert p\rvert\) | \(\lvert p\rvert+2\lvert q\rvert\)
[97] | Not Considerable | \(k(\lvert p\rvert+\lvert q\rvert)\) | \(2p(k+q)\)
[135] | Not Considerable | \(c(\lvert p\rvert + \lvert n\rvert)\) | \((s + 1)\lvert p\rvert\)
[94] | Not Considerable | \(\lvert hash\rvert+j\lvert id\rvert+j\lvert k\rvert\) | \(\lvert k\rvert(j+1)+\lvert c\rvert\)
Ref. | Data Owner | Cloud Service Provider | Third Party Auditor
---|---|---|---
[88] | \(jHash+(j \cdot k)Exp+jAdd+jExp\) | \(Hash+kMulExp+kAdd+Exp+(k+1)Mul\) | \(Hash+kHash+kMulExp+3Exp+Mul+2Pair+Mul\)
[89] | \(2jHash+4jExp+2jAdd+2jMul+2jMul\) | \(Hash+Mul+Exp+Mul+Add+Exp\) | \(kHash+2Exp+(k+1)Mul+Mul+Exp\)
[90] | \(jHash+jExp+jMul+Add\) | \(Exp+(k-1)Mul+(k-1)Add+kExp\) | \(4Pair+2(k-1)Add+2Mul+2Exp+(k+1)Mul+kHash+(k+1)Exp\)
[133] | \(jHash+2jExp+jAdd+jMul+jMul\) | \((j+2)Exp+(j+1)Mul+(k+1)Exp+kMul\) | \(4Pair+(k+2)Exp+(j+2)Exp\)
[78] | \(Add\) | \(n(2Exp+Mul+Hash)\) | \(kHash+2Hash+2(k+1)Mul+(2k+3)Exp+2Pair+(k-1)Add+kMul\)
[87] | \(10j(Add+Shift+Sub+MixC)+jHash\) | \(j\lvert k\rvert\) | \(jHash+Com\)
[105] | \(j(Hash+Mul+Enc)+Exp\) | \((j-1)Mul+2Exp\) | Not Applicable
[91] | \(4Encrypt+4Add\) | \(2Decrypt\) | \(2Decrypt+Encrypt+Add+Comp\)
[97] | \(jHash+jMul+jAdd+jInv\) | \(Hash+2Add+Mul+Inv+4Mul\) | \(Mul+2Pair+Add\)
[135] | \(jHash+2jExp+jAdd+jMul+jMul\) | \(jk(Add+Mul)+jExp_{G1}+Mul\) | \((j+k+1)Mul+2Pair+(j+k)Exp\)
[94] | \(4jMul+jHash+Exp\) | \(4Mul+Exp\) | \(kAdd\)
Comparative analysis of data integrity strategies
Ref. | Objectives | Limitations |
---|---|---|
[88] | Public auditing, resists external adversaries, protects data from a malicious auditor | Because data-storing acknowledgement verification is missing, the reputation of the cloud server may be damaged |
[89] | Public data integrity, error localization, replica-level auditing, dynamic update | Because data-storing acknowledgement verification is missing, the reputation of the CS may be damaged |
[90] | Data integrity auditing, sensitive data hiding | Because an audit message verification scheme is missing, the TPA can deceive the user about the audit message |
[85] | Data auditing, privacy preserving | The audit report needs to be verified; otherwise the TPA may be malicious |
[61] | Data integrity, resists replay and MITC attacks | Data privacy issue: after repeatedly passing the challenge phase, the CSP becomes capable of obtaining the original data block |
[87] | Public auditing, data integrity | An audit message verification scheme needs to be present; otherwise the TPA may be malicious |
[86] | Data integrity for static data, resists external adversaries | The authors assume the TPA is trusted, which is not practical |
[91] | Public auditing, data integrity, dynamic data operations | Acknowledgement messages about insertion, modification, and deletion of data need to be verified; otherwise the CS may be malicious |
[136] | Public auditing, dynamic big-graph data operations | While verifying dynamic graph operations, data privacy is not properly maintained |
[93] | Dynamic update, data integrity auditing, resists forgery and replay attacks | An audit message verification scheme needs to be present; otherwise the TPA may be malicious |
[78] | Public auditing, data integrity | Audit message and acknowledgement message verification schemes need to be present; otherwise the TPA and the cloud may be malicious |
[97] | Public auditing, reduced computational overhead, resists adaptive chosen-message attacks | Validation results need to be verified; otherwise the TPA may be malicious |
[137] | Data integrity, privacy preserving | An audit message verification scheme needs to be present; otherwise the TPA may be malicious |
[122] | Data integrity, resists forgery attacks | No effective and secure data integrity scheme is present to support data deduplication across fog and cloud nodes |
[126] | Dynamic auditing, dynamic data operations, resists replay and replace attacks | The BLS signature is not suitable for a big data environment |
[125] | Certificateless public verification | Searching over encrypted outsourced data in the blockchain system takes much time |
[124] | Zero-knowledge public auditing, privacy preserving | Not applicable to large-scale big data, and the TPA cannot audit multiple users' data simultaneously |
Future trends in data integrity approaches
Ref. | Issues of Cloud Storage | Merit of Blockchain Technology | Achievements |
---|---|---|---|
[100] | In a multi-cloud storage environment, most comparable schemes depend on trusted organizations such as the CSP and a centralised TPA, and it can be challenging to pinpoint malevolent service providers in the wake of service disputes | Blockchain technology is used to detect service disputes and accurately identify dishonest service providers by recording the interactions between users, service providers, and organizers during the data auditing process | Batch verification at low cost without a TPA |
[143] | Several TPAs generate challenges for multi-cloud storage and send them to the CS to verify data custody; TPAs may dishonestly exploit auditing protocols or collude with the CS | Without blockchain, the CS might be able to deduce the challenge messages, and user data might be disclosed to the TPA while the audit is conducted | Ensures decentralized, private audits, allowing public result verification for users |
[144] | App development requires data sharing and storage; functional encryption (FE) overcomes drawbacks of public-key encryption but requires expensive bilinear pairings | A blockchain-based cryptocurrency allows users to pay third parties when their outsourced decryption is successfully completed | Payment in an FE scheme with outsourced decryption is achieved |
[133] | Integrity evaluation and decision making for measuring the cloud data of virtual machines (VMs) are two critical concerns in secure IaaS cloud storage | A two-layer blockchain network can be used to create a revisable user-defined policy-based encryption mechanism and to construct a one-to-one relationship between a user, a node, and a virtual machine | Enhances the data integrity level and helps flexibly control the scope of approved verifiers |
[145] | Vast storage proofs and/or vast auditor states are prerequisites for applying dynamic proof-of-storage techniques, designed for traditional cloud storage, to distributed systems | Static proof-of-storage systems promise compact proofs, and dynamic on-blockchain auditing protocols can provide concretely tiny auditor states | Index information management is accomplished by optimisation strategies |
[146] | In a multi-cloud storage environment, controlling scalability, data governance, non-tampering, trustworthiness, and transparency is challenging | A novel strategy for securing huge data storage uses the highway protocol and blockchain technology to create new blocks that address problems with baseline models | Dynamic control over shared-data manipulation is achieved |