
2020 | Book

Information Security and Privacy

25th Australasian Conference, ACISP 2020, Perth, WA, Australia, November 30 – December 2, 2020, Proceedings


About this book

This book constitutes the refereed proceedings of the 25th Australasian Conference on Information Security and Privacy, ACISP 2020, held in Perth, WA, Australia, in November 2020*.

The 31 revised full papers and 5 short papers presented were carefully reviewed and selected from 151 submissions. The papers present and discuss the latest research, trends, breakthroughs, and challenges in the domain of information security, privacy, and cybersecurity, on a variety of topics such as post-quantum cryptography, symmetric ciphers, signatures, network security and blockchain, cryptographic primitives, mathematical foundations, and machine learning security.

*The conference was held virtually due to the COVID-19 pandemic.

Table of Contents

Frontmatter

Post-quantum Cryptography

Frontmatter
Lattice Blind Signatures with Forward Security

Blind signatures play an important role in both electronic cash and electronic voting systems. Blind signatures should be secure against various attacks (such as signature forgeries). This work pays special attention to secret key exposure attacks, which totally break digital signatures. Signatures that resist secret key exposure attacks are called forward secure, in the sense that disclosure of the current secret key does not compromise past secret keys. This means that forward-secure signatures must include a mechanism for secret-key evolution over time periods. This paper gives the first construction of a forward-secure blind signature. The construction is based on the SIS assumption in the lattice setting. The core techniques applied are a binary tree data structure for the time periods and trapdoor delegation for the key-evolution mechanism.
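For intuition, the binary-tree key evolution can be sketched in a toy hash-based form (only an analogy: the paper's scheme uses lattice trapdoor delegation, and all names here are illustrative):

```python
import hashlib

def node_key(master: bytes, path: str) -> bytes:
    """Derive a tree node's key from its parent by hashing in the next path bit."""
    k = master
    for bit in path:
        k = hashlib.sha256(k + bit.encode()).digest()
    return k

def period_key(master: bytes, t: int, depth: int) -> bytes:
    """Secret key for time period t = key of the t-th leaf of a binary tree.

    Forward security comes from storing only the keys needed to derive future
    leaves and erasing everything that can derive earlier ones.
    """
    path = format(t, f"0{depth}b")
    return node_key(master, path)

# Consecutive periods share tree ancestors but get independent leaf keys.
k0 = period_key(b"master-seed", 0, depth=3)
k1 = period_key(b"master-seed", 1, depth=3)
assert k0 != k1 and len(k0) == 32
```

One-way hashing down the tree means a leaf key cannot be walked back to its ancestors, which is the property the lattice trapdoor delegation provides in the actual construction.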

Huy Quoc Le, Dung Hoang Duong, Willy Susilo, Ha Thanh Nguyen Tran, Viet Cuong Trinh, Josef Pieprzyk, Thomas Plantard
Optimized Arithmetic Operations for Isogeny-Based Cryptography on Huff Curves

Up to now, the state-of-the-art implementations of Supersingular Isogeny Diffie-Hellman (SIDH) work with Montgomery curves or Edwards curves, due to the fact that these curve models provide highly efficient elliptic curve arithmetic. In this work, we propose a new w-coordinate method to optimize the arithmetic operations on Huff curves. Specifically, for the optimal addition and doubling operations proposed by Orhon and Hisil on a fixed Huff curve, the costs of these operations can be further improved by about 40%. For the evaluation of odd-degree isogenies and 2-isogenies on variable Huff curves proposed by Moody and Shumow, the costs of evaluating the $\ell$-isogeny point and $\ell$-isogeny curve (for odd $\ell$) can be further improved by about 50%. The computations of evaluating the 2-isogeny point and 2-isogeny curve can be separately replaced by computing the 4-isogeny point and 4-isogeny curve, which need $6M+2S$ and $4S$, respectively, and avoid the square root calculation mentioned in Moody and Shumow’s work. Interestingly, the desired computational tasks on variable Huff curves have the same computational costs as those on variable Montgomery curves, which is also supported by our implementations.

Yan Huang, Fangguo Zhang, Zhi Hu, Zhijie Liu
On Lattice-Based Interactive Protocols: An Approach with Less or No Aborts

A canonical identification (CID) scheme is a 3-move protocol consisting of a commitment, challenge, and response. It constitutes the core design of many cryptographic constructions such as zero-knowledge proof systems and various types of signature schemes. Unlike number-theoretic constructions, CID in the lattice setting usually forces provers to abort and repeat the whole authentication process once the distribution of the computed response does not follow a target distribution independent from the secret key. This concept has been realized by means of rejection sampling, which makes sure that the secrets involved in a protocol are concealed after a certain number of repetitions. This however has a negative impact on the efficiency of interactive protocols, because it leads to a number of communication rounds that is multiplicative in the number of aborting participants (or rejection sampling procedures). In this work we show how the CID scheme underlying many lattice-based protocols can be designed with a smaller number of aborts or even without aborts. Our new technique exploits (unbalanced) binary hash trees and thus significantly reduces the communication complexity. We show how to apply this new method within interactive zero-knowledge proofs. We also present BLAZE$^{+}$: a further application of our technique to the recently proposed lattice-based blind signature scheme BLAZE (FC’20). We show that BLAZE$^{+}$ has improved performance and communication complexity compared to BLAZE while preserving the size of keys and signatures.
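The abort-and-retry mechanism can be illustrated with a toy uniform-mask version of rejection sampling (a simplification with made-up parameters, not the actual lattice protocol):

```python
import random

def sample_response(secret_c, B=100, bound=90):
    """Toy rejection sampling with uniform masks.

    The response z = y + c would leak the secret c through its range,
    so any z outside [-bound, bound] is rejected; conditioned on
    acceptance, z is uniform on [-bound, bound] independently of c.
    Returns (z, number_of_attempts).
    """
    assert abs(secret_c) <= B - bound
    attempts = 0
    while True:
        attempts += 1
        y = random.randint(-B, B)   # masking term
        z = y + secret_c            # candidate response
        if -bound <= z <= bound:    # otherwise: abort and repeat
            return z, attempts

z, tries = sample_response(secret_c=7)
assert -90 <= z <= 90 and tries >= 1
```

Each abort costs a full protocol restart, which is the overhead the paper's hash-tree technique reduces or removes.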

Nabil Alkeilani Alkadri, Rachid El Bansarkhani, Johannes Buchmann
SKCN: Practical and Flexible Digital Signature from Module Lattice

The lattice-based signature scheme Dilithium is one of the most promising signature candidates for the post-quantum era, for its simplicity, efficiency, small public key size, and resistance against side channel attacks. Whether better trade-offs on the already remarkable performance of Dilithium can be made is left open in Dilithium as an interesting question. In this work, we provide new insights into the design of Dilithium, interpreting it in terms of key consensus, a notion previously proposed in the literature for key encapsulation mechanisms (KEM) and key exchange (KEX). Based on the deterministic version of the optimal key consensus with noise (OKCN) mechanism originally developed for KEM/KEX, we present Signature from Key Consensus with Noise (SKCN), which can be viewed as a generalization and optimization of Dilithium. The construction of SKCN is generic, modular and flexible, which in particular allows a much broader range of parameters for searching better trade-offs among security, computational efficiency, and bandwidth. For example, on the recommended parameters, compared with Dilithium our SKCN scheme is more efficient both in computation and in bandwidth, while preserving the same level of post-quantum security. Also, our three sets of parameters are chosen to be as unified as possible, so as to simplify implementation in practice. Finally, using the same OKCN routine for both KEM/KEX and digital signatures eases (hardware) implementation and deployment in practice, and helps simplify the system complexity of lattice-based cryptography in general.

Boru Gong, Leixiao Cheng, Yunlei Zhao
Cloud-Assisted Asynchronous Key Transport with Post-Quantum Security

In cloud-based outsourced storage systems, many users wish to securely store their files for later retrieval, and additionally to share them with other users. These retrieving users may not be online at the point of the file upload, and in fact they may never come online at all. In this asynchronous environment, key transport appears to be at odds with any demands for forward secrecy. Recently, Boyd et al. (ISC 2018) presented a protocol that allows an initiator to use a modified key encapsulation primitive, denoted a blinded KEM (BKEM), to transport a file encryption key to potentially many recipients via the (untrusted) storage server, in a way that gives some guarantees of forward secrecy. Until now, all known constructions of BKEMs are built using RSA and DDH, and thus are only secure in the classical setting. We further the understanding of the use of blinding in post-quantum cryptography in two aspects. First, we show how to generically build blinded KEMs from homomorphic encryption schemes with certain properties. Second, we construct the first post-quantum secure blinded KEMs, with security based on hard lattice problems.

Gareth T. Davies, Herman Galteland, Kristian Gjøsteen, Yao Jiang

Symmetric Cipher

Frontmatter
Rotational-XOR Cryptanalysis of Simon-Like Block Ciphers

Rotational-XOR (RX) cryptanalysis is a cryptanalytic method aimed at finding distinguishable statistical properties in ARX-C ciphers, i.e., ciphers that can be described using only modular addition, cyclic rotation, XOR, and the injection of constants. In this paper we extend RX-cryptanalysis to AND-RX ciphers, a similar design paradigm where the modular addition is replaced by vectorial bitwise AND; such ciphers include the block cipher families Simon and Simeck. We analyze the propagation of RX-differences through AND-RX rounds and develop a closed-form formula for their expected probability. Finally, we formulate an SMT model for searching for RX-characteristics in Simon and Simeck. Evaluating our model, we find RX-characteristics of up to 20, 27, and 35 rounds, with respective probabilities of $2^{-26}$, $2^{-42}$, and $2^{-54}$, for versions of Simeck with block sizes of 32, 48, and 64 bits, for large classes of weak keys in the related-key model. In most cases, these are the longest published distinguishers for the respective variants of Simeck. Interestingly, when we apply the model to the block cipher Simon, the best characteristic we are able to find covers 11 rounds of Simon32 with probability $2^{-24}$. To explain the gap between Simon and Simeck in terms of the number of distinguished rounds, we study the impact of the key schedule and of the specific rotation amounts of the round function on the propagation of RX-characteristics in Simon-like ciphers.
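The central notion, the RX-difference of a pair under a rotation amount $\gamma$, is concrete enough to compute directly (a minimal sketch with 16-bit words; the helper names are ours):

```python
def rotl(x: int, r: int, n: int = 16) -> int:
    """Cyclic left rotation of an n-bit word."""
    mask = (1 << n) - 1
    return ((x << r) | (x >> (n - r))) & mask

def rx_difference(x: int, x2: int, gamma: int = 1, n: int = 16) -> int:
    """RX-difference of the pair (x, x2) under rotation gamma:
    delta = x2 XOR (x <<< gamma)."""
    return x2 ^ rotl(x, gamma, n)

# A purely rotational pair (x2 = x <<< gamma) has RX-difference 0 ...
assert rx_difference(0x1234, rotl(0x1234, 1)) == 0
# ... and XOR-ing a constant into one side shifts the RX-difference by it,
# which is how injected round constants affect the propagation.
assert rx_difference(0x1234, rotl(0x1234, 1) ^ 0x00FF) == 0x00FF
```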

Jinyu Lu, Yunwen Liu, Tomer Ashur, Bing Sun, Chao Li
A New Improved AES S-box with Enhanced Properties

The Advanced Encryption Standard (AES) is the most widely used symmetric encryption algorithm. Its security is mainly based on the structure of the S-box. In this paper, we present a new way to create S-boxes for AES and exhibit an S-box with improved cryptographic properties such as Bit Independence Criterion (BIC), periodicity, algebraic complexity, Strict Avalanche Criterion (SAC) and Distance to SAC.

Abderrahmane Nitaj, Willy Susilo, Joseph Tonien
Galaxy: A Family of Stream-Cipher-Based Space-Hard Ciphers

Whitebox cryptography seeks to ensure the security of cryptographic algorithms against adversaries who have unlimited access to the environments in which they are implemented. At ACM CCS 2015, Bogdanov and Isobe proposed a security notion called space hardness and a block cipher named SPACE that is secure in the whitebox setting. SPACE is a table-based cryptographic primitive whose table comprises pairs of inputs to a block cipher such as AES and the corresponding outputs. In line with SPACE, other whitebox cryptographic schemes were proposed that offer the same level of security as SPACE. However, there is still room for improvement in the performance of their encryption and table generation. In this paper, we propose a new family of whitebox cryptographic primitives called Galaxy to enhance the performance of encryption and table generation. Galaxy employs a stream cipher, rather than a block cipher, to generate the table. The security of Galaxy against key-extraction attacks in the whitebox setting is reduced to the key-extraction problem for the stream cipher in the blackbox setting. Additionally, we utilize a type-2 generalized Feistel network with optimal shuffle layers for the algorithm of Galaxy to improve encryption performance, as this network enables parallel table lookups. As a result, we successfully increase the speed of encryption by 1.3–15 times. Besides, when we use ChaCha for the table generation of Galaxy and AES for other existing block-cipher-based whitebox schemes, we can create the table of Galaxy 1.5–10 times faster than those of the other existing whitebox schemes.

Yuji Koike, Kosei Sakamoto, Takuya Hayashi, Takanori Isobe
Automated Search for Block Cipher Differentials: A GPU-Accelerated Branch-and-Bound Algorithm

In this paper, we propose a GPU-accelerated branch-and-bound algorithm. The proposed approach substantially increases the performance of the differential cluster search. We were able to derive a branch enumeration and evaluation kernel that is 5.95 times faster than its CPU counterpart. To showcase its practicality, the proposed algorithm is applied on TRIFLE-BC, a 128-bit block cipher. By incorporating a meet-in-the-middle approach with the proposed GPU kernel, we were able to improve the search efficiency (on 20 rounds of TRIFLE-BC) by approximately 58 times as compared to the CPU-based approach. Differentials consisting of up to 50 million individual characteristics can be constructed for 20 rounds of TRIFLE, leading to slight improvements to the overall differential probabilities. Even for larger rounds (43 rounds), the proposed algorithm is still able to construct large clusters of over 500 thousand characteristics. This result depicts the practicality of the proposed algorithm in constructing large differentials even for a 128-bit block cipher, which could be used to improve cryptanalytic findings against other block ciphers in the future.

Wei-Zhu Yeoh, Je Sen Teh, Jiageng Chen

Signature

Frontmatter
From Rerandomizability to Sequential Aggregation: Efficient Signature Schemes Based on SXDH Assumption

An aggregate signature allows one to generate a short aggregate of signatures from different signers on different messages. A sequential aggregate signature (SeqAS) scheme allows the signers to aggregate their individual signatures in a sequential manner. All existing SeqAS schemes that do not use the random oracle assumption either require a large public key or base their security on non-standard interactive/static assumptions. In this paper, we present an efficient SeqAS scheme with a constant-size public key under the SXDH assumption. In the process, we first obtain an optimized (and more efficient) variant of Libert et al.’s randomizable signature scheme. While both schemes are more efficient than the current best ones relying on a static assumption, they are only slightly costlier than the most efficient ones based on an interactive assumption.

Sanjit Chatterjee, R. Kabaleeshwaran
Parallel Implementation of SM2 Elliptic Curve Cryptography on Intel Processors with AVX2

This paper presents an efficient and secure implementation of SM2, the Chinese elliptic curve cryptography standard that has been adopted by the International Organization of Standardization (ISO) as ISO/IEC 14888-3:2018. Our SM2 implementation uses Intel’s Advanced Vector Extensions version 2.0 (AVX2), a family of three-operand SIMD instructions operating on vectors of 8, 16, 32, or 64-bit data elements in 256-bit registers, and is resistant against timing attacks. To exploit the parallel processing capabilities of AVX2, we studied the execution flows of Co-Z Jacobian point arithmetic operations and introduce a parallel 2-way Co-Z addition, Co-Z conjugate addition, and Co-Z ladder algorithm, which allow for fast Co-Z scalar multiplication. Furthermore, we developed an efficient 2-way prime-field arithmetic library using AVX2 to support our Co-Z Jacobian point operations. Both the field and the point operations utilize branch-free (i.e. constant-time) implementation techniques, which increase their ability to resist Simple Power Analysis (SPA) and timing attacks. Our software for scalar multiplication on the SM2 curve is, to our knowledge, the first constant-time implementation of the Co-Z based ladder that leverages the parallelism of AVX2.

Junhao Huang, Zhe Liu, Zhi Hu, Johann Großschädl
Improved Security Proof for the Camenisch-Lysyanskaya Signature-Based Synchronized Aggregate Signature Scheme

The Camenisch-Lysyanskaya signature scheme from CRYPTO 2004 is a useful building block for constructing privacy-preserving schemes such as anonymous credentials, group signatures or ring signatures. However, the security of this signature scheme relies on an interactive assumption called the LRSW assumption. Even if interactive assumptions are proven in the generic group model or bilinear group model, concerns about such assumptions have arisen in the cryptographic community. This has been a barrier to the use of cryptographic schemes whose security relies on these assumptions. Recently, Pointcheval and Sanders proposed a modified Camenisch-Lysyanskaya signature scheme at CT-RSA 2018. This scheme satisfies EUF-CMA security under a new q-type assumption called the Modified q-Strong Diffie-Hellman-2 assumption. However, the size of a q-type assumption grows with q, and this leads to inefficiency of schemes. In this work, we revisit the Camenisch-Lysyanskaya signature-based synchronized aggregate signature scheme from FC 2013. This scheme is one of the most efficient synchronized aggregate signature schemes with bilinear groups. However, the security of this synchronized aggregate scheme was proven under the one-time LRSW assumption in the random oracle model. We give a new security proof for this synchronized aggregate scheme under the static variant of this assumption in the random oracle model, with little loss of efficiency.

Masayuki Tezuka, Keisuke Tanaka

Network Security and Blockchain

Frontmatter
DCONST: Detection of Multiple-Mix-Attack Malicious Nodes Using Consensus-Based Trust in IoT Networks

The Internet of Things (IoT) is growing rapidly, allowing many smart devices to connect and cooperate with each other. However, owing to its distributed architecture, an IoT environment is known to be vulnerable to insider attacks. In this work, we focus on this challenge and consider an advanced insider threat, called the multiple-mix attack, which typically combines three sub-attacks: the tamper attack, the drop attack and the replay attack. For protection, we develop a Distributed Consensus based Trust Model (DCONST), which can build the nodes’ reputation by sharing particular information, called cognition. In particular, DCONST can detect malicious nodes by using K-Means clustering, without disturbing the normal operations of a network. In our evaluation, compared with similar models, DCONST provides an overall better detection rate, improving it by around 10% to 40%.
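The clustering step can be sketched in a minimal one-dimensional form (a toy stand-in for DCONST's K-Means detection; the trust values and names are made up):

```python
def two_means(scores, iters=20):
    """Toy 1-D 2-means: split trust scores into a low and a high cluster."""
    lo, hi = min(scores), max(scores)       # initial centres
    for _ in range(iters):
        low = [s for s in scores if abs(s - lo) <= abs(s - hi)]
        high = [s for s in scores if abs(s - lo) > abs(s - hi)]
        lo, hi = sum(low) / len(low), sum(high) / len(high)
    return lo, hi

# Nodes whose trust falls in the low cluster are flagged as suspicious.
trust = [0.10, 0.15, 0.12, 0.85, 0.90, 0.88, 0.92]
lo, hi = two_means(trust)
flagged = [i for i, s in enumerate(trust) if abs(s - lo) < abs(s - hi)]
assert flagged == [0, 1, 2]
```

Because clustering only reads the accumulated trust scores, detection needs no extra protocol rounds, matching the abstract's claim of not disturbing normal network operation.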

Zuchao Ma, Liang Liu, Weizhi Meng
A Black-Box Attack on Neural Networks Based on Swarm Evolutionary Algorithm

Neural networks play an increasingly important role in the field of machine learning and are included in many applications in society. Unfortunately, neural networks suffer from adversarial examples generated to attack them. However, most generation approaches either assume that the attacker has full knowledge of the neural network model or are limited by the type of attacked model. In this paper, we propose a new approach that generates a black-box attack against neural networks based on a swarm evolutionary algorithm. Benefiting from advances in evolutionary algorithms and their theoretical characteristics, our approach has the advantages of effectiveness, black-box operation, generality, and randomness. Our experimental results show that both MNIST images and CIFAR-10 images can be perturbed to successfully mount a black-box attack with 100% probability on average. In addition, the proposed attack, which succeeds on distilled neural networks with almost 100% probability, is resistant to defensive distillation. The experimental results also indicate that the robustness of an artificial intelligence algorithm is related to the complexity of the model and the data set. Moreover, we find that the adversarial examples to some extent reproduce the characteristics of the sample data learned by the neural network model.

Xiaolei Liu, Teng Hu, Kangyi Ding, Yang Bai, Weina Niu, Jiazhong Lu
A Blockchain-Based Resource Supervision Scheme for Edge Devices Under Cloud-Fog-End Computing Models

As a combination of cloud computing and edge computing, cloud-fog-end computing models are gradually replacing traditional centralized cloud computing models due to their high controllability and low latency. However, this model has certain shortcomings in terms of resource awareness of edge devices, of which two problems are the most prominent: it is difficult to measure the resources of an edge device fairly, and it is challenging to monitor the resource status in real time. This circumstance significantly limits future applications of this model. In this paper, we propose a blockchain-based resource supervision scheme for edge devices under the framework of the cloud-fog-end computing model. The scheme uses the data openness and verifiability of the public chain to measure and record the resource status of the edge devices through a smart contract. Besides, it uses the high controllability and designability of the consortium chain to supervise the resource status of the edge devices in the fog computing network in real time. Based on the structure of the scheme, we demonstrate its feasibility and security from multiple perspectives and carry out simulation experiments using TensorFlow Federated. Experimental results show that the scheme can effectively monitor the resources of edge devices in real time, and that it can be used to implement federated machine learning in fog computing networks.

Tongchen Wang, Jianwei Liu, Dawei Li, Qianhong Wu

Cryptographic Primitives

Frontmatter
SHOSVD: Secure Outsourcing of High-Order Singular Value Decomposition

Tensor decomposition is a popular tool for multi-dimensional data analysis. In particular, High-Order Singular Value Decomposition (HOSVD) is one of the most useful decomposition methods and has been adopted in many applications. Unfortunately, the computational cost of HOSVD is very high on large-scale tensors, and the desirable solution nowadays is to outsource the data to clouds which perform the computation on behalf of the users. However, how to protect data privacy against possibly untrusted clouds remains a major concern for users. In this paper, we design a new scheme called SHOSVD in the two-cloud model for secure outsourcing of tensor decomposition. At the core of our technique is the adoption of additive secret sharing. SHOSVD guarantees the privacy of users’ outsourced data assuming no collusion between the two clouds. Moreover, it supports off-line users, meaning that no interaction between users and clouds is required during the computation process. We prove that our scheme is secure in the semi-honest model, and conduct theoretical analyses of its computational and communication overhead. The experimental results demonstrate that our scheme achieves desirable accuracy.
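The additive-secret-sharing core can be illustrated in a minimal two-party form (the modulus and function names are our own; SHOSVD applies the same idea to tensor arithmetic):

```python
import secrets

P = 2**61 - 1  # public modulus for the shares

def share(x):
    """Split x into two additive shares, one per cloud; each share alone is uniform."""
    s1 = secrets.randbelow(P)
    s2 = (x - s1) % P
    return s1, s2

def reconstruct(s1, s2):
    """Recombine the two shares."""
    return (s1 + s2) % P

# Each cloud can add its shares locally; the sums still reconstruct x + y,
# so linear steps of the computation need no interaction with the user.
x1, x2 = share(42)
y1, y2 = share(100)
assert reconstruct(x1, x2) == 42
assert reconstruct((x1 + y1) % P, (x2 + y2) % P) == 142
```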

Jinrong Chen, Lin Liu, Rongmao Chen, Wei Peng
Efficient Forward-Secure Threshold Public Key Encryption

The purpose of forward-secure threshold public key encryption schemes is to mitigate the damage of secret key exposure. We construct the first CCA forward-secure threshold public key encryption scheme based on bilinear pairings with groups of prime order that is secure against adaptive and malicious adversaries in the standard model. Our scheme is very efficient since it has a non-interactive key update and decryption procedure. Additionally, our scheme does not require a trusted dealer and has optimal resilience as well as small ciphertexts of constant size. It is the first scheme which achieves all of these and that can also be implemented on standardized elliptic curves.

Rafael Kurek
A New Targeted Password Guessing Model

TarGuess-I is a leading targeted password guessing model using users’ personally identifiable information (PII), proposed at ACM CCS 2016 by Wang et al. Owing to its superior guessing performance, TarGuess-I has attracted widespread attention in password security. Yet, TarGuess-I fails to correctly capture popular passwords and special strings in passwords. Thus we propose TarGuess-I$^{+}$: an improved password guessing model, which is capable of identifying popular passwords by generating the top 300 most popular passwords from similar websites, and of grasping special strings by extracting continuous character sequences from user-generated PII. We conduct a series of experiments on 6 real-world leaked datasets and the results show that our improved model outperforms TarGuess-I by 9.07% on average within 1000 guesses, which proves the effectiveness of our improvements.
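The "special strings from PII" idea can be sketched as follows (a toy illustration, not Wang et al.'s grammar-based model; all names and data are made up):

```python
def pii_substrings(s, min_len=3):
    """All continuous substrings of a PII field, of length >= min_len."""
    return {s[i:j] for i in range(len(s)) for j in range(i + min_len, len(s) + 1)}

def candidate_passwords(name, birthday, popular):
    """Toy candidate set: popular passwords plus PII-derived strings."""
    cands = set(popular)
    cands |= pii_substrings(name.lower())
    cands |= pii_substrings(birthday)
    return cands

# A user's name and birthday yield substrings that often appear in passwords.
cands = candidate_passwords("Alice", "19900101", ["123456", "password"])
assert "ali" in cands and "1990" in cands and "123456" in cands
```

A real model would weight these candidates by probability rather than enumerating them as a flat set.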

Zhijie Xie, Min Zhang, Anqi Yin, Zhenhan Li
Multi-input Laconic Function Evaluation

Recently, Quach, Wee and Wichs (FOCS 2018) proposed a powerful new cryptographic primitive called laconic function evaluation (LFE). Using an LFE scheme, Alice can compress a large circuit f into a small digest. Bob can encrypt some data x under this digest in a way that enables Alice to recover f(x) without learning anything else about Bob’s data. The laconic property requires that the size of the digest, the run-time of the encryption algorithm and the size of the ciphertext be much smaller than the circuit-size of f. This new tool is motivated by an interesting application to “Bob-optimized” two-round secure two-party computation (2PC), in which Alice gets the final result and the workload of Bob is minimized. In this paper, we consider a “client-optimized” two-round secure multiparty computation, in which multiple clients provide inputs and enable a server to obtain final outputs while protecting the privacy of each individual input. More importantly, we also aim to minimize the cost of each client. For this purpose, we propose multi-input laconic function evaluation (MI-LFE), and give a systematic study of it. It turns out that MI-LFE for general circuits is not easy. Specifically, we first show that the directly generalized version, i.e., public-key MI-LFE, implies virtual black-box obfuscation, and hence public-key MI-LFE (for general circuits) is infeasible. This forces us to turn to the secret-key version of MI-LFE, in which encryption takes a secret key. Next we show that secret-key MI-LFE also implies heavy cryptographic primitives, including witness encryption for NP languages and indistinguishability obfuscation. On the positive side, we show that secret-key MI-LFE can be constructed assuming indistinguishability obfuscation and the learning with errors assumption. Our theoretical results suggest that one may have to explore relaxed versions of MI-LFE for meaningful new applications of “client-optimized” MPC and others.

Bo Pang, Long Chen, Xiong Fan, Qiang Tang

Mathematical Foundation

Frontmatter
Arbitrary-Centered Discrete Gaussian Sampling over the Integers

Discrete Gaussian sampling over the integers, i.e., sampling from a discrete Gaussian distribution $\mathcal{D}_{\mathbb{Z},\sigma,\mu}$ over the integers $\mathbb{Z}$ with parameter $\sigma>0$ and center $\mu\in\mathbb{R}$, is one of the fundamental operations in lattice-based cryptography. The sampling algorithm should support a varying center $\mu$ and even a varying parameter $\sigma$ when it is used as a subroutine in an algorithm for sampling trapdoor lattices, or for sampling from Gaussian distributions over a general n-dimensional lattice $\varLambda$. In this paper, combining the techniques in Karney’s algorithm for exactly sampling the standard normal distribution, we present an exact sampling algorithm for $\mathcal{D}_{\mathbb{Z},\sigma,\mu}$ with an integer-valued parameter $\sigma$. This algorithm requires no pre-computation storage, uses no floating-point arithmetic, supports centers of arbitrary precision, and does not have any statistical discrepancy. Applying the convolution-like property of discrete Gaussian distributions, we also present an approximate sampling algorithm for $\mathcal{D}_{\mathbb{Z},\sigma,\mu}$ with a real-valued parameter $\sigma$. It also supports centers of arbitrary precision, and we show that the distribution it produces has a smaller max-log distance to the ideal distribution than that of the Micciancio-Walter sampling algorithm, which was introduced at CRYPTO 2017 for discrete Gaussian distributions with varying $\sigma$ and $\mu$ over the integers.
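For intuition, a basic rejection sampler for $\mathcal{D}_{\mathbb{Z},\sigma,\mu}$ looks as follows (a floating-point toy with a tail cut, unlike the exact, float-free algorithms in the paper; the parameter names follow the abstract):

```python
import math
import random

def discrete_gaussian(sigma, mu, tau=6.0):
    """Toy rejection sampler for D_{Z,sigma,mu}.

    Proposes x uniformly on the tail-cut support [mu - tau*sigma, mu + tau*sigma]
    and accepts with probability rho(x) = exp(-(x - mu)^2 / (2 * sigma^2)).
    """
    lo = math.floor(mu - tau * sigma)
    hi = math.ceil(mu + tau * sigma)
    while True:
        x = random.randint(lo, hi)
        if random.random() < math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)):
            return x

samples = [discrete_gaussian(3.0, 0.5) for _ in range(2000)]
assert all(abs(x - 0.5) < 19 for x in samples)   # all within the tail cut
```

Both the floating-point acceptance test and the tail cut introduce the statistical discrepancy that the paper's exact sampler avoids.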

Yusong Du, Baoying Fan, Baodian Wei
New Assumptions and Efficient Cryptosystems from the e-th Power Residue Symbol

The e-th power residue symbol $\left(\frac{\alpha}{\mathfrak{p}}\right)_e$ is a useful mathematical tool in cryptography, where $\alpha$ is an integer, $\mathfrak{p}$ is a prime ideal in the prime factorization of $p\mathbb{Z}[\zeta_e]$ with a large prime p satisfying $e \mid p-1$, and $\zeta_e$ is a primitive e-th root of unity. One famous instance of the e-th power residue symbol underlies the first semantically secure public key cryptosystem, due to Goldwasser and Micali (STOC 1982). In this paper, we revisit the e-th power residue symbol and its applications. In particular, we prove that computing the e-th power residue symbol is equivalent to solving the discrete logarithm problem. Using this result, we give a natural extension of the Goldwasser-Micali cryptosystem, where e is an integer containing only small prime factors. Compared to another extension of the Goldwasser-Micali cryptosystem due to Joye and Libert (EUROCRYPT 2013), our proposal is more efficient in terms of bandwidth utilization and decryption cost. With a new hardness assumption naturally extended from the one used in the Goldwasser-Micali cryptosystem, our proposal is provably IND-CPA secure. Furthermore, we show that our results on the e-th power residue symbol can also be used to construct lossy trapdoor functions and circular and leakage resilient public key encryption schemes with better efficiency and bandwidth utilization.
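The $e=2$ case, the Goldwasser-Micali cryptosystem, can be sketched concretely (toy parameters only; a real deployment uses large primes, and the helper names are ours):

```python
import math
import secrets

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via the binary algorithm."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

# Toy key: N = p*q, with y a quadratic non-residue modulo both primes.
p, q = 499, 547
N = p * q
y = 17
assert jacobi(y, p) == -1 and jacobi(y, q) == -1

def encrypt(bit):
    """Encrypt one bit as r^2 * y^bit mod N (r random, coprime to N)."""
    while True:
        r = 2 + secrets.randbelow(N - 3)
        if math.gcd(r, N) == 1:
            return (r * r * pow(y, bit, N)) % N

def decrypt(c):
    """The secret factor p reveals the bit: c is a square mod p iff bit = 0."""
    return 0 if jacobi(c, p) == 1 else 1

assert all(decrypt(encrypt(b)) == b for b in (0, 1, 1, 0))
```

The paper's extension replaces the quadratic residue test with e-th power residue symbols, packing more than one bit per ciphertext.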

Xiaopeng Zhao, Zhenfu Cao, Xiaolei Dong, Jun Shao, Licheng Wang, Zhusen Liu
Revisiting the Hardness of Binary Error LWE

Binary error LWE is the particular case of the learning with errors (LWE) problem in which errors are chosen in $\{0,1\}$. It has various cryptographic applications, and in particular has been used to construct efficient encryption schemes for use in constrained devices. Arora and Ge showed that the problem can be solved in polynomial time given a number of samples quadratic in the dimension n. On the other hand, the problem is known to be as hard as standard LWE given only slightly more than n samples. In this paper, we first examine more generally how the hardness of the problem varies with the number of available samples. Under standard heuristics on the Arora–Ge polynomial system, we show that, for any $\epsilon>0$, binary error LWE can be solved in polynomial time $n^{O(1/\epsilon)}$ given $\epsilon\cdot n^{2}$ samples. Similarly, it can be solved in subexponential time $2^{\tilde{O}(n^{1-\alpha})}$ given $n^{1+\alpha}$ samples, for $0<\alpha<1$. As a second contribution, we generalize the binary error LWE problem to the case of a non-uniform error probability, and analyze the hardness of non-uniform binary error LWE with respect to the error rate and the number of available samples. We show that, for any error rate $0<p<1$, non-uniform binary error LWE is also as hard as worst-case lattice problems provided that the number of samples is suitably restricted. This is a generalization of Micciancio and Peikert’s hardness proof for uniform binary error LWE. Furthermore, we discuss attacks on the problem when the number of available samples is linear but significantly larger than n, and show that for sufficiently low error rates, subexponential or even polynomial time attacks are possible.
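The sample distribution in question is easy to make concrete (a toy generator with illustrative parameters; $p = 1/2$ gives the uniform binary-error case, other values of $p$ the non-uniform variant):

```python
import random

def lwe_samples(n, m, q, p, seed=1):
    """Generate m binary-error LWE samples (a_i, b_i = <a_i, s> + e_i mod q)
    with errors e_i ~ Bernoulli(p) over {0, 1}."""
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]          # secret vector
    samples = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]
        e = 1 if rng.random() < p else 0              # binary error
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return s, samples

s, samples = lwe_samples(n=8, m=16, q=97, p=0.5)
for a, b in samples:
    e = (b - sum(ai * si for ai, si in zip(a, s))) % 97
    assert e in (0, 1)   # the recovered error is indeed binary
```

Because each error is 0 or 1, every sample yields the low-degree polynomial constraint $e_i(e_i-1)=0$ in the unknowns of s, which is what the Arora–Ge linearization attack exploits once enough samples are available.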

Chao Sun, Mehdi Tibouchi, Masayuki Abe

Machine Learning Security

Frontmatter
PALOR: Poisoning Attacks Against Logistic Regression

With Google, Amazon, Microsoft, and other entities establishing “Machine Learning as a Service” (MLaaS), ensuring the security of the resulting machine learning models has become an increasingly important topic. The security community has demonstrated that MLaaS contains many potential security risks, with new risks constantly being discovered. In this paper, we focus on one of these security risks – data poisoning attacks. Specifically, we analyze how attackers interfere with the results of logistic regression by poisoning the training datasets. To this end, we analyze and propose an alternative formulation for optimizing poisoned training points, capable of poisoning the logistic regression classifier, a model against which poisoning attacks had not previously been demonstrated. We evaluate the performance of our proposed attack algorithm on three real-world datasets: wine cultivars, adult census information, and breast cancer diagnostics. The success of our proposed formulation is evident in the decreasing testing accuracy of logistic regression models exposed to an increasing number of poisoned training samples.

Jialin Wen, Benjamin Zi Hao Zhao, Minhui Xue, Haifeng Qian
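As a rough illustration of the threat model, the sketch below shows label-flip-style poisoning against a tiny hand-rolled logistic regression — a simplified stand-in for the paper's optimization-based formulation, using hypothetical toy data:

```python
import math

def train_logreg(xs, ys, lr=0.5, epochs=2000):
    """1-D logistic regression trained by plain gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

def accuracy(w, b, xs, ys):
    preds = [1 if w * x + b > 0 else 0 for x in xs]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

# Linearly separable toy set: negatives left of zero, positives right of it.
train_x = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
train_y = [0, 0, 0, 1, 1, 1]
test_x, test_y = [-1.2, -0.8, 0.8, 1.2], [0, 0, 1, 1]

clean_acc = accuracy(*train_logreg(train_x, train_y), test_x, test_y)

# Label-flip poisoning: the attacker flips some training labels, which
# typically drags the decision boundary away from its clean position.
poisoned_y = [0, 0, 0, 0, 0, 1]
poison_acc = accuracy(*train_logreg(train_x, poisoned_y), test_x, test_y)

assert clean_acc == 1.0
assert poison_acc <= clean_acc
```

The paper's attack goes further than naive label flipping: it optimizes the poisoning points themselves against the classifier's training objective.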
DeepCapture: Image Spam Detection Using Deep Learning and Data Augmentation

Image spam emails are often used to evade text-based spam filters that detect spam emails by their frequently used keywords. In this paper, we propose a new image spam email detection tool called DeepCapture, which uses a convolutional neural network (CNN) model. There have been many efforts to detect image spam emails, but existing tools suffer a significant performance degradation against entirely new and unseen image spam emails due to overfitting during the training phase. To address this challenging issue, we focus on developing a more robust model that mitigates the overfitting problem. Our key idea is to build a CNN-XGBoost framework consisting of only eight layers, trained with a large number of samples produced by data augmentation techniques tailored to the image spam detection task. To show the feasibility of DeepCapture, we evaluate its performance on publicly available datasets consisting of 6,000 spam and 2,313 non-spam image samples. The experimental results show that DeepCapture achieves an F1-score of 88%, a 6% improvement over the best existing spam detection model, CNN-SVM [19], which achieves an F1-score of 82%. Moreover, DeepCapture outperformed existing image spam detection solutions on new and unseen image datasets.

Bedeuro Kim, Sharif Abuadbba, Hyoungshick Kim

Attack

Frontmatter
Rolling Attack: An Efficient Way to Reduce Armors of Office Automation Devices

Firmware security has been a focus of IoT security in recent years, and the security of office automation devices' firmware has also attracted widespread attention. Previous work on attacking office automation devices mainly focused on code flaws in firmware. However, finding such vulnerabilities and exploiting them on office automation devices requires rich experience and long-term study of specific devices, which is costly. In this work, we designed a simple but efficient attack, Rolling Attack, which rolls back firmware to perform attacks on office automation devices, even if the firmware is up-to-date. By rolling back firmware, attackers can use Rolling Attack to exploit vulnerabilities that have been fixed by the latest firmware on office automation devices, covering personal computers, network printers, network projectors, and servers. We also propose a system called Rolling Attack Pentest System to test devices with Rolling Attack. By crawling firmware on the Internet, we collected firmware packages for 99,120 device models over the past 2 years, together with their published vulnerabilities. We verified Rolling Attack on popular office automation devices covering 45 vendors, including Lenovo, HP, Samsung, Canon, Brother, Sony, and Dell. We performed Rolling Attack on 104 different office automation devices of 4 types (personal computer, network printer, network projector, server) with the collected historical firmware versions. 50.00% of the device models we tested could be rolled back, and 88.46% of the devices that were rolled back are vulnerable to public vulnerabilities. Overall, 44.23% of the devices we tested were affected by Rolling Attack. Finally, we give some suggestions on how to mitigate Rolling Attack.

Linyu Li, Lei Yu, Can Yang, Jie Gou, Jiawei Yin, Xiaorui Gong
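One standard mitigation against firmware rollback is to enforce a minimum installed version before accepting an update. A generic sketch of such an anti-rollback check (an illustration of the concept, not necessarily among the paper's suggestions):

```python
def parse_version(v):
    """Parse a dotted version string like '2.10.3' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

class Device:
    """Toy device model that refuses firmware older than what is installed."""
    def __init__(self, installed):
        self.installed = parse_version(installed)

    def install(self, candidate):
        cand = parse_version(candidate)
        if cand < self.installed:
            return False          # downgrade refused: rollback blocked
        self.installed = cand
        return True

dev = Device("2.10.3")
assert dev.install("2.11.0")       # normal upgrade is accepted
assert not dev.install("2.10.3")   # rolling back to the old version fails
```

Real devices would additionally need to anchor the installed-version record in tamper-resistant storage, since the attack targets exactly this kind of missing or bypassable check.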
Improving Key Mismatch Attack on NewHope with Fewer Queries

NewHope is a lattice-based cryptographic scheme built on the Ring Learning With Errors (Ring-LWE) problem, and it has received much attention among the candidates of the NIST post-quantum cryptography standardization project. Recently, key mismatch attacks on NewHope have been proposed, in which the adversary tries to recover the server's secret key by observing mismatches of the shared key from chosen queries. At CT-RSA 2019, Bauer et al. first proposed a key mismatch attack on NewHope, and at ESORICS 2019, Qin et al. proposed an improved version with a success probability of 96.9% using about 880,000 queries. In this paper, we further improve these key mismatch attacks on NewHope. First, we reduce the number of queries by adapting the terminating condition to the server's response using an early abort technique. Next, we raise the success rate of recovering the secret key polynomial by setting a deterministic condition for judging its coefficients, and we improve the method of generating queries. Furthermore, we extend the search range of the secret key in Qin et al.'s attack without increasing the number of queries. As a result, about 73% of the queries can be saved compared with Qin et al.'s method at a success rate of 97%. Moreover, we analyze the trade-off between the number of queries and the success rate; in particular, we show that the number of queries can be further reduced to 135,000 at a lower success rate of 20.9%.

Satoshi Okada, Yuntao Wang, Tsuyoshi Takagi
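The principle behind key mismatch attacks can be abstracted as querying an oracle that leaks partial information about one secret coefficient per query. The sketch below uses a hypothetical comparison oracle and a binary search; it glosses over NewHope's actual ciphertext crafting and is only meant to show why few queries per coefficient suffice:

```python
def make_mismatch_oracle(secret_coeff):
    """Toy abstraction: each query reveals whether a chosen threshold t lies
    at or above the hidden coefficient, mimicking how observing a shared-key
    mismatch leaks one bit of information per chosen query."""
    def oracle(t):
        return secret_coeff <= t
    return oracle

def recover_coefficient(oracle, lo=-8, hi=8):
    """Binary-search the coefficient range using the mismatch oracle."""
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if oracle(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo, queries

value, queries = recover_coefficient(make_mismatch_oracle(-3))
assert value == -3
assert queries <= 5   # ceil(log2(17)) queries suffice for a range of 17 values
```

The paper's contributions — early abort, deterministic coefficient judging, better query generation — all amount to extracting more information from fewer such oracle calls.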
A Novel Duplication Based Countermeasure to Statistical Ineffective Fault Analysis

The Statistical Ineffective Fault Analysis, SIFA, is a recent addition to the family of fault-based cryptanalysis techniques. SIFA-based attacks have been shown to be formidable, bypassing virtually all conventional fault attack countermeasures. Reported countermeasures to SIFA incur overheads of at least three times the unprotected cipher. We propose a novel countermeasure that reduces this overhead (compared to all existing countermeasures) by relying on a simple duplication-based technique. In essence, our countermeasure eliminates the observation that enables the attacker to perform SIFA. The core idea is to choose the encoding of the state bits at random. In this way, each bit of the state is free from statistical bias, which renders SIFA unusable. Our approach protects against stuck-at faults and does not rely on any side-channel countermeasure. We show the effectiveness of the countermeasure through an open-source gate-level fault attack simulation tool. Our approach is arguably the simplest and the most cost-effective.

Anubhab Baksi, Vinay B. Y. Kumar, Banashri Karmakar, Shivam Bhasin, Dhiman Saha, Anupam Chattopadhyay
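A minimal sketch of the core idea — each state bit is stored under a fresh random encoding, the computation is duplicated, and a mismatch triggers an abort. This is a strong simplification of the paper's gate-level countermeasure, intended only to convey why random encodings remove the statistical bias SIFA aggregates:

```python
import secrets

def encode(bit):
    """Encode a state bit under a fresh random mask: (mask, bit XOR mask).
    Each share individually is uniformly distributed, so a fault leaves no
    statistical bias in any single observable value."""
    mask = secrets.randbits(1)
    return (mask, bit ^ mask)

def decode(share):
    mask, masked = share
    return mask ^ masked

def protected_xor(x, y):
    """Compute the XOR twice on independently re-encoded copies and compare;
    a disagreement indicates an injected fault."""
    first = decode(encode(decode(x) ^ decode(y)))
    second = decode(encode(decode(x) ^ decode(y)))   # redundant copy
    if first != second:
        raise RuntimeError("fault detected")
    return first

for bx in (0, 1):
    for by in (0, 1):
        assert protected_xor(encode(bx), encode(by)) == bx ^ by
```

In the actual countermeasure the random encoding is applied at the gate level and faults are handled without ever exposing a biased intermediate, but the duplication-plus-random-encoding structure is the same.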
Design and Evaluation of Enumeration Attacks on Package Tracking Systems

Most shipping companies provide a package tracking system where customers can easily track their package delivery status while the package is being shipped. However, we present a security problem called enumeration attacks against package tracking systems, in which attackers can illegally collect customers' personal data through these systems. We specifically examine the security of the package tracking websites of the five most popular shipping companies in South Korea (Korea Post, CJ Logistics, Lotte Logistics, Logen, and Hanjin Shipping) and found that enumeration attacks can easily be implemented with package tracking numbers or phone numbers. To show the potential risks of enumeration attacks on package tracking systems, we automatically collected package tracking records from those websites with our attack tool. Over 6 months, we gathered 1,398,112, 2,614,839, 797,676, 1,590,933, and 163,452 package delivery records from the websites of Korea Post, CJ Logistics, Lotte Logistics, Logen, and Hanjin Shipping, respectively. From those records, we uncovered 4,420,214 names, 2,527,205 phone numbers, and 4,467,329 addresses. To prevent such enumeration attacks, we also suggest four practical defense approaches.

Hanbin Jang, Woojoong Ji, Simon S. Woo, Hyoungshick Kim
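Enumeration attacks work because tracking numbers are sequential and guessable. One generic mitigation is to expose only opaque, keyed lookup tokens instead of raw numbers; the sketch below is an illustration of that idea (the key name and token format are assumptions, and this is not necessarily one of the paper's four suggested defenses):

```python
import hmac, hashlib

SERVER_KEY = b"server-side secret key"   # hypothetical; never sent to clients

def tracking_token(tracking_number: str) -> str:
    """Derive an opaque lookup token from a tracking number. Neighbouring
    tracking numbers map to unrelated tokens, so an attacker can no longer
    walk the keyspace by incrementing a counter."""
    mac = hmac.new(SERVER_KEY, tracking_number.encode(), hashlib.sha256)
    return mac.hexdigest()[:16]

t1 = tracking_token("1000000001")
t2 = tracking_token("1000000002")
assert t1 != t2                              # adjacent numbers diverge
assert tracking_token("1000000001") == t1    # stable for the same package
```

Rate limiting and requiring a second identifier (e.g. part of the recipient's phone number) are complementary defenses against the same attack surface.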

Privacy

Frontmatter
JTaint: Finding Privacy-Leakage in Chrome Extensions

Extensions are used by many Chrome browser users to enhance browser functions and the online experience. These extensions run with special permissions that allow them to read and modify DOM (Document Object Model) elements in users' web pages. However, excessive permissions and operation behaviors expose users to serious risks, such as privacy leakage caused by extensions. Dynamic taint analysis techniques are often employed to discover privacy leakage; they monitor code execution by modifying the JavaScript interpreter or rewriting the JavaScript source code. However, interpreter-level taint techniques must overcome the complexity of the interpreter, and designing taint propagation rules for bytecode raises many difficulties. Source-level taint techniques such as Jalangi2, meanwhile, suffer from under-tainting and trigger exceptions in practice. To this end, we design JalangiEX based on Jalangi2. JalangiEX fixes problems in Jalangi2 and strips its redundant code. In addition, JalangiEX monitors two types of initialization actions and provides taint propagation support for message passing between different pages, further mitigating the under-tainting problem of Jalangi2. On top of JalangiEX we implement JTaint, a dynamic taint analysis system that rewrites extensions and monitors the taint propagation process to discover potential privacy leaks in Chrome extensions. Finally, we use JTaint to analyze 20,000 extensions from the Chrome Web Store and observe their data flows on a special honey page. Fifty-seven malicious extensions were found to leak sensitive private information while still being active in the Chrome Web Store.

Mengfei Xie, Jianming Fu, Jia He, Chenke Luo, Guojun Peng
Unlinkable Updatable Databases and Oblivious Transfer with Access Control

An oblivious transfer with access control protocol (OTAC) allows us to protect the privacy of accesses to a database while enforcing access control policies. Existing OTAC have several shortcomings. First, their design is not modular: typically, to create an OTAC, an adaptive oblivious transfer protocol (OT) is extended ad hoc. Consequently, the security of the OT must be reanalyzed when proving security of the OTAC, and it is not possible to instantiate the OTAC with an arbitrary secure OT. Second, existing OTAC do not allow for policy updates. Finally, in practical applications many messages share the same policy, but existing OTAC cannot take advantage of this to improve storage efficiency. We propose a UC-secure OTAC that addresses the aforementioned shortcomings. Our OTAC uses as building blocks the ideal functionalities for OT, for zero-knowledge (ZK) and for an unlinkable updatable database ($$\mathrm {UUD}$$), which we define and construct. $$\mathrm {UUD}$$ is a protocol between an updater $$\mathcal {U}$$ and multiple readers $$\mathcal {R} _k$$. $$\mathcal {U}$$ sets up a database and updates it. $$\mathcal {R} _k$$ can read the database by computing UC ZK proofs of an entry in the database, without disclosing which entry is read. In our OTAC, the $$\mathrm {UUD}$$ is used to store and read the policies. We construct a $$\mathrm {UUD}$$ based on subvector commitments (SVC). We extend the definition of SVC with update algorithms for commitments and openings, and we provide a UC ZK proof of a subvector. Our efficiency analysis shows that our $$\mathrm {UUD}$$ is practical.

Aditya Damodaran, Alfredo Rial
Secure and Compact Elliptic Curve LR Scalar Multiplication

Elliptic curve cryptography (ECC) provides equivalent security with much smaller key sizes. Elliptic curve scalar multiplication (ECSM) is a fundamental computation in ECC. This paper focuses specifically on ECSM algorithms that resist the simple power attack and the safe-error attack, two kinds of side-channel attack. Elliptic curve complete addition (CA) formulae yield secure ECSM algorithms but are inefficient in both memory and computational cost. Another secure ECSM approach, which uses (extended) affine coordinates, is more efficient in both respects, but it scans input scalars from right to left. In this paper, our scalar multiplication algorithms also use extended affine coordinates but scan from left to right (LR). We prove the security of our LR ECSM algorithms and analyze them both theoretically and experimentally. Our new LR ECSM algorithms reduce the amount of memory by $$37.5\%$$ and the computational time by more than $$40\%$$ compared to Joye's regular 2-ary LR algorithm with CA formulae.

Yaoan Jin, Atsuko Miyaji
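For reference, the textbook (unprotected) left-to-right double-and-add ladder that such work hardens, sketched on a toy short Weierstrass curve — real ECC uses standardized curves, larger fields, and constant-time code:

```python
# Toy curve y^2 = x^3 + a*x + b over F_p (illustration only).
p, a, b = 97, 2, 3
O = None   # point at infinity

def add(P, Q):
    """Affine point addition/doubling with all special cases handled."""
    if P is O: return Q
    if Q is O: return P
    x1, y1 = P; x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult_lr(k, P):
    """Left-to-right double-and-add: scan the scalar from the MSB down."""
    R = O
    for bit in bin(k)[2:]:
        R = add(R, R)            # double on every bit
        if bit == "1":
            R = add(R, P)        # add only when the bit is set
    return R

P = (3, 6)
assert (P[1] ** 2 - (P[0] ** 3 + a * P[0] + b)) % p == 0   # P is on the curve

naive = O                          # cross-check against repeated addition
for _ in range(13):
    naive = add(naive, P)
assert scalar_mult_lr(13, P) == naive
```

The conditional add in this ladder is precisely the SPA leak that regular algorithms (and the paper's secure LR variants) are designed to eliminate.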

Short Papers

Frontmatter
User Identity Linkage Across Social Networks via Community Preserving Network Embedding

User Identity Linkage (UIL) across social networks refers to the recognition of accounts belonging to the same individual among multiple social network platforms. Most existing network structure-based methods focus on extracting local structural proximity from the local context of nodes, but the inherent community structure of the social network is largely ignored. In this paper, using labeled anchor nodes as supervised information, we propose a novel community structure-based algorithm for UIL, called CUIL. Firstly, inspired by network embedding, CUIL considers both the proximity structure and the community structure of the social network simultaneously, so as to capture as much of the structural information conveyed by the original network as possible when learning the feature vectors of nodes. Given a set of labeled anchor nodes, CUIL then applies a back-propagation neural network to learn a stable cross-network mapping function for identity linkage. Experiments conducted on a real-world dataset show that CUIL outperforms the state-of-the-art network structure-based methods in terms of linking precision, even with only a few labeled anchor nodes. CUIL is also shown to be efficient, with low vector dimensionality and a small number of training iterations.

Xiaoyu Guo, Yan Liu, Lian Liu, Guangsheng Zhang, Jing Chen, Yuan Zhao
Improvement of Attribute-Based Encryption Using Blakley Secret Sharing

Attribute-based encryption (ABE) enables fine-grained access control of encrypted data. This technique has been carefully scrutinised by the research community for over a decade, and it has wide theoretical interest as well as practical potential. Thus, any efficiency improvement of it is highly desirable but non-trivial. In this paper, we demonstrate that the computational costs in ABE can be slightly reduced using Blakley secret sharing. The main reason that contributes to this improvement is a unique feature enjoyed by Blakley secret sharing: it handles (n, n)-threshold secret sharing more efficiently than Shamir secret sharing. Due to space limitations, we only describe how to improve key-policy attribute-based encryption (KP-ABE), but our method is very general and can similarly be used to improve some of its variants, e.g. ciphertext-policy attribute-based encryption (CP-ABE). This work may also inspire further investigations of Blakley secret sharing, both applying this unique feature to other cryptographic primitives and exploring more undiscovered features.

Zhe Xia, Bo Yang, Yanwei Zhou, Mingwu Zhang, Yi Mu
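For context, here is a minimal sketch of Shamir secret sharing — the baseline against which Blakley's (n, n) efficiency advantage is measured. Blakley's hyperplane-based construction itself is not shown; this only makes the comparison point concrete:

```python
import random

q = 2**31 - 1   # prime field for the shares

def shamir_share(secret, t, n, rng=random.Random(42)):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [rng.randrange(q) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
    return [(x, f(x)) for x in range(1, n + 1)]

def shamir_reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % q
                den = den * (xi - xj) % q
        secret = (secret + yi * num * pow(den, -1, q)) % q
    return secret

shares = shamir_share(123456, t=3, n=5)
assert shamir_reconstruct(shares[:3]) == 123456
assert shamir_reconstruct(shares[1:4]) == 123456
```

The paper's observation is that for the (n, n) case Blakley's scheme avoids some of this interpolation cost, which is what yields the ABE speed-up.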
Recovering CRT-RSA Secret Keys from Noisy Square-and-Multiply Sequences in the Sliding Window Method

We discuss side-channel attacks on the CRT-RSA encryption and signature scheme (the RSA scheme with the Chinese remainder theorem) implemented via the sliding window method. The sliding window method calculates exponentiations through repeated squaring and multiplication. These square-and-multiply sequences can be obtained by side-channel attacks, so there is a risk of recovering CRT-RSA secret keys from them. In particular, at CHES 2017, it was proved that secret keys can be recovered in polynomial time from correct square-and-multiply sequences when the window size w is less than 4. However, sequences obtained in practice contain errors. Oonishi and Kunihiro proposed a method for recovering secret keys from noisy sequences when $$w=1$$. Although their work only addresses the case $$w=1$$, it should be possible to recover secret keys for larger values of w. In this paper, we propose a new method for recovering secret keys from noisy sequences in the sliding window method. Moreover, we clarify the amount of error under which our method works.

Kento Oonishi, Noboru Kunihiro
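The leakage this line of work exploits can be made concrete: sliding-window exponentiation performs a data-dependent pattern of squarings and multiplications. The sketch below computes a modular exponentiation with a sliding window and records the 'S'/'M' trace an attacker would observe (toy parameters, no noise model):

```python
def sliding_window_pow(base, exp, mod, w=3):
    """Sliding-window modular exponentiation; also returns the sequence of
    squarings ('S') and multiplications ('M') -- the side-channel trace."""
    # Precompute the odd powers base^1, base^3, ..., base^(2^w - 1).
    table = {1: base % mod}
    sq = base * base % mod
    for i in range(3, 1 << w, 2):
        table[i] = table[i - 2] * sq % mod

    bits = bin(exp)[2:]
    result, trace, i = 1, [], 0
    while i < len(bits):
        if bits[i] == "0":
            result = result * result % mod
            trace.append("S")
            i += 1
        else:
            # Take the longest window of at most w bits that ends in a 1,
            # so the window value is odd and present in the table.
            j = min(i + w, len(bits))
            while bits[j - 1] == "0":
                j -= 1
            for _ in range(j - i):
                result = result * result % mod
                trace.append("S")
            result = result * table[int(bits[i:j], 2)] % mod
            trace.append("M")
            i = j
    return result, trace

r, trace = sliding_window_pow(7, 90, 1000003, w=3)
assert r == pow(7, 90, 1000003)
assert set(trace) <= {"S", "M"}
```

The CHES 2017 result and this paper both start from such a trace: the positions of 'M' symbols constrain the exponent bits, and the challenge addressed here is tolerating errors in the observed sequence.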
Security Analysis on Tangle-Based Blockchain Through Simulation

Tangle provides an enlightening paradigm for DAG-based structures. We build a simple but flexible simulation network for Tangle by identifying its key features. Based on that, we construct three types of attack strategies by defining basic actions and behaviours. We further evaluate these attacks along multiple dimensions with 12 sets of experiments, followed by comprehensive discussion. The results show the trends under different strategies and configurations. Our work provides an educational example of both attack and defense for Tangle-based blockchains.

Bozhi Wang, Qin Wang, Shiping Chen, Yang Xiang
Tightly Secure Chameleon Hash Functions in the Multi-user Setting and Their Applications

We define the security notion of strong collision resistance for chameleon hash functions in the multi-user setting (S-MU-CR security). We also present three specific constructions $$\mathsf {CHF}_{dl}$$, $$\mathsf {CHF}_{rsa}$$ and $$\mathsf {CHF}_{fac}$$ of chameleon hash functions, and prove their tight S-MU-CR security based on the discrete logarithm, RSA and factoring assumptions, respectively. In applications, we show that tightly S-MU-CR secure chameleon hash functions can lift a signature scheme from (weak) unforgeability to strong unforgeability with a tight security reduction in the multi-user setting.

Xiangyu Liu, Shengli Liu, Dawu Gu
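The classical discrete-log chameleon hash (in the spirit of $$\mathsf {CHF}_{dl}$$, though not the paper's exact tightly secure construction) illustrates the defining property: anyone can hash, but only the trapdoor holder can find collisions. A sketch with deliberately tiny parameters:

```python
# Toy DL chameleon hash H(m, r) = g^m * h^r mod p with h = g^x;
# p, q, g, x are toy values for demonstration only.
p, q, g = 23, 11, 2          # g = 2 has order q = 11 in Z_23^*
x = 7                         # trapdoor
h = pow(g, x, p)

def chash(m, r):
    return pow(g, m, p) * pow(h, r, p) % p

def find_collision(m, r, m_new):
    """With the trapdoor x, solve m + x*r = m_new + x*r' (mod q) for r'."""
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 4, 9
m_new = 10
r_new = find_collision(m, r, m_new)
assert chash(m, r) == chash(m_new, r_new)   # a collision, found via x
assert (m, r) != (m_new, r_new)
```

Without x, finding such a collision requires solving a discrete logarithm; the paper's contribution is proving collision resistance tightly even when many users' keys are attacked at once.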
Backmatter
Metadata
Title
Information Security and Privacy
Edited by
Joseph K. Liu
Hui Cui
Copyright Year
2020
Electronic ISBN
978-3-030-55304-3
Print ISBN
978-3-030-55303-6
DOI
https://doi.org/10.1007/978-3-030-55304-3