2006 | Original Paper | Book Chapter
Paragraph: Thwarting Signature Learning by Training Maliciously
Authors: James Newsome, Brad Karp, Dawn Song
Published in: Recent Advances in Intrusion Detection
Publisher: Springer Berlin Heidelberg
Defending a server against Internet worms and defending a user's email inbox against spam bear certain similarities. In both cases, a stream of samples arrives, and a classifier must automatically determine whether each sample falls into a malicious target class (e.g., worm network traffic, or spam email). A learner typically generates a classifier automatically by analyzing two labeled training pools: one of innocuous samples, and one of samples that fall in the malicious target class.
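To make this setting concrete, the following is a minimal sketch of how such a learner might derive a conjunction-style signature from the two pools, in the spirit of Polygraph's conjunction signatures. The fixed-length n-gram tokenization, the pool contents, and all names here are illustrative assumptions, not the paper's actual token-extraction algorithm.

```python
# A simplified conjunction-signature learner: the signature is the set of
# tokens common to every malicious sample and absent from the innocuous pool.

def ngrams(sample: bytes, n: int = 3) -> set:
    """All contiguous byte substrings of length n (candidate tokens)."""
    return {sample[i:i + n] for i in range(len(sample) - n + 1)}

def learn_conjunction(malicious_pool, innocuous_pool, n=3):
    """Tokens present in every malicious sample and in no innocuous sample."""
    invariant = set.intersection(*(ngrams(s, n) for s in malicious_pool))
    innocuous = set().union(*(ngrams(s, n) for s in innocuous_pool))
    return invariant - innocuous

def matches(signature, sample, n=3):
    """Conjunction semantics: flag a sample only if it contains every token."""
    return signature <= ngrams(sample, n)

# Example: the exploit's invariant bytes survive polymorphic variation.
worms = [b"GET /a HTTP/1.1 \x90\x90EXPLOIT", b"GET /b HTTP/1.1 \xcc\xccEXPLOIT"]
benign = [b"GET /index.html HTTP/1.1", b"POST /form HTTP/1.1"]
sig = learn_conjunction(worms, benign)
assert matches(sig, b"GET /c HTTP/1.1 \x41\x41EXPLOIT")  # new worm variant
assert not matches(sig, benign[0])                       # innocuous traffic
```

Because the signature is computed directly from the pool contents, whoever controls those contents influences what is learned, which is precisely the leverage the attacks described below exploit.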
Learning techniques have previously found success in settings where the content of the labeled samples used in training is either random, or even constructed by a helpful teacher, who aims to speed learning of an accurate classifier. In the case of learning classifiers for worms and spam, however, an adversary controls the content of the labeled samples to a great extent. In this paper, we describe practical attacks against learning, in which an adversary constructs labeled samples that, when used to train a learner, prevent or severely delay generation of an accurate classifier. We show that even a delusive adversary, whose samples are all correctly labeled, can obstruct learning. We simulate and implement highly effective instances of these attacks against the Polygraph [15] automatic polymorphic worm signature generation algorithms.
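As a toy illustration of how a delusive adversary can obstruct learning even with correctly labeled samples, the sketch below mounts a correlated-outlier-style attack against a simple score-based learner. The scoring rule, the traffic, and all names are illustrative assumptions, not Polygraph's actual Bayes learner or the paper's attack implementation.

```python
# Delusive attack sketch: every training sample is correctly labeled, yet the
# learned scores rest mostly on attacker-chosen bytes. The worm pads itself
# with SPURIOUS bytes that also occur in a minority of innocuous traffic, so
# innocuous samples carrying those bytes score far above the rest of the
# innocuous pool, eroding the margin available for a detection threshold.
import math

def ngrams(sample, n=3):
    return {sample[i:i + n] for i in range(len(sample) - n + 1)}

def token_scores(malicious_pool, innocuous_pool, n=3):
    """Smoothed log-likelihood ratio per token, a stand-in Bayes-like learner."""
    mal = [ngrams(s, n) for s in malicious_pool]
    inn = [ngrams(s, n) for s in innocuous_pool]
    scores = {}
    for t in set().union(*mal, *inn):
        p_mal = (sum(t in g for g in mal) + 1) / (len(mal) + 2)
        p_inn = (sum(t in g for g in inn) + 1) / (len(inn) + 2)
        scores[t] = math.log(p_mal / p_inn)
    return scores

def score(sample, scores, n=3):
    return sum(scores.get(t, 0.0) for t in ngrams(sample, n))

# Attacker-chosen padding that also appears in some legitimate requests.
SPURIOUS = b"User-Agent: CommonBrowser/9.0"
worms = [SPURIOUS + b"\x90\x90EXPLOIT", SPURIOUS + b"\xcc\xccEXPLOIT"]
benign = [b"GET /index.html HTTP/1.1",
          b"POST /submit HTTP/1.1",
          SPURIOUS + b" GET /a"]  # innocuous sample carrying the spurious bytes

s = token_scores(worms, benign)
for sample in worms + benign:
    print(score(sample, s))  # spurious-bearing benign traffic scores high
```

In this toy, most of the worm's score comes from the spurious padding rather than the true invariant, and the adversary can enlarge that padding to shrink the worm's margin over spurious-bearing innocuous traffic, pushing any threshold toward either false negatives or false positives.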