
Backtracking intrusions

Published: 02 February 2005

Abstract

Analyzing intrusions today is an arduous, largely manual task because system administrators lack the information and tools needed to easily understand the sequence of steps that occurred in an attack. The goal of BackTracker is to automatically identify potential sequences of steps that occurred in an intrusion. Starting with a single detection point (e.g., a suspicious file), BackTracker identifies files and processes that could have affected that detection point and displays chains of events in a dependency graph. We use BackTracker to analyze several real attacks against computers that we set up as honeypots. In each case, BackTracker is able to effectively highlight the entry point used to gain access to the system and the sequence of steps from that entry point to the point at which we noticed the intrusion. The logging required to support BackTracker added 9% overhead in running time and generated 1.2 GB of log data per day for an operating-system-intensive workload.
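To make the backward-tracking idea above concrete, the following minimal Python sketch builds a dependency relation from a simplified event log and walks it backward in time from a detection point. It is an illustration only, not the authors' implementation: the Event record, its field names, the object names, and the time-threshold rule are assumptions made for the example.

```python
# Minimal sketch of backward dependency tracking, assuming a simplified
# event log in which each record says that a source object (a process
# or file) may have affected a sink object at a given time, e.g. a
# process wrote a file, or a file was read by a process.  The Event
# record and field names are illustrative assumptions, not the paper's
# log format.
from collections import namedtuple

Event = namedtuple("Event", ["time", "source", "sink"])

def backtrack(events, detection_point, detection_time):
    """Return the objects and events that could have affected
    detection_point at or before detection_time."""
    affected = {detection_point}
    # For each affected object, remember the latest time at which it
    # could still have influenced something already in the set.
    horizon = {detection_point: detection_time}
    kept = []
    # Scan from latest to earliest so that when an object is added to
    # the set, every remaining event is early enough to be relevant.
    for ev in sorted(events, key=lambda e: e.time, reverse=True):
        if ev.sink in affected and ev.time <= horizon[ev.sink]:
            kept.append(ev)
            affected.add(ev.source)
            horizon.setdefault(ev.source, ev.time)
    return affected, kept

# Example: sshd spawns a shell, the shell unpacks a downloaded archive
# and runs a tool, and the tool modifies the file we later notice.
log = [
    Event(1, "process:sshd", "process:bash"),
    Event(2, "file:exploit.tar", "process:bash"),
    Event(3, "process:bash", "process:rootkit"),
    Event(4, "process:rootkit", "file:/etc/passwd"),
    Event(5, "process:irc-bot", "file:/tmp/scratch"),  # unrelated activity
]
objects, chain = backtrack(log, "file:/etc/passwd", detection_time=4)
# objects now contains sshd, bash, exploit.tar, rootkit and /etc/passwd;
# the unrelated irc-bot event is excluded from the graph.
```

The kept events form the chains that a dependency graph would display, with the detection point at one end and candidate entry points at the other.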



Reviews

Stefano Zanero
This paper describes BackTracker, a tool developed to help system administrators track intrusions. The tool creates a graph of the interactions between system objects, aiding the forensic analysis of a compromised machine. It has minimal performance impact and generates 1.2 gigabytes of logs per day. The idea is really interesting; however, several key assumptions limit its usefulness. The first is the assumption that the intrusion forms a single, unbroken chain of events: the authors themselves describe at least one scenario in which the chain of interactions can be broken, and there are several others. The usefulness of the tool also depends on the filtering criteria used to select interesting links and suppress noise. The authors give examples of these criteria, but there is no extensive testing to establish an optimal set of filters. The effectiveness of the tool is demonstrated in three scenarios, on a system with no real users aside from the attackers and some emulated workload; deployment in a real-world situation could lead to very different results. Finally, as the authors note, a kernel-level rootkit would hide the actions of the attacker from the tool. It has been demonstrated that kernel compromise can be performed even on protected systems with a monolithic kernel (something the authors do not consider), which means the system could be easily bypassed.
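To make the reviewer's point about filtering concrete, the sketch below shows the kind of rule set such a tool might apply before drawing the graph. The specific rules, file paths, and process names are assumptions invented for illustration, not the paper's actual default filters.

```python
# Hypothetical noise filters of the kind the review refers to; the
# rules and object names below are assumptions, not the paper's
# default filter set.
from collections import namedtuple

Event = namedtuple("Event", ["time", "source", "sink"])

# Frequently updated bookkeeping files add many edges to the graph but
# rarely carry an attack forward, so edges into them are candidates
# for suppression (assumed example paths).
LOW_CONTROL_SINKS = {"file:/var/run/utmp", "file:/var/log/lastlog"}

# Edges originating from processes the administrator chooses to trust
# can also be hidden, at the cost of missing an attacker who manages
# to compromise one of those processes (assumed example).
TRUSTED_SOURCES = {"process:syslogd"}

def keep_edge(ev):
    """Return True if the event should stay in the displayed graph."""
    return ev.sink not in LOW_CONTROL_SINKS and ev.source not in TRUSTED_SOURCES

def filter_events(events):
    return [ev for ev in events if keep_edge(ev)]
```

As the review notes, the choice of such rules is a trade-off: too little filtering makes the graph unreadable, while too much risks hiding the very edge that carried the attack.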


Published in

ACM Transactions on Computer Systems, Volume 23, Issue 1
February 2005
110 pages
ISSN: 0734-2071
EISSN: 1557-7333
DOI: 10.1145/1047915

Copyright © 2005 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States
