research-article

Our troubles with Linux Kernel upgrades and why you should care

Published: 23 July 2013

Abstract

Linux and other open-source Unix variants (and their distributors) provide researchers with full-fledged, widely used operating systems. However, due to their complexity and rapid development, care should be exercised when using these operating systems for performance experiments, especially in systems research. In particular, the size and continual evolution of the Linux code base make it difficult to understand and, as a result, to decipher and explain the reasons for performance improvements. In addition, the rapid kernel development cycle means that experimental results can be viewed as out of date, or meaningless, very quickly. We demonstrate that this viewpoint is incorrect, because kernel changes can and have introduced both bugs and performance degradations.

This paper describes some of our experiences using Linux and FreeBSD as platforms for conducting performance evaluations, along with some performance regressions we have found. Our results show that these performance regressions can be serious (e.g., repeating identical experiments produces large variability in results) and long lived despite having a large negative effect on performance (one problem was present for more than 3 years). Based on these experiences, we argue that it is sometimes reasonable to use an older kernel version, that experimental results need careful analysis to explain why a performance effect occurs, and that publishing papers validating prior research is essential.
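The variability the abstract warns about can be made concrete by summarizing repeated, identical benchmark runs rather than reporting a single number. The sketch below uses hypothetical throughput figures (the run values and the `summarize_runs` helper are illustrative, not from the paper); a large coefficient of variation across identical runs is the kind of signal that a result needs further analysis before being reported.

```python
import statistics

def summarize_runs(throughputs):
    """Summarize repeated, identical benchmark runs.

    Returns the mean, sample standard deviation, and coefficient of
    variation (CV). A large CV means a single run of the experiment
    could be badly misleading.
    """
    mean = statistics.mean(throughputs)
    stdev = statistics.stdev(throughputs)
    cv = stdev / mean  # relative spread across identical runs
    return mean, stdev, cv

# Hypothetical throughputs (requests/s) from five identical runs;
# note the single outlier run dominates the spread.
runs = [10200, 9800, 10150, 7400, 10050]
mean, stdev, cv = summarize_runs(runs)
print(f"mean={mean:.0f} req/s, stdev={stdev:.0f}, CV={cv:.1%}")
```

Reporting mean alongside CV (or plotting all runs) makes regressions and instability visible, whereas a lone "best run" number hides them.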



  • Published in

    ACM SIGOPS Operating Systems Review, Volume 47, Issue 2
    July 2013, 69 pages
    ISSN: 0163-5980
    DOI: 10.1145/2506164

    Copyright © 2013 Authors

    Publisher: Association for Computing Machinery, New York, NY, United States

