2017 | Book

Guide to Competitive Programming

Learning and Improving Algorithms Through Contests

About this book

This invaluable textbook presents a comprehensive introduction to modern competitive programming. The text highlights how competitive programming has proven to be an excellent way to learn algorithms, by encouraging the design of algorithms that actually work, stimulating the improvement of programming and debugging skills, and reinforcing the type of thinking required to solve problems in a competitive setting. The book contains many “folklore” algorithm design tricks that are known by experienced competitive programmers, yet which have previously only been discussed in online forums and blog posts.

Topics and features:
- Reviews the features of the C++ programming language, and describes how to create efficient algorithms that can quickly process large data sets
- Discusses sorting algorithms and binary search, and examines a selection of data structures of the C++ standard library
- Introduces the algorithm design technique of dynamic programming, and investigates elementary graph algorithms
- Covers such advanced algorithm design topics as bit-parallelism and amortized analysis, and presents a focus on efficiently processing array range queries
- Surveys specialized algorithms for trees, and discusses the mathematical topics that are relevant in competitive programming
- Examines advanced graph techniques, geometric algorithms, and string techniques
- Describes a selection of more advanced topics, including square root algorithms and dynamic programming optimization

This easy-to-follow guide is an ideal reference for all students wishing to learn algorithms, and practice for programming contests. Knowledge of the basics of programming is assumed, but previous background in algorithm design or programming contests is not necessary. Due to the broad range of topics covered at various levels of difficulty, this book is suitable for both beginners and more experienced readers.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
This chapter shows what competitive programming is about, outlines the contents of the book, and discusses additional learning resources. Section 1.1 goes through the elements of competitive programming, introduces a selection of popular programming contests, and gives advice on how to practice competitive programming. Section 1.2 discusses the goals and topics of this book, and briefly describes the contents of each chapter. Section 1.3 presents the CSES Problem Set, which contains a collection of practice problems. Solving the problems while reading the book is a good way to learn competitive programming. Section 1.4 discusses other books related to competitive programming and the design of algorithms.
Antti Laaksonen
Chapter 2. Programming Techniques
Abstract
This chapter presents some of the features of the C++ programming language that are useful in competitive programming and gives examples of how to use recursion and bit operations in programming. Section 2.1 discusses a selection of topics related to C++, including input and output methods, working with numbers, and how to shorten code. Section 2.2 focuses on recursive algorithms. First we will learn an elegant way to generate all subsets and permutations of a set using recursion. After this, we will use backtracking to count the number of ways to place \(n\) non-attacking queens on an \(n \times n\) chessboard. Section 2.3 discusses the basics of bit operations, and shows how to use them to represent subsets of sets.
Antti Laaksonen
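To give a flavour of the bit-operation technique mentioned in Section 2.3, here is a minimal sketch (illustrative only, not code from the book) that enumerates every subset of {0, 1, ..., n-1} by treating an n-bit integer as a subset: bit i is set exactly when element i is included.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n = 3;
    // Each subset of {0, 1, ..., n-1} corresponds to an n-bit integer:
    // bit i is set exactly when element i belongs to the subset.
    for (int mask = 0; mask < (1 << n); mask++) {
        cout << "{ ";
        for (int i = 0; i < n; i++) {
            if (mask & (1 << i)) cout << i << " ";
        }
        cout << "}\n";
    }
}
```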
Chapter 3. Efficiency
Abstract
The efficiency of algorithms plays a central role in competitive programming. In this chapter, we learn tools that make it easier to design efficient algorithms. Section 3.1 introduces the concept of time complexity, which allows us to estimate running times of algorithms without implementing them. The time complexity of an algorithm shows how quickly its running time increases when the size of the input grows. Section 3.2 presents two example problems which can be solved in many ways. In both problems, we can easily design a slow brute force solution, but it turns out that we can also create much more efficient algorithms.
Antti Laaksonen
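As an illustration of the kind of efficiency gap this chapter has in mind, the sketch below contrasts a brute-force \(O(n^2)\) solution with a linear-time one on a classic problem, maximum subarray sum. The problem choice and the example array are assumptions made here for illustration, not necessarily the examples used in the book.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> a = {-1, 2, 4, -3, 5, -6, 2};
    int n = a.size();

    // Brute force: try every subarray, O(n^2).
    long long best1 = LLONG_MIN;
    for (int i = 0; i < n; i++) {
        long long sum = 0;
        for (int j = i; j < n; j++) {
            sum += a[j];
            best1 = max(best1, sum);
        }
    }

    // Linear time: the best subarray ending at each position either
    // extends the previous best or starts fresh, O(n).
    long long best2 = LLONG_MIN, cur = 0;
    for (int i = 0; i < n; i++) {
        cur = max((long long)a[i], cur + a[i]);
        best2 = max(best2, cur);
    }

    cout << best1 << " " << best2 << "\n";  // both print 8
}
```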
Chapter 4. Sorting and Searching
Abstract
Many efficient algorithms are based on sorting the input data, because sorting often makes solving the problem easier. This chapter discusses the theory and practice of sorting as an algorithm design tool. Section 4.1 first discusses three important sorting algorithms: bubble sort, merge sort, and counting sort. After this, we will learn how to use the sorting algorithm available in the C++ standard library. Section 4.2 shows how sorting can be used as a subroutine to create efficient algorithms. For example, to quickly determine if all array elements are unique, we can first sort the array and then simply check all pairs of consecutive elements. Section 4.3 presents the binary search algorithm, which is another important building block of efficient algorithms.
Antti Laaksonen
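The following sketch (with made-up input data) demonstrates the two ideas named in the abstract: sorting with the standard library and then checking consecutive elements to detect duplicates, and locating a value in the sorted array with binary search.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> a = {4, 2, 5, 2, 9};

    // Sort with the standard library, then compare consecutive elements:
    // all elements are distinct exactly when no two neighbours are equal.
    sort(a.begin(), a.end());
    bool allDistinct = true;
    for (size_t i = 1; i < a.size(); i++) {
        if (a[i] == a[i - 1]) allDistinct = false;
    }
    cout << (allDistinct ? "distinct" : "duplicate found") << "\n";

    // Binary search in the sorted array: is the value x present?
    int x = 5;
    int lo = 0, hi = (int)a.size() - 1;
    bool found = false;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (a[mid] == x) { found = true; break; }
        if (a[mid] < x) lo = mid + 1; else hi = mid - 1;
    }
    cout << (found ? "found" : "not found") << "\n";
}
```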
Chapter 5. Data Structures
Abstract
This chapter introduces the most important data structures of the C++ standard library. In competitive programming, it is crucial to know which data structures are available in the standard library and how to use them. This often saves a large amount of time when implementing an algorithm. Section 5.1 first describes the vector structure, which is an efficient dynamic array. After this, we will focus on using iterators and ranges with data structures, and briefly discuss deques, stacks, and queues. Section 5.2 discusses sets, maps, and priority queues. These data structures are often used as building blocks of efficient algorithms, because they allow us to maintain dynamic structures that support both efficient searches and updates. Section 5.3 shows some results about the efficiency of data structures in practice. As we will see, there are important performance differences that cannot be detected by only looking at time complexities.
Antti Laaksonen
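As a brief taste of the standard library structures described above, here is a small sketch showing basic usage of std::set, std::map, and std::priority_queue; the inserted values are arbitrary examples.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // std::set keeps its elements in sorted order and supports
    // O(log n) insertion, removal, and membership queries.
    set<int> s;
    s.insert(3); s.insert(7); s.insert(3);   // duplicate insert has no effect
    cout << s.size() << "\n";                // 2
    cout << (s.count(7) ? "yes" : "no") << "\n";

    // std::map associates keys with values, also in O(log n) per operation.
    map<string, int> freq;
    for (string w : {"code", "test", "code"}) freq[w]++;
    cout << freq["code"] << "\n";            // 2

    // std::priority_queue gives O(log n) access to the largest element.
    priority_queue<int> q;
    q.push(5); q.push(1); q.push(8);
    cout << q.top() << "\n";                 // 8
}
```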
Chapter 6. Dynamic Programming
Abstract
Dynamic programming is an algorithm design technique that can be used to find optimal solutions to problems and to count the number of solutions. This chapter is an introduction to dynamic programming, and the technique will be used many times later in the book when designing algorithms. Section 6.1 discusses the basic elements of dynamic programming in the context of a coin change problem. In this problem we are given a set of coin values, and our task is to construct a sum of money using as few coins as possible. There is a simple greedy algorithm for the problem, but as we will see, it does not always produce an optimal solution. However, using dynamic programming, we can create an efficient algorithm that always finds an optimal solution. Section 6.2 presents a selection of problems that show some of the possibilities of dynamic programming. The problems include determining the longest increasing subsequence in an array, finding an optimal path in a two-dimensional grid, and generating all possible weight sums in a knapsack problem.
Antti Laaksonen
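A minimal dynamic programming sketch of the coin change problem described above, assuming example coin values {1, 3, 4} and target sum 6 (chosen here for illustration): the greedy strategy of always taking the largest coin uses three coins, while the dynamic programming table finds the optimal answer of two.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Coin change: minimum number of coins needed to form each sum.
    vector<int> coins = {1, 3, 4};    // example coin values (an assumption)
    int target = 6;

    const int INF = 1e9;
    vector<int> best(target + 1, INF);
    best[0] = 0;                      // zero coins form the sum 0
    for (int sum = 1; sum <= target; sum++) {
        for (int c : coins) {
            if (sum - c >= 0 && best[sum - c] + 1 < best[sum]) {
                best[sum] = best[sum - c] + 1;
            }
        }
    }
    // Greedy (always take the largest coin) would use 4+1+1 = 3 coins,
    // while the optimal answer is 3+3 = 2 coins.
    cout << best[target] << "\n";     // 2
}
```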
Chapter 7. Graph Algorithms
Abstract
Many programming problems can be solved by considering the situation as a graph and using an appropriate graph algorithm. In this chapter, we will learn the basics of graphs and a selection of important graph algorithms. Section 7.1 discusses graph terminology and data structures that can be used to represent graphs in algorithms. Section 7.2 introduces two fundamental graph traversal algorithms. Depth-first search is a simple way to visit all nodes that can be reached from a starting node, and breadth-first search visits the nodes in increasing order of their distance from the starting node. Section 7.3 presents algorithms for finding shortest paths in weighted graphs. The Bellman–Ford algorithm is a simple algorithm that finds shortest paths from a starting node to all other nodes. Dijkstra’s algorithm is a more efficient algorithm which requires that all edge weights are nonnegative. The Floyd–Warshall algorithm determines shortest paths between all node pairs of a graph. Section 7.4 explores special properties of directed acyclic graphs. We will learn how to construct a topological sort and how to use dynamic programming to efficiently process such graphs. Section 7.5 focuses on successor graphs where each node has a unique successor. We will discuss an efficient way to find successors of nodes and Floyd’s algorithm for cycle detection. Section 7.6 presents Kruskal’s and Prim’s algorithms for constructing minimum spanning trees. Kruskal’s algorithm is based on an efficient union-find structure which also has other uses in algorithm design.
Antti Laaksonen
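To illustrate one of the traversal algorithms mentioned above, the sketch below runs a breadth-first search from a starting node on a small example graph (the graph itself is an arbitrary illustration) and records each node's distance from the start.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Small example graph as adjacency lists (nodes 0..4).
    int n = 5;
    vector<vector<int>> adj(n);
    auto addEdge = [&](int a, int b) { adj[a].push_back(b); adj[b].push_back(a); };
    addEdge(0, 1); addEdge(0, 2); addEdge(1, 3); addEdge(2, 3); addEdge(3, 4);

    // Breadth-first search from node 0: nodes are visited in
    // increasing order of their distance from the start.
    vector<int> dist(n, -1);
    queue<int> q;
    dist[0] = 0; q.push(0);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : adj[u]) {
            if (dist[v] == -1) { dist[v] = dist[u] + 1; q.push(v); }
        }
    }
    for (int i = 0; i < n; i++) cout << "distance to " << i << ": " << dist[i] << "\n";
}
```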
Chapter 8. Algorithm Design Topics
Abstract
This chapter discusses a selection of algorithm design topics. Section 8.1 focuses on bit-parallel algorithms that use bit operations to efficiently process data. Typically, we can replace a for loop with bit operations, which can significantly improve the running time of the algorithm. Section 8.2 presents the amortized analysis technique, which can be used to estimate the time needed for a sequence of operations in an algorithm. Using the technique, we can analyze algorithms for determining nearest smaller elements and sliding window minima. Section 8.3 discusses ternary search and other techniques for efficiently calculating minimum values of certain functions.
Antti Laaksonen
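As an illustration of the amortized-analysis example mentioned above, this sketch computes the nearest smaller element to the left of each array position using a stack; the input array is an arbitrary example.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // For each element, find the nearest smaller element to its left.
    vector<int> a = {1, 3, 4, 2, 5, 3, 4, 2};
    int n = a.size();

    // Each index is pushed and popped at most once, so even though a single
    // step may pop many elements, the total work over the whole run is O(n).
    // This is exactly the kind of argument amortized analysis makes precise.
    stack<int> st;               // indices of candidate smaller elements
    vector<int> nearest(n, -1);  // -1 means no smaller element exists to the left
    for (int i = 0; i < n; i++) {
        while (!st.empty() && a[st.top()] >= a[i]) st.pop();
        if (!st.empty()) nearest[i] = a[st.top()];
        st.push(i);
    }
    for (int i = 0; i < n; i++) cout << nearest[i] << " ";  // -1 1 3 1 2 2 3 1
    cout << "\n";
}
```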
Chapter 9. Range Queries
Abstract
In this chapter, we discuss data structures for efficiently processing range queries on arrays. Typical queries are range sum queries (calculating the sum of values) and range minimum queries (finding the minimum value). Section 9.1 focuses on a simple situation where the array values are not modified between the queries. In this case it suffices to preprocess the array so that we can efficiently determine the answer for any possible query. We will first learn to process sum queries using a prefix sum array, and then we will discuss the sparse table algorithm for processing minimum queries. Section 9.2 presents two tree structures that allow us to both process queries and update array values efficiently. A binary indexed tree supports sum queries and can be seen as a dynamic version of a prefix sum array. A segment tree is a more versatile structure that supports sum queries, minimum queries, and several other queries. The operations of both structures work in logarithmic time.
Antti Laaksonen
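A minimal sketch of a binary indexed (Fenwick) tree, one of the structures described above: both point updates and sum queries run in logarithmic time. The array contents are arbitrary example values.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Binary indexed (Fenwick) tree over a 1-indexed array: adding a value to a
// position and computing a prefix sum both take O(log n) time.
struct Fenwick {
    int n;
    vector<long long> tree;
    Fenwick(int n) : n(n), tree(n + 1, 0) {}
    void add(int i, long long x) {               // a[i] += x
        for (; i <= n; i += i & -i) tree[i] += x;
    }
    long long prefix(int i) {                    // sum of a[1..i]
        long long s = 0;
        for (; i > 0; i -= i & -i) s += tree[i];
        return s;
    }
    long long rangeSum(int a, int b) {           // sum of a[a..b]
        return prefix(b) - prefix(a - 1);
    }
};

int main() {
    vector<long long> a = {0, 1, 3, 4, 8, 6, 1, 4, 2};  // index 0 unused
    Fenwick f(8);
    for (int i = 1; i <= 8; i++) f.add(i, a[i]);
    cout << f.rangeSum(3, 6) << "\n";   // 4 + 8 + 6 + 1 = 19
    f.add(5, 2);                        // a[5] becomes 8
    cout << f.rangeSum(3, 6) << "\n";   // 21
}
```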
Chapter 10. Tree Algorithms
Abstract
The special properties of trees allow us to create algorithms that are specialized for trees and work more efficiently than general graph algorithms. This chapter presents a selection of such algorithms. Section 10.1 introduces basic concepts and algorithms related to trees. A central problem is finding the diameter of a tree, i.e., the maximum distance between two nodes. We will learn two linear time algorithms for solving the problem. Section 10.2 focuses on processing queries on trees. We will learn to use a tree traversal array to process various queries related to subtrees and paths. After this, we will discuss methods for determining lowest common ancestors and an offline algorithm which is based on merging data structures. Section 10.3 presents two advanced tree processing techniques: centroid decomposition and heavy-light decomposition.
Antti Laaksonen
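One linear-time diameter algorithm of the kind mentioned above can be sketched with two breadth-first searches: find the node farthest from an arbitrary node, then the node farthest from that one; the second distance is the diameter. The tree used below is an arbitrary example.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Example tree with nodes 0..5.
    int n = 6;
    vector<vector<int>> adj(n);
    auto addEdge = [&](int u, int v) { adj[u].push_back(v); adj[v].push_back(u); };
    addEdge(0, 1); addEdge(1, 2); addEdge(1, 3); addEdge(3, 4); addEdge(4, 5);

    // BFS that returns the farthest node from 'start' and its distance.
    auto farthest = [&](int start) {
        vector<int> dist(n, -1);
        queue<int> q;
        dist[start] = 0; q.push(start);
        int best = start;
        while (!q.empty()) {
            int u = q.front(); q.pop();
            if (dist[u] > dist[best]) best = u;
            for (int v : adj[u]) if (dist[v] == -1) { dist[v] = dist[u] + 1; q.push(v); }
        }
        return make_pair(best, dist[best]);
    };

    int a = farthest(0).first;          // farthest node from an arbitrary node
    cout << farthest(a).second << "\n"; // diameter, here 4 (path 2-1-3-4-5)
}
```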
Chapter 11. Mathematics
Abstract
This chapter deals with mathematical topics that are recurrent in competitive programming. We will both discuss theoretical results and learn how to use them in practice in algorithms. Section 11.1 discusses number-theoretical topics. We will learn algorithms for finding prime factors of numbers, techniques related to modular arithmetic, and efficient methods for solving integer equations. Section 11.2 explores ways to approach combinatorial problems: how to efficiently count all valid combinations of objects. The topics of this section include binomial coefficients, Catalan numbers, and inclusion-exclusion. Section 11.3 shows how to use matrices in algorithm programming. For example, we will learn how to make a dynamic programming algorithm more efficient by exploiting an efficient way to calculate matrix powers. Section 11.4 first discusses basic techniques for calculating probabilities of events and the concept of Markov chains. After this, we will see examples of algorithms that are based on randomness. Section 11.5 focuses on game theory. First we will learn to optimally play a simple stick game using nim theory, and after this, we will generalize the strategy to a wide range of other games.
Antti Laaksonen
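As a small taste of the modular arithmetic techniques mentioned above, here is a sketch of fast exponentiation by repeated squaring; the same squaring idea underlies the matrix-power speed-up of dynamic programming that Section 11.3 describes. The modulus and test values are arbitrary choices made here.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Fast exponentiation: computes x^e mod m in O(log e) multiplications by
// repeated squaring.
long long modpow(long long x, long long e, long long m) {
    long long result = 1 % m;
    x %= m;
    while (e > 0) {
        if (e & 1) result = result * x % m;
        x = x * x % m;
        e >>= 1;
    }
    return result;
}

int main() {
    const long long MOD = 1e9 + 7;
    cout << modpow(2, 10, MOD) << "\n";   // 1024
    cout << modpow(3, 200, MOD) << "\n";  // 3^200 mod (10^9+7)
}
```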
Chapter 12. Advanced Graph Algorithms
Abstract
This chapter discusses a selection of advanced graph algorithms. Section 12.1 presents an algorithm for finding the strongly connected components of a graph. After this, we will learn how to efficiently solve the 2SAT problem using the algorithm. Section 12.2 focuses on Eulerian and Hamiltonian paths. An Eulerian path goes through each edge of the graph exactly once, and a Hamiltonian path visits each node exactly once. While the concepts look quite similar at first glance, the computational problems related to them are very different. Section 12.3 first shows how we can determine the maximum flow from a source to a sink in a graph. After this, we will see how to reduce several other graph problems to the maximum flow problem. Section 12.4 discusses properties of depth-first search and problems related to biconnected graphs.
Antti Laaksonen
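The sketch below shows one standard way to find strongly connected components, Kosaraju's two-pass algorithm (the chapter may present a different algorithm); the example graph is arbitrary.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Example directed graph and its reverse.
    int n = 5;
    vector<vector<int>> adj(n), radj(n);
    auto addEdge = [&](int a, int b) { adj[a].push_back(b); radj[b].push_back(a); };
    addEdge(0, 1); addEdge(1, 2); addEdge(2, 0); addEdge(2, 3); addEdge(3, 4);

    // First pass: order nodes by finishing time in the original graph.
    vector<int> order;
    vector<bool> visited(n, false);
    function<void(int)> dfs1 = [&](int u) {
        visited[u] = true;
        for (int v : adj[u]) if (!visited[v]) dfs1(v);
        order.push_back(u);
    };
    for (int i = 0; i < n; i++) if (!visited[i]) dfs1(i);

    // Second pass: traverse the reversed graph in reverse finishing order;
    // each traversal discovers exactly one strongly connected component.
    vector<int> comp(n, -1);
    int c = 0;
    function<void(int)> dfs2 = [&](int u) {
        comp[u] = c;
        for (int v : radj[u]) if (comp[v] == -1) dfs2(v);
    };
    for (int i = n - 1; i >= 0; i--) {
        if (comp[order[i]] == -1) { dfs2(order[i]); c++; }
    }

    // Nodes 0, 1, 2 form one component; 3 and 4 are components of their own.
    for (int i = 0; i < n; i++) cout << "node " << i << " -> component " << comp[i] << "\n";
}
```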
Chapter 13. Geometry
Abstract
This chapter discusses algorithm techniques related to geometry. The general goal of the chapter is to find ways to conveniently solve geometric problems, avoiding special cases and tricky implementations. Section 13.1 introduces the C++ complex number class which has useful tools for geometric problems. After this, we will learn to use cross products to solve various problems, such as testing whether two line segments intersect and calculating the distance from a point to a line. Finally, we discuss ways to calculate polygon areas and explore special properties of Manhattan distances. Section 13.2 focuses on sweep line algorithms which play an important role in computational geometry. We will see how to use such algorithms for counting intersection points, finding closest points, and constructing convex hulls.
Antti Laaksonen
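A minimal sketch of the complex-number and cross-product ideas mentioned above: points are stored as complex numbers, and a cross product tells on which side of a directed line a point lies. Note that instantiating std::complex with an integer type is technically outside the standard's guarantees, though it works with common compilers; the coordinates are arbitrary example values.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Points as complex numbers; the cross product of vectors a and b can be
// read from the imaginary part of conj(a) * b.
typedef long long ll;
typedef complex<ll> P;

ll cross(P a, P b) { return (conj(a) * b).imag(); }

int main() {
    // On which side of the directed line s1 -> s2 does point p lie?
    P s1(1, 1), s2(5, 3), p(3, 4);
    ll c = cross(s2 - s1, p - s1);
    if (c > 0) cout << "left of the line\n";
    else if (c < 0) cout << "right of the line\n";
    else cout << "on the line\n";
}
```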
Chapter 14. String Algorithms
Abstract
This chapter deals with topics related to string processing. Section 14.1 presents the trie structure which maintains a set of strings. After this, dynamic programming algorithms for determining longest common subsequences and edit distances are discussed. Section 14.2 discusses the string hashing technique which is a general tool for creating efficient string algorithms. The idea is to compare hash values of strings instead of their characters, which allows us to compare strings in constant time. Section 14.3 introduces the Z-algorithm, which determines, for each position of a string, the longest substring starting at that position that is also a prefix of the string. The Z-algorithm is an alternative way to solve many string problems that can also be solved using hashing. Section 14.4 discusses the suffix array structure, which can be used to solve some more advanced string problems.
Antti Laaksonen
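A sketch of the Z-algorithm described above, together with one typical application: finding occurrences of a pattern in a text by building the Z-array of pattern + separator + text. The separator character and the example strings are assumptions made for illustration.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Z-array: z[i] is the length of the longest substring starting at position i
// that is also a prefix of the string. The algorithm runs in O(n) time.
vector<int> zArray(const string& s) {
    int n = s.size();
    vector<int> z(n, 0);
    int l = 0, r = 0;    // [l, r) is the rightmost prefix match found so far
    for (int i = 1; i < n; i++) {
        if (i < r) z[i] = min(r - i, z[i - l]);
        while (i + z[i] < n && s[z[i]] == s[i + z[i]]) z[i]++;
        if (i + z[i] > r) { l = i; r = i + z[i]; }
    }
    return z;
}

int main() {
    // Pattern matching via the Z-array: occurrences of "abc" in the text
    // correspond to positions where the Z-value reaches the pattern length.
    string pattern = "abc", text = "abcxabcabc";
    string s = pattern + "#" + text;   // '#' is assumed not to occur in the inputs
    vector<int> z = zArray(s);
    for (int i = (int)pattern.size() + 1; i < (int)s.size(); i++) {
        if (z[i] >= (int)pattern.size())
            cout << "match at text position " << i - (int)pattern.size() - 1 << "\n";
    }
}
```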
Chapter 15. Additional Topics
Abstract
This final chapter presents a selection of advanced algorithms and data structures. Mastering the techniques of this chapter may sometimes help you to solve the most difficult problem in a programming contest. Section 15.1 discusses square root techniques for creating data structures and algorithms. Such solutions are often based on the idea of dividing a sequence of \(n\) elements into \(O(\sqrt{n})\) blocks, each of which consists of \(O(\sqrt{n})\) elements. Section 15.2 further explores the possibilities of segment trees. For example, we will see how to create a segment tree that supports both range queries and range updates at the same time. Section 15.3 presents the treap data structure which allows us to efficiently split an array into two parts and combine two arrays into a single array. Section 15.4 focuses on optimizing dynamic programming solutions. First we will learn the convex hull trick which is used with linear functions, and after this we will discuss the divide and conquer optimization and Knuth’s optimization. Section 15.5 deals with miscellaneous algorithm design techniques, such as meet in the middle and parallel binary search.
Antti Laaksonen
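As a small illustration of the square root technique mentioned in Section 15.1, this sketch splits an array into blocks of about \(\sqrt{n}\) elements so that point updates and range sum queries both take \(O(\sqrt{n})\) time; the array values are arbitrary.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Square root decomposition: each block stores the sum of its elements.
    vector<long long> a = {5, 8, 6, 3, 2, 7, 2, 6, 1};
    int n = a.size();
    int b = max(1, (int)sqrt(n));              // block size
    vector<long long> block((n + b - 1) / b, 0);
    for (int i = 0; i < n; i++) block[i / b] += a[i];

    // Point update: adjust both the array and the block sum.
    auto update = [&](int i, long long x) {
        block[i / b] += x - a[i];
        a[i] = x;
    };

    // Range sum over [l, r]: whole blocks are taken from the block array,
    // partial blocks at the ends are summed element by element.
    auto query = [&](int l, int r) {
        long long s = 0;
        int i = l;
        while (i <= r) {
            if (i % b == 0 && i + b - 1 <= r) { s += block[i / b]; i += b; }
            else { s += a[i]; i++; }
        }
        return s;
    };

    cout << query(2, 7) << "\n";  // 6 + 3 + 2 + 7 + 2 + 6 = 26
    update(4, 10);                // a[4]: 2 -> 10
    cout << query(2, 7) << "\n";  // 34
}
```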
Backmatter
Metadata
Title
Guide to Competitive Programming
Author
Antti Laaksonen
Copyright Year
2017
Electronic ISBN
978-3-319-72547-5
Print ISBN
978-3-319-72546-8
DOI
https://doi.org/10.1007/978-3-319-72547-5
