Pattern Recognition Primer
- 2026
- Book
- Authors
- Karol Przystalski
- Maciej J. Ogorzałek
- Jan K. Argasiński
- Wiesław Chmielnicki
- Publisher
- Springer Nature Switzerland
About this book
This textbook provides semester-length coverage of pattern recognition and classification, accessible to everyone who would like to understand how pattern recognition and machine learning work. It explores the most commonly used classification methods in an intelligible way. Unlike other books available for this course, it explains each method from top to bottom with all the necessary details. Every method described is illustrated with examples in Python. The presentation is designed to be highly accessible to students from a variety of disciplines with no experience in machine learning. Each chapter contains easy-to-understand code samples, as well as exercises to consolidate and test knowledge.
Table of Contents
Frontmatter
Chapter 1. Introduction to Pattern Recognition
Karol Przystalski, Maciej J. Ogorzałek, Jan K. Argasiński, Wiesław Chmielnicki
Abstract: Pattern recognition has become a very popular buzzword over the last few years and is widely used in many commercial solutions. In [1], we can find many trends for 2023, such as large language models, algorithms, deep learning, and so on. Most of these trends are more or less related to the topic of this book. What is important here is that for several years each of these trend lists has contained many references to artificial intelligence and pattern recognition. We predict an even broader expansion of pattern recognition usage in the upcoming years.
Chapter 2. Machine Learning Math Basics
Karol Przystalski, Maciej J. Ogorzałek, Jan K. Argasiński, Wiesław Chmielnicki
Abstract: The goal of this chapter is to explain several well-known mathematical terms that are used in the machine learning methods presented in this book. In the first part, we cover basic statistical terms such as standard deviation, variance, the coefficient matrix, and Pearson correlation. This is followed by probability terms and related topics such as combinatorics, conditional probability, and probability distributions. The third section, although not very extensive, covers a part crucial to every machine learning method: operations on matrices. The next section is about differential calculus. To understand what a gradient is, we first need to explain a few other terms; the first term we explain in that section is the limit.
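The basic statistical terms listed above can be sketched in a few lines of Python; the data here is a hypothetical toy sample used only for illustration:

```python
import numpy as np

# Two toy feature columns (hypothetical data, for illustration only)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

variance = x.var(ddof=0)           # population variance of x
std_dev = x.std(ddof=0)            # standard deviation = sqrt(variance)
pearson = np.corrcoef(x, y)[0, 1]  # Pearson correlation between x and y

print(variance, std_dev, pearson)
```

Since y grows almost perfectly linearly with x, the Pearson coefficient comes out close to 1, illustrating that the measure captures linear dependence.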
Chapter 3. Unsupervised Learning
Karol Przystalski, Maciej J. Ogorzałek, Jan K. Argasiński, Wiesław Chmielnicki
Abstract: Unsupervised methods are based on data sets that do not contain labels. This means that the algorithms learn using feature vectors only. This group of learning methods is also known under different names, depending on the context in which it is used. Unsupervised learning can be called learning without a teacher; it is the opposite of supervised learning, i.e., learning with a teacher. Unsupervised learning is also known as partitioning, segmentation, typology, numerical taxonomy, or clustering. The last term is one of the most commonly used, aside from unsupervised learning itself. A cluster is a set of elements/objects with the same label. Compared to supervised methods, the label used here is based on similarities between the elements of each cluster: some elements are more similar to each other than they are to the remaining elements. In other words, the goal of a clustering method is to find groups of objects that are most similar to each other. It is important to mention that when we say label in the context of unsupervised learning, we mean the testing part of a method. Labels are assigned during the learning phase: each element/object belongs to a group, and each group has its own label that differs from the labels of the other groups.
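The idea of grouping objects by similarity and only then assigning group labels can be illustrated with a minimal k-means sketch (one clustering method among the many the chapter covers); the two well-separated blobs below are hypothetical toy data:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # distance from every point to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)  # cluster label = index of nearest centroid
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two tight, well-separated toy blobs: 10 points near 0, 10 points near 10
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (10, 2)),
               np.random.default_rng(2).normal(10, 0.1, (10, 2))])
labels, centroids = kmeans(X, k=2)
```

After convergence, all points of one blob share one label and all points of the other blob share the other label, even though no labels were given as input, which is exactly the sense in which clustering assigns labels itself.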
Chapter 4. Introduction to Shallow Supervised Methods
Karol Przystalski, Maciej J. Ogorzałek, Jan K. Argasiński, Wiesław Chmielnicki
Abstract: In this chapter, we explain a few basic methods. Explaining the simple machine learning methods first makes it easier to understand the more complex ones. All the methods presented in this chapter are supervised methods. We start with linear classifiers, such as the Fisher classifier. To understand the linearity of classifiers, we discuss the k-nearest neighbors method, which is not linear by design, and compare it to Fisher's Linear Discriminant method. The last part of the linear section is dedicated to two regression methods: linear and logistic regression.
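Of the methods named above, k-nearest neighbors is the simplest to sketch: classify a new point by a majority vote among its k closest training points. The toy data below is hypothetical:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy training set: class 0 near the origin, class 1 near (5, 5)
X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
y_train = np.array([0, 0, 0, 1, 1, 1])

pred = knn_predict(X_train, y_train, np.array([5.5, 5.5]), k=3)
```

Because the decision depends on local neighborhoods rather than a single separating line, the resulting decision boundary is in general not linear, which is why the chapter contrasts this method with Fisher's linear discriminant.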
Chapter 5. Decision Trees
Karol Przystalski, Maciej J. Ogorzałek, Jan K. Argasiński, Wiesław Chmielnicki
Abstract: Decision trees are one of the most popular machine learning methods. One of the reasons is that they are easy to use and understand: a decision tree can be easily visualized and interpreted. There are tens of different decision tree variants [1-17]. A decision tree is a method that divides the feature space at each level of the tree. This makes it a non-linear method overall, even though it performs a linear split at each node. The tree starts with a root and consists of decision nodes and leaves. It splits the training set into smaller sets based on conditions related to one feature (univariate) or more features (multivariate). As a result of the division, we can get one or more smaller data sets of different sizes. The goal of a decision tree is to build the tree in such a way that each leaf contains objects of the same label. A tree can also be written as a set of rules, as it is based on a set of choices made at each node. That is why it is commonly used in decision-making software. It handles multiclass problems easily. We can also use decision trees to understand which feature has the greatest impact on the classification: the more often a feature is used in decision nodes, the higher its impact. Unlike some other methods, such as the hidden layers of a neural network, decision trees do not work like a black box. Another advantage of decision trees is their performance: compared to most methods, they are fast. On the other hand, a small change in the training data can significantly change the rules and the accuracy.
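The core step described above, splitting the training set on a condition over one feature so that the resulting subsets are as label-pure as possible, can be sketched as an exhaustive univariate split search. This sketch scores candidate splits with Gini impurity, one common choice among several the family of tree variants uses; the data is a hypothetical toy set:

```python
import numpy as np

def gini(y):
    """Gini impurity of a label array: 0 means the set is label-pure."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """Search every feature and threshold for the univariate split that
    minimizes the weighted Gini impurity of the two child sets."""
    best = (None, None, np.inf)  # (feature index, threshold, impurity score)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            left, right = y[X[:, f] <= t], y[X[:, f] > t]
            if len(left) == 0 or len(right) == 0:
                continue  # skip splits that leave one side empty
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best[2]:
                best = (f, t, score)
    return best

# Toy set: feature 0 separates the classes perfectly at threshold 2
X = np.array([[1, 9], [2, 8], [8, 1], [9, 2]], dtype=float)
y = np.array([0, 0, 1, 1])
feature, threshold, score = best_split(X, y)
```

A full tree builder would apply this search recursively to each child set until the leaves are pure, which is exactly the top-down division the abstract describes.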
Chapter 6. Support Vector Machine
Karol Przystalski, Maciej J. Ogorzałek, Jan K. Argasiński, Wiesław Chmielnicki
Abstract: The Support Vector Machine (SVM) is a classifier that was fully introduced by Vapnik in [122, 123]; however, it was first mentioned in [124]. The standard SVM is a binary linear classifier, i.e., it can separate the samples of two classes if and only if they are linearly separable. SVM tries to find an optimal separating hyperplane, that is, a hyperplane that distinguishes the elements of the two classes in an efficient way. What exactly we mean by optimal and efficient is described later in this chapter. The equation describing the hyperplane is calculated using the samples from the training data set. This means that noisy samples can affect the result of the classification. To mitigate this, the so-called soft margin approach was proposed by Cortes and Vapnik in [125]. This approach is presented in one of the sections of the chapter.
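A minimal sketch of the soft-margin idea, not the chapter's derivation, is subgradient descent on the hinge loss of a linear SVM, where the parameter C trades margin width against tolerated violations; the data and hyperparameters below are hypothetical toy choices:

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=200):
    """Soft-margin linear SVM trained by subgradient descent on
    (1/2)||w||^2 + C * sum_i max(0, 1 - y_i (w.x_i + b)).
    Labels y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:
                # point is inside the margin (or misclassified): push it out
                w -= lr * (w / n - C * y[i] * X[i])
                b += lr * C * y[i]
            else:
                # point is safely outside the margin: only shrink w
                w -= lr * (w / n)
    return w, b

# Linearly separable toy classes on opposite sides of the origin
X = np.array([[-2.0, -1.0], [-1.0, -2.0], [1.0, 2.0], [2.0, 1.0]])
y = np.array([-1, -1, 1, 1])
w, b = train_linear_svm(X, y)
```

The hyperplane is the set of points where `X @ w + b` equals zero; shrinking C makes the margin term dominate, so noisy samples near the boundary influence the solution less, which is the motivation behind the soft-margin formulation.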
Chapter 7. Ensemble Methods
Karol Przystalski, Maciej J. Ogorzałek, Jan K. Argasiński, Wiesław Chmielnicki
Abstract: Ensemble methods, also known as combined classifiers, are a group of methods that combine more than one classifier to get better results than each classifier achieves on its own.
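The simplest way to combine classifiers, one of several strategies an ensemble chapter typically covers, is a majority vote over their predictions; the three classifier outputs below are hypothetical:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier prediction lists by majority vote per sample."""
    # zip(*predictions) groups the classifiers' votes sample by sample
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# Hypothetical outputs of three classifiers on four samples
clf1 = [0, 1, 1, 0]
clf2 = [0, 1, 0, 0]
clf3 = [1, 1, 1, 0]
combined = majority_vote([clf1, clf2, clf3])  # → [0, 1, 1, 0]
```

Even though each individual classifier makes a mistake on some sample, the combined vote can still be right on every sample, which is the intuition behind ensembles.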
Chapter 8. Neural Networks
Karol Przystalski, Maciej J. Ogorzałek, Jan K. Argasiński, Wiesław Chmielnicki
Abstract: Natural (biological) neurons are the fundamental building blocks of the nervous system, particularly the brain. These biological units process and transmit information using electrical and chemical signals.
Backmatter
- Title
- Pattern Recognition Primer
- Authors
- Karol Przystalski
- Maciej J. Ogorzałek
- Jan K. Argasiński
- Wiesław Chmielnicki
- Copyright Year
- 2026
- Publisher
- Springer Nature Switzerland
- Electronic ISBN
- 978-3-031-91816-2
- Print ISBN
- 978-3-031-91815-5
- DOI
- https://doi.org/10.1007/978-3-031-91816-2
PDF files of this book have been created in accordance with the PDF/UA-1 standard to enhance accessibility, including screen reader support, described non-text content (images, graphs), bookmarks for easy navigation, keyboard-friendly links and forms, and searchable, selectable text. We recognize the importance of accessibility, and we welcome queries about accessibility for any of our products. If you have a question or an access need, please get in touch with us at accessibilitysupport@springernature.com.