Over the past decade, we've witnessed a fundamental shift in how infrastructure is built, deployed, and run. The rise of reliability engineering is a response to systems' increasing complexity and scale. Without its tools and methods, managing and monitoring environments of hundreds or thousands of hosts and services would be an impossible task.
The mechanisms behind containers have been part of the Linux kernel for years, but Docker’s API lowered the technical threshold for using them. By making containers more accessible and easier to use, Docker revolutionized the way enterprises develop and deploy code and infrastructure. The name became synonymous with the technology: when someone says, “We’ll put it in Docker,” they’re really talking about Linux containers, and the two terms are often used interchangeably.
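That lowered threshold is easy to see in practice: what once required manually wiring up namespaces, cgroups, and a root filesystem is now a single command. The image and port below are just illustrative choices:

```sh
# Start an isolated nginx container in the background,
# mapping host port 8080 to the container's port 80.
# All namespace, cgroup, and filesystem setup is handled by Docker.
docker run --detach --publish 8080:80 nginx
```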
Infrastructure as Code is a model for defining infrastructure as a set of rules or instructions that automation tools, like Docker, follow when building platforms to support software and applications. Code can be saved to source control, shared among teams, and subject to review and inspection. It leads to repeatable, reliable results, and allows teams to confidently provision hosts, storage, and networks without specialized knowledge or skills.
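A Docker Compose file is one common form this takes: a declarative description of services, storage, and networks that lives in source control alongside the application. The service, volume, and network names here are purely illustrative:

```yaml
# docker-compose.yml — declarative infrastructure that can be
# versioned, reviewed, and reproduced like any other code.
services:
  web:
    image: nginx:1.25          # pinned version for repeatable deployments
    ports:
      - "8080:80"
    volumes:
      - web-data:/usr/share/nginx/html   # declared storage
    networks:
      - frontend                          # declared network

volumes:
  web-data:

networks:
  frontend:
```

Running `docker compose up -d` against this file yields the same environment every time, on any host with Docker installed, without specialized provisioning knowledge.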
It’s unusual to see environments dedicated to infrastructure testing. Test systems are typically shared by end users, limiting their usefulness for any effort that might jeopardize their availability. They’re effectively production systems for internal users.