
About This Book

Pro Asynchronous Programming with .NET teaches the essential skill of asynchronous programming in .NET. It answers critical questions in .NET application development, such as: How do I keep my program responsive at all times to keep my users happy? How do I make the most of the available hardware? How can I improve performance?

In the modern world, users expect more and more from their applications and devices, and multi-core hardware has the potential to provide it. But it takes carefully crafted code to turn that potential into responsive, scalable applications.

With Pro Asynchronous Programming with .NET you will:

- Meet the underlying model for asynchrony on Windows: threads.
- Learn how to perform long blocking operations away from your UI thread to keep your UI responsive, then weave the results back in as seamlessly as possible.
- Master the async/await model of asynchrony in .NET, which makes asynchronous programming simpler and more achievable than ever before.
- Solve common problems in parallel programming with modern async techniques.
- Get under the hood of your asynchronous code with debugging techniques and insights from Visual Studio and beyond.

In the past, asynchronous programming was seen as an advanced skill. It's now a must for all modern developers. Pro Asynchronous Programming with .NET is your practical guide to using this important programming skill anywhere on the .NET platform.



Chapter 1. An Introduction to Asynchronous Programming

There are many holy grails in software development, but probably none so eagerly sought, and yet so woefully unachieved, as making asynchronous programming simple. This isn’t because the issues are currently unknown; rather, they are very well known, but just very hard to solve in an automated way. The goal of this book is to help you understand why asynchronous programming is important, what issues make it hard, and how to be successful writing asynchronous code on the .NET platform.
Richard Blewett, Andrew Clymer

Chapter 2. The Evolution of the .NET Asynchronous API

In February 2002, .NET version 1.0 was released. From this very first release it was possible to build parts of your application that ran asynchronously. The APIs, patterns, underlying infrastructure, or all three have changed, to some degree, with almost every subsequent release, each attempting to make life easier or richer for the .NET developer. To understand why the .NET async world looks the way it does, and why certain design decisions were made, it is necessary to take a tour through its history. We will then build on this in future chapters as we describe how to build async code today, and which pieces of the async legacy still merit a place in your applications today.
Richard Blewett, Andrew Clymer

Chapter 3. Tasks

With the release of .NET 4.0, Microsoft introduced yet another API for building asynchronous applications: the Task Parallel Library (TPL). The key difference between TPL and previous APIs is that TPL attempts to unify the asynchronous programming model. It provides a single type called a Task to represent all asynchronous operations. In addition to Tasks, TPL introduces standardized cancellation and reporting of progress—traditionally something developers rolled for themselves. This chapter will examine these new constructs and how to take advantage of them to perform asynchronous operations.
Richard Blewett, Andrew Clymer
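As a minimal illustration of the unified model this chapter describes, the sketch below starts work as a Task and wires in the standardized cancellation support via CancellationToken. The summing loop is a stand-in for real work, not an example from the book:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

var cts = new CancellationTokenSource();
CancellationToken token = cts.Token;

// Run work on the thread pool as a Task; the delegate cooperatively
// observes the CancellationToken so the caller can cancel it.
Task<int> work = Task.Run(() =>
{
    int sum = 0;
    for (int i = 1; i <= 100; i++)
    {
        token.ThrowIfCancellationRequested(); // honor a pending cancellation request
        sum += i;
    }
    return sum;
}, token);

int result = await work; // asynchronously wait for the outcome
Console.WriteLine(result); // 5050
```

Because every asynchronous operation is represented by the same Task type, the same cancellation and continuation machinery applies whether the work is compute- or I/O-bound.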

Chapter 4. Basic Thread Safety

In the last two chapters, we looked at numerous ways of starting work that will run asynchronously. However, in all of the examples, that work has been relatively self-contained. Asynchrony opens up a whole new class of bugs that can infect your code: race conditions, deadlocks, and data corruption to name just three. We will look at how you debug asynchronous code in Chapters 15 and 16, but our starting point has to be how to prevent these issues in the first place. In this chapter, we will examine the need for thread safety and then introduce the primary tools used to achieve it. In Chapter 5, we will take this idea further and look at the constructs introduced in .NET 4.0 that take some of the work off our shoulders.
Richard Blewett, Andrew Clymer
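The kind of race condition this chapter guards against can be sketched in a few lines. The `lock` below is the primary tool the chapter introduces; without it, some increments would be lost (this is an illustrative sketch, not code from the book):

```csharp
using System;
using System.Threading.Tasks;

int counter = 0;
object gate = new object();

// counter++ is a read-modify-write: without the lock, two threads can
// read the same value and one increment is silently lost.
Parallel.For(0, 100_000, _ =>
{
    lock (gate) { counter++; }
});

Console.WriteLine(counter); // 100000
```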

Chapter 5. Concurrent Data Structures

In the previous chapter we introduced the need to consider thread safety when sharing state across multiple threads. The techniques demonstrated required the developer to understand the possible race conditions and select the cheapest synchronization technique to satisfy thread safety. These techniques, while essential, can often become tedious and make the simplest of algorithms seemingly overly complicated and hard to maintain. This chapter will explore the use of built-in concurrent data structures shipped with TPL that will simplify our multithreaded code while maximizing concurrency and efficiency.
Richard Blewett, Andrew Clymer
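One of the TPL data structures this chapter covers is ConcurrentDictionary, which moves the check-then-update race inside the collection. A minimal sketch (the key name is arbitrary):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

var counts = new ConcurrentDictionary<string, int>();

// Many threads update the same key; AddOrUpdate performs the
// check-then-update atomically, so no explicit lock is required.
Parallel.For(0, 10_000, _ =>
    counts.AddOrUpdate("hits", 1, (key, current) => current + 1));

Console.WriteLine(counts["hits"]); // 10000
```

Compared with a hand-rolled lock around a plain Dictionary, this keeps the algorithm readable while the collection handles the synchronization internally.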

Chapter 6. Asynchronous UI

Ever since the first graphical user interface, there has been a need to provide users with the feeling that while the computer is crunching numbers, the UI can still respond to input. This is perfectly reasonable: how many physical devices with buttons do you have that suddenly stop responding the moment they start doing any work?
Richard Blewett, Andrew Clymer

Chapter 7. async and await

In view of Windows 8’s mantra of “Fast and Fluid,” it has never been more important to ensure that UIs don’t stall. To ensure that UI developers have no excuses for not using asynchronous methods, the C# language introduces two new keywords: async and await. These two little gems make consuming and composing asynchronous methods as easy as their synchronous counterparts.
Richard Blewett, Andrew Clymer
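The shape of the two keywords can be shown in a few lines. `Task.Delay` stands in here for a real long-running I/O call; the method name is hypothetical:

```csharp
using System;
using System.Threading.Tasks;

// A stand-in for a long-running operation; await yields the thread
// instead of blocking it, so a UI thread would stay responsive.
static async Task<string> FetchGreetingAsync()
{
    await Task.Delay(100);
    return "hello";
}

string greeting = await FetchGreetingAsync(); // resumes here when the task completes
Console.WriteLine(greeting); // hello
```

The calling code reads exactly like its synchronous counterpart, which is the point: the compiler generates the continuation plumbing that developers previously wrote by hand.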

Chapter 8. Everything a Task

In Chapter 7 you discovered how the async and await keywords simplify the composing and consuming of Task-based asynchronous logic. Also, in Chapter 3 we mentioned that a Task represents a piece of asynchronous activity. This asynchronous activity could be compute but could as easily be I/O. An example of a noncompute Task is when you turned an IAsyncResult into a Task utilizing Task.Factory.FromAsync. If you could literally represent anything as a Task, then you could have more areas of your code that could take advantage of the async and await keywords. In this chapter you will discover there is a very simple API to achieve just this. Taking advantage of this API, we will show you a series of common use cases, from an efficient version of WhenAny to stubbing out Task-based APIs for the purpose of unit testing.
Richard Blewett, Andrew Clymer
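The "very simple API" referred to here is TaskCompletionSource, which lets you manufacture a Task and complete it yourself. As a hedged sketch, the helper below (the name `WaitForTickAsync` is ours, not the book's) wraps an event-based API, System.Timers.Timer, as an awaitable Task:

```csharp
using System;
using System.Threading.Tasks;

// Wrap a classic event-based API (System.Timers.Timer here) as a Task
// using TaskCompletionSource, so callers can simply await it.
static Task WaitForTickAsync(double milliseconds)
{
    var tcs = new TaskCompletionSource<bool>();
    var timer = new System.Timers.Timer(milliseconds) { AutoReset = false };
    timer.Elapsed += (sender, args) =>
    {
        timer.Dispose();
        tcs.TrySetResult(true); // complete the Task when the event fires
    };
    timer.Start();
    return tcs.Task;
}

await WaitForTickAsync(50);
Console.WriteLine("tick awaited");
```

The same pattern underpins stubbing Task-based APIs for unit tests: the test controls exactly when, and with what result, the Task completes.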

Chapter 9. Server-Side Async

In Chapter 6 we looked at asynchrony on the client side in some depth. Strong as the case is for using asynchronous execution on the client, it is even stronger on the server side. One could even say that the server side is fundamentally flawed unless request execution is handled, to some degree, asynchronously. In this chapter we will examine the reasons for implementing server-side asynchrony and the challenges it presents. Then we will analyze the asynchronous features in .NET 4.0 and 4.5 in the major server-side frameworks: ASP.NET (in a number of guises) and Windows Communication Foundation (WCF).
Richard Blewett, Andrew Clymer

Chapter 10. TPL Dataflow

Classic concurrent programming simply took synchronous programming and said, “Let us have lots of synchronous execution running at the same time, through the use of threads.” To this end we have used the Task abstraction to describe concurrency; all is good until we introduce mutable shared state. Once mutable shared state is involved, we have to consider synchronization strategies. Adding the correct and most efficient form of synchronization adds complexity to our code. The one glimmer of hope is that we can retain some degree of elegance through the use of concurrent data structures and more complex synchronization primitives; but the fact remains that we still have to care about mutable shared state.
Richard Blewett, Andrew Clymer
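Dataflow's answer to mutable shared state is to confine state inside a block and communicate by posting messages to it. A minimal sketch using ActionBlock (this requires the System.Threading.Tasks.Dataflow NuGet package; the running total is our illustrative state, not an example from the book):

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow; // NuGet: System.Threading.Tasks.Dataflow

// The block owns its state; posted messages are processed one at a time
// (MaxDegreeOfParallelism defaults to 1), so no explicit lock is needed.
int total = 0;
var adder = new ActionBlock<int>(n => total += n);

for (int i = 1; i <= 100; i++)
    adder.Post(i);

adder.Complete();       // signal that no more messages are coming
await adder.Completion; // wait for the block to drain its queue
Console.WriteLine(total); // 5050
```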

Chapter 11. Parallel Programming

No book on asynchronous programming would be complete without discussing how to improve the performance of your computationally intensive algorithms. Back in March 2005, Herb Sutter, who works for Microsoft, coined the phrase “The free lunch is over,” and he wasn't referring to the Microsoft canteen. He was referring to the fact that prior to that date, when engineers were faced with the need to make their code run faster, they had two choices. They could profile and optimize the code to squeeze a bit more out of the CPU, or just wait a few months and Intel would produce a new, faster CPU. The latter was known as the “free lunch,” as it didn’t require engineering effort. Around March 2005, the computer industry, faced with the need to keep delivering faster and faster computational units, and the fact that clock speeds couldn’t keep growing at historical rates, made the design decision to add more cores. While more cores offer the possibility of greater throughput, single-threaded applications won't run any faster on multicore systems, unlike CPUs of the past. Making the code run faster now requires engineering effort. Algorithms have to be rewritten to spread the work across multiple cores; hence “the free lunch is over.”
Richard Blewett, Andrew Clymer
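Spreading work across cores is, at its simplest, a one-line change with PLINQ. A hedged sketch (a cheap sum of squares stands in for a genuinely expensive computation):

```csharp
using System;
using System.Linq;

// AsParallel partitions the range across the available cores; the
// per-element work must be side-effect free for this to be safe.
long sumOfSquares = Enumerable.Range(1, 1000)
    .AsParallel()
    .Select(n => (long)n * n)
    .Sum();

Console.WriteLine(sumOfSquares); // 333833500
```

For trivially cheap work like this the parallel version may actually be slower than the sequential one, which is why the chapter's engineering-effort point matters: parallelism pays off only when the work dominates the coordination cost.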

Chapter 12. Task Scheduling

You saw in Chapter 6 how, when creating a continuation, you can pass a scheduler on which to execute the task. The example in the chapter used the out-of-the-box SynchronizationContextTaskScheduler to push task execution on to the UI thread. It turns out, however, that there is nothing special about the SynchronizationContextTaskScheduler; the task scheduler is a pluggable component. .NET 4.5 introduced another specialized scheduler, but beyond that you can write task schedulers yourself. This chapter looks at the new scheduler introduced in .NET 4.5 and how to write a custom task scheduler. Writing custom task schedulers can be fairly straightforward, but there are some issues that you need to be aware of.
Richard Blewett, Andrew Clymer
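We believe the .NET 4.5 scheduler referred to here is ConcurrentExclusiveSchedulerPair. As a sketch of why a pluggable scheduler is useful, the example below queues work to its ExclusiveScheduler, which guarantees the tasks never run concurrently with one another:

```csharp
using System;
using System.Threading.Tasks;

// ConcurrentExclusiveSchedulerPair offers reader/writer-style scheduling:
// tasks on ConcurrentScheduler may overlap, tasks on ExclusiveScheduler
// run strictly one at a time.
var pair = new ConcurrentExclusiveSchedulerPair();
var exclusive = new TaskFactory(pair.ExclusiveScheduler);

int shared = 0;
var writers = new Task[100];
for (int i = 0; i < writers.Length; i++)
    writers[i] = exclusive.StartNew(() => shared++); // serialized by the scheduler: no lock needed

Task.WaitAll(writers);
Console.WriteLine(shared); // 100
```

Here the synchronization lives in the scheduler rather than in the tasks themselves, which is the same idea a custom TaskScheduler lets you generalize.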

Chapter 13. Debugging Async with Visual Studio

Debugging multithreaded applications is often nontrivial. This is because they have multiple threads of execution running asynchronously and, to some degree, independently. These threads can sometimes interact in unexpected ways, causing your code to malfunction. However, exactly when the threads execute depends on the Windows thread scheduler. It is therefore possible that one instance of a program will run perfectly fine whereas another will crash—and the only thing that is different is how and when the different threads were scheduled.
Richard Blewett, Andrew Clymer

Chapter 14. Debugging Async—Beyond Visual Studio

Visual Studio is a very powerful debugging tool. However, by its very nature, it struggles to provide insight into issues experienced on production machines. Developers cannot install development tools on production machines or attach an interactive debugger; as you have seen, hitting a breakpoint halts execution of all threads in the process, which means the production system halts. Instead, we need an approach that gives us data that we can mine, offline, to discover the root cause of a bug or performance issue. Generally this means using memory dumps.
Richard Blewett, Andrew Clymer

