diff --git a/rustbook-en/ci/dictionary.txt b/rustbook-en/ci/dictionary.txt
index 9f305940..5807c2b2 100644
--- a/rustbook-en/ci/dictionary.txt
+++ b/rustbook-en/ci/dictionary.txt
@@ -253,6 +253,7 @@ interoperate
IntoFuture
IntoIterator
intra
+intratask
InvalidDigit
invariants
ioerror
@@ -360,6 +361,7 @@ nondeterministic
nonequality
nongeneric
noplayground
+NoStarch
NotFound
nsprust
null's
@@ -523,6 +525,7 @@ suboptimal
subpath
subslices
substring
+subtasks
subteams
subtree
subtyping
diff --git a/rustbook-en/src/SUMMARY.md b/rustbook-en/src/SUMMARY.md
index 12a4c43c..5a83bd0b 100644
--- a/rustbook-en/src/SUMMARY.md
+++ b/rustbook-en/src/SUMMARY.md
@@ -101,12 +101,12 @@
- [Shared-State Concurrency](ch16-03-shared-state.md)
- [Extensible Concurrency with the `Sync` and `Send` Traits](ch16-04-extensible-concurrency-sync-and-send.md)
-- [Async and Await](ch17-00-async-await.md)
+- [Fundamentals of Asynchronous Programming: Async, Await, Futures, and Streams](ch17-00-async-await.md)
- [Futures and the Async Syntax](ch17-01-futures-and-syntax.md)
- - [Concurrency With Async](ch17-02-concurrency-with-async.md)
+ - [Applying Concurrency with Async](ch17-02-concurrency-with-async.md)
- [Working With Any Number of Futures](ch17-03-more-futures.md)
- - [Streams](ch17-04-streams.md)
- - [Digging Into the Traits for Async](ch17-05-traits-for-async.md)
+ - [Streams: Futures in Sequence](ch17-04-streams.md)
+ - [A Closer Look at the Traits for Async](ch17-05-traits-for-async.md)
- [Futures, Tasks, and Threads](ch17-06-futures-tasks-threads.md)
- [Object Oriented Programming Features of Rust](ch18-00-oop.md)
diff --git a/rustbook-en/src/appendix-01-keywords.md b/rustbook-en/src/appendix-01-keywords.md
index 8e00f34f..4e7cdc7d 100644
--- a/rustbook-en/src/appendix-01-keywords.md
+++ b/rustbook-en/src/appendix-01-keywords.md
@@ -69,9 +69,7 @@ Rust for potential future use.
- `box`
- `do`
- `final`
-
-* `gen`
-
+- `gen`
- `macro`
- `override`
- `priv`
diff --git a/rustbook-en/src/ch17-00-async-await.md b/rustbook-en/src/ch17-00-async-await.md
index a69fcccf..1a656139 100644
--- a/rustbook-en/src/ch17-00-async-await.md
+++ b/rustbook-en/src/ch17-00-async-await.md
@@ -1,122 +1,139 @@
-# Async and Await
-
-Many operations we ask the computer to do can take a while to finish. For
-example, if you used a video editor to create a video of a family celebration,
-exporting it could take anywhere from minutes to hours. Similarly, downloading a
-video shared by someone in your family might take a long time. It would be nice
-if we could do something else while we are waiting for those long-running
-processes to complete.
-
-The video export will use as much CPU and GPU power as it can. If you only had
-one CPU core, and your operating system never paused that export until it
-completed, you couldn’t do anything else on your computer while it was running.
-That would be a pretty frustrating experience, though. Instead, your computer’s
-operating system can—and does!—invisibly interrupt the export often enough to
-let you get other work done along the way.
-
-The file download is different. It does not take up very much CPU time. Instead,
-the CPU needs to wait on data to arrive from the network. While you can start
-reading the data once some of it is present, it might take a while for the rest
-to show up. Even once the data is all present, a video can be quite large, so it
-might take some time to load it all. Maybe it only takes a second or two—but
-that’s a very long time for a modern processor, which can do billions of
-operations every second. It would be nice to be able to put the CPU to use for
-other work while waiting for the network call to finish—so, again, your
-operating system will invisibly interrupt your program so other things can
-happen while the network operation is still ongoing.
-
-> Note: The video export is the kind of operation which is often described as
-> “CPU-bound” or “compute-bound”. It’s limited by the speed of the computer’s
-> ability to process data within the _CPU_ or _GPU_, and how much of that speed
-> it can use. The video download is the kind of operation which is often
-> described as “IO-bound,” because it’s limited by the speed of the computer’s
-> _input and output_. It can only go as fast as the data can be sent across the
-> network.
+# Fundamentals of Asynchronous Programming: Async, Await, Futures, and Streams
+
+Many operations we ask the computer to do can take a while to finish. It would
+be nice if we could do something else while we are waiting for those
+long-running processes to complete. Modern computers offer two techniques for
+working on more than one operation at a time: parallelism and concurrency. Once
+we start writing programs that involve parallel or concurrent operations,
+though, we quickly encounter new challenges inherent to *asynchronous
+programming*, where operations may not finish sequentially in the order they
+were started. This chapter builds on Chapter 16’s use of threads for parallelism
+and concurrency by introducing an alternative approach to asynchronous
+programming: Rust’s Futures, Streams, the `async` and `await` syntax that
+supports them, and the tools for managing and coordinating between asynchronous
+operations.
+
+Let’s consider an example. Say you’re exporting a video you’ve created of a
+family celebration, an operation that could take anywhere from minutes to hours.
+The video export will use as much CPU and GPU power as it can. If you had only
+one CPU core and your operating system didn’t pause that export until it
+completed—that is, if it executed the export _synchronously_—you couldn’t do
+anything else on your computer while that task was running. That would be a
+pretty frustrating experience. Fortunately, your computer’s operating system
+can, and does, invisibly interrupt the export often enough to let you get other
+work done simultaneously.
+
+Now say you’re downloading a video shared by someone else, which can also take a
+while but does not take up as much CPU time. In this case, the CPU has to wait
+for data to arrive from the network. While you can start reading the data once
+it starts to arrive, it might take some time for all of it to show up. Even once
+the data is all present, if the video is quite large, it could take at least a
+second or two to load it all. That might not sound like much, but it’s a very
+long time for a modern processor, which can perform billions of operations every
+second. Again, your operating system will invisibly interrupt your program to
+allow the CPU to perform other work while waiting for the network call to
+finish.
+
+The video export is an example of a _CPU-bound_ or _compute-bound_ operation.
+It’s limited by the computer’s potential data processing speed within the CPU or
+GPU, and how much of that speed it can dedicate to the operation. The video
+download is an example of an _IO-bound_ operation, because it’s limited by the
+speed of the computer’s _input and output_; it can only go as fast as the data
+can be sent across the network.
In both of these examples, the operating system’s invisible interrupts provide a
-form of concurrency. That concurrency only happens at the level of a whole
+form of concurrency. That concurrency happens only at the level of the entire
program, though: the operating system interrupts one program to let other
programs get work done. In many cases, because we understand our programs at a
-much more granular level than the operating system does, we can spot lots of
-opportunities for concurrency that the operating system cannot see.
+much more granular level than the operating system does, we can spot
+opportunities for concurrency that the operating system can’t see.
For example, if we’re building a tool to manage file downloads, we should be
-able to write our program in such a way that starting one download does not lock
-up the UI, and users should be able to start multiple downloads at the same
-time. Many operating system APIs for interacting with the network are
-_blocking_, though. That is, these APIs block the program’s progress until the
-data that they are processing is completely ready.
-
-> Note: This is how _most_ function calls work, if you think about it! However,
-> we normally reserve the term “blocking” for function calls which interact with
+able to write our program so that starting one download won’t lock up the UI,
+and users should be able to start multiple downloads at the same time. Many
+operating system APIs for interacting with the network are _blocking_, though;
+that is, they block the program’s progress until the data they’re processing is
+completely ready.
+
+> Note: This is how _most_ function calls work, if you think about it. However,
+> the term _blocking_ is usually reserved for function calls that interact with
> files, the network, or other resources on the computer, because those are the
-> places where an individual program would benefit from the operation being
+> cases where an individual program would benefit from the operation being
> _non_-blocking.
We could avoid blocking our main thread by spawning a dedicated thread to
-download each file. However, we would eventually find that the overhead of those
-threads was a problem. It would also be nicer if the call were not blocking in
-the first place. Last but not least, it would be better if we could write in the
-same direct style we use in blocking code. Something similar to this:
+download each file. However, the overhead of those threads would eventually
+become a problem. It would be preferable if the call didn’t block in the first
+place. It would also be better if we could write in the same direct style we use
+in blocking code, similar to this:
```rust,ignore,does_not_compile
let data = fetch_data_from(url).await;
println!("{data}");
```
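For contrast with the direct async style just shown, here is a rough sketch (not
one of the book's listings) of the thread-per-download approach described above.
The `fetch_data_from` function is a hypothetical blocking stand-in for a real
network call:

```rust
use std::thread;

// Hypothetical blocking helper standing in for a real network request.
fn fetch_data_from(url: &str) -> String {
    format!("data downloaded from {url}")
}

fn main() {
    let urls = ["https://example.com/a", "https://example.com/b"];

    // Spawn one dedicated thread per download so the main thread stays free.
    let handles: Vec<_> = urls
        .into_iter()
        .map(|url| thread::spawn(move || fetch_data_from(url)))
        .collect();

    // Wait for each download; with many downloads, this per-thread overhead is
    // what eventually becomes a problem.
    for handle in handles {
        let data = handle.join().unwrap();
        println!("{data}");
    }
}
```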
-That is exactly what Rust’s async abstraction gives us. Before we see how this
-works in practice, though, we need to take a short detour into the differences
-between parallelism and concurrency.
+That is exactly what Rust’s _async_ (short for _asynchronous_) abstraction gives
+us. In this chapter, you’ll learn all about async as we cover the following
+topics:
+
+- How to use Rust’s `async` and `await` syntax
+- How to use the async model to solve some of the same challenges we looked at
+ in Chapter 16
+- How multithreading and async provide complementary solutions that you can
+ combine in many cases
+
+Before we see how async works in practice, though, we need to take a short
+detour to discuss the differences between parallelism and concurrency.
### Parallelism and Concurrency
-In the previous chapter, we treated parallelism and concurrency as mostly
-interchangeable. Now we need to distinguish between them more precisely, because
-the differences will show up as we start working.
+We’ve treated parallelism and concurrency as mostly interchangeable so far. Now
+we need to distinguish between them more precisely, because the differences will
+show up as we start working.
-Consider the different ways a team could split up work on a software project. We
-could assign a single individual multiple tasks, or we could assign one task per
-team member, or we could do a mix of both approaches.
+Consider the different ways a team could split up work on a software project.
+You could assign a single member multiple tasks, assign each member one task, or
+use a mix of the two approaches.
When an individual works on several different tasks before any of them is
complete, this is _concurrency_. Maybe you have two different projects checked
out on your computer, and when you get bored or stuck on one project, you switch
to the other. You’re just one person, so you can’t make progress on both tasks
-at the exact same time—but you can multi-task, making progress on multiple
-tasks by switching between them.
+at the exact same time, but you can multi-task, making progress on one at a time
+by switching between them (see Figure 17-1).
-When you agree to split up a group of tasks between the people on the team, with
-each person taking one task and working on it alone, this is _parallelism_. Each
-person on the team can make progress at the exact same time.
+When the team splits up a group of tasks by having each member take one task and
+work on it alone, this is _parallelism_. Each person on the team can make
+progress at the exact same time (see Figure 17-2).
-With both of these situations, you might have to coordinate between different
-tasks. Maybe you _thought_ the task that one person was working on was totally
-independent from everyone else’s work, but it actually needs something finished
-by another person on the team. Some of the work could be done in parallel, but
-some of it was actually _serial_: it could only happen in a series, one thing
-after the other, as in Figure 17-3.
+In both of these workflows, you might have to coordinate between different
+tasks. Maybe you _thought_ the task assigned to one person was totally
+independent from everyone else’s work, but it actually requires another person
+on the team to finish their task first. Some of the work could be done in
+parallel, but some of it was actually _serial_: it could only happen in a
+series, one task after the other, as in Figure 17-3.
@@ -130,24 +147,17 @@ coworker are no longer able to work in parallel, and you’re also no longer abl
to work concurrently on your own tasks.
The same basic dynamics come into play with software and hardware. On a machine
-with a single CPU core, the CPU can only do one operation at a time, but it can
-still work concurrently. Using tools such as threads, processes, and async, the
-computer can pause one activity and switch to others before eventually cycling
-back to that first activity again. On a machine with multiple CPU cores, it can
-also do work in parallel. One core can be doing one thing while another core
-does something completely unrelated, and those actually happen at the same
-time.
+with a single CPU core, the CPU can perform only one operation at a time, but it
+can still work concurrently. Using tools such as threads, processes, and async,
+the computer can pause one activity and switch to others before eventually
+cycling back to that first activity again. On a machine with multiple CPU cores,
+it can also do work in parallel. One core can be performing one task while
+another core performs a completely unrelated one, and those operations actually
+happen at the same time.
When working with async in Rust, we’re always dealing with concurrency.
Depending on the hardware, the operating system, and the async runtime we are
-using—more on async runtimes shortly!—that concurrency may also use parallelism
+using (more on async runtimes shortly), that concurrency may also use parallelism
under the hood.
-Now, let’s dive into how async programming in Rust actually works! In the rest
-of this chapter, we will:
-
-- see how to use Rust’s `async` and `await` syntax
-- explore how to use the async model to solve some of the same challenges we
- looked at in Chapter 16
-- look at how multithreading and async provide complementary solutions, which
- you can even use together in many cases
+Now, let’s dive into how async programming in Rust actually works.
diff --git a/rustbook-en/src/ch17-01-futures-and-syntax.md b/rustbook-en/src/ch17-01-futures-and-syntax.md
index 81975f3a..35b22ac1 100644
--- a/rustbook-en/src/ch17-01-futures-and-syntax.md
+++ b/rustbook-en/src/ch17-01-futures-and-syntax.md
@@ -3,59 +3,56 @@
The key elements of asynchronous programming in Rust are _futures_ and Rust’s
`async` and `await` keywords.
-A _future_ is a value which may not be ready now, but will become ready at some
+A _future_ is a value that may not be ready now but will become ready at some
point in the future. (This same concept shows up in many languages, sometimes
-under other names such as “task” or “promise”.) Rust provides a `Future` trait
-as a building block so different async operations can be implemented with
-different data structures, but with a common interface. In Rust, we say that
-types which implement the `Future` trait are futures. Each type which
-implements `Future` holds its own information about the progress that has been
-made and what "ready" means.
-
-The `async` keyword can be applied to blocks and functions to specify that they
+under other names such as _task_ or _promise_.) Rust provides a `Future` trait
+as a building block so that different async operations can be implemented with
+different data structures but with a common interface. In Rust, futures are
+types that implement the `Future` trait. Each future holds its own information
+about the progress that has been made and what "ready" means.
+
+You can apply the `async` keyword to blocks and functions to specify that they
can be interrupted and resumed. Within an async block or async function, you can
-use the `await` keyword to wait for a future to become ready, called _awaiting a
-future_. Each place you await a future within an async block or function is a
-place that async block or function may get paused and resumed. The process of
-checking with a future to see if its value is available yet is called _polling_.
-
-Some other languages also use `async` and `await` keywords for async
-programming. If you’re familiar with those languages, you may notice some
-significant differences in how Rust does things, including how it handles the
-syntax. That’s for good reason, as we’ll see!
-
-Most of the time when writing async Rust, we use the `async` and `await`
-keywords. Rust compiles them into equivalent code using the `Future` trait, much
-as it compiles `for` loops into equivalent code using the `Iterator` trait.
-Because Rust provides the `Future` trait, though, you can also implement it for
-your own data types when you need to. Many of the functions we’ll see
-throughout this chapter return types with their own implementations of `Future`.
-We’ll return to the definition of the trait at the end of the chapter and dig
-into more of how it works, but this is enough detail to keep us moving forward.
-
-That may all feel a bit abstract. Let’s write our first async program: a little
-web scraper. We’ll pass in two URLs from the command line, fetch both of them
-concurrently, and return the result of whichever one finishes first. This
-example will have a fair bit of new syntax, but don’t worry. We’ll explain
+use the `await` keyword to _await a future_ (that is, wait for it to become
+ready). Any point where you await a future within an async block or function is
+a potential spot for that async block or function to pause and resume. The
+process of checking with a future to see if its value is available yet is called
+_polling_.
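As a quick illustrative sketch (not one of this chapter's listings), `async` can
mark both a function and a block, and each `.await` inside them is a potential
pause point. The `ready_value` helper here is hypothetical, a future that is
immediately ready:

```rust
// Hypothetical helper: an async function whose future resolves right away.
async fn ready_value() -> u32 {
    5
}

async fn add_values() -> u32 {
    // Awaiting a future inside an async function: a potential pause point.
    let a = ready_value().await;

    // `async` also applies to blocks; the block evaluates to a future, which
    // does nothing until it, too, is awaited.
    let more = async { ready_value().await + 1 };

    a + more.await
}
```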
+
+Some other languages, such as C# and JavaScript, also use `async` and `await`
+keywords for async programming. If you’re familiar with those languages, you may
+notice some significant differences in how Rust does things, including how it
+handles the syntax. That’s for good reason, as we’ll see!
+
+When writing async Rust, we use the `async` and `await` keywords most of the
+time. Rust compiles them into equivalent code using the `Future` trait, much as
+it compiles `for` loops into equivalent code using the `Iterator` trait. Because
+Rust provides the `Future` trait, though, you can also implement it for your own
+data types when you need to. Many of the functions we’ll see throughout this
+chapter return types with their own implementations of `Future`. We’ll return to
+the definition of the trait at the end of the chapter and dig into more of how
+it works, but this is enough detail to keep us moving forward.
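For reference (the chapter returns to this in detail at the end), the `Future`
trait in the standard library is defined essentially as follows. `Pin`,
`Context`, and `Poll` are explained later; the point here is only that this is
an ordinary trait you could implement yourself:

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

// Mirrors std::future::Future: a future is polled until it reports Ready.
pub trait Future {
    type Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}
```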
+
+This may all feel a bit abstract, so let’s write our first async program: a
+little web scraper. We’ll pass in two URLs from the command line, fetch both of
+them concurrently, and return the result of whichever one finishes first. This
+example will have a fair bit of new syntax, but don’t worry—we’ll explain
everything you need to know as we go.
-### Our First Async Program
+## Our First Async Program
-To keep this chapter focused on learning async, rather than juggling parts of
-the ecosystem, we have created the `trpl` crate (`trpl` is short for “The Rust
+To keep the focus of this chapter on learning async rather than juggling parts
+of the ecosystem, we’ve created the `trpl` crate (`trpl` is short for “The Rust
Programming Language”). It re-exports all the types, traits, and functions
you’ll need, primarily from the [`futures`][futures-crate] and
-[`tokio`][tokio] crates.
-
-- The `futures` crate is an official home for Rust experimentation for async
- code, and is actually where the `Future` type was originally designed.
-
-- Tokio is the most widely used async runtime in Rust today, especially (but
- not only!) for web applications. There are other great runtimes out there,
- and they may be more suitable for your purposes. We use Tokio under the hood
- for `trpl` because it’s well-tested and widely used.
-
-In some cases, `trpl` also renames or wraps the original APIs to let us stay
+[`tokio`][tokio] crates. The `futures` crate is an official home
+for Rust experimentation for async code, and it’s actually where the `Future`
+trait was originally designed. Tokio is the most widely used async runtime in
+Rust today, especially for web applications. There are other great runtimes out
+there, and they may be more suitable for your purposes. We use the `tokio` crate
+under the hood for `trpl` because it’s well tested and widely used.
+
+In some cases, `trpl` also renames or wraps the original APIs to keep you
focused on the details relevant to this chapter. If you want to understand what
the crate does, we encourage you to check out [its source
code][crate-source]. You’ll be able to see what crate each
@@ -72,12 +69,14 @@ $ cargo add trpl
```
Now we can use the various pieces provided by `trpl` to write our first async
-program. We’ll build a little command line tool which fetches two web pages,
+program. We’ll build a little command line tool that fetches two web pages,
pulls the `<title>` element from each, and prints out the title of whichever
-finishes that whole process first.
+page finishes that whole process first.
+
+### Defining the page_title Function
Let’s start by writing a function that takes one page URL as a parameter, makes
-a request to it, and returns the text of the title element:
+a request to it, and returns the text of the title element (see Listing 17-1).
@@ -87,53 +86,53 @@ a request to it, and returns the text of the title element:
-In Listing 17-1, we define a function named `page_title`, and we mark it with
-the `async` keyword. Then we use the `trpl::get` function to fetch whatever URL
-is passed in, and we await the response by using the `await` keyword. Then we
-get the text of the response by calling its `text` method, and once again await
-it with the `await` keyword. Both of these steps are asynchronous. For `get`,
-we need to wait for the server to send back the first part of its response,
-which will include HTTP headers, cookies, and so on. That part of the response
-can be delivered separately from the body of the request. Especially if the
-body is very large, it can take some time for it all to arrive. Thus, we have
-to wait for the _entirety_ of the response to arrive, so the `text` method is
-also async.
+First, we define a function named `page_title` and mark it with the `async`
+keyword. Then we use the `trpl::get` function to fetch whatever URL is passed in
+and add the `await` keyword to await the response. To get the text of the
+response, we call its `text` method, and once again await it with the `await`
+keyword. Both of these steps are asynchronous. For the `get` function, we have
+to wait for the server to send back the first part of its response, which will
+include HTTP headers, cookies, and so on, and can be delivered separately from
+the response body. Especially if the body is very large, it can take some time
+for it all to arrive. Because we have to wait for the _entirety_ of the response
+to arrive, the `text` method is also async.
We have to explicitly await both of these futures, because futures in Rust are
-_lazy_: they don’t do anything until you ask them to with `await`. (In fact,
-Rust will show a compiler warning if you don’t use a future.) This should
-remind you of our discussion of iterators [back in Chapter 13][iterators-lazy].
-Iterators do nothing unless you call their `next` method—whether directly, or
-using `for` loops or methods such as `map` which use `next` under the hood. With
-futures, the same basic idea applies: they do nothing unless you explicitly ask
-them to. This laziness allows Rust to avoid running async code until it’s
-actually needed.
-
-> Note: This is different from the behavior we saw when using `thread::spawn` in
-> the previous chapter, where the closure we passed to another thread started
-> running immediately. It’s also different from how many other languages
-> approach async! But it’s important for Rust. We’ll see why that is later.
-
-Once we have `response_text`, we can then parse it into an instance of the
-`Html` type using `Html::parse`. Instead of a raw string, we now have a data
-type we can use to work with the HTML as a richer data structure. In particular,
-we can use the `select_first` method to find the first instance of a given CSS
-selector. By passing the string `"title"`, we’ll get the first `<title>`
-element in the document, if there is one. Because there may not be any matching
-element, `select_first` returns an `Option`. Finally, we use the
+_lazy_: they don’t do anything until you ask them to with the `await` keyword.
+(In fact, Rust will show a compiler warning if you don’t use a future.) This
+might remind you of Chapter 13’s discussion of iterators in the section
+[Processing a Series of Items With Iterators][iterators-lazy].
+Iterators do nothing unless you call their `next` method—whether directly or by
+using `for` loops or methods such as `map` that use `next` under the hood.
+Likewise, futures do nothing unless you explicitly ask them to. This laziness
+allows Rust to avoid running async code until it’s actually needed.
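Here is a small sketch (again, not from the book's listings) of that laziness:
creating a future runs none of its body, and only awaiting it makes progress:

```rust
async fn say_hello() {
    println!("hello");
}

async fn run() {
    // Calling an async function only creates a future; nothing is printed yet,
    // and the compiler would warn about an unused future if we stopped here.
    let greeting = say_hello();

    // Awaiting the future is what actually runs its body and prints "hello".
    greeting.await;
}
```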
+
+> Note: This is different from the behavior we saw in the previous chapter when
+> using `thread::spawn` in [Creating a New Thread with
+> spawn][thread-spawn], where the closure we passed to another
+> thread started running immediately. It’s also different from how many other
+> languages approach async. But it’s important for Rust, and we’ll see why
+> later.
+
+Once we have `response_text`, we can parse it into an instance of the `Html`
+type using `Html::parse`. Instead of a raw string, we now have a data type we
+can use to work with the HTML as a richer data structure. In particular, we can
+use the `select_first` method to find the first instance of a given CSS
+selector. By passing the string `"title"`, we’ll get the first `<title>` element
+in the document, if there is one. Because there may not be any matching element,
+`select_first` returns an `Option`. Finally, we use the
`Option::map` method, which lets us work with the item in the `Option` if it’s
present, and do nothing if it isn’t. (We could also use a `match` expression
here, but `map` is more idiomatic.) In the body of the function we supply to
`map`, we call `inner_html` on the `title_element` to get its content, which is
a `String`. When all is said and done, we have an `Option<String>`.
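Listing 17-1 itself isn't reproduced in this diff, but based on the description
above, the function looks roughly like this; the `trpl::get`, `text`,
`Html::parse`, `select_first`, and `inner_html` calls are taken from the prose,
and the exact shape is a sketch rather than the book's listing:

```rust
use trpl::Html;

async fn page_title(url: &str) -> Option<String> {
    // Two await points: one for the start of the response, one for the full body.
    let response = trpl::get(url).await;
    let response_text = response.text().await;

    // Parse the body and pull out the first <title> element, if there is one.
    Html::parse(&response_text)
        .select_first("title")
        .map(|title_element| title_element.inner_html())
}
```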
-Notice that Rust’s `await` keyword goes after the expression you’re awaiting,
-not before it. That is, it’s a _postfix keyword_. This may be different from
-what you might be used to if you have used async in other languages. Rust chose
-this because it makes chains of methods much nicer to work with. As a result, we
-can change the body of `page_url_for` to chain the `trpl::get` and `text`
-function calls together with `await` between them, as shown in Listing 17-2:
+Notice that Rust’s `await` keyword goes _after_ the expression you’re awaiting,
+not before it. That is, it’s a _postfix_ keyword. This may differ from what
+you’re used to if you’ve used `async` in other languages, but in Rust it makes
+chains of methods much nicer to work with. As a result, we can change the body
+of `page_title` to chain the `trpl::get` and `text` function calls together
+with `await` between them, as shown in Listing 17-2.
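Listing 17-2 also isn't reproduced here; under the same assumptions as the
sketch above, the chained form the text describes would read along these lines:

```rust
use trpl::Html;

async fn page_title(url: &str) -> Option<String> {
    // Postfix `await` lets the two async calls chain directly.
    let response_text = trpl::get(url).await.text().await;
    Html::parse(&response_text)
        .select_first("title")
        .map(|title_element| title_element.inner_html())
}
```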
@@ -148,15 +147,15 @@ some code in `main` to call it, let’s talk a little more about what we’ve
written and what it means.
When Rust sees a block marked with the `async` keyword, it compiles it into a
-unique, anonymous data type which implements the `Future` trait. When Rust sees
-a function marked with `async`, it compiles it into a non-async function whose
+unique, anonymous data type that implements the `Future` trait. When Rust sees a
+function marked with `async`, it compiles it into a non-async function whose
body is an async block. An async function’s return type is the type of the
anonymous data type the compiler creates for that async block.
-Thus, writing `async fn` is equivalent to writing a function which returns a
-_future_ of the return type. When the compiler sees a function definition such
-as the `async fn page_title` in Listing 17-1, it’s equivalent to a non-async
-function defined like this:
+Thus, writing `async fn` is equivalent to writing a function that returns a
+_future_ of the return type. To the compiler, a function definition such as the
+`async fn page_title` in Listing 17-1 is equivalent to a non-async function
+defined like this:
```rust
# extern crate trpl; // required for mdbook test
@@ -175,34 +174,38 @@ fn page_title(url: &str) -> impl Future