Introduction

Welcome to Typed Effects in Rust.

If you already use async Rust, you know the model: Futures are polled by an executor; work runs when those futures are driven (for example with .await). That foundation is sound, and this book does not ask you to unlearn it.

What teams often hit next is organization at scale: error types that grow without structure, dependencies threaded through long call chains, and background work whose lifetime is hard to reason about. Those problems are not unique to Rust, but they show up in every non-trivial async codebase.

id_effect is a library for writing async programs where the shape of the work—success type, error type, and required environment—is carried in one place, and where much of the program is built as composable descriptions (Effect<A, E, R>) that you run only when you choose how and with which dependencies.

You still run on ordinary async runtimes. You still use .await inside bridges to third-party code. What changes is how you structure domain logic, tests, and dependency boundaries.

Who This Book Is For

You should know Rust basics: ownership, borrowing, traits, and how async/await and Future fit together. You do not need prior experience with category theory or functional programming jargon—we introduce terms only when they help.

If you want a typed, compositional style for async Rust—with explicit requirements in the type system and a clear split between “what to run” and “how to run it”—this book is for you.

How to Read This Book

Part I: Foundations explains why effects are useful and teaches the core types. Start here.

Part II: Environment & Dependencies covers the R parameter and compile-time dependency injection patterns.

Part III: Real Programs covers error handling, concurrency, resources, and scheduling for production code.

Part IV: Advanced covers STM, streams, schemas, and testing—read when you need those topics.

Code examples are intended to compile unless marked otherwise.

Let's begin.

Why Effects?

Before we write effect code in detail, it helps to agree on what problem we are solving and where id_effect sits relative to ordinary async Rust.

Rust’s async model is built on Future and executors: futures are lazy until polled, and .await is how async functions compose. That model is not a mistake—it is the standard way to express non-blocking I/O and concurrency.

At application scale, the difficulties are usually engineering ones:

  • Errors — mapping and aggregating failures across layers without losing structure.
  • Dependencies — passing clients, configuration, and context without turning every function signature into a long parameter list (or hiding the same behind globals).
  • Concurrency — knowing who owns a task, how it shuts down, and what happens on cancellation.

This chapter names those patterns, relates them to how Effect<A, E, R> is designed, and sets up the rest of Part I.

By the end of the chapter you should understand:

  • Why teams reach for a declarative layer on top of hand-written async fn chains.
  • What “effect” means in this book: a description of work, separate from running it with a chosen environment.
  • Why the type has three parameters (A, E, R) and why that matters for APIs and tests.

We start with a concrete look at those recurring challenges.

Challenges in Large Async Codebases

Async Rust gives you non-blocking I/O and structured concurrency primitives. In production, those same primitives can become painful to manage when composition and boundaries are not planned: errors, dependencies, and spawned work all tend to accumulate complexity.

This section is not a claim that “async is broken.” It is a concise picture of problems id_effect is meant to help with—so the rest of the book has a shared vocabulary.

Challenge 1: Error mapping and noise

A typical async workflow chains several operations. Each step may fail in its own way, so you map errors into a domain type and propagate:

#![allow(unused)]
fn main() {
async fn process_order(order: Order) -> Result<Receipt, ProcessError> {
    let config = get_config()
        .await
        .map_err(|e| ProcessError::Config(e))?;

    let user = fetch_user(&config, order.user_id)
        .await
        .map_err(|e| ProcessError::User(e))?;

    let inventory = check_inventory(&config, &order.items)
        .await
        .map_err(|e| ProcessError::Inventory(e))?;

    let payment = charge_payment(&config, &user, order.total)
        .await
        .map_err(|e| ProcessError::Payment(e))?;

    let shipment = create_shipment(&config, &order, &user)
        .await
        .map_err(|e| ProcessError::Shipment(e))?;

    Ok(Receipt::new(order, payment, shipment))
}
}

The business steps are clear, but the .map_err noise is repetitive. The domain ProcessError enum often grows with every new integration. Policy (retries, fallbacks) may live in callers or ad hoc helpers, which makes behavior harder to see in one place.

What effects add: failure and recovery can be expressed as transformations on a description (for example retry, map_error, structured Exit types), so policies are easier to reuse and test without rewriting the core flow.

Challenge 2: Explicit dependency parameters

Another common shape is the handler that needs many clients and cross-cutting services:

#![allow(unused)]
fn main() {
async fn handle_request(
    db: &DatabasePool,
    cache: &RedisClient,
    logger: &Logger,
    config: &AppConfig,
    metrics: &MetricsClient,
    tracer: &Tracer,
    request: Request,
) -> Response {
    // ...
}
}

Dependencies are explicit, which keeps the signature honest, but every layer between here and main must repeat or forward them. Tests must build or mock the same bundle repeatedly. Alternatives (globals, implicit context) trade one problem for another.
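To make the cost concrete, here is the usual plain-Rust mitigation: bundle the dependencies into one context struct that every layer forwards. All type names here are made up for illustration; this is ordinary Rust, not id_effect.

```rust
// Illustrative stand-ins for real clients and services.
struct DatabasePool;
struct Logger;
struct AppConfig;

// One struct carries the whole bundle, so intermediate layers
// forward a single `&AppContext` instead of six parameters.
struct AppContext {
    db: DatabasePool,
    logger: Logger,
    config: AppConfig,
}

struct Request;
struct Response(&'static str);

fn handle_request(ctx: &AppContext, _request: Request) -> Response {
    // Inner layers reach into `ctx` for what they need.
    let _ = (&ctx.db, &ctx.logger, &ctx.config);
    Response("ok")
}

fn main() {
    let ctx = AppContext { db: DatabasePool, logger: Logger, config: AppConfig };
    let resp = handle_request(&ctx, Request);
    assert_eq!(resp.0, "ok");
}
```

This helps with signatures but not with substitution: tests still construct the full struct, and the struct tends to grow into a grab-bag. The environment parameter R plays a similar bundling role, but tracked per function in the type.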

What effects add: required capabilities can be expressed in R (the environment type) and satisfied in one place at the edge, while inner functions stay focused on logic.

Challenge 3: Background work and lifetimes

Fire-and-forget background tasks are easy to start and harder to reason about:

#![allow(unused)]
fn main() {
use std::time::Duration;

fn start_background_worker(db: DatabasePool) {
    tokio::spawn(async move {
        loop {
            match process_queue(&db).await {
                Ok(_) => {}
                Err(e) => eprintln!("Worker error: {}", e),
            }
            tokio::time::sleep(Duration::from_secs(5)).await;
        }
    });
}
}

Questions that matter in production—shutdown, cancellation, panic behavior, and resource cleanup—need explicit design. That is true in any async system; the goal is to make ownership and intent visible in the program structure.
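To see what "explicit design" can look like, here is a self-contained sketch using plain std threads (not tokio, so it runs anywhere): the caller owns both a stop flag and a join handle, so shutdown is part of the program structure rather than an afterthought. The worker body is a stand-in, not real queue code.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

// The worker's lifetime is owned by whoever holds the stop flag and the
// JoinHandle, so termination and cleanup are visible in the structure.
fn start_worker(stop: Arc<AtomicBool>) -> thread::JoinHandle<u64> {
    thread::spawn(move || {
        let mut processed = 0;
        while !stop.load(Ordering::Relaxed) {
            processed += 1; // stand-in for one unit of queue processing
            thread::yield_now();
        }
        processed // report how much work was done before shutdown
    })
}

fn main() {
    let stop = Arc::new(AtomicBool::new(false));
    let handle = start_worker(stop.clone());

    stop.store(true, Ordering::Relaxed); // explicit, observable shutdown request
    let processed = handle.join().expect("worker should not panic");
    println!("worker processed {processed} units before shutdown");
}
```

Structured-concurrency APIs generalize this pattern: every spawned task has an owner, and "how it ends" is decided where it is started.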

What effects add: structured concurrency patterns (fibers, scopes, handles) integrate with the same Effect abstraction so “what runs” and “how it ends” can be expressed consistently.

How this relates to Future and async

Remember: async fn bodies compile to Futures; nothing runs until a future is polled (for example via .await on an async caller). The difficulties above are usually about how we organize async code—signatures, error types, and where side effects are allowed—not about rejecting the Future model.

In practice, hand-written async often reads like a straight-line script: await one step, then the next. That is appropriate for many functions. It becomes harder when you want the same logical workflow to be inspected, wrapped (retries, timeouts), or tested with a substituted environment without threading mocks through every layer.

Effects push the “script” into a value: Effect<A, E, R> is a description that you run with run_async, run_blocking, or test harnesses—after you have composed and configured it.

That does not replace understanding executors or Future. It adds a layer for domain structure: answer type A, error type E, requirements R, and explicit execution.

Next we define what an Effect is in this library and how that description differs from calling async fn directly—without exaggerating either side.

What Even Is an Effect?

An Effect is a description of a computation, not the computation itself.

The rest of the API—map, flat_map, environment types, runners—is there to work with that description in a type-safe way.

The Recipe Analogy

Think about a recipe for chocolate cake.

A recipe is not a cake. You can hold a recipe in your hands without any flour appearing. You can read a recipe without preheating an oven. You can photocopy a recipe, modify it (less sugar, more cocoa), combine it with a frosting recipe, and share it with a friend — all without a single cake coming into existence.

The cake only appears when someone executes the recipe. Takes out the ingredients, follows the steps, waits for the oven.

An Effect is a recipe for a computation.

When you write succeed(42), you're not "succeeding" at anything. You're writing down a recipe that says "when executed, produce the value 42." The 42 doesn't exist yet. No computation has happened. You just have a piece of paper with instructions on it.

#![allow(unused)]
fn main() {
use id_effect::{Effect, succeed};

// This doesn't compute anything — it's a description
let recipe: Effect<i32, String, ()> = succeed(42);

// Still nothing has happened. `recipe` is just a value.
// We can pass it around, store it, inspect its type.
}

The computation only happens when you explicitly run it:

#![allow(unused)]
fn main() {
use id_effect::run_blocking;

// NOW something happens
let result: Result<i32, String> = run_blocking(recipe);
assert_eq!(result, Ok(42));
}

Building Up Descriptions

Because an Effect is just data — a description — you can transform it without running it.

#![allow(unused)]
fn main() {
let recipe: Effect<i32, String, ()> = succeed(42);

// Transform the description: "when executed, produce 42, then double it"
let doubled: Effect<i32, String, ()> = recipe.map(|x| x * 2);

// Still nothing has happened! `doubled` is just a modified recipe.

// Now run it
let result = run_blocking(doubled);
assert_eq!(result, Ok(84));
}

The .map() call didn't execute anything. It took one recipe and produced a new recipe that includes an extra step. Like writing "double the result" at the bottom of your cake recipe — the cake doesn't change until someone bakes it.
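A minimal plain-Rust model of that recipe idea (deliberately not the id_effect implementation) is a closure you have not called yet; map just wraps it in another closure:

```rust
// A deferred computation: nothing runs until the closure is called.
type Recipe<T> = Box<dyn FnOnce() -> T>;

// "map" builds a NEW recipe around the old one; neither has run yet.
fn map<A: 'static, B: 'static>(
    recipe: Recipe<A>,
    f: impl FnOnce(A) -> B + 'static,
) -> Recipe<B> {
    Box::new(move || f(recipe()))
}

fn main() {
    let recipe: Recipe<i32> = Box::new(|| 42);
    let doubled = map(recipe, |x| x * 2); // still just a description
    assert_eq!(doubled(), 84);            // execution happens only here
}
```

The real Effect type also tracks errors and requirements, but the description-wrapping shape is the same.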

A More Realistic Example

Let's see what this looks like with actual I/O:

#![allow(unused)]
fn main() {
use id_effect::{Effect, effect, run_blocking};

// This function doesn't fetch anything — it returns a DESCRIPTION
// of how to fetch a user
fn fetch_user(id: u64) -> Effect<User, DbError, ()> {
    effect! {
        let conn = ~ connect_to_db();
        let user = ~ query_user(&conn, id);
        Ok(user)
    }
}

// Calling the function doesn't open any connections
let description = fetch_user(42);

// `description` is a value we can hold, pass around, combine with others
// No database has been touched

// Only when we run it does the I/O happen
let user = run_blocking(description)?;
}

That effect! block looks imperative — it looks like it's doing things. But it's not. It's building a description of things to do. The ~ operator means "this step depends on the previous step completing" — it's describing sequencing, not executing it.

The Key Insight: Separation of Concerns

This separation — description vs execution — is how the challenges from the previous section (errors, dependencies, task structure) get a consistent home in the type system.

Error handling becomes part of the description itself. When you write:

#![allow(unused)]
fn main() {
let resilient = risky_operation.retry(Schedule::exponential(Duration::from_millis(100), 3));
}

You're not adding retry logic to running code. You're modifying the description to say "when executed, retry up to 3 times with exponential backoff." The retry logic is baked into the recipe.

Dependencies become part of the type signature. When you write:

fn get_user(id: u64) -> Effect<User, DbError, Database>

That Database in the type says "this recipe requires a Database to execute." The compiler enforces it. You can't run the effect without providing a Database. No runtime surprises.

Structured concurrency becomes possible because the runtime knows what each effect intends to do before it does it. Spawning an effect doesn't fire and forget — it creates a handle to a structured task with clear ownership and cancellation semantics.

What's in an Effect?

An Effect<A, E, R> carries three pieces of information in its type:

  • A — the Answer: what you get if it succeeds
  • E — the Error: what you get if it fails
  • R — the Requirements: what environment is needed to run it

We'll explore all three in the next section. For now, just notice that an Effect's type tells you everything about what it does — success, failure, and dependencies — without you having to read the implementation.

// This type signature tells the whole story:
fn process_payment(
    amount: Money
) -> Effect<Receipt, PaymentError, (PaymentGateway, Logger)>

// - Produces a Receipt on success
// - Can fail with PaymentError
// - Requires a PaymentGateway and Logger to run

No need to read the function body to know what resources it needs or what errors it can produce. The type is the documentation.

Style: imperative async vs effect descriptions

Typical async fn code is written as a sequence of steps: each .await drives the next piece of work. That is clear and idiomatic Rust.

Effect code in this library is often written so that many domain functions return Effect<…>: a value that describes work and only runs when you pass it to a runner with an environment. The style emphasizes composition (map, flat_map, layers, retries) before execution.

Both approaches run on the same Future machinery underneath. Use effects where you want environment and error structure in the type, shared policies, and test substitution at the boundary; use plain async fn where a small linear function is enough.

Let's look at those three type parameters in detail.

The Three Type Parameters

Every Effect carries three type parameters: Effect<A, E, R>. These aren't arbitrary — they answer the three fundamental questions every computation must address:

  • A — What do I produce when I succeed?
  • E — What do I produce when I fail?
  • R — What do I need in order to run?

Let's examine each one.

A: The Answer

The A parameter is the success type — what you get back when everything goes right.

#![allow(unused)]
fn main() {
use id_effect::{Effect, succeed};

// This effect produces an i32 on success
let answer: Effect<i32, String, ()> = succeed(42);

// This effect produces a User on success
let user_effect: Effect<User, DbError, ()> = succeed(User::new("Alice"));
}

If you're familiar with Result<T, E>, think of A as the T. It's what you're hoping to get.

When you transform an effect with .map(), you're changing the A:

#![allow(unused)]
fn main() {
let numbers: Effect<i32, String, ()> = succeed(21);
let doubled: Effect<i32, String, ()> = numbers.map(|n| n * 2);
let stringified: Effect<String, String, ()> = doubled.map(|n| n.to_string());
}

Each .map() transforms the success value while preserving the error type and requirements.

E: The Error

The E parameter is the failure type — what you get back when something goes wrong.

#![allow(unused)]
fn main() {
use id_effect::{Effect, fail};

// This effect always fails with a String error
let failure: Effect<i32, String, ()> = fail("something went wrong".to_string());

// This effect can fail with a DbError
let user: Effect<User, DbError, ()> = fetch_user_from_db(42);
}

Again, if you know Result<T, E>, think of E as the E. It's what you're worried might happen.

You can transform error types with .map_error():

#![allow(unused)]
fn main() {
let db_effect: Effect<User, DbError, ()> = fetch_user(42);

// Convert DbError to a more general AppError
let app_effect: Effect<User, AppError, ()> = db_effect.map_error(|e| AppError::Database(e));
}

Unlike traditional error handling where you sprinkle .map_err() everywhere, with effects you typically handle error transformation at specific boundaries — when composing larger effects from smaller ones, or when exposing an API.

R: The Requirements

Here's where effects get interesting. The R parameter represents the environment — the dependencies this effect needs in order to run.

#![allow(unused)]
fn main() {
// This effect needs nothing to run — R is ()
let standalone: Effect<i32, String, ()> = succeed(42);

// This effect needs a Database to run
fn get_user(id: u64) -> Effect<User, DbError, Database> {
    // ... implementation that uses the database
}

// This effect needs both a Database AND a Logger
fn get_user_logged(id: u64) -> Effect<User, DbError, (Database, Logger)> {
    // ... implementation that uses both
}
}

The key insight: you cannot run an effect unless you provide its requirements.

#![allow(unused)]
fn main() {
let needs_db: Effect<User, DbError, Database> = get_user(42);

// This won't compile! We haven't satisfied the Database requirement.
// run_blocking(needs_db);  // ERROR: Database not provided

// We need to provide what it needs first
let satisfied: Effect<User, DbError, ()> = needs_db.provide(my_database);

// Now we can run it
let user = run_blocking(satisfied)?;
}

The .provide() method takes a requirement and satisfies it, changing the R type. When R becomes (), the effect needs nothing more and can be executed.
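One way to picture what .provide() does, as a plain-Rust model rather than the real implementation: a description that still needs an environment is a closure over that environment, and providing the environment is partial application.

```rust
// A description still needing an environment R:
type Needs<R, T> = Box<dyn FnOnce(R) -> T>;
// A description needing nothing more (R = () in effect terms):
type Ready<T> = Box<dyn FnOnce() -> T>;

// "provide" consumes the environment now, but defers the work itself.
fn provide<R: 'static, T: 'static>(eff: Needs<R, T>, env: R) -> Ready<T> {
    Box::new(move || eff(env))
}

// Illustrative stand-in for a real dependency.
struct Database {
    answer: i32,
}

fn main() {
    let needs_db: Needs<Database, i32> = Box::new(|db| db.answer);
    let ready = provide(needs_db, Database { answer: 7 }); // requirement satisfied
    assert_eq!(ready(), 7); // nothing ran until this call
}
```

In this model, a description with an unsatisfied requirement literally cannot be called without an argument, which is the same guarantee the compiler gives you for Effect.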

Why R Matters

The R parameter is why id_effect can offer compile-time dependency injection.

Consider this function signature:

fn process_order(order: Order) -> Effect<Receipt, OrderError, (Database, PaymentGateway, EmailService, Logger)>

Just from the type, you know:

  • This produces a Receipt on success
  • It can fail with OrderError
  • It requires four services to run

You don't need to read the implementation. You don't need to trace through function calls. The type tells you exactly what dependencies are involved.

And the compiler enforces it. If you try to run this effect without providing all four services, you get a compile error. No runtime "service not found" exceptions. No forgetting to initialize something.

R Flows Through Composition

When you combine effects, their requirements combine too:

#![allow(unused)]
fn main() {
fn get_user(id: u64) -> Effect<User, DbError, Database> { ... }
fn send_email(to: &str, body: &str) -> Effect<(), EmailError, EmailService> { ... }

fn notify_user(id: u64) -> Effect<(), AppError, (Database, EmailService)> {
    effect! {
        let user = ~ get_user(id).map_error(AppError::Db);
        ~ send_email(&user.email, "Hello!").map_error(AppError::Email);
        Ok(())
    }
}
}

The notify_user function needs both Database (from get_user) and EmailService (from send_email). The compiler infers this automatically — you don't have to manually track which dependencies flow where.
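A plain-Rust model (illustrative, not the library's implementation) shows why combining works: if a description needing R is a closure over R, then sequencing two descriptions yields a closure over the tuple of both environments.

```rust
// A description that needs an environment R to produce a T.
type Needs<R, T> = Box<dyn FnOnce(R) -> T>;

// Combine two descriptions; the result needs BOTH environments.
fn zip<R1: 'static, R2: 'static, A: 'static, B: 'static>(
    first: Needs<R1, A>,
    second: Needs<R2, B>,
) -> Needs<(R1, R2), (A, B)> {
    Box::new(move |(r1, r2)| (first(r1), second(r2)))
}

// Illustrative stand-ins for real services.
struct Database;
struct EmailService;

fn main() {
    let get_user: Needs<Database, &'static str> = Box::new(|_db| "alice");
    let send_mail: Needs<EmailService, ()> = Box::new(|_svc| ());

    let both = zip(get_user, send_mail);
    let (user, ()) = both((Database, EmailService)); // must supply BOTH
    assert_eq!(user, "alice");
}
```

id_effect does this bookkeeping in the R parameter, so the tuple of requirements is inferred rather than written by hand.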

The Unit Environment: ()

When R = (), the effect is self-contained. It doesn't need anything from the outside world to run:

#![allow(unused)]
fn main() {
let standalone: Effect<i32, String, ()> = succeed(42);

// Can run immediately — no dependencies
let result = run_blocking(standalone);
}

Most effects start with requirements and gradually have them satisfied as you move toward the "edge" of your program:

// Deep in your code: many requirements
fn business_logic() -> Effect<Output, Error, (Db, Cache, Logger, Config)>

// At the edge: provide everything
fn main() {
    let db = connect_database();
    let cache = connect_cache();
    let logger = setup_logger();
    let config = load_config();

    let effect = business_logic()
        .provide(db)
        .provide(cache)
        .provide(logger)
        .provide(config);
    // Now R = ()

    run_blocking(effect);
}

Reading Effect Signatures

Let's practice reading some signatures:

// Produces String, never fails, needs nothing
Effect<String, Never, ()>

// Produces i32, can fail with ParseError, needs nothing
Effect<i32, ParseError, ()>

// Produces User, can fail with DbError, needs Database
Effect<User, DbError, Database>

// Produces (), can fail with AppError, needs Database, Cache, and Logger
Effect<(), AppError, (Database, Cache, Logger)>

With practice, you'll read these as fluently as you read Result<T, E>. The extra R parameter becomes second nature.

What's Next

We've seen that effects are descriptions, not actions. We've seen that Effect<A, E, R> encodes success type, error type, and requirements.

But we haven't answered the obvious question: why does this matter? Why is it better to describe computations than to just do them?

The answer is laziness. And laziness, it turns out, is a superpower.

Laziness as a Superpower

So far we've established that Effect<A, E, R> is a description of a computation — a recipe that does nothing until someone executes it. You might be thinking: "OK, but why is that good? I have to run it eventually. What do I gain by waiting?"

Quite a bit, if your program benefits from composing and testing before execution.

Here is what you can do with a computation you have not run yet.

Effect values vs driving an async fn

Rust futures are lazy: calling an async fn returns a Future; the body runs when that future is polled (for example with .await).

The contrast here is about what your API returns—a raw Future you must await immediately in the caller, versus an Effect value you can store, compose, and run later.

#![allow(unused)]
fn main() {
// Returns a Future; the HTTP work runs when this future is awaited / polled
async fn fetch_user_async(id: u64) -> Result<User, HttpError> {
    http_get(&format!("https://api.example.com/users/{id}")).await
}

// Returns a description; I/O runs when the effect is executed with an environment
fn fetch_user(id: u64) -> Effect<User, HttpError, HttpClient> {
    effect! {
        let user = ~ http_get(&format!("https://api.example.com/users/{id}"));
        user
    }
}
}

Calling fetch_user_async(1) only builds the future; the request runs when something polls it (typically at .await). Calling fetch_user(1) returns an Effect—still no I/O until you run that effect with a runner and the needed HttpClient.

The point is not that async fn is “eager.” It is that effects give you a first-class value to combine (retries, timeouts, tests) before you commit to a particular run.

Superpower #1: Compose First, Run Later

Because effects are values, you can build an entire program before running any of it:

#![allow(unused)]
fn main() {
fn load_dashboard(user_id: u64) -> Effect<DashboardPage, AppError, (Database, Cache, Logger)> {
    effect! {
        let user    = ~ fetch_user(user_id).map_error(AppError::Db);
        let posts   = ~ fetch_posts(user.id).map_error(AppError::Db);
        let profile = ~ build_profile(&user, &posts).map_error(AppError::Render);
        profile
    }
}

// Nothing has run yet. We have a value.
let page = load_dashboard(42);

// Chain more work onto it — still nothing runs
let logged_page = page.flat_map(|p| log_view(p));

// Only now does any of this execute
run_blocking(logged_page.provide(env));
}

Every line before run_blocking is pure data manipulation. You're assembling a pipeline. The pipeline can be inspected, transformed, passed to other functions, stored in a struct. The laws of composition apply cleanly because there are no side-effects sneaking in.

Superpower #2: Retry Without Rewriting

Because an effect is a description, you can wrap it with new behavior without touching the original:

#![allow(unused)]
fn main() {
let flaky = call_payment_api(order);

// Add exponential back-off retry — no changes to call_payment_api
let resilient = flaky.retry(Schedule::exponential(Duration::from_millis(100), 3));

// Add a timeout on top of that — still no changes
let bounded = resilient.timeout(Duration::from_secs(5));
}

Compare this to the async version: to add retries to an async fn, you'd either modify the function body, wrap it in a helper that calls it in a loop, or reach for an external crate. The retry logic gets tangled with the business logic.

With effects, retry is just another transformation: it takes a lazy description and produces a new lazy description that runs the original up to N times. No surgery on the original required.
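The shape of that transformation can be sketched in ordinary Rust with closures. This is illustrative only; id_effect's Schedule adds backoff and timing on top of the same idea.

```rust
// Wrap a fallible operation in a NEW operation that tries up to `attempts`
// times in total. The original closure is untouched.
fn with_retries<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    attempts: u32,
) -> impl FnMut() -> Result<T, E> {
    move || {
        let mut last = op();
        for _ in 1..attempts {
            if last.is_ok() {
                break;
            }
            last = op(); // try again only on failure
        }
        last
    }
}

fn main() {
    // A flaky operation that fails twice, then succeeds.
    let mut calls = 0;
    let flaky = move || {
        calls += 1;
        if calls < 3 { Err("transient") } else { Ok(calls) }
    };

    let mut resilient = with_retries(flaky, 3);
    assert_eq!(resilient(), Ok(3)); // succeeded on the third attempt
}
```

The wrapper never inspects the business logic; it only decides how many times to run the description it was given.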

Superpower #3: Test Without Mocking the Universe

Because nothing runs until you provide the environment, tests can substitute controlled implementations without rewriting a single line of production code:

#![allow(unused)]
fn main() {
#[test]
fn user_not_found_returns_error() {
    let test_env = TestEnv::new()
        .with_http(stub_http_404_for("/users/99"));

    let result = run_test(fetch_user(99), test_env);

    assert!(matches!(result, Err(HttpError::NotFound)));
}
}

The same fetch_user function used in production runs in the test — just against a different environment. No #[cfg(test)] stubs. No Arc<dyn Trait> that you only swap out in tests. The type system ensures you've provided every dependency the effect declared.

Sequential async vs bundled descriptions

Sequential async fn code is natural for linear flows: each .await advances the next step, and control matches the source order.

Effect-oriented APIs often bundle those steps into a single Effect value first, then apply cross-cutting behavior (retry, timeout, tracing) as transformations on that value before calling run_*.

That separation is useful when the same workflow must be reused under different policies or tested with a substituted environment, without copying the body of the async function.

When Does It Actually Run?

There are exactly three places where an Effect executes:

#![allow(unused)]
fn main() {
// In a binary or application entry point
run_blocking(program.provide(env));

// In an async context
run_async(program.provide(env)).await;

// In tests
run_test(program, test_env);
}

Everywhere else, you're building, transforming, or combining descriptions. The runtime boundary is explicit. You know exactly where the side-effects begin.

Until run_* is called, your effect is just data: composable and easy to substitute in tests.


That's Chapter 1. You now have a picture of why teams adopt effects (errors, dependencies, concurrency structure), what an Effect is (a description executed with an environment), what the type parameters mean (A = success, E = failure, R = requirements), and why keeping work in description form matters for composition and testing.

Chapter 2 gets hands-on: first effects, map, flat_map, and a small end-to-end program.

Your First Effect

Chapter 1 was all philosophy. We established what effects are, why they exist, and why laziness is useful. Now we get our hands dirty.

By the end of this chapter you will have written real effects, transformed them, chained them together, and run a complete small program. You'll use succeed, fail, map, map_error, and flat_map — the five operations that cover the vast majority of day-to-day effect work.

Let's start with the simplest question: how do you create an effect in the first place?

Creating Effects — succeed, fail, and pure

Every effect starts as either a success or a failure. The two constructors that express this are succeed and fail.

succeed

succeed wraps a value into an effect that, when run, immediately produces that value:

#![allow(unused)]
fn main() {
use id_effect::{Effect, succeed};

let answer: Effect<i32, String, ()> = succeed(42);
let greeting: Effect<String, String, ()> = succeed("Hello, world!".to_string());
}

Nothing happens when you call succeed. You get back a description — a lazy recipe that says "produce this value when someone asks." The 42 is already there, but no computation has been executed.

The type parameters are important:

  • A = i32 — the value we produce
  • E = String — the error type (unused here, but we still have to pick one)
  • R = () — no environment needed

If you prefer the FP vocabulary, pure is an alias for succeed:

#![allow(unused)]
fn main() {
use id_effect::pure;

let effect = pure(42_i32);
}

Both names refer to exactly the same thing. Use whichever feels natural in context.

fail

fail wraps an error into an effect that, when run, immediately fails with that error:

#![allow(unused)]
fn main() {
use id_effect::{Effect, fail};

let oops: Effect<i32, String, ()> = fail("something went wrong".to_string());
}

Again, nothing executes. oops is a description of a failure, not the failure itself. You can pass it around, store it, and transform it without triggering any error handling.

The type annotation matters: Effect<i32, String, ()> says this would have produced an i32 on success — we just know it won't.

From a Closure

For cases where you want to capture some computation in an effect (but still defer it):

#![allow(unused)]
fn main() {
use id_effect::{Effect, effect};

let computed: Effect<i32, String, ()> = effect!(|_r: &mut ()| {
    let x = expensive_calculation();
    x * 2
});
}

The body of effect! runs lazily — only when the effect is executed. This is the workhorse macro we'll cover thoroughly in Chapter 3.

Type Inference

Rust's type inference often lets you skip the annotations:

#![allow(unused)]
fn main() {
// Types inferred from usage
let answer = succeed(42);      // Effect<i32, _, ()>
let greeting = succeed("hi"); // Effect<&str, _, ()>
}

The error type E is usually inferred from how the effect is used later — when you chain it with other effects that can fail, the error type propagates. You'll only need to annotate explicitly when the compiler asks.

Quick Reference

#![allow(unused)]
fn main() {
succeed(value)    // Effect that produces value
pure(value)       // Alias for succeed
fail(error)       // Effect that fails with error
effect!(|_r| { ... }) // Effect from a lazy closure
}

These three constructors cover every starting point. Everything else is transformation and composition.

Transforming Success — map and its Friends

You have an effect. It produces some value. But you want a different value — or a different error. That's what map and map_error are for.

map

map transforms the success value without running any new effects:

#![allow(unused)]
fn main() {
use id_effect::{succeed, Effect};

let number: Effect<i32, String, ()> = succeed(21);
let doubled: Effect<i32, String, ()> = number.map(|n| n * 2);
let text: Effect<String, String, ()> = doubled.map(|n| n.to_string());
}

None of these .map() calls executes anything. Each one wraps the previous description in a new layer: "and then transform the result with this function." The chain of transformations only runs when you call run_blocking or similar.

The type of the effect changes with each map. The A parameter shifts:

#![allow(unused)]
fn main() {
// Effect<i32, String, ()>
//   .map(|n: i32| n.to_string())
// → Effect<String, String, ()>
}

The E (error type) and R (requirements) stay the same. .map touches only the success path.

map_error

map_error transforms the failure type, leaving the success path untouched:

#![allow(unused)]
fn main() {
use id_effect::fail;

#[derive(Debug)]
struct AppError(String);

let db_err: Effect<String, String, ()> = fail("db connection failed".to_string());
let app_err: Effect<String, AppError, ()> = db_err.map_error(|s| AppError(s));
}

This is typically used at module boundaries when you need to unify error types. A database layer might return DbError, but your application layer needs AppError. map_error does the conversion without touching anything else.

Why These Don't Execute Anything

It's worth repeating: neither map nor map_error runs any computation.

#![allow(unused)]
fn main() {
let effect = succeed(42)
    .map(|n| { println!("mapping!"); n + 1 })
    .map(|n| n * 2);

// At this point: nothing has printed, nothing has computed.
// We have a description of three steps.

let result = run_blocking(effect.provide(()));
// NOW the effect runs. "mapping!" prints once. Result is 86.
}

This is the promise of laziness: you can build pipelines of transformations without triggering side effects until the moment you choose.
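The laziness itself is not magic — in plain Rust, a closure is already a description that runs only when called. A std-only sketch of the same "build now, run later" split:

```rust
use std::cell::Cell;

fn main() {
    let ran = Cell::new(false);
    // Building the closure executes nothing — it is only a description
    let effect = || {
        ran.set(true);          // the side effect
        (42 + 1) * 2            // the two "map" steps
    };
    assert!(!ran.get());        // at this point: nothing has run
    assert_eq!(effect(), 86);   // NOW it runs
    assert!(ran.get());
}
```

Effect generalizes this idea with typed errors and a typed environment on top.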

Combining map and map_error

A common pattern is calling both to normalise an effect into your domain's types:

#![allow(unused)]
fn main() {
fn fetch_user_record(id: u64) -> Effect<User, AppError, ()> {
    raw_db_fetch(id)
        .map(|row| User::from_row(row))
        .map_error(|e| AppError::Database(e))
}
}

The effect goes in with raw DB types; it comes out with domain types. The transformation chain documents the conversion at a glance.

and_then / tap (convenience)

Two more helpers are worth knowing:

#![allow(unused)]
fn main() {
// and_then: map + flatten (when your mapper returns an Option or Result)
let validated: Effect<i32, String, ()> = succeed(42)
    .and_then(|n| if n > 0 { Some(n) } else { None });

// tap: inspect the success value without changing it
let logged: Effect<i32, String, ()> = succeed(42)
    .tap(|n| println!("value: {n}"));  // side-effect, same type flows through
}

tap is particularly useful for debugging — add it anywhere in a chain without disrupting the types.

Summary

Method          Changes             Does not change
.map(f)         A (success type)    E, R
.map_error(f)   E (error type)      A, R
.tap(f)         nothing             A, E, R

None of them execute the effect. They all return new, larger descriptions.

Chaining Effects — flat_map and the Bind

map handles the case where your transformation is a pure function: A → B. But often the next step is itself an effect. You don't want Effect<Effect<B, E, R>, E, R> — you want Effect<B, E, R>. That's flat_map.

The Problem with map for Effects

Say you want to fetch a user and then fetch their posts:

#![allow(unused)]
fn main() {
fn get_user(id: u64) -> Effect<User, DbError, Database> { ... }
fn get_posts(user_id: u64) -> Effect<Vec<Post>, DbError, Database> { ... }
}

If you try to use map:

#![allow(unused)]
fn main() {
// This gives Effect<Effect<Vec<Post>, DbError, Database>, DbError, Database>
// — a nested effect, not what we want
let wrong = get_user(1).map(|user| get_posts(user.id));
}

map's function must return a plain value. If it returns an Effect, you get nesting.
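std's Option exhibits the same problem in miniature: map with an Option-returning function nests, while and_then flattens. A runnable illustration (half is a made-up helper):

```rust
// Succeeds only for even numbers — an Option-returning step
fn half(n: i32) -> Option<i32> {
    if n % 2 == 0 { Some(n / 2) } else { None }
}

fn main() {
    // map with an Option-returning function nests:
    let nested: Option<Option<i32>> = Some(8).map(half);
    assert_eq!(nested, Some(Some(4)));

    // and_then (Option's flat_map) keeps it flat:
    let flat: Option<i32> = Some(8).and_then(half);
    assert_eq!(flat, Some(4));
}
```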

flat_map: Chain Without Nesting

flat_map takes a function A → Effect<B, E, R> and "flattens" the result:

#![allow(unused)]
fn main() {
let combined: Effect<Vec<Post>, DbError, Database> =
    get_user(1).flat_map(|user| get_posts(user.id));
}

Now you have one flat effect that, when run, first fetches the user, then uses the result to fetch posts. The nesting is gone.

Chaining Multiple Steps

flat_map chains read left-to-right, but deep chains get noisy:

#![allow(unused)]
fn main() {
// Gets unwieldy quickly
let program = get_user(1)
    .flat_map(|user| get_posts(user.id)
        .flat_map(|posts| render_page(user, posts)));
}

This is where the effect! macro comes in.

The effect! Macro as Syntactic Sugar

The effect! macro turns flat_map chains into readable sequential code using the ~ operator:

#![allow(unused)]
fn main() {
use id_effect::effect;

let program: Effect<Page, AppError, Database> = effect! {
    let user  = ~ get_user(1).map_error(AppError::Db);
    let posts = ~ get_posts(user.id).map_error(AppError::Db);
    let page  = render_page(user, posts);
    page
};
}

The ~ operator is the bind: "run this effect and give me its success value." Each ~ expr desugars to a flat_map. The whole block is one effect.

Note that render_page (a pure function with no ~) is just a normal Rust expression — it runs inside the macro body during execution.

Error Short-Circuiting

Like ? in Result, if any ~ step fails, the whole effect! exits early with that error:

#![allow(unused)]
fn main() {
let program: Effect<Page, AppError, Database> = effect! {
    let user = ~ get_user(999).map_error(AppError::Db);
    // If get_user fails, execution stops here.
    // The rest never runs.
    let posts = ~ get_posts(user.id).map_error(AppError::Db);
    render_page(user, posts)
};
}

This is sequential, not parallel. Each step waits for the previous.
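The short-circuiting here mirrors ? on Result. A std-only sketch with stand-in steps (these toy get_user/get_posts are not the Database-backed ones above):

```rust
fn get_user(id: u64) -> Result<u64, String> {
    if id == 999 { Err("not found".into()) } else { Ok(id) }
}

fn get_posts(user_id: u64) -> Result<Vec<u64>, String> {
    Ok(vec![user_id * 10])
}

fn program(id: u64) -> Result<Vec<u64>, String> {
    let user = get_user(id)?;   // failure exits here; the rest never runs
    let posts = get_posts(user)?;
    Ok(posts)
}

fn main() {
    assert_eq!(program(1), Ok(vec![10]));
    assert_eq!(program(999), Err("not found".to_string()));
}
```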

map vs flat_map — When to Use Each

Situation                              Use
Transformation returns a plain value   .map(f)
Transformation returns an Effect       .flat_map(f) or effect! { ~ ... }
More than one sequential step          the effect! { ~ ... } macro

A rule of thumb: if you find yourself writing effect.map(|v| another_effect(v)) and noticing the nested type, switch to flat_map or the macro.

The Full Picture

#![allow(unused)]
fn main() {
// All equivalent:

// 1. Explicit flat_map
get_user(1)
    .flat_map(|user| get_posts(user.id))

// 2. Using effect! with ~
effect! {
    let user = ~ get_user(1);
    ~ get_posts(user.id)
}

// 3. Short form for single bind
effect! { ~ get_user(1).flat_map(|u| get_posts(u.id)) }
}

The effect! macro is the idiomatic choice for anything more than one step. Chapter 3 covers it in full detail.

Your First Real Program

Let's build something complete: a small program that loads configuration, connects to a database, queries a user, and formats a greeting. It's simple enough to fit on one page, but real enough to demonstrate the full effect workflow.

The Domain

#![allow(unused)]
fn main() {
#[derive(Debug)]
struct Config {
    db_url: String,
    app_name: String,
}

#[derive(Debug)]
struct User {
    id: u64,
    name: String,
    email: String,
}

#[derive(Debug)]
enum AppError {
    Config(String),
    Database(String),
}
}

The Individual Steps

Each step is a focused effect:

#![allow(unused)]
fn main() {
use id_effect::{Effect, effect, succeed, fail};

fn load_config() -> Effect<Config, AppError, ()> {
    // In a real app, read from a file or env vars
    succeed(Config {
        db_url: "postgres://localhost/myapp".to_string(),
        app_name: "Greeter".to_string(),
    })
}

fn connect_db(config: &Config) -> Effect<Database, AppError, ()> {
    Database::connect(&config.db_url)
        .map_error(|e| AppError::Database(format!("connect: {e}")))
}

fn fetch_user(db: &Database, id: u64) -> Effect<User, AppError, ()> {
    db.query_user(id)
        .map_error(|e| AppError::Database(format!("query: {e}")))
}

fn format_greeting(config: &Config, user: &User) -> String {
    format!("{}: Hello, {}! ({})", config.app_name, user.name, user.email)
}
}

Composing the Program

Now we compose these steps into one effect using effect!:

#![allow(unused)]
fn main() {
fn greet_user(user_id: u64) -> Effect<String, AppError, ()> {
    effect! {
        let config = ~ load_config();
        let db     = ~ connect_db(&config);
        let user   = ~ fetch_user(&db, user_id);
        format_greeting(&config, &user)
    }
}
}

Read it like a recipe:

  1. Load config — if it fails, stop with AppError::Config
  2. Connect to DB — if it fails, stop with AppError::Database
  3. Fetch user — if it fails, stop with AppError::Database
  4. Format the greeting — this is pure, always succeeds

Nothing has run yet. greet_user(42) is a value.

Running It

At the edge of the program — in main — we execute:

fn main() {
    match run_blocking(greet_user(42)) {
        Ok(greeting) => println!("{greeting}"),
        Err(AppError::Config(msg)) => eprintln!("Config error: {msg}"),
        Err(AppError::Database(msg)) => eprintln!("DB error: {msg}"),
    }
}

Testing It

Because the effect is a description, testing is straightforward — just swap out the underlying steps:

#![allow(unused)]
fn main() {
#[test]
fn test_greeting_format() {
    let effect = effect! {
        let config = ~ succeed(Config {
            db_url: "unused".into(),
            app_name: "TestApp".into(),
        });
        let user = ~ succeed(User {
            id: 1,
            name: "Alice".into(),
            email: "alice@example.com".into(),
        });
        format_greeting(&config, &user)
    };

    let result = run_test(effect);
    assert_eq!(result.unwrap(), "TestApp: Hello, Alice! (alice@example.com)");
}
}

No mocking framework. No Arc<dyn Trait> plumbing. Just substitute different succeed values for the steps you want to control.

What You Just Learned

You've written a complete effect-based program. Along the way you used:

  • succeed and fail to construct effects from values
  • .map and .map_error to transform success and error types
  • effect! { ~ ... } to sequence effects without callback nesting
  • run_blocking to execute at the program edge
  • run_test to verify behaviour in tests

That's the core of 90% of what you'll write day-to-day. The next two chapters go deeper: Chapter 3 explores the effect! macro in detail, and Chapter 4 begins the tour of R — the environment type that makes dependency injection a compile-time guarantee.

You just wrote your first effect-based program. It won't be your last.

The effect! Macro — Do-Notation for Mortals

Chapter 2 introduced the effect! macro as "syntactic sugar for flat_map." That's technically accurate, but undersells it. In practice, effect! is how you write almost every multi-step computation in id_effect.

This chapter covers the why, the how, and the limits of the macro. By the end you'll be fluent in ~, comfortable handling errors inside the macro, and clear on when not to use it.

Why Do-Notation Exists

Consider three steps that each depend on the previous result:

#![allow(unused)]
fn main() {
fn step_a() -> Effect<i32, Err, ()>   { succeed(1) }
fn step_b(n: i32) -> Effect<i32, Err, ()> { succeed(n * 2) }
fn step_c(n: i32) -> Effect<String, Err, ()> { succeed(n.to_string()) }
}

Written with raw flat_map:

#![allow(unused)]
fn main() {
let program = step_a()
    .flat_map(|a| step_b(a)
        .flat_map(|b| step_c(b)));
}

Two steps: readable. Five steps: a pyramid. Ten steps: indistinguishable from callback hell.

Haskell solved this decades ago with do-notation. Scala's for-comprehensions do the same thing. Rust doesn't have built-in do-notation, so id_effect provides it via a macro.

Do-Notation as a Concept

Do-notation lets you write sequential effectful code that looks like imperative code:

do
  a ← step_a
  b ← step_b(a)
  c ← step_c(b)
  return c

Each ← means "run this effect and bind its result to this name." If any step fails, the whole computation short-circuits.

Rust can't use the ← symbol, so id_effect uses ~ (prefix tilde):

#![allow(unused)]
fn main() {
effect! {
    let a = ~ step_a();
    let b = ~ step_b(a);
    let c = ~ step_c(b);
    c
}
}

Same semantics. Rust syntax. Zero nesting.

How the Desugaring Works

The macro transforms each ~ expr into a flat_map:

#![allow(unused)]
fn main() {
// Written:
effect! {
    let a = ~ step_a();
    let b = ~ step_b(a);
    b.to_string()
}

// Roughly expands to:
step_a().flat_map(|a| {
    step_b(a).flat_map(|b| {
        succeed(b.to_string())
    })
})
}

The macro generates exactly the nested flat_map chain you'd write by hand — just without the visual noise.
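You can trace the same expansion with std's Result, where and_then plays the role of flat_map and Ok stands in for succeed:

```rust
fn step_a() -> Result<i32, String> { Ok(1) }
fn step_b(n: i32) -> Result<i32, String> { Ok(n * 2) }

fn main() {
    // The hand-written equivalent of the desugared chain above
    let program = step_a().and_then(|a| {
        step_b(a).and_then(|b| {
            Ok::<String, String>(b.to_string()) // succeed(b.to_string())
        })
    });
    assert_eq!(program, Ok("2".to_string()));
}
```

The macro writes exactly this nesting for you; only the indentation differs.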

One Body, One Block

One discipline matters: use one effect! block per function. Don't branch between two macro bodies:

#![allow(unused)]
fn main() {
// BAD — two separate effect! blocks for one computation
if flag {
    effect! { let x = ~ a(); x }
} else {
    effect! { let y = ~ b(); y }
}

// GOOD — one block, branching inside
effect! {
    if flag {
        ~ a()
    } else {
        ~ b()
    }
}
}

A single effect! block is a single description. Splitting it into multiple blocks loses the composition guarantee.

Pure Expressions

Not every line inside effect! has to be an effect. Pure Rust expressions work normally:

#![allow(unused)]
fn main() {
effect! {
    let user = ~ fetch_user(id);
    let name = user.name.to_uppercase();  // pure — no ~
    let posts = ~ fetch_posts(user.id);
    (name, posts)
}
}

Only use ~ when the expression has type Effect<_, _, _>. Pure expressions just run inline.

The ~ Operator Explained

The ~ (tilde) is the bind operator inside effect!. It means: "execute this effect and give me its success value; if it fails, propagate the failure and stop."

Basic Usage

#![allow(unused)]
fn main() {
effect! {
    let user = ~ fetch_user(42);   // bind the result to `user`
    user.name
}
}

~ fetch_user(42) desugars to a flat_map. The rest of the block becomes the body of the closure.

Discarding Results

When you don't need the value, use ~ without a binding:

#![allow(unused)]
fn main() {
effect! {
    ~ log_event("processing started");   // run for side effect, discard result
    let result = ~ do_work();
    ~ log_event("processing done");
    result
}
}

Both ~ log_event(...) expressions run for their effects and the () return is discarded.

Method Calls on Effects

~ works on any expression that evaluates to an Effect. That includes method chains:

#![allow(unused)]
fn main() {
effect! {
    let user = ~ fetch_user(id).map_error(AppError::Database);
    let posts = ~ fetch_posts(user.id)
        .map_error(AppError::Database)
        .retry(Schedule::exponential(100.ms()).take(3));
    (user, posts)
}
}

The ~ applies to the entire expression, including any .map_error(), .retry(), etc. that follow.

~ in Conditionals and Loops

You can use ~ inside if expressions and loops:

#![allow(unused)]
fn main() {
effect! {
    let value = if condition {
        ~ compute_a()
    } else {
        ~ compute_b()
    };
    process(value)
}
}

Both branches are effects; the macro handles either path.

#![allow(unused)]
fn main() {
effect! {
    for id in user_ids {
        ~ process_user(id);  // sequential: one at a time
    }
    "done"
}
}

Note: this is sequential iteration. For concurrent processing, use fiber_all (Chapter 9).

What ~ Cannot Do

~ only works inside an effect! block. Calling it outside is a compile error:

#![allow(unused)]
fn main() {
// Does not compile — ~ is not valid here
let x = ~ fetch_user(42);

// Must be inside effect!
let x = effect! { ~ fetch_user(42) };
}

Also, ~ cannot bind across an async closure boundary. If you're calling from_async, the body of the async block is separate:

#![allow(unused)]
fn main() {
effect! {
    let result = ~ from_async(|_r| async move {
        // Inside here, you're in regular Rust async — no ~
        let data = some_future().await?;
        Ok(data)
    });
    result
}
}

Use ~ outside the async move block; use .await inside it.

The Old Postfix Syntax (Deprecated)

Early versions of id_effect used a postfix tilde: expr ~. This is no longer valid. Always use the prefix form:

#![allow(unused)]
fn main() {
// OLD — do not use
step_a() ~;

// GOOD
~ step_a();
let x = ~ step_b();
}

If you see postfix tilde in older code, update it to the prefix form.

Error Handling Inside effect!

The ~ operator short-circuits on failure — if a bound effect fails, the whole effect! block fails with that error. But you can also handle errors within the block.

The Default: Short-Circuit

#![allow(unused)]
fn main() {
effect! {
    let a = ~ step_a();    // if this fails → whole block fails
    let b = ~ step_b(a);   // if this fails → whole block fails
    b
}
}

This matches ? in Result. You get clean sequencing at the cost of aborting early. For most code, that's exactly what you want.

Catching Errors Mid-block

To handle an error inline and continue, use .catch before the ~:

#![allow(unused)]
fn main() {
effect! {
    let user = ~ fetch_user(id).catch(|_| succeed(User::anonymous()));
    // If fetch_user fails, we get User::anonymous() and continue
    render_user(user)
}
}

.catch converts a failure into a success (or a different effect). The ~ then sees a successful effect.

Converting Errors with map_error

Often you have multiple effect types with different E parameters and need to unify them:

#![allow(unused)]
fn main() {
#[derive(Debug)]
enum AppError {
    Db(DbError),
    Network(HttpError),
}

effect! {
    let user = ~ fetch_user(id).map_error(AppError::Db);
    let data = ~ fetch_external_data(user.id).map_error(AppError::Network);
    process(user, data)
}
}

Both effects are converted to the same AppError before binding. The block's E parameter is AppError throughout.

Handling Errors with fold

fold handles both success and failure paths:

#![allow(unused)]
fn main() {
effect! {
    let outcome = ~ risky_operation().fold(
        |err| format!("Error: {err}"),
        |val| format!("Success: {val}"),
    );
    // outcome is a plain String — the folded effect never fails here
    log_outcome(outcome)
}
}

fold is like pattern matching on the effect — you handle both arms and produce a uniform success value.
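std's Result offers the same two-armed fold under the name map_or_else (error handler first, success handler second):

```rust
fn main() {
    let ok: Result<i32, String> = Ok(7);
    let err: Result<i32, String> = Err("boom".into());

    // Handle both arms, produce one uniform value — no failure path remains
    let fold = |r: Result<i32, String>| {
        r.map_or_else(|e| format!("Error: {e}"), |v| format!("Success: {v}"))
    };

    assert_eq!(fold(ok), "Success: 7");
    assert_eq!(fold(err), "Error: boom");
}
```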

Re-raising Errors

Inside a .catch handler, you can inspect the error and decide whether to recover or re-fail:

#![allow(unused)]
fn main() {
effect! {
    let result = ~ db_operation().catch(|error| {
        if error.is_transient() {
            // Transient: retry once with a fallback
            fallback_db_operation()
        } else {
            // Permanent: re-raise
            fail(error)
        }
    });
    result
}
}

fail(error) inside a handler produces a failing effect — the outer ~ then propagates it.

Accumulating Multiple Errors

Short-circuit stops at the first error. When you need all errors (like form validation), use validate_all outside the macro:

#![allow(unused)]
fn main() {
// Not inside effect! — runs all regardless of failures
let results = validate_all(vec![
    validate_name(&input.name),
    validate_email(&input.email),
    validate_age(input.age),
]);

// results is Effect<Vec<Ok>, Vec<Err>, ()>
}

Chapter 8 covers validate_all and error accumulation patterns in detail.

The Rule of Thumb

Want                       Do
Stop at first failure      plain ~ effect
Provide a fallback         ~ effect.catch(|e| fallback)
Unify error types          ~ effect.map_error(Into::into)
Pattern match both arms    ~ effect.fold(on_err, on_ok)
Collect all failures       validate_all outside the macro

When Not to Use the Macro

effect! is the idiomatic choice for most multi-step computations. But it's a macro — which means it has edges. Knowing when to reach for raw flat_map instead saves debugging time.

Use Raw flat_map for Single-Step Transforms

When there's exactly one effectful step and you're transforming its result, flat_map is cleaner:

#![allow(unused)]
fn main() {
// Unnecessarily verbose
effect! {
    let id = ~ parse_id(raw);
    id
}

// Clear and direct — the lone bind adds nothing, so drop it
parse_id(raw)
}

Use effect! when you have two or more sequential steps. For one, flat_map or .map is usually enough.

Use Combinators for Structural Patterns

Some patterns have named combinators that are more expressive than macros:

#![allow(unused)]
fn main() {
// Instead of:
effect! {
    let a = ~ step_a();
    let b = ~ step_b();
    (a, b)
}

// Consider (when steps are independent):
step_a().zip(step_b())
}

zip communicates intent: "I need both, in any order." The effect! version implies sequential dependency. For independent steps, prefer explicit combinators. (For concurrent independent steps, see fiber_all in Chapter 9.)
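Option makes the same distinction in std: zip pairs two independent values, while and_then expresses a dependency. A quick illustration:

```rust
fn main() {
    // Independent: we just need both values, order irrelevant
    let both: Option<(i32, i32)> = Some(1).zip(Some(2));
    assert_eq!(both, Some((1, 2)));

    // If either side is absent, the pair is lost
    assert_eq!(Some(1).zip(None::<i32>), None);
}
```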

Avoid Deep Nesting Within the Block

The macro eliminates nesting between flat_map chains. But you can still create nested effect! blocks, which gets confusing:

#![allow(unused)]
fn main() {
// CONFUSING — nested macro bodies
effect! {
    let result = ~ effect! {      // inner macro
        let x = ~ inner_step();
        x * 2
    };
    result + 1
}

// BETTER — flatten it
effect! {
    let x = ~ inner_step();
    let result = x * 2;
    result + 1
}
}

If you feel the urge to nest effect! inside effect!, flatten the outer block instead.

The Macro and Type Inference

The macro occasionally confuses the type inferencer, especially when the error type isn't pinned early. If you see cryptic "can't infer type" errors inside effect!:

  1. Annotate the return type of the enclosing function explicitly
  2. Add a .map_error(Into::into) on the first ~ binding to anchor E
  3. As a last resort, break out the inner logic into a named helper function

When Generic Returns Are Needed

Library code with polymorphic A, E, R sometimes can't use the macro cleanly:

#![allow(unused)]
fn main() {
// This works fine with explicit function + effect!
pub fn load_config<A, E, R>() -> Effect<A, E, R>
where
    A: From<Config> + 'static,
    E: From<ConfigError> + 'static,
    R: 'static,
{
    effect!(|_r: &mut R| {
        let cfg = read_env_config()?;
        A::from(cfg)
    })
}
}

The closure form of effect! (with |_r: &mut R|) is the right tool for generic graph-builder functions. It's still the macro, just in its raw form.

Summary

Situation                               Prefer
2+ sequential steps                     effect! { ~ ... }
1 step, simple transform                .map / .flat_map
Independent steps                       .zip / combinators
Generic <A, E, R> graph builder         the effect!(|_r: &mut R| ...) closure form
Structural patterns (zip, race, all)    explicit combinators, not the macro

The macro is a tool, not a religion. Use it when it makes the code read like a story; use combinators when they express intent more directly.

The R Parameter — Your Dependencies, Encoded in Types

Chapter 1 introduced R as "what an effect needs to run." We kept it vague on purpose — you needed to understand effects before worrying about their environment.

Now it's time to understand R properly. This chapter answers: what is R mechanically, how does it flow through composition, and how do you satisfy it?

The payoff is significant. Once you internalize R, compile-time dependency injection stops feeling like magic and starts feeling obvious.

R Revisited — More Than Just a Type Parameter

You've seen R in function signatures:

#![allow(unused)]
fn main() {
fn get_user(id: u64) -> Effect<User, DbError, Database>
}

It looks like "this needs a Database." But what does that mean precisely?

R as a Contract

R is a promise to the compiler. When you write:

#![allow(unused)]
fn main() {
fn get_user(id: u64) -> Effect<User, DbError, Database> { ... }
}

You are declaring: "To run this effect, you must supply a Database." The compiler holds you to that promise. You cannot run get_user(1) without providing a Database — it won't compile.

#![allow(unused)]
fn main() {
let effect = get_user(1);

// This doesn't compile — Database not provided
// run_blocking(effect);

// This compiles — Database provided via .provide()
let effect_with_db = effect.provide(my_database);
run_blocking(effect_with_db);
}

The contract is not a comment. It's a type-system guarantee.

R Flows Through Composition

When you combine effects with effect!, their R requirements merge:

#![allow(unused)]
fn main() {
fn get_user(id: u64) -> Effect<User, DbError, Database> { ... }
fn get_posts(user_id: u64) -> Effect<Vec<Post>, DbError, Database> { ... }

// Combined: R is still just Database (both needed the same thing)
fn get_user_with_posts(id: u64) -> Effect<(User, Vec<Post>), DbError, Database> {
    effect! {
        let user  = ~ get_user(id);
        let posts = ~ get_posts(user.id);
        (user, posts)
    }
}
}

When both effects need the same type, the composed effect needs it once.

When they need different things:

#![allow(unused)]
fn main() {
fn log(msg: &str) -> Effect<(), LogError, Logger> { ... }
fn get_user(id: u64) -> Effect<User, DbError, Database> { ... }

// Combined: R is (Database, Logger) — needs BOTH
fn get_user_logged(id: u64) -> Effect<User, AppError, (Database, Logger)> {
    effect! {
        ~ log(&format!("Fetching user {id}")).map_error(AppError::Log);
        let user = ~ get_user(id).map_error(AppError::Db);
        user
    }
}
}

The compiler infers that get_user_logged needs both. You don't have to declare this manually — composing effects automatically tracks their dependencies.

Multiple Requirements

As functions grow, they naturally accumulate requirements:

#![allow(unused)]
fn main() {
fn process_order(order: Order) -> Effect<Receipt, AppError, (Database, PaymentGateway, EmailService, Logger)> {
    effect! {
        ~ log("Processing order").map_error(AppError::Log);
        let user    = ~ get_user(order.user_id).map_error(AppError::Db);
        let payment = ~ charge(order.total).map_error(AppError::Payment);
        ~ send_confirmation(&user.email).map_error(AppError::Email);
        Receipt::new(payment)
    }
}
}

Just from the type signature you know this function touches four subsystems. No need to read the implementation. No "I wonder if this calls the emailer" — the type tells you.

Why R Instead of Parameters?

Traditional Rust would thread dependencies as function parameters:

#![allow(unused)]
fn main() {
fn process_order(order: Order, db: &Database, pay: &PaymentGateway, email: &EmailService, log: &Logger) -> Result<Receipt, AppError> { ... }
}

That works, but it forces every layer of your call stack to accept and forward dependencies it may not directly use. The R parameter encodes the same information in the type of the return value rather than in the argument list — and the compiler tracks it automatically as you compose effects.

The practical difference becomes clear in large codebases: adding a new dependency to a deep function no longer requires propagating new parameters through every caller up to main. The composition chain carries it automatically.
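One way to build intuition — not id_effect's actual representation — is to model an effect in plain Rust as a function from its environment to a Result. The requirement lives in the return type, and "providing" is just applying the function:

```rust
struct Database; // stand-in dependency

// Roughly "Effect<u64, String, Database>": a description that
// cannot produce a value until a Database is supplied
fn get_user(id: u64) -> impl Fn(&Database) -> Result<u64, String> {
    move |_db| Ok(id) // a real version would query _db
}

fn main() {
    let effect = get_user(42);      // nothing runs; the requirement is in the type
    let db = Database;
    assert_eq!(effect(&db), Ok(42)); // "provide" = apply the environment
}
```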

Foreshadowing

You may be wondering: if R is a type like (Database, Logger), how does the runtime know which field is Database and which is Logger? And what happens if you have two databases?

Tuples are positional. Position-based access breaks down as soon as you add a second item of the same type, or reorder a tuple. The solution is Tags — compile-time names for values in the environment. That's Chapter 5. For now, the intuition of "R = the set of things needed" is sufficient.

Providing Dependencies — The provide Method

An effect with a non-() R can't be run directly. To run it, you must satisfy its requirements. The primary tool is .provide().

Basic provide

#![allow(unused)]
fn main() {
fn get_user(id: u64) -> Effect<User, DbError, Database> { ... }

let effect: Effect<User, DbError, Database> = get_user(42);

// Satisfy the Database requirement
let ready: Effect<User, DbError, ()> = effect.provide(my_database);

// R is now () — we can run it
let user = run_blocking(ready)?;
}

.provide(value) takes a value of whatever type R needs and returns a new effect where that requirement is satisfied. When R becomes (), the effect is runnable.

Providing Multiple Dependencies

If R = (Database, Logger), call .provide() twice:

#![allow(unused)]
fn main() {
fn logged_get_user(id: u64) -> Effect<User, AppError, (Database, Logger)> { ... }

let user = run_blocking(
    logged_get_user(42)
        .provide(my_database)
        .provide(my_logger)
)?;
}

Order doesn't matter — each .provide() removes one requirement from the tuple. After both, R = ().

Partial Providing with provide_some

Sometimes you want to satisfy some requirements now and the rest later:

#![allow(unused)]
fn main() {
// Provides Database, still needs Logger
let partial: Effect<User, AppError, Logger> =
    logged_get_user(42).provide_some(my_database);

// Later, provide the Logger
let ready: Effect<User, AppError, ()> =
    partial.provide(my_logger);
}

provide_some is useful when you're building up an effect in layers — each layer provides what it knows about.

Providing Layers (Preview)

For most real applications, you don't call .provide() with raw values. Instead, you use Layers — recipes that know how to construct dependencies from other dependencies:

#![allow(unused)]
fn main() {
// The idiomatic way in real apps
let app_effect = my_business_logic()
    .provide_layer(app_layer);
}

Layers are covered in Chapter 6. For now, think of .provide(value) as the low-level primitive and Layers as the high-level pattern built on top.

Where to Call provide

The rule is simple: provide at the program edge, not inside library functions.

Library functions should stay generic over R:

#![allow(unused)]
fn main() {
// BAD — library function reaches in and provides its own deps
pub fn process_order(order: Order) -> Effect<Receipt, AppError, ()> {
    let db = Database::connect("hardcoded-url");  // where did this come from?
    inner_process(order).provide(db)
}

// GOOD — library returns requirements in R, caller provides
pub fn process_order(order: Order) -> Effect<Receipt, AppError, Database> {
    inner_process(order)
}
}

The caller — main, a test, or a higher-level orchestrator — knows what database to use. The library function should not.

Summary

#![allow(unused)]
fn main() {
// Satisfy one requirement
effect.provide(value)

// Satisfy one of several requirements
effect.provide_some(value)

// Satisfy with a Layer (see Ch6)
effect.provide_layer(layer)
}

All three return a new effect with a smaller (or empty) R. None of them execute anything — .provide() is still lazy.

Widening and Narrowing — Environment Transformations

Sometimes your effect needs part of an environment, but you have the whole thing. Or you need to thread an effect through a context that provides more than required. This is where zoom_env and contramap_env come in.

The Mismatch Problem

Imagine your application has a large environment type:

#![allow(unused)]
fn main() {
struct AppEnv {
    db: Database,
    logger: Logger,
    config: Config,
    metrics: MetricsClient,
}
}

You have a utility function that only needs a Logger:

#![allow(unused)]
fn main() {
fn log_event(msg: &str) -> Effect<(), LogError, Logger> { ... }
}

You can't call this inside an effect! block that has AppEnv in scope — the types don't match. You need to narrow the environment down.

zoom_env: Narrow the Environment

zoom_env adapts an effect to work with a larger environment by providing a lens from the larger type to the smaller one:

#![allow(unused)]
fn main() {
// Adapt log_event to work with AppEnv
let app_log = log_event("hello").zoom_env(|env: &AppEnv| &env.logger);
}

Now app_log has type Effect<(), LogError, AppEnv>. The function extracts the Logger from AppEnv and feeds it to the original effect.

Inside effect!, the pattern looks like:

#![allow(unused)]
fn main() {
fn process(data: Data) -> Effect<(), AppError, AppEnv> {
    effect! {
        ~ log_event("start").zoom_env(|e: &AppEnv| &e.logger).map_error(AppError::Log);
        ~ db_query(data).zoom_env(|e: &AppEnv| &e.db).map_error(AppError::Db);
        ()
    }
}
}
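The narrowing can be sketched in plain std Rust by modeling an effect as a function of its environment and adapting it with a projection closure. This illustrates the idea only — it is not id_effect's internals:

```rust
struct Logger;
struct AppEnv { logger: Logger }

// An "effect" needing only a Logger, modeled as a function of it
fn log_event(msg: &str) -> impl Fn(&Logger) -> Result<String, String> + '_ {
    move |_logger| Ok(format!("logged: {msg}"))
}

fn main() {
    let eff = log_event("hello");
    // zoom_env by hand: adapt the Logger effect to the larger AppEnv
    // by projecting out the field it needs
    let app_log = move |env: &AppEnv| eff(&env.logger);

    let env = AppEnv { logger: Logger };
    assert_eq!(app_log(&env), Ok("logged: hello".to_string()));
}
```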

contramap_env: Transform the Environment

While zoom_env narrows, contramap_env transforms. It applies a function to convert whatever environment the caller provides into what the effect actually needs:

#![allow(unused)]
fn main() {
// Effect that reads its connection URL from the environment
fn connect_db() -> Effect<Database, DbError, String> { ... }

// You have a Config that contains the URL
let with_config = connect_db().contramap_env(|cfg: &Config| cfg.db_url.clone());
// Now the type is Effect<Database, DbError, Config>
}

contramap_env is the formal name for "adapt the environment type." In practice, most code uses zoom_env for the common case of extracting a field.

R as Documentation Revisited

These combinators highlight why R is valuable as documentation. When you see:

#![allow(unused)]
fn main() {
fn log_event(msg: &str) -> Effect<(), LogError, Logger>
}

You know exactly what this function needs. You don't need to read its body to see if it also touches the database. The zoom_env call at the use site makes the adaptation explicit — it's not hidden.

Compare to the pre-effect alternative:

#![allow(unused)]
fn main() {
// Traditional: you'd need to read the body to know what `env` is used for
fn log_event(env: &AppEnv, msg: &str) -> Result<(), LogError> { ... }
}

With R, the function declares what it needs. With zoom_env, the caller declares how to satisfy it.

When to Use These

In practice, zoom_env and contramap_env appear most often in library code — when writing reusable utilities that should work with any environment containing the right piece. Application code typically uses Layers and service tags (Chapters 5–6) which avoid the need for explicit projection.

Think of zoom_env as the manual fallback when the automatic layer-based wiring isn't the right fit.

R as Documentation — Self-Describing Functions

The R parameter is often described as "the environment type." That's true, but it undersells the practical benefit. R is living documentation that the compiler enforces.

The Signature Tells the Story

Consider two versions of the same function:

#![allow(unused)]
fn main() {
// Version A: traditional async
async fn process_order(order: Order) -> Result<Receipt, Error> {
    // What does this use? Read the body to find out.
    // Database? PaymentGateway? Email? Metrics?
    // You'll have to trace through 200 lines to know.
}

// Version B: effect-based
fn process_order(order: Order) -> Effect<Receipt, OrderError, (Database, PaymentGateway, EmailService, Logger)> {
    // What does this use? Look at the signature.
    // Database ✓, PaymentGateway ✓, EmailService ✓, Logger ✓
    // Done.
}
}

Version B's type is self-describing. You don't need to read the implementation to understand its dependency surface.

Code Review Benefits

In a pull request, R changes are visible in the diff. If someone adds a call to send_metrics() inside process_order and the MetricsClient wasn't previously in R, the function signature must change:

- fn process_order(order: Order) -> Effect<Receipt, OrderError, (Database, PaymentGateway, EmailService, Logger)>
+ fn process_order(order: Order) -> Effect<Receipt, OrderError, (Database, PaymentGateway, EmailService, Logger, MetricsClient)>

This diff is in the function signature — impossible to miss. With traditional parameters or singletons, new dependencies can silently appear in implementation bodies.

Refactoring Safety

When you refactor and remove a dependency, the compiler finds all the places that provided the now-unnecessary value. The R type shrinks, and all the callers that were providing the removed dep get a compile error saying they're providing something no longer needed.

#![allow(unused)]
fn main() {
// After removing Logger from process_order:

// This now fails to compile — .provide(my_logger) is unnecessary
run_blocking(
    process_order(order)
        .provide(my_db)
        .provide(my_logger) // ERROR: Logger is not part of R anymore
)?;
}

The compiler guides you to clean up callers. Traditional code leaves stale dependencies silently lingering.

Testing Clarity

When writing a test, R tells you exactly what you need to mock:

#![allow(unused)]
fn main() {
#[test]
fn test_process_order() {
    // R = (Database, PaymentGateway, EmailService, Logger)
    // So the test needs these four — no more, no less
    let result = run_test(
        process_order(test_order()),
        (mock_db(), mock_payment(), mock_email(), test_logger()),
    );
    assert!(result.is_ok());
}
}

There's no "I wonder if this also touches the metrics service" uncertainty. The type says it doesn't. If you're missing a mock, the code won't compile.

R is Not Magic

It's important to understand that R is just a type parameter. The "compile-time DI" property comes from:

  1. Functions declaring what they need in R
  2. The runtime refusing to execute unless R = ()
  3. Composition automatically merging requirements

There's no reflection, no registration, no framework. Just types.

The next chapter shows how Tags and Context make this scale beyond simple tuples — handling large, complex dependency graphs without positional ambiguity.

Tags and Context — Compile-Time Service Lookup

Chapter 4 showed how R encodes dependencies as types. We used simple types like Database and Logger. That works for small programs, but breaks down as the dependency graph grows.

This chapter introduces the solution: Tags. Tags give values compile-time identities, and Context assembles them into a heterogeneous list that the compiler can query by name, not by position.

By the end you'll understand why id_effect uses this structure instead of tuples, and how to extract exactly the service you need from any environment.

The Problem with Positional Types

If you've used tuples as R for a while, you've probably already hit the wall. Let's make it explicit.

The Tuple Explosion

Two dependencies: perfectly readable.

#![allow(unused)]
fn main() {
Effect<A, E, (Database, Logger)>
}

Five dependencies: which is which?

#![allow(unused)]
fn main() {
Effect<A, E, (Pool, Pool, Logger, Config, HttpClient)>
//            ^^^^ two Pools — which is the main DB and which is the cache?
}

Tuples are positional. (Pool, Pool, ...) is ambiguous — both fields have the same type. There's no way to distinguish them except by index, and index-based access is error-prone and breaks silently when you reorder the tuple.

The Fragility Problem

Positional types are fragile under change. Say your function started with:

#![allow(unused)]
fn main() {
fn foo() -> Effect<A, E, (Database, Logger)>
//                        0         1
}

Now a teammate adds Config between them:

#![allow(unused)]
fn main() {
fn foo() -> Effect<A, E, (Database, Config, Logger)>
//                        0         1       2
}

Every caller that was providing a tuple (db, log) must be updated to (db, config, log). Worse, positional access shifts: anything reading .1 now points at a different field, and when the neighbouring types happen to coincide, the compiler can't flag the mix-up. It's a silent bomb.

The Same-Type Collision

The deeper problem: Rust can't distinguish Pool for the main database from Pool for the cache. They're the same type. Positional tuples just accept both:

#![allow(unused)]
fn main() {
// V1: provide (main_pool, cache_pool)
// V2: accidentally swap them
effect.provide((cache_pool, main_pool))  // compiles, wrong at runtime
}

No compile error. Wrong behaviour. Possibly wrong for months before you notice.

What We Actually Need

We need a way to give each dependency a name — a compile-time identifier that's independent of its type and its position in any list.

What if:

  • Database meant "the tagged Pool known as DatabaseTag"
  • Cache meant "the tagged Pool known as CacheTag"

Then you couldn't accidentally swap them — they'd be different types even though both are Pool underneath.

That's exactly what Tags provide.

Tags — Branding Values with Identity

A Tag is a zero-sized type that acts as a compile-time name for a value. It associates an identifier with a value type, so two values of the same underlying type can be distinguished by their tag.

What Is a Tag?

#![allow(unused)]
fn main() {
use id_effect::{Tag, Tagged, tagged};

// A Tag is a zero-sized type with an associated Value type
struct DatabaseTag;
impl Tag for DatabaseTag {
    type Value = Pool;
}

struct CacheTag;
impl Tag for CacheTag {
    type Value = Pool;  // Same underlying type, different identity
}
}

Tagged<DatabaseTag> is "a Pool identified as the database." Tagged<CacheTag> is "a Pool identified as the cache." They're different types even though both wrap Pool.
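The mechanism can be sketched in a few lines of plain Rust. This is an illustrative reconstruction, not id_effect's actual definitions:

```rust
use std::marker::PhantomData;

// A tag names an underlying value type
trait Tag {
    type Value;
}

// Tagged<T> carries a T::Value; the tag exists only at the type level
struct Tagged<T: Tag> {
    value: T::Value,
    _tag: PhantomData<T>,
}

fn tagged<T: Tag>(value: T::Value) -> Tagged<T> {
    Tagged { value, _tag: PhantomData }
}

struct Pool {
    url: String,
}

// Two tags over the same underlying type
struct DatabaseTag;
impl Tag for DatabaseTag {
    type Value = Pool;
}
struct CacheTag;
impl Tag for CacheTag {
    type Value = Pool;
}

// Only accepts the database-tagged pool
fn db_url(db: &Tagged<DatabaseTag>) -> &str {
    &db.value.url
}

fn main() {
    let db = tagged::<DatabaseTag>(Pool { url: "postgres://main".to_string() });
    let _cache = tagged::<CacheTag>(Pool { url: "redis://cache".to_string() });
    assert_eq!(db_url(&db), "postgres://main");
    // db_url(&_cache) would be a compile error:
    // Tagged<CacheTag> is a different type from Tagged<DatabaseTag>
}
```

Because PhantomData<T> is zero-sized, Tagged<T> costs nothing at runtime; the distinction lives entirely in the type checker.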

Creating Tagged Values

#![allow(unused)]
fn main() {
let db_pool: Pool = connect_database();
let cache_pool: Pool = connect_cache();

// Wrap with identity
let db:    Tagged<DatabaseTag> = tagged(db_pool);
let cache: Tagged<CacheTag>    = tagged(cache_pool);
}

tagged(value) is a simple wrapper constructor. It moves the value inside a Tagged<T> newtype.

To get the value back:

#![allow(unused)]
fn main() {
let pool: &Pool = db.value();
let pool_owned: Pool = db.into_value();
}

Why Tags Make the Compiler Your Friend

Now the swap problem from the previous section becomes a compile error:

#![allow(unused)]
fn main() {
// These are DIFFERENT types
fn needs_database<R: NeedsDatabase>() -> Effect<A, E, R> { ... }
fn needs_cache<R: NeedsCache>() -> Effect<A, E, R> { ... }

// Providing the wrong one fails to compile
effect.provide(tagged::<CacheTag>(pool))
// ERROR: expected Tagged<DatabaseTag>, got Tagged<CacheTag>
}

The compiler distinguishes them. You can't accidentally swap the database and cache connections.

The service_key! Macro

In practice, you don't implement Tag by hand. The service_key! macro generates the boilerplate:

#![allow(unused)]
fn main() {
use id_effect::service_key;

service_key!(DatabaseKey: Pool);
service_key!(CacheKey: Pool);
service_key!(LoggerKey: Logger);
}

Each call creates a tag type with the right Tag implementation. Use these as your service keys.

NeedsX Supertraits

When you write a function that needs a DatabaseKey service, you want the bound expressed cleanly. The NeedsX supertrait pattern does this:

#![allow(unused)]
fn main() {
// Low-level (verbose)
pub fn get_user<R>(id: u64) -> Effect<User, DbError, R>
where
    R: Get<DatabaseKey, Target = Pool>
{ ... }

// High-level (idiomatic) — define NeedsDatabase once
pub trait NeedsDatabase: Get<DatabaseKey, Target = Pool> {}
impl<R: Get<DatabaseKey, Target = Pool>> NeedsDatabase for R {}

// Now use it
pub fn get_user<R: NeedsDatabase>(id: u64) -> Effect<User, DbError, R> { ... }
}

The NeedsX trait is just a named alias for the Get<Key> bound. It makes function signatures readable and allows you to change the key implementation without updating every call site.
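The pattern stands alone in plain Rust. The sketch below uses a simplified Get trait to show why the blanket impl makes the alias apply automatically to every qualifying environment:

```rust
// Simplified stand-in for the library's lookup trait
trait Get<K> {
    fn get(&self) -> &K;
}

struct Pool {
    url: &'static str,
}

struct Env {
    pool: Pool,
}

impl Get<Pool> for Env {
    fn get(&self) -> &Pool {
        &self.pool
    }
}

// The alias: no methods of its own, plus one blanket impl
trait NeedsDatabase: Get<Pool> {}
impl<R: Get<Pool>> NeedsDatabase for R {}

// Call sites get the readable bound
fn db_url<R: NeedsDatabase>(env: &R) -> &'static str {
    env.get().url
}

fn main() {
    let env = Env { pool: Pool { url: "postgres://main" } };
    assert_eq!(db_url(&env), "postgres://main");
}
```

The blanket impl is the key step: any R that satisfies the underlying Get bound is automatically NeedsDatabase, so callers never implement the alias by hand.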

Summary

  • Tag: zero-sized type acting as a compile-time name
  • Tagged<T>: a value wrapped with a tag identity
  • tagged(v): wraps a value with its tag
  • service_key!(K: V): macro that generates a tag type
  • NeedsX: supertrait alias for Get<XKey> bounds

Tags eliminate the position problem. The next section shows how they're assembled into Context — the heterogeneous list that forms the R of a running effect.

Context and HLists — The Heterogeneous Stack

Context is the concrete data structure that R resolves to at runtime. It's a heterogeneous list (HList) — a stack where each element has a different type, and the compiler tracks all of them.

The Structure: Cons / Nil

Context is built from two constructors:

#![allow(unused)]
fn main() {
use id_effect::{Context, Cons, Nil, Tagged};

// An empty context
type Empty = Nil;

// A context with one item
type WithDb = Cons<Tagged<DatabaseKey>, Nil>;

// A context with two items
type WithDbAndLogger = Cons<Tagged<DatabaseKey>, Cons<Tagged<LoggerKey>, Nil>>;
}

Cons<Head, Tail> prepends one item to a list. Nil is the empty list. It's the same idea as linked-list types in functional languages, but expressed as Rust type parameters.
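Stripped of the library machinery, the structure is just two generic structs. A minimal standalone sketch:

```rust
// A minimal heterogeneous list: each Cons cell adds one statically-typed slot
struct Nil;
struct Cons<H, T> {
    head: H,
    tail: T,
}

fn main() {
    // Same shape as Cons<u32, Cons<&str, Nil>>
    let env = Cons {
        head: 42u32,
        tail: Cons { head: "logger", tail: Nil },
    };
    // Every element keeps its own concrete type
    assert_eq!(env.head, 42);
    assert_eq!(env.tail.head, "logger");
}
```

Unlike a Vec, the element types are part of the list's type, which is what lets the compiler answer "does this environment contain a Database?" before the program runs.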

Building Context Values

Manually building Cons chains is verbose. The ctx! macro handles it:

#![allow(unused)]
fn main() {
use id_effect::ctx;

let env: Context<Cons<Tagged<DatabaseKey>, Cons<Tagged<LoggerKey>, Nil>>> = ctx!(
    tagged::<DatabaseKey>(my_pool),
    tagged::<LoggerKey>(my_logger),
);
}

Or use prepend_cell manually if you need to add to an existing context:

#![allow(unused)]
fn main() {
use id_effect::prepend_cell;

let base = ctx!(tagged::<LoggerKey>(my_logger));
let full = prepend_cell(tagged::<DatabaseKey>(my_pool), base);
}

Both produce the same type. ctx! is preferred for clarity.

Why HLists and Not HashMap?

A HashMap<TypeId, Box<dyn Any>> would also store heterogeneous values. But it trades type safety for flexibility — lookups return Box<dyn Any>, and you have to downcast.

Context gives:

  • Compile-time lookup: if you ask for DatabaseKey and it's not in the context, you get a compile error
  • Zero-cost access: no hashing, no downcast, no Option unwrapping
  • Type preservation: Get<DatabaseKey> returns &Pool, not &dyn Any

The cost is that the type of a Context encodes all its elements in the type parameter — which is why you see signatures like Cons<Tagged<A>, Cons<Tagged<B>, Nil>>. It's verbose, but it's verifiable at compile time.

Order Doesn't Matter for Access

Unlike tuples, adding an element to a Context doesn't break existing lookups. Get<DatabaseKey> finds the Tagged<DatabaseKey> wherever it is in the list:

#![allow(unused)]
fn main() {
// These two contexts both support Get<DatabaseKey>
type C1 = Cons<Tagged<DatabaseKey>, Cons<Tagged<LoggerKey>, Nil>>;
type C2 = Cons<Tagged<LoggerKey>, Cons<Tagged<DatabaseKey>, Nil>>;

// Both work — order doesn't matter for tag-based access
fn use_db<R: NeedsDatabase>(r: &R) { ... }
}

This is what makes R stable under refactoring: adding a new service to the context doesn't change how existing services are accessed.

R in Practice

In real application code, you rarely construct Context directly. Layers (Chapter 6) build it for you. Services (Chapter 7) access it through NeedsX bounds. You interact with Context directly mostly in:

  • Manual test environments (constructing a test Context with mock services)
  • Integration points where you're bridging an existing application to id_effect
  • Internal library utilities that manipulate context directly

For everything else, the layer and service machinery handles construction automatically.

Get and GetMut — Extracting from Context

Once you have a Context, you need to extract values from it. The Get and GetMut traits define the interface for type-safe lookup by tag.

Get: Read-Only Access

#![allow(unused)]
fn main() {
use id_effect::Get;

fn use_database<R>(env: &R) -> &Pool
where
    R: Get<DatabaseKey>,
{
    env.get::<DatabaseKey>()
}
}

Get<K> is the trait bound that says "this environment contains a value tagged with K." The get::<K>() method returns a reference to that value.

The compiler finds the right element in the Cons chain automatically. Position doesn't matter — it searches by tag identity.
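How can a trait method "search" a type-level list? The standard trick is a hidden index type parameter that the compiler infers, the same idea behind frunk's Selector. The names below (Here, There, Select) are illustrative, not id_effect's real API:

```rust
#![allow(dead_code)]
use std::marker::PhantomData;

struct Nil;
struct Cons<H, T> {
    head: H,
    tail: T,
}

// Type-level indices: the compiler infers which one applies
struct Here;
struct There<I>(PhantomData<I>);

// "An S can be found in this list"; I records where
trait Select<S, I> {
    fn select(&self) -> &S;
}

// Base case: S sits at the head
impl<S, T> Select<S, Here> for Cons<S, T> {
    fn select(&self) -> &S {
        &self.head
    }
}

// Recursive case: keep searching the tail
impl<S, H, T, I> Select<S, There<I>> for Cons<H, T>
where
    T: Select<S, I>,
{
    fn select(&self) -> &S {
        self.tail.select()
    }
}

fn main() {
    let env = Cons {
        head: 42u32,
        tail: Cons { head: "logger", tail: Nil },
    };
    let n: &u32 = env.select(); // found at the head
    let s: &&str = env.select(); // found one cell down
    assert_eq!(*n, 42);
    assert_eq!(*s, "logger");
}
```

Only one of the two impls has satisfiable bounds for a given element type, so the compiler resolves the index without any annotation, and the "search" compiles down to a plain field access.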

GetMut: Mutable Access

#![allow(unused)]
fn main() {
use id_effect::GetMut;

fn increment_counter<R>(env: &mut R)
where
    R: GetMut<CounterKey>,
{
    let counter: &mut Counter = env.get_mut::<CounterKey>();
    counter.increment();
}
}

GetMut is the mutable variant. It's less commonly needed in effect code (effects generally avoid shared mutable state in favour of TRef or services), but it's there for integration scenarios.

The ~ Operator Uses Get Internally

Inside an effect! block, the ~ operator is what calls get::<K>():

#![allow(unused)]
fn main() {
effect! {
    let db = ~ DatabaseKey;  // equivalent to env.get::<DatabaseKey>()
    let user = ~ db.fetch_user(id);
    user
}
}

The ~ ServiceKey form binds the service to a local name. This is the primary way you access services in effect code — you rarely call get() directly.

NeedsX Supertraits (Recap)

Rather than writing Get<DatabaseKey> in every function bound, define a NeedsDatabase supertrait:

#![allow(unused)]
fn main() {
pub trait NeedsDatabase: Get<DatabaseKey> {}
impl<R: Get<DatabaseKey>> NeedsDatabase for R {}
}

Then use it:

#![allow(unused)]
fn main() {
fn get_user<R: NeedsDatabase>(id: u64) -> Effect<User, DbError, R> { ... }
fn get_posts<R: NeedsDatabase>(uid: u64) -> Effect<Vec<Post>, DbError, R> { ... }

// Composed: still just NeedsDatabase (same requirement)
fn get_user_with_posts<R: NeedsDatabase>(id: u64) -> Effect<(User, Vec<Post>), DbError, R> { ... }
}

The NeedsX pattern keeps signatures readable. Define one per service in your application.

Compile-Time Guarantees

The key property: if you write Get<DatabaseKey> in a bound, and the caller tries to run the effect without providing DatabaseKey, you get a compile error, not a runtime panic.

#![allow(unused)]
fn main() {
// Missing DatabaseKey in the context
let bad_env = ctx!(tagged::<LoggerKey>(my_logger));

// This won't compile — bad_env doesn't satisfy NeedsDatabase
run_blocking(get_user(42).provide(bad_env));
// ERROR: the trait bound `Context<Cons<Tagged<LoggerKey>, Nil>>: NeedsDatabase` is not satisfied
}

The error message tells you exactly what's missing. No runtime "service not found" exceptions. No defensive unwraps in service lookup code.

This is the payoff of the whole Tags/Context system: an application that compiles is an application where every service dependency is satisfied.

Layers — Building Your Dependency Graph

You've seen how R encodes what an effect needs, and how Context holds the values at runtime. But who builds the context?

In small programs you can construct context manually with ctx! and hand values to provide. In real applications, you need something more powerful: a way to declare how to build each piece of the environment, with automatic dependency ordering and lifecycle management.

That's what Layers are for.

A Layer is a recipe for building part of an environment. It knows what it produces, what it needs to produce it, and (optionally) how to clean up afterward. Wire Layers together, and id_effect figures out the right build order automatically.

What Is a Layer?

An Effect describes a computation that needs an environment. A Layer describes how to build part of that environment.

Think of it this way:

Effect<User, DbError, Database>
  └── "I need a Database to produce a User"

Layer<Database, ConfigError, Config>
  └── "I need a Config to produce a Database"

They're complementary. Effects declare their needs; Layers declare how to satisfy them.

The Layer Type

#![allow(unused)]
fn main() {
// Layer<Output, Error, Input>
//        │       │      └── What I need to build
//        │       └───────── What can go wrong while building
//        └───────────────── What I produce
Layer<Tagged<DatabaseKey>, DbError, Tagged<ConfigKey>>
}

A layer that takes a Tagged<ConfigKey> and produces a Tagged<DatabaseKey>, possibly failing with DbError.

A Simple Layer

#![allow(unused)]
fn main() {
use id_effect::{Layer, LayerFn, effect, tagged};

let db_layer: Layer<Tagged<DatabaseKey>, DbError, Tagged<ConfigKey>> =
    LayerFn::new(|config: &Tagged<ConfigKey>| {
        effect! {
            let pool = ~ connect_pool(config.value().db_url());
            tagged::<DatabaseKey>(pool)
        }
    });
}

LayerFn::new wraps an effectful constructor. The closure takes the required input (the config) and returns an effect that produces the output (the database connection).

Layers as Values

Like effects, layers are lazy descriptions. Constructing a LayerFn does nothing. The actual connection only happens when the layer is built — typically at application startup or at the top of a test.

#![allow(unused)]
fn main() {
// Build the layer (runs the constructor effect)
let db: Tagged<DatabaseKey> = db_layer.build(my_config).await?;
}

Or more commonly, layers are composed and the whole graph is built at once (covered in §6.4).

The Key Insight

  • Effects are programs. They describe what to do with dependencies.
  • Layers are constructors. They describe how to build dependencies.

When you call .provide_layer(some_layer), you're saying: "use this Layer's output to satisfy this Effect's R." The Layer builds the dependency; the Effect consumes it.

Lifecycle and Resource Safety

Layers aren't just factories. They can also register cleanup:

#![allow(unused)]
fn main() {
LayerFn::new(|_: &Nil| {
    effect! {
        // `url` and `scope` are assumed to be in scope here;
        // scopes and finalizers are covered in Chapter 10
        let pool = ~ connect_pool(url);
        // Register cleanup: close pool on shutdown
        ~ scope.add_finalizer(Finalizer::new(move || {
            pool.close()
        }));
        tagged::<DatabaseKey>(pool)
    }
})
}

The finalizer runs when the layer's scope is dropped — even if the application panics or a fiber is cancelled. This makes Layers the safe way to manage resources with expensive setup and teardown.
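Rust's Drop is the underlying guarantee. A standalone sketch of a drop-based finalizer (illustrative, not id_effect's Finalizer type):

```rust
use std::cell::Cell;

// A finalizer that fires when its scope ends, via Drop
struct Finalizer<F: FnOnce()>(Option<F>);

impl<F: FnOnce()> Drop for Finalizer<F> {
    fn drop(&mut self) {
        // take() lets us move the FnOnce out of &mut self exactly once
        if let Some(f) = self.0.take() {
            f();
        }
    }
}

fn main() {
    let closed = Cell::new(false);
    {
        let _cleanup = Finalizer(Some(|| closed.set(true)));
        assert!(!closed.get()); // resource still open inside the scope
    } // scope ends here; Drop runs the cleanup
    assert!(closed.get());
}
```

Because Drop runs during unwinding too, cleanup registered this way survives panics, which is the property the layer scope builds on.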

Chapter 10 covers scopes and finalizers in detail. For now, know that Layers are where you register resource lifecycles.

Building Layers — From Simple to Complex

Simple Layers: One Input, One Output

The simplest layer takes one service and produces another:

#![allow(unused)]
fn main() {
// Produces a database connection from config
let db_layer = LayerFn::new(|config: &Tagged<ConfigKey>| {
    effect! {
        let pool = ~ connect_pool(config.value().db_url());
        tagged::<DatabaseKey>(pool)
    }
});
}

Layers That Need Nothing

A layer that builds from scratch (no inputs) uses Nil or () as its input type:

#![allow(unused)]
fn main() {
// Config layer — reads from environment variables, needs nothing
let config_layer = LayerFn::new(|_: &Nil| {
    effect! {
        let cfg = Config::from_env()?;
        tagged::<ConfigKey>(cfg)
    }
});
}

Layers With Multiple Inputs

To require multiple services, the input type is a tuple of tagged values:

#![allow(unused)]
fn main() {
// Logger needs both Config and MetricsClient
let logger_layer = LayerFn::new(
    |env: &(Tagged<ConfigKey>, Tagged<MetricsKey>)| {
        effect! {
            let config  = env.0.value();
            let metrics = env.1.value();
            let logger  = Logger::new(config.log_level(), metrics.clone());
            tagged::<LoggerKey>(logger)
        }
    }
);
}

Layers That Produce Multiple Values

A layer can produce a context with several services at once:

#![allow(unused)]
fn main() {
// Build both primary and replica database pools
let db_layers = LayerFn::new(|config: &Tagged<ConfigKey>| {
    effect! {
        let primary = ~ connect_pool(config.value().primary_url());
        let replica = ~ connect_pool(config.value().replica_url());
        ctx!(
            tagged::<PrimaryDbKey>(primary),
            tagged::<ReplicaDbKey>(replica),
        )
    }
});
}

Memoization

By default, LayerFn builds fresh on every call. If you want the same instance shared across multiple dependents, wrap with .memoize():

#![allow(unused)]
fn main() {
let shared_config = config_layer.memoize();
// Now every layer that depends on ConfigKey gets the same instance
}

Without memoization, if three layers need ConfigKey, config_layer would run three times. With .memoize(), it runs once and the result is cached for the lifetime of the build.
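The build-at-most-once semantics can be sketched with the standard library's OnceLock. MemoLayer and get_or_build are illustrative names, not id_effect's API:

```rust
use std::sync::OnceLock;

// Build-at-most-once semantics, sketched with std's OnceLock
struct MemoLayer<T> {
    cell: OnceLock<T>,
}

impl<T> MemoLayer<T> {
    fn new() -> Self {
        MemoLayer { cell: OnceLock::new() }
    }

    // The first caller runs `build`; later callers get the cached value
    fn get_or_build(&self, build: impl FnOnce() -> T) -> &T {
        self.cell.get_or_init(build)
    }
}

fn main() {
    let config = MemoLayer::new();
    let mut builds = 0;

    let first = *config.get_or_build(|| {
        builds += 1;
        "config-v1"
    });
    let second = *config.get_or_build(|| {
        builds += 1;
        "config-v2" // this constructor never runs
    });

    assert_eq!(first, "config-v1");
    assert_eq!(second, "config-v1");
    assert_eq!(builds, 1); // built exactly once
}
```

OnceLock also handles the concurrent case: if two dependents race to build the same layer, one constructor runs and the other blocks until the value is ready.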

The Pattern in Practice

A typical application has a handful of layers that form a pipeline:

config_layer (no input)
  → db_layer (needs config)
  → cache_layer (needs config)
  → service_layer (needs db + cache)

The next section shows how to wire them together.

Stacking Layers — Composition Patterns

Individual layers do one thing. A real application needs them composed. id_effect provides two composition patterns: sequential stacking and parallel merging.

Sequential Stacking with .stack()

#![allow(unused)]
fn main() {
use id_effect::Stack;

let app_env = config_layer
    .stack(db_layer)       // Config → (Config, Database)
    .stack(logger_layer)   // → (Config, Database, Logger)
    .stack(service_layer); // → (Config, Database, Logger, Service)
}

Each .stack() takes the output of the previous layer and combines it with the new layer's output. The final type accumulates everything.

.stack() implies sequential ordering: db_layer runs after config_layer completes, because db_layer needs what config_layer produces.

Parallel Merging with merge_all

When layers don't depend on each other, they can be built in parallel:

#![allow(unused)]
fn main() {
use id_effect::merge_all;

// These three layers are independent — build them concurrently
let monitoring = merge_all!(
    metrics_layer,
    tracing_layer,
    health_check_layer,
);
}

merge_all! takes a list of layers that share the same input type and merges their outputs. Once that shared input is available, all three build concurrently.

Combining Stack and merge_all

In practice, you mix both:

#![allow(unused)]
fn main() {
let app_env = config_layer
    .stack(db_layer)
    .stack(
        // cache and redis are independent of each other but both need config+db
        merge_all!(cache_layer, redis_layer)
    )
    .stack(service_layer);
}

The build graph:

  1. Build config
  2. Build db (needs config)
  3. Build cache and redis concurrently (both need config + db)
  4. Build service (needs all of the above)

Building and Providing

Once you have a composed layer, build it and provide it to an effect:

#![allow(unused)]
fn main() {
// Build all layers, get back a Context with everything
let env = app_env.build(()).await?;

// Provide to the effect and run
run_blocking(my_application().provide(env));
}

Or use .provide_layer() directly on an effect:

#![allow(unused)]
fn main() {
run_blocking(
    my_application()
        .provide_layer(app_env)
);
}

.provide_layer() builds the layer and provides its output in one step. This is the most common pattern at the application entry point.

The type_only Pattern for Tests

Tests often want a subset of services. You can stack only what the test needs:

#![allow(unused)]
fn main() {
#[test]
fn test_user_service() {
    let test_layer = config_layer.stack(mock_db_layer);

    let result = run_test(
        get_user(1)
            .provide_layer(test_layer)
    );
    assert!(result.is_ok());
}
}

No need to build the full application stack. The test provides exactly what get_user requires — the type system enforces completeness.

Layer Graphs — Automatic Dependency Resolution

For small applications, manually stacking layers in the right order is fine. For larger ones with dozens of services and complex inter-dependencies, it gets tedious and error-prone. LayerGraph automates it.

Declaring a Layer Graph

#![allow(unused)]
fn main() {
use id_effect::{LayerGraph, LayerNode};

let graph = LayerGraph::new()
    .add(LayerNode::new("config",  config_layer))
    .add(LayerNode::new("db",      db_layer)
             .requires("config"))
    .add(LayerNode::new("cache",   cache_layer)
             .requires("config"))
    .add(LayerNode::new("mailer",  mailer_layer)
             .requires("config"))
    .add(LayerNode::new("service", service_layer)
             .requires("db")
             .requires("cache"))
    .add(LayerNode::new("app",     app_layer)
             .requires("service")
             .requires("mailer"));
}

Each LayerNode has a name and declares its dependencies with .requires(). The LayerGraph figures out the build order automatically.

Planning and Building

#![allow(unused)]
fn main() {
// Compute the build plan (topological sort)
let plan: LayerPlan = graph.plan()?;

// Build according to the plan (parallelises where possible)
let env = plan.build(()).await?;
}

LayerPlan is the computed ordering. It runs independent layers concurrently and sequential layers in order. The graph in the example above would:

  1. Build config first
  2. Build db, cache, and mailer concurrently (all need config, none need each other)
  3. Build service (needs db + cache)
  4. Build app (needs service + mailer)

Cycle Detection

graph.plan() returns an error if there are circular dependencies:

#![allow(unused)]
fn main() {
let bad_graph = LayerGraph::new()
    .add(LayerNode::new("a", layer_a).requires("b"))
    .add(LayerNode::new("b", layer_b).requires("a"));  // circular!

let err = bad_graph.plan();  // Err(LayerGraphError::Cycle { ... })
}

Cycles are detected at plan time, before any work begins. The error message identifies the cycle.
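Under the hood, planning is a topological sort. A standalone sketch of such a planner (Kahn's algorithm; this is not id_effect's implementation, and the Err shape is simplified to a list of stuck node names):

```rust
use std::collections::{HashMap, VecDeque};

// `deps` maps each layer name to the names it requires.
// Ok(order) is a valid build order; Err(stuck) lists nodes in or behind a cycle.
fn plan(deps: &HashMap<&str, Vec<&str>>) -> Result<Vec<String>, Vec<String>> {
    // indegree = number of still-unbuilt requirements per node
    let mut indegree: HashMap<&str, usize> =
        deps.iter().map(|(n, reqs)| (*n, reqs.len())).collect();

    // dependents[r] = nodes waiting on r
    let mut dependents: HashMap<&str, Vec<&str>> = HashMap::new();
    for (node, reqs) in deps {
        for r in reqs {
            dependents.entry(*r).or_default().push(*node);
        }
    }

    // Start with every node that requires nothing
    let mut ready: VecDeque<&str> = indegree
        .iter()
        .filter(|(_, d)| **d == 0)
        .map(|(n, _)| *n)
        .collect();

    let mut order = Vec::new();
    while let Some(n) = ready.pop_front() {
        order.push(n.to_string());
        if let Some(waiting) = dependents.get(n) {
            for &d in waiting {
                let count = indegree.get_mut(d).unwrap();
                *count -= 1;
                if *count == 0 {
                    ready.push_back(d);
                }
            }
        }
    }

    if order.len() == deps.len() {
        Ok(order)
    } else {
        // Anything that never reached indegree 0 is stuck behind a cycle
        Err(indegree
            .iter()
            .filter(|(_, d)| **d > 0)
            .map(|(n, _)| n.to_string())
            .collect())
    }
}

fn main() {
    let deps = HashMap::from([
        ("config", vec![]),
        ("db", vec!["config"]),
        ("cache", vec!["config"]),
        ("service", vec!["db", "cache"]),
    ]);
    let order = plan(&deps).unwrap();
    let pos = |n: &str| order.iter().position(|x| x == n).unwrap();
    assert!(pos("config") < pos("db"));
    assert!(pos("config") < pos("cache"));
    assert!(pos("db") < pos("service"));
    assert!(pos("cache") < pos("service"));

    // A cycle leaves nodes with nonzero indegree
    let cyclic = HashMap::from([("a", vec!["b"]), ("b", vec!["a"])]);
    assert_eq!(plan(&cyclic).unwrap_err().len(), 2);
}
```

Nodes that enter the ready queue at the same time are exactly the ones a real planner can build concurrently.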

Conditional Layers

Layers can be added conditionally:

#![allow(unused)]
fn main() {
let mut graph = LayerGraph::new()
    .add(LayerNode::new("config", config_layer));

if cfg!(feature = "metrics") {
    graph = graph.add(LayerNode::new("metrics", metrics_layer)
        .requires("config"));
}
}

Feature flags and environment-based configuration compose naturally with the graph API.

When to Use LayerGraph vs Stack

  • Fewer than ~5 layers, clear order → .stack()
  • Many layers, complex dependencies → LayerGraph
  • Need cycle detection → LayerGraph
  • Conditional/pluggable services → LayerGraph
  • Tests with minimal dependencies → .stack()

LayerGraph is overkill for small programs. For anything approaching production scale, the automatic resolution and parallelism are worth it.

Services — The Complete DI Pattern

The previous chapters established the building blocks: Tags (identities), Context (the environment), and Layers (constructors). Now we put them together into the complete Service pattern.

A Service in id_effect is the combination of:

  1. A trait defining the interface
  2. A tag identifying it in the environment
  3. One or more implementations (production and test)
  4. A layer that wires an implementation into the environment

This is the full dependency injection story. By the end of this chapter you'll have a working multi-service application wired entirely at compile time.

Service Traits — Defining Interfaces

The first step in defining a service is the trait — the contract between the service and its callers.

Define the Interface

#![allow(unused)]
fn main() {
use id_effect::{Effect};

// The interface contract
trait UserRepository: Send + Sync {
    fn get_user(&self, id: u64) -> Effect<User, DbError, ()>;
    fn save_user(&self, user: &User) -> Effect<(), DbError, ()>;
    fn list_users(&self) -> Effect<Vec<User>, DbError, ()>;
}
}

A few conventions:

  • Methods return Effect<_, _, ()> — the service itself has R = () because it doesn't need additional context. The R of callers (who access the service through the environment) is what carries the requirement.
  • Send + Sync on the trait means implementations can be stored in Arc<dyn Trait> and shared across fibers.
  • Method names are verb-oriented: get_user, save_user, not user or users.

Define the Tag

Each service needs a tag that identifies it in the environment:

#![allow(unused)]
fn main() {
use id_effect::service_key;

// Associates UserRepositoryTag with Arc<dyn UserRepository>
service_key!(UserRepositoryTag: Arc<dyn UserRepository>);
}

service_key! generates:

  • A zero-sized UserRepositoryTag type
  • A Tag implementation that maps UserRepositoryTag → Arc<dyn UserRepository>
  • A NeedsUserRepository supertrait that functions can use in bounds

Define the Needs Supertrait

#![allow(unused)]
fn main() {
use id_effect::{Get};

// Generated by service_key! or defined manually
pub trait NeedsUserRepository: Get<UserRepositoryTag> {}
impl<R: Get<UserRepositoryTag>> NeedsUserRepository for R {}
}

Now any function that uses the user repository declares this:

#![allow(unused)]
fn main() {
fn get_user_profile(id: u64) -> Effect<UserProfile, AppError, impl NeedsUserRepository> {
    effect! {
        let repo = ~ UserRepositoryTag;  // get the service
        let user = ~ repo.get_user(id);
        UserProfile::from(user)
    }
}
}

The compiler checks that NeedsUserRepository is satisfied before the function can run.

Keeping Traits Focused

A common mistake is defining one massive AppService trait with everything in it. Prefer small, focused traits:

#![allow(unused)]
fn main() {
// BAD — one God trait
trait AppService {
    fn get_user(&self, id: u64) -> Effect<User, AppError, ()>;
    fn send_email(&self, to: &str, body: &str) -> Effect<(), AppError, ()>;
    fn charge_card(&self, amount: u64) -> Effect<(), AppError, ()>;
}

// GOOD — separate concerns
trait UserRepository { fn get_user(&self, id: u64) -> Effect<User, DbError, ()>; }
trait EmailService { fn send_email(&self, to: &str, body: &str) -> Effect<(), EmailError, ()>; }
trait PaymentGateway { fn charge(&self, amount: u64) -> Effect<(), PaymentError, ()>; }
}

Small traits are composable. Functions declare exactly which services they need — NeedsUserRepository + NeedsEmailService — and the compiler enforces it.

ServiceEnv and service_env — The Glue

service_key! handles the tag definition. ServiceEnv and service_env provide the glue for accessing a service from the environment inside an effect.

service_env: Access a Service

#![allow(unused)]
fn main() {
use id_effect::{service_env, ServiceEnv};

// Access a service and use it
fn get_user(id: u64) -> Effect<User, DbError, ServiceEnv<UserRepositoryTag>> {
    effect! {
        let repo = ~ service_env::<UserRepositoryTag>();
        ~ repo.get_user(id)
    }
}
}

service_env::<K>() returns an effect that, when run, extracts the Arc<dyn Trait> identified by K from the environment.

ServiceEnv<K> is a type alias for the R required by service_env — effectively Context<Cons<Tagged<K>, Nil>> but more readable.

The ~ Tag Shorthand

Inside effect!, you can use the tag directly with ~:

#![allow(unused)]
fn main() {
fn get_user(id: u64) -> Effect<User, DbError, impl NeedsUserRepository> {
    effect! {
        let repo: Arc<dyn UserRepository> = ~ UserRepositoryTag;
        ~ repo.get_user(id)
    }
}
}

~ UserRepositoryTag is syntactic sugar for ~ service_env::<UserRepositoryTag>(). Both work; the shorthand is more concise.

Multiple Services in One Effect

When a function needs several services:

#![allow(unused)]
fn main() {
fn notify_user(user_id: u64, message: &str)
-> Effect<(), AppError, impl NeedsUserRepository + NeedsEmailService>
{
    effect! {
        let repo  = ~ UserRepositoryTag;
        let email = ~ EmailServiceTag;
        
        let user = ~ repo.get_user(user_id).map_error(AppError::Db);
        ~ email.send_email(&user.email, message).map_error(AppError::Email);
        ()
    }
}
}

The impl NeedsX + NeedsY syntax is the idiomatic way to express multiple requirements. The compiler verifies that the provided environment satisfies both traits.

ServiceEnv vs Raw Context

ServiceEnv<K> is a convenience type. Under the hood it's just a Context with one element. The difference is ergonomic:

#![allow(unused)]
fn main() {
// With ServiceEnv
fn f() -> Effect<User, E, ServiceEnv<UserRepositoryTag>> { ... }

// With raw Context (equivalent, more verbose)
fn f() -> Effect<User, E, Context<Cons<Tagged<UserRepositoryTag>, Nil>>> { ... }
}

Use ServiceEnv for single-service effects. Use impl NeedsX for effects in library code where the concrete context type shouldn't leak into the signature. The Layers machinery accepts both at runtime.

Providing Services via Layers

You have a trait. You have an implementation. Now you need a Layer that wires them together.

The Minimal Service Layer

#![allow(unused)]
fn main() {
use id_effect::{LayerFn, tagged, effect};

// The production implementation
struct PostgresUserRepository {
    pool: Pool,
}

impl UserRepository for PostgresUserRepository {
    fn get_user(&self, id: u64) -> Effect<User, DbError, ()> {
        effect! {
            let row = ~ query(&self.pool, "SELECT * FROM users WHERE id = $1", id);
            User::from_row(row)
        }
    }
    // ...
}

// The layer that builds the implementation from the database pool
let user_repo_layer = LayerFn::new(|env: &Tagged<DatabaseKey>| {
    effect! {
        let repo = PostgresUserRepository { pool: env.value().clone() };
        tagged::<UserRepositoryTag>(Arc::new(repo) as Arc<dyn UserRepository>)
    }
});
}

The layer takes Tagged<DatabaseKey> (the database pool) and produces Tagged<UserRepositoryTag> (the repository wrapped as Arc<dyn UserRepository>).

Composition with Other Layers

The repository layer needs the database. Wire them:

#![allow(unused)]
fn main() {
let app_layer = config_layer
    .stack(db_layer)
    .stack(user_repo_layer);  // db_layer output feeds into user_repo_layer
}

Now app_layer produces an environment containing ConfigKey, DatabaseKey, and UserRepositoryTag — everything get_user_profile needs.

Test Layer with Mock

#![allow(unused)]
fn main() {
struct MockUserRepository {
    users: HashMap<u64, User>,
}

impl UserRepository for MockUserRepository {
    fn get_user(&self, id: u64) -> Effect<User, DbError, ()> {
        match self.users.get(&id) {
            Some(u) => succeed(u.clone()),
            None    => fail(DbError::NotFound),
        }
    }
    // ...
}

let test_repo_layer = LayerFn::new(|_: &Nil| {
    let repo = MockUserRepository {
        users: [(1, alice()), (2, bob())].into(),
    };
    succeed(tagged::<UserRepositoryTag>(Arc::new(repo) as Arc<dyn UserRepository>))
});
}

The test layer needs nothing (no real database), produces the same Tagged<UserRepositoryTag>, and can be stacked in place of the production layer.

The Swap

#![allow(unused)]
fn main() {
// Production
run_blocking(my_app().provide_layer(
    config_layer.stack(db_layer).stack(user_repo_layer)
));

// Test
run_test(my_app().provide_layer(
    test_repo_layer  // no config or db needed
));
}

The application code doesn't change. Only the layer stack changes. The type system ensures both stacks satisfy the effect's requirements.

A Complete DI Example — Putting It All Together

This section builds a small but complete application with three services, a Layer graph, and both production and test wiring. It's the capstone of Part II.

The Domain

A blog API with users, posts, and notifications:

#![allow(unused)]
fn main() {
// Domain types
struct User { id: u64, name: String, email: String }
struct Post { id: u64, author_id: u64, title: String, body: String }
enum AppError { Db(DbError), Notify(NotifyError) }
}

Three Service Traits

#![allow(unused)]
fn main() {
trait UserRepository: Send + Sync {
    fn get_user(&self, id: u64) -> Effect<User, DbError, ()>;
    fn create_user(&self, name: &str, email: &str) -> Effect<User, DbError, ()>;
}

trait PostRepository: Send + Sync {
    fn get_posts_by_author(&self, author_id: u64) -> Effect<Vec<Post>, DbError, ()>;
}

trait NotificationService: Send + Sync {
    fn send_welcome(&self, to: &str) -> Effect<(), NotifyError, ()>;
}

service_key!(UserRepoKey:   Arc<dyn UserRepository>);
service_key!(PostRepoKey:   Arc<dyn PostRepository>);
service_key!(NotifierKey:   Arc<dyn NotificationService>);
}

The Business Logic

#![allow(unused)]
fn main() {
fn get_author_feed(author_id: u64)
-> Effect<(User, Vec<Post>), AppError, impl NeedsUserRepo + NeedsPostRepo>
{
    effect! {
        let user_repo = ~ UserRepoKey;
        let post_repo = ~ PostRepoKey;
        let user  = ~ user_repo.get_user(author_id).map_error(AppError::Db);
        let posts = ~ post_repo.get_posts_by_author(author_id).map_error(AppError::Db);
        (user, posts)
    }
}

fn register_user(name: &str, email: &str)
-> Effect<User, AppError, impl NeedsUserRepo + NeedsNotifier>
{
    effect! {
        let repo     = ~ UserRepoKey;
        let notifier = ~ NotifierKey;
        let user = ~ repo.create_user(name, email).map_error(AppError::Db);
        ~ notifier.send_welcome(&user.email).map_error(AppError::Notify);
        user
    }
}
}

Production Layer Graph

fn main() {
    let prod_graph = LayerGraph::new()
        .add(LayerNode::new("config",   config_layer))
        .add(LayerNode::new("db",       pg_db_layer).requires("config"))
        .add(LayerNode::new("users",    pg_user_repo_layer).requires("db"))
        .add(LayerNode::new("posts",    pg_post_repo_layer).requires("db"))
        .add(LayerNode::new("notifier", smtp_notifier_layer).requires("config"));

    let plan = prod_graph.plan().expect("layer graph has no cycles");
    run_blocking(
        get_author_feed(1).provide_layer(plan)
    ).expect("app failed");
}

Test Wiring

#![allow(unused)]
fn main() {
fn make_test_layer() -> impl Layer<Output, (), Nil> {
    let users = mock_user_repo(&[alice(), bob()]);
    let posts  = mock_post_repo(&[alice_post()]);
    let notify = capturing_notifier();

    user_repo_layer(users)
        .stack(post_repo_layer(posts))
        .stack(notifier_layer(notify))
}

#[test]
fn feed_includes_authors_posts() {
    let result = run_test(
        get_author_feed(1).provide_layer(make_test_layer())
    ).unwrap();
    assert_eq!(result.1.len(), 1);
    assert_eq!(result.1[0].title, "Alice's Post");
}

#[test]
fn registration_sends_welcome_email() {
    let notifier = capturing_notifier();
    let result = run_test(
        register_user("Carol", "carol@example.com")
            .provide_layer(
                mock_user_repo_layer(&[]).stack(notifier_layer(notifier.clone()))
            )
    );
    assert!(result.is_ok());
    assert_eq!(notifier.sent_count(), 1);
}
}

What This Demonstrates

The business logic functions (get_author_feed, register_user) are completely decoupled from the infrastructure:

  • They declare their needs via NeedsX bounds
  • They access services via ~ ServiceKey
  • They know nothing about Postgres, SMTP, or any concrete type

The Layer graph wires concrete implementations at the entry point. Swapping Postgres for SQLite or SMTP for SendGrid requires changing only the layer definition — not a single line of business logic.

That's compile-time dependency injection: the full dependency graph is verified by the type checker, not discovered at runtime.


You've completed Part II. The next section of the book shifts to operational concerns: how to handle errors properly, run fibers concurrently, manage resources safely, and schedule repeated work.

Error Handling — Cause, Exit, and Recovery

Part II gave you the full dependency injection story. Part III is about what happens when things go wrong — and in production, things always go wrong.

Rust's Result<T, E> is excellent for expected errors: outcomes you anticipated and typed. But real programs also encounter unexpected failures: panics, OOM conditions, and cancelled fibers. id_effect models all of these with a richer type hierarchy.

This chapter introduces Cause (the full error taxonomy), Exit (the terminal outcome of any effect), and the combinators for recovering from both.

Beyond Result — Why Cause Exists

Result<T, E> handles the errors you expect. But what about the errors that aren't your E?

Expected vs. Unexpected

#![allow(unused)]
fn main() {
// Expected: you planned for this
Effect<User, UserNotFound, Db>

// But what if the database panics?
// What if the fiber is cancelled?
// What if the process runs out of memory?
// None of these are UserNotFound.
}

Traditional Rust handles unexpected failures through panics, which unwind (or abort) and bypass all your error handling. In async code, panics in tasks can silently swallow errors or leave resources unreleased.
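For contrast, plain std Rust can intercept a panic with `catch_unwind` — a rough sketch of what an effect runtime does internally to fold a panic into a structured failure value. `run_catching` is a hypothetical helper invented for this sketch, not id_effect API:

```rust
use std::panic;

// Turn a panic into a value instead of letting it unwind past us —
// roughly what a runtime does to produce a Die-like variant.
fn run_catching(f: impl FnOnce() -> i32 + panic::UnwindSafe) -> Result<i32, String> {
    panic::catch_unwind(f).map_err(|payload| {
        // A `panic!("literal")` payload is a &str; anything else is opaque.
        payload
            .downcast_ref::<&str>()
            .map(|s| s.to_string())
            .unwrap_or_else(|| "unknown panic".to_string())
    })
}
```

This is exactly the kind of boundary work you should not have to write by hand in application code — which is why the runtime models it once, as `Cause::Die`.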

The Cause Type

Cause<E> is id_effect's complete taxonomy of failure:

#![allow(unused)]
fn main() {
use id_effect::Cause;

enum Cause<E> {
    Fail(E),                // Your typed, expected error
    Die(Box<dyn Any>),      // A panic or defect — something that shouldn't happen
    Interrupt,              // The fiber was cancelled
}
}

Every failure in the effect runtime is one of these three. Together they cover the full space of "things that can go wrong."

  • Cause::Fail(e) — an error you declared in E, handled with catch or map_error
  • Cause::Die(payload) — a panic, logic bug, or fatal error; should be logged and treated as a defect
  • Cause::Interrupt — clean cancellation; the fiber was asked to stop and cooperated

Why This Matters

Without Cause, you can only handle Cause::Fail. The other two bypass your error handling entirely and propagate invisibly up the fiber tree — failures go unlogged and resources may be left unreleased.

With Cause, you can handle all failure modes in a structured way:

#![allow(unused)]
fn main() {
my_effect.catch_all(|cause| match cause {
    Cause::Fail(e)    => recover_from_expected(e),
    Cause::Die(panic) => log_defect_and_fail(panic),
    Cause::Interrupt  => succeed(default_value()),
})
}

Resource finalizers (Chapter 10) use this same model — they run on any Cause, ensuring cleanup regardless of how the fiber ends.

Day-to-Day Usage

In normal application code you rarely inspect Cause directly. You use:

  • .catch(f) for handling Cause::Fail
  • .catch_all(f) when you need to handle panics or interruption too
  • Exit (next section) when you need to inspect the terminal outcome

The Cause type is mostly visible at infrastructure boundaries — resource finalizers, fiber supervisors, and top-level error handlers.

Exit — Terminal Outcomes

Every effect execution ends with an Exit. It's the final word on what happened.

The Exit Type

#![allow(unused)]
fn main() {
use id_effect::Exit;

enum Exit<E, A> {
    Success(A),         // Effect completed, produced A
    Failure(Cause<E>),  // Effect failed with Cause<E>
}
}

Exit combines the success type and the full failure taxonomy. It's what you get when you use run_to_exit instead of run_blocking:

#![allow(unused)]
fn main() {
use id_effect::run_to_exit;

// run_blocking returns Result<A, E> — loses Cause::Die and Cause::Interrupt info
let user: Result<User, DbError> = run_blocking(get_user(1));

// run_to_exit returns Exit<E, A> — full picture
let exit: Exit<DbError, User> = run_to_exit(get_user(1));

match exit {
    Exit::Success(user)                    => println!("Got user: {}", user.name),
    Exit::Failure(Cause::Fail(DbError::NotFound)) => println!("User not found"),
    Exit::Failure(Cause::Die(panic_val))   => eprintln!("Defect: {:?}", panic_val),
    Exit::Failure(Cause::Interrupt)        => println!("Cancelled"),
}
}

Converting Exit to Result

Most application code wants Result. The conversion is straightforward:

#![allow(unused)]
fn main() {
let result: Result<User, AppError> = exit.into_result(|cause| match cause {
    Cause::Fail(e) => AppError::Expected(e),
    Cause::Die(_)  => AppError::Defect,
    Cause::Interrupt => AppError::Cancelled,
});
}

Or use the convenience method that maps Cause::Fail(e) to Err(e) and treats other causes as panics:

#![allow(unused)]
fn main() {
let result: Result<User, DbError> = exit.into_result_or_panic();
}

Exit in Fiber Joins

When you join a fiber (Chapter 9), you get an Exit back:

#![allow(unused)]
fn main() {
let fiber = my_effect.fork();
let exit: Exit<E, A> = fiber.join().await;
}

This lets you inspect whether the fiber succeeded, failed with a typed error, panicked, or was cancelled — and respond appropriately in the parent fiber.

Practical Rule

Use run_blocking (which returns Result<A, E>) for 90% of cases. Use run_to_exit when you need to distinguish panics from typed failures — typically at top-level handlers, supervisors, or when integrating with external error reporting.

Recovery Combinators — catch, fold, and Friends

Knowing about Cause and Exit is only useful if you can act on them. id_effect provides a focused set of recovery combinators.

catch: Handle Expected Errors

catch intercepts Cause::Fail(e) and gives you a chance to recover:

#![allow(unused)]
fn main() {
let resilient = risky_db_call()
    .catch(|error: DbError| {
        match error {
            DbError::NotFound => succeed(User::anonymous()),
            other => fail(other),  // re-raise anything else
        }
    });
}

If risky_db_call fails with Cause::Fail(e), the closure runs. If it fails with Cause::Die or Cause::Interrupt, those propagate unchanged — catch only handles typed failures.

catch_all: Handle Everything

catch_all intercepts any Cause:

#![allow(unused)]
fn main() {
let bulletproof = my_effect.catch_all(|cause| match cause {
    Cause::Fail(e)   => handle_expected_error(e),
    Cause::Die(_)    => succeed(fallback_value()),
    Cause::Interrupt => succeed(cancelled_gracefully()),
});
}

Use catch_all when you genuinely need to handle panics or cancellation — typically at resource boundaries or top-level handlers. Don't use it to swallow defects silently.

fold: Handle Both Paths

fold transforms both success and failure into a uniform success type:

#![allow(unused)]
fn main() {
let always_string: Effect<String, Never, ()> = risky_call()
    .fold(
        |error| format!("Error: {error}"),
        |value| format!("Success: {value}"),
    );
}

After fold, the effect never fails (E = Never). Both arms produce the same type. This is useful for logging, metrics, or converting to a neutral representation.

or_else: Try an Alternative

or_else runs an alternative effect on failure:

#![allow(unused)]
fn main() {
let with_fallback = primary_source()
    .or_else(|_err| secondary_source());
}

If primary_source fails, secondary_source runs. If that also fails, the combined effect fails with the second error. Useful for fallback chains.

ignore_error: Discard Failures

When you genuinely don't care about an operation's success:

#![allow(unused)]
fn main() {
// Log "best effort" — failure is acceptable
let logged_effect = log_metrics()
    .ignore_error()
    .flat_map(|_| actual_work());
}

ignore_error converts Effect<A, E, R> to Effect<Option<A>, Never, R>. The effect always "succeeds" — with Some(value) on success or None on failure.

The Recovery Hierarchy

catch(f)      — handles Cause::Fail only
catch_all(f)  — handles all Cause variants
fold(on_e, on_a) — transforms both paths to success
or_else(f)    — runs alternative on failure
ignore_error  — converts failure to Option

Prefer the narrowest combinator that solves your problem. catch for expected errors. catch_all only when you need to touch panics or cancellation.

Error Accumulation — Collecting All Failures

catch and fold handle errors sequentially: one effect, one error, one handler. But sometimes you need to run many operations and collect all their failures — not just the first.

The Fail-Fast Problem

Sequential effect! short-circuits on the first failure:

#![allow(unused)]
fn main() {
effect! {
    let _ = ~ validate_name(&input.name);    // fails here →
    let _ = ~ validate_email(&input.email);  // never runs
    let _ = ~ validate_age(input.age);       // never runs
}
}

For form validation or batch imports, you want to report all errors to the user, not just the first one.

validate_all

validate_all runs a collection of effects and accumulates all failures:

#![allow(unused)]
fn main() {
use id_effect::validate_all;

let results = validate_all(vec![
    validate_name(&input.name),
    validate_email(&input.email),
    validate_age(input.age),
]);
// Type (conceptually): Effect<(Name, Email, Age), Vec<ValidationError>, ()>
}

If any validations fail, all errors are collected and returned as a Vec. If all succeed, you get all the success values.
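The accumulation semantics can be sketched over plain Results — a hypothetical stand-in that reuses the name for illustration, not the library function itself:

```rust
// Unlike `?`, which stops at the first Err, this runs every check
// and returns either all successes or all errors.
fn validate_all<T, E>(results: Vec<Result<T, E>>) -> Result<Vec<T>, Vec<E>> {
    let mut oks = Vec::new();
    let mut errs = Vec::new();
    for r in results {
        match r {
            Ok(v) => oks.push(v),
            Err(e) => errs.push(e),
        }
    }
    // A single failure poisons the whole batch, but only after
    // every validation has had a chance to report.
    if errs.is_empty() { Ok(oks) } else { Err(errs) }
}
```

The effectful version has the same shape; the difference is that each element is a suspended effect rather than an already-computed Result.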

partition

partition runs effects and splits the results into successes and failures:

#![allow(unused)]
fn main() {
use id_effect::partition;

let (successes, failures): (Vec<User>, Vec<ImportError>) =
    run_blocking(partition(records.iter().map(import_record)))?;

println!("{} imported, {} failed", successes.len(), failures.len());
}

partition never fails. It always returns two lists: what worked and what didn't. Useful for batch operations where partial success is acceptable.

Or: Combining Two Error Types

When composing effects with different error types, Or avoids flattening into a single error type before you're ready:

#![allow(unused)]
fn main() {
use id_effect::Or;

// Instead of converting both to AppError immediately:
type BothErrors = Or<DbError, NetworkError>;

fn combined() -> Effect<Data, BothErrors, ()> {
    db_fetch()
        .map_error(Or::Left)
        .zip(network_fetch().map_error(Or::Right))
        .map(|(a, b)| merge(a, b))
}
}

Or<A, B> is the coproduct of two error types. It defers the decision of how to combine them until you actually need to handle them.
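A standalone sketch of the same idea over plain Results — the enum mirrors the one above, while `AppError` and the boundary conversion are invented for the illustration:

```rust
// Coproduct of two error types. Collapsing Left/Right into one
// application error is deferred to the edge of the program.
#[derive(Debug, PartialEq)]
enum Or<A, B> { Left(A), Right(B) }

#[derive(Debug, PartialEq)]
enum AppError { Db(String), Net(String) }

// Combine two sources without flattening their errors yet.
fn combined(db: Result<i32, String>, net: Result<i32, String>) -> Result<i32, Or<String, String>> {
    let a = db.map_err(Or::Left)?;
    let b = net.map_err(Or::Right)?;
    Ok(a + b)
}

// Only at the boundary do we commit to a single error type.
fn to_app_error(e: Or<String, String>) -> AppError {
    match e {
        Or::Left(db) => AppError::Db(db),
        Or::Right(net) => AppError::Net(net),
    }
}
```

Keeping the coproduct around means intermediate layers never have to invent a shared error type just to compose.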

The ParseErrors Type

The Schema module (Chapter 14) uses ParseErrors — a structured accumulator for parsing failures with field paths:

#![allow(unused)]
fn main() {
let result: Result<User, ParseErrors> = user_schema.parse(data);

if let Err(errors) = result {
    for e in errors.iter() {
        eprintln!("At {}: {}", e.path(), e.message());
    }
}
}

ParseErrors is specialised for schema validation, but the pattern — collect all, report all — applies whenever you validate structured input.

When to Accumulate vs. Short-Circuit

Situation                                         Use
Dependent steps (each needs the previous result)  effect! (short-circuit)
Independent validations (user input)              validate_all
Batch operations (partial success OK)             partition
Schema parsing                                    ParseErrors (automatic)

The choice is about what makes sense to the caller. Short-circuit is efficient; accumulation is informative. Use the one your users need.

Concurrency & Fibers — Structured Async

Async Rust gives you the ability to do many things concurrently. The challenge is doing it safely — without fire-and-forget tasks that outlive their parent, without silent failures when a task panics, and without resource leaks when tasks are cancelled.

id_effect uses Fibers for structured concurrency. A Fiber is a lightweight, interruptible async task with a typed result, an explicit lifecycle, and guaranteed cleanup.

This chapter covers spawning fibers, joining them, cancelling them gracefully, and using FiberRef for fiber-local state.

What Are Fibers? — Lightweight Structured Tasks

A Fiber is an effect-managed async task. It's lighter than an OS thread and safer than a raw tokio::spawn.

Fibers vs. Raw Tasks

#![allow(unused)]
fn main() {
// Raw tokio::spawn — fire and forget
// Who owns this? What happens if it panics?
// When does it stop? Who cleans up?
tokio::spawn(async {
    do_something().await;
});

// Effect Fiber — explicit lifecycle
let handle: FiberHandle<Error, Output> = my_effect.fork();
// You hold the handle. The fiber is yours.
let exit: Exit<Error, Output> = handle.join().await;
// The fiber stops when you join or drop the handle.
}

With tokio::spawn, the task runs independently. If it panics, the panic is captured by Tokio and may or may not surface to you. There's no built-in way to cancel it or guarantee its resources are cleaned up.

With fork, you get a FiberHandle. When you join(), you get the full Exit — success, typed failure, panic, or cancellation. When you drop the handle without joining, the fiber is cancelled automatically.

Structured Concurrency

The key property of Fibers is structured lifecycle:

  • A fiber cannot outlive its parent scope without explicit permission
  • All spawned fibers are joined (or cancelled) before the parent effect completes
  • Panics and failures propagate through the fiber tree, not silently into the void

This makes concurrent code much easier to reason about. When process_batch completes, all its helper fibers have completed too — or been cancelled and cleaned up.
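std's scoped threads give a narrower, synchronous version of the same structural guarantee — children cannot outlive the scope, and every spawned thread is joined before the scope returns:

```rust
use std::thread;

// std::thread::scope enforces the structured property: when this
// function returns, every worker it spawned has completed and been
// joined. No fire-and-forget, no orphans.
fn sum_in_parallel(chunks: &[Vec<i32>]) -> i32 {
    thread::scope(|s| {
        let handles: Vec<_> = chunks
            .iter()
            .map(|chunk| s.spawn(move || chunk.iter().sum::<i32>()))
            .collect();
        // Joining collects each child's typed result.
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}
```

Fibers extend this discipline to async work, and add typed failure (Exit) and cooperative interruption on top.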

FiberId

Each Fiber has a unique FiberId. You can use it for logging, tracing, and correlation:

#![allow(unused)]
fn main() {
use id_effect::FiberId;

effect! {
    let id = ~ current_fiber_id();
    ~ log(&format!("[fiber:{id}] starting work"));
    // ...
}
}

FiberId flows through the fiber's execution automatically. You don't thread it manually.

FiberHandle and FiberStatus

FiberHandle<E, A> is the control interface for a spawned fiber:

#![allow(unused)]
fn main() {
let handle = my_effect.fork();

// Check status without blocking
let status: FiberStatus = handle.status();

// Join — blocks until the fiber completes
let exit: Exit<E, A> = handle.join().await;

// Interrupt — ask the fiber to stop
handle.interrupt();
}

FiberStatus can be Running, Completed, or Interrupted. Unlike tokio::JoinHandle, you can inspect status without consuming the handle.

Spawning and Joining — fiber_all and Friends

Running a single fiber is useful; running many concurrently is where Fibers shine.

fork: Spawn One Fiber

#![allow(unused)]
fn main() {
let handle = compute_expensive_result().fork();

// Do other work while the fiber runs
let local_result = local_computation();

// Now join the fiber
let remote_result = handle.join().await.into_result_or_panic()?;

(local_result, remote_result)
}

fork spawns the effect as a concurrent fiber. You can do other work and join later.

fiber_all: Run Many, Collect All

#![allow(unused)]
fn main() {
use id_effect::fiber_all;

// Run all concurrently; collect all results
let results: Vec<User> = run_blocking(
    fiber_all(user_ids.iter().map(|&id| fetch_user(id)))
)?;
}

fiber_all takes an iterable of effects, runs them all concurrently, and waits for every one to complete. If any fails, the first failure is returned (and any remaining fibers are cancelled).

For independent work where all results are needed, fiber_all is the idiomatic choice.

fiber_race: First to Complete Wins

#![allow(unused)]
fn main() {
use id_effect::fiber_race;

// Try primary and backup concurrently — take whichever responds first
let data = run_blocking(
    fiber_race(vec![fetch_from_primary(), fetch_from_backup()])
)?;
// The slower fiber is automatically cancelled
}

fiber_race returns as soon as the first fiber completes — success or failure — and interrupts the rest. Useful for timeout patterns, geographic failover, and speculative execution.

fiber_any: First Success

#![allow(unused)]
fn main() {
use id_effect::fiber_any;

// Try all; return first success (failures are ignored until all are done)
let result = run_blocking(fiber_any(vec![
    try_region_us(),
    try_region_eu(),
    try_region_ap(),
]))?;
}

fiber_any differs from fiber_race in that it ignores failures and waits for the first success. Only if all fail does it return an error.

run_fork: Low-Level Spawn

For cases where you need to spawn with full control:

#![allow(unused)]
fn main() {
use id_effect::run_fork;

let runtime = Runtime::current();
let handle = run_fork(runtime, || (my_effect, my_env));
}

run_fork is the low-level primitive. effect.fork() is syntactic sugar over it when you're already inside an effect context.

Error Behaviour

Combinator    Behaviour
fiber_all     Cancel remaining, return the first error
fiber_race    First completion wins; remaining fibers are cancelled
fiber_any     Wait as needed; return the first success, or an error only if all fail
fork + join   Whatever the individual fiber's Exit says

Choose based on whether partial success is acceptable and whether you want to wait for everyone.

Cancellation — Interrupting Gracefully

Cancellation in async code is notoriously difficult to get right. id_effect makes it explicit and cooperative.

The Model: Cooperative Cancellation

Fibers aren't forcibly killed. They're interrupted — given a signal to stop at the next safe checkpoint. The fiber cooperates by checking for interruption at yield points.

The simplest yield point is check_interrupt:

#![allow(unused)]
fn main() {
use id_effect::check_interrupt;

effect! {
    for chunk in large_dataset.chunks(1000) {
        ~ check_interrupt();  // yields; if interrupted, stops here
        process_chunk(chunk);
    }
    "done"
}
}

Every ~ binding is also an implicit yield point. If a fiber is interrupted while awaiting an effect, the interruption propagates to the next ~ bind.

CancellationToken

For external cancellation (e.g., from HTTP request handlers or UI cancel buttons):

#![allow(unused)]
fn main() {
use id_effect::CancellationToken;

// Create a cancellation token
let token = CancellationToken::new();

// Pass it to a long-running effect
let effect = long_running_job().with_cancellation(&token);

// Spawn it
let handle = effect.fork();

// Later, cancel from outside
token.cancel();

// The fiber will stop at the next check_interrupt
let exit = handle.join().await;
// exit will be Exit::Failure(Cause::Interrupt)
}

Tokens can be cloned and shared. Cancelling any clone cancels all effects sharing that token.
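The cooperative model can be sketched with a shared atomic flag in plain std Rust. `drain_queue` is hypothetical; the load at the top of the loop plays the role of check_interrupt:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// The flag stands in for the token; in real use it would be shared
// across threads via Arc. The worker checks it at each iteration —
// its cooperative "yield point" — and stops cleanly if it is set.
fn drain_queue(cancel: &AtomicBool, queue: &[i32]) -> (Vec<i32>, bool) {
    let mut processed = Vec::new();
    for &item in queue {
        if cancel.load(Ordering::Relaxed) {
            return (processed, true); // stopped early, cooperatively
        }
        processed.push(item * 2);
    }
    (processed, false)
}
```

Note what cooperation buys you: the worker is never killed mid-item, so `processed` is always in a consistent state when it stops.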

Interrupting Directly via FiberHandle

#![allow(unused)]
fn main() {
let handle = background_work().fork();

// After some timeout or external event:
handle.interrupt();

let exit = handle.join().await;
// Cleanup (finalizers, scopes) runs before the handle resolves
}

.interrupt() sends the interruption signal. The fiber's finalizers (Chapter 10) still run. .join() waits for them to complete.

Graceful Shutdown

Interruption is the mechanism for graceful shutdown. The pattern:

  1. Signal all top-level fibers with .interrupt()
  2. Wait for all handles to join (with a timeout)
  3. If any fiber doesn't stop within the timeout, escalate
#![allow(unused)]
fn main() {
let handles: Vec<FiberHandle<_, _>> = workers.iter().map(|w| w.fork()).collect();

// Shutdown signal received
for h in &handles { h.interrupt(); }

// Wait with timeout
for h in handles {
    // Ignore the result here; step 3's escalation handles stragglers
    let _ = tokio::time::timeout(Duration::from_secs(5), h.join()).await;
}
}

Because effect finalizers run on interruption, all resources are cleaned up as fibers stop — no manual cleanup required at the shutdown handler.

Uninterruptible Regions

Some operations shouldn't be interrupted mid-way (e.g., writing to a database inside a transaction). Mark them uninterruptible:

#![allow(unused)]
fn main() {
use id_effect::uninterruptible;

// This block runs to completion even if interrupted
let committed = uninterruptible(effect! {
    ~ begin_transaction();
    ~ insert_records();
    ~ commit();
});
}

The interruption is deferred until committed completes. Use sparingly — long uninterruptible regions delay shutdown.

FiberRef — Fiber-Local State

FiberRef is the effect equivalent of thread-local storage. It holds a value that's scoped to the current fiber — each fiber has its own independent copy.

Defining a FiberRef

#![allow(unused)]
fn main() {
use id_effect::FiberRef;

// A fiber-local trace ID, defaulting to "none"
static TRACE_ID: FiberRef<String> = FiberRef::new(|| "none".to_string());
}

FiberRef::new takes a factory closure that produces the initial value for each fiber. The static variable is the key; each fiber has its own value.

Reading and Writing

#![allow(unused)]
fn main() {
effect! {
    // Set the trace ID for this fiber
    ~ TRACE_ID.set("req-abc-123".to_string());

    // Read it anywhere in this fiber's call stack
    let id = ~ TRACE_ID.get();
    ~ log(&format!("[{id}] processing request"));

    ~ process_request();
    ()
}
}

set and get are both effects (they need the fiber context). Inside effect!, use ~ to bind them.

FiberRef Doesn't Cross Fiber Boundaries

When you fork a new fiber, it starts with its own copy of the FiberRef value (the factory closure runs again):

#![allow(unused)]
fn main() {
effect! {
    ~ TRACE_ID.set("parent-123".to_string());
    
    let child = effect! {
        let id = ~ TRACE_ID.get();
        // id is "none" — the fork starts fresh
        println!("child trace id: {id}");
    }.fork();
    
    ~ child.join();
    ()
}
}

If you want the child to inherit the parent's value, pass it explicitly — for example with locally:

#![allow(unused)]
fn main() {
let child_with_inherited = effect! {
    let id = ~ TRACE_ID.get();
    TRACE_ID.locally(id, child_effect())  // child sees parent's value
};
}

locally(value, effect) runs the effect with a temporarily overridden FiberRef value, then restores the previous value when done.

Common Use Cases

Use case                           Pattern
Request tracing / correlation IDs  static TRACE_ID: FiberRef<String>
Per-request user context           static CURRENT_USER: FiberRef<Option<UserId>>
Metrics labels                     static OPERATION: FiberRef<&'static str>
Debug context                      static CALL_PATH: FiberRef<Vec<String>>

FiberRef makes it easy to carry contextual information through deep call stacks without threading extra parameters everywhere — the fiber equivalent of request-scoped context in traditional web frameworks.
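Rust's thread_local! is the closest std analogue: each thread gets its own copy from the initialiser, and children don't see the parent's writes — mirroring the fork behaviour described above:

```rust
use std::cell::RefCell;
use std::thread;

// Each thread that touches TRACE_ID gets a fresh copy from the
// initialiser — the same per-task isolation FiberRef provides per fiber.
thread_local! {
    static TRACE_ID: RefCell<String> = RefCell::new("none".to_string());
}

fn trace_ids() -> (String, String) {
    // The parent thread mutates only its own copy.
    TRACE_ID.with(|t| *t.borrow_mut() = "parent-123".to_string());
    let parent = TRACE_ID.with(|t| t.borrow().clone());
    // A spawned thread starts from the initialiser, not the parent's value.
    let child = thread::spawn(|| TRACE_ID.with(|t| t.borrow().clone()))
        .join()
        .unwrap();
    (parent, child)
}
```

The difference in the effect world is granularity: fibers are much cheaper than threads, so fiber-local state scales to one-per-request designs.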

Resources & Scopes — Deterministic Cleanup

RAII works beautifully in synchronous Rust: resources are released when they fall out of scope, Drop runs deterministically. In async code, the picture gets complicated.

This chapter shows why RAII breaks down in async contexts, introduces Scope and finalizers as the solution, covers the acquire_release pattern for RAII-style resource management, and concludes with Pool for reusing expensive connections.

The Resource Problem — Cleanup in Async

RAII in synchronous code:

#![allow(unused)]
fn main() {
{
    let file = File::open("data.txt")?;
    process(&file)?;
}  // file.drop() runs here, always, unconditionally
}

Reliable. Simple. The drop happens when the scope ends — unconditionally, even on early return or panic (unwinding still runs Drop).

The Async Complication

#![allow(unused)]
fn main() {
async fn process_data() -> Result<(), Error> {
    let conn = open_connection().await?;
    let data = fetch_data(&conn).await?;  // What if this is cancelled?
    transform_and_save(data).await?;      // Or this?
    conn.close().await?;                  // May never reach here
    Ok(())
}
}

Three problems:

  1. Cancellation: If this async function is cancelled mid-execution, conn.close() never runs.
  2. Panic: If transform_and_save panics, the async task is dropped. conn.close() is skipped.
  3. Async Drop: impl Drop for Connection can only do synchronous cleanup. If closing a connection requires .await, you can't do it in Drop.

conn.close() must be an async call, but Drop can't be async. This is a fundamental mismatch.

The Root Cause

RAII relies on Drop running synchronously when a value goes out of scope. In async code, "going out of scope" and "running cleanup" can be decoupled — by cancellation, by executor scheduling, or by the fact that async closures are state machines that might never reach certain states.

The Solution Preview

id_effect solves this with:

  • Scope — a region where finalizers are registered and guaranteed to run (even on cancellation or panic)
  • acquire_release — a combinator that pairs acquisition with its cleanup
  • Pool — for long-lived resources that need controlled reuse

All three run cleanup effects (not just synchronous Drop), and all three run them unconditionally — success, failure, or interruption.

Scopes and Finalizers — Guaranteed Cleanup

A Scope is a region of execution with a finalizer registry. Any cleanup effects registered in the scope run when the scope exits — regardless of how it exits.

Creating a Scope

#![allow(unused)]
fn main() {
use id_effect::{Scope, Finalizer, scoped};

let result = scoped(|scope| {
    effect! {
        let conn = ~ open_connection();

        // Register cleanup — runs when scope exits.
        // Clone the handle so the finalizer can own its own copy
        // (assuming Connection is a cheap, Arc-backed handle).
        let cleanup = conn.clone();
        ~ scope.add_finalizer(Finalizer::new(move || {
            cleanup.close()
        }));

        // Do work
        let data = ~ fetch_data(&conn);
        process(data)
    }
});
}

scoped creates a scope, runs the inner effect, and then — in all cases — runs the registered finalizers in reverse order.

Finalizers Always Run

#![allow(unused)]
fn main() {
let result = scoped(|scope| {
    effect! {
        let conn = ~ open_connection();
        ~ scope.add_finalizer(Finalizer::new(move || conn.close()));

        ~ risky_operation();  // may panic or fail
        "done"
    }
});
// Whether risky_operation succeeds, fails, or panics:
// conn.close() ALWAYS runs before result is returned
}

This is the guarantee that RAII can't provide in async: the finalizer is an async effect that runs in the right context, at the right time, always.

Multiple Finalizers

Finalizers run in reverse registration order (last-in, first-out — like RAII destructors):

#![allow(unused)]
fn main() {
scoped(|scope| {
    effect! {
        let conn   = ~ open_connection();
        let txn    = ~ begin_transaction(&conn);
        let cursor = ~ open_cursor(&txn);

        // Register in acquisition order; finalizers run in reverse:
        ~ scope.add_finalizer(Finalizer::new(move || close_connection(conn)));  // runs 3rd (last)
        ~ scope.add_finalizer(Finalizer::new(move || rollback_or_commit(txn))); // runs 2nd
        ~ scope.add_finalizer(Finalizer::new(move || close_cursor(cursor)));    // runs 1st (first)
    }
})
}

Finalizers run last-in, first-out, so register them in acquisition order: the connection first (it closes last), the cursor last (it closes first). Teardown then mirrors RAII — resources are released in the reverse of the order in which they were acquired.

Scope Inheritance

Scopes nest. A child scope's finalizers run before the parent's:

#![allow(unused)]
fn main() {
scoped(|outer| {
    scoped(|inner| {
        effect! {
            ~ inner.add_finalizer(Finalizer::new(|| cleanup_inner()));
            ~ outer.add_finalizer(Finalizer::new(|| cleanup_outer()));
            work()
        }
    })
})
// Execution order: cleanup_inner(), then cleanup_outer()
}

Layers use scopes internally — every resource a Layer builds can register its own finalizer, and the whole graph tears down cleanly when the application shuts down.

acquire_release — The RAII Pattern

acquire_release is a convenience wrapper around Scope that pairs acquisition and release into a single value.

The Pattern

#![allow(unused)]
fn main() {
use id_effect::acquire_release;

let managed_connection = acquire_release(
    // Acquire: run this to get the resource
    open_connection(),
    // Release: run this when done (always runs)
    |conn| conn.close(),
);
}

managed_connection is itself an effect that:

  1. When executed, opens the connection
  2. Registers conn.close() as a finalizer in the current scope
  3. Produces the connection for use

Use it with flat_map or effect!:

#![allow(unused)]
fn main() {
let result = managed_connection.flat_map(|conn| {
    do_work_with_conn(&conn)
});
// conn.close() runs after do_work_with_conn, regardless of outcome
}

Or inline:

#![allow(unused)]
fn main() {
effect! {
    let conn = ~ managed_connection;
    let data = ~ fetch_data(&conn);   // conn closes after this block
    process(data)
}
}

Why This Is Better Than Manual Scope

acquire_release makes the acquisition-release pair inseparable. You can't accidentally call open_connection() without also registering its cleanup. The resource and its lifecycle are coupled at the point of creation.

#![allow(unused)]
fn main() {
// With manual scope (inside an effect! block): easy to forget the finalizer
let conn = ~ open_connection();
// ... (forgot: ~ scope.add_finalizer(...))

// With acquire_release: cleanup is mandatory, automatic
let conn = ~ acquire_release(open_connection(), |c| c.close());
}

Resource Wrapping Pattern

A common convention is to wrap acquire_release in a helper function:

#![allow(unused)]
fn main() {
fn managed_db_connection(url: &str) -> Effect<Connection, DbError, ()> {
    acquire_release(
        Connection::open(url),
        |conn| conn.close(),
    )
}

// Usage
effect! {
    let conn = ~ managed_db_connection(config.db_url());
    ~ run_query(&conn, "SELECT 1")
}
}

The helper function documents that Connection::open must always be paired with close(). Callers don't think about the lifecycle; it's handled.

Comparison with Drop

acquire_release is not a replacement for impl Drop — it's a complement:

  • impl Drop: synchronous cleanup for types that own simple resources
  • acquire_release: async cleanup for effects that acquire and release through the effect runtime

Use both appropriately. A TcpStream closing its OS file descriptor in Drop is fine. Closing a database connection pool that requires async coordination belongs in acquire_release.

Pools — Reusing Expensive Resources

Creating a database connection takes time: DNS lookup, TCP handshake, TLS, authentication. Creating one per request is wasteful. A pool maintains a set of connections and lends them out, returning them when done.

id_effect provides Pool and KeyedPool as first-class effect constructs.

Pool: Basic Connection Pool

#![allow(unused)]
fn main() {
use id_effect::Pool;

// Create a pool of up to 10 connections
let pool: Pool<Connection> = Pool::new(
    || open_connection("postgres://localhost/app"),  // factory
    10,                                               // max size
);
}

The pool lazily creates connections up to the max. Idle connections are kept alive for reuse.

Using a Pool Connection

#![allow(unused)]
fn main() {
pool.with_resource(|conn: &Connection| {
    effect! {
        let rows = ~ conn.query("SELECT * FROM users");
        rows.into_iter().map(User::from_row).collect::<Vec<_>>()
    }
})
}

with_resource acquires a connection from the pool, runs the effect, and returns the connection automatically when done — regardless of success, failure, or cancellation. No acquire_release boilerplate; the pool handles it.

Waiting for Availability

If all connections are in use, with_resource waits until one becomes available:

#![allow(unused)]
fn main() {
// Concurrent requests share the pool; each waits its turn
fiber_all(vec![
    pool.with_resource(|c| query_a(c)),
    pool.with_resource(|c| query_b(c)),
    pool.with_resource(|c| query_c(c)),
])
}

The pool queues waiters and notifies them as connections are returned.

KeyedPool: Multiple Named Pools

For scenarios with multiple distinct pools (e.g., read replica + write primary):

#![allow(unused)]
fn main() {
use id_effect::KeyedPool;

let pools: KeyedPool<&str, Connection> = KeyedPool::new(
    |key: &&str| open_connection(key),
    5,  // max per key
);

// Get a connection for the write primary
pools.with_resource("write-primary", |conn| { ... });

// Get a connection for the read replica
pools.with_resource("read-replica", |conn| { ... });
}

Each key has its own independently bounded pool.

Pool as a Service

In practice, pools live in the effect environment as services:

#![allow(unused)]
fn main() {
service_key!(DbPoolKey: Pool<Connection>);

fn query_users() -> Effect<Vec<User>, DbError, impl NeedsDbPool> {
    effect! {
        let pool = ~ DbPoolKey;
        ~ pool.with_resource(|conn| {
            effect! {
                let rows = ~ conn.query("SELECT * FROM users");
                rows.iter().map(User::from_row).collect::<Vec<_>>()
            }
        })
    }
}
}

The pool is provided via a Layer:

#![allow(unused)]
fn main() {
let pool_layer = LayerFn::new(|config: &Tagged<ConfigKey>| {
    effect! {
        let pool = Pool::new(
            || Connection::open(config.value().db_url()),
            config.value().pool_size(),
        );
        tagged::<DbPoolKey>(pool)
    }
});
}

Pool creation, lifecycle, and cleanup are all handled by the Layer. Business code sees only NeedsDbPool.

Scheduling — Retry, Repeat, and Time

Production services fail. Networks are unreliable. Downstream APIs go down. The database gets overwhelmed. Defensive engineering means anticipating failure and building policies for what to do when it happens.

id_effect models these policies with Schedule — a type that describes when to retry, how long to wait between attempts, and when to give up. Combined with Clock injection, scheduling logic becomes testable without real-time delays.

Schedule — The Retry/Repeat Policy Type

A Schedule is not just a number of retries or a delay. It's a policy — a function that takes the current state (attempt count, elapsed time, last output) and decides whether to continue and how long to wait.

The Core Concept

#![allow(unused)]
fn main() {
use id_effect::Schedule;

// A Schedule answers: "Given where we are, should we continue? And after how long?"
// Input: attempt number, elapsed time, last result
// Output: Continue(delay) or Done
}

This abstraction is more powerful than "retry 3 times with 1-second delay." A Schedule can:

  • Increase delay exponentially (standard backoff)
  • Cap the total time regardless of attempts
  • Stop after a maximum number of attempts
  • Adjust based on the error type or last result
  • Combine policies with && and ||

Creating Schedules

#![allow(unused)]
fn main() {
use id_effect::Schedule;

// Fixed delay: always wait the same amount
let fixed = Schedule::spaced(Duration::from_secs(1));

// Exponential: 100ms, 200ms, 400ms, 800ms, ...
let exponential = Schedule::exponential(Duration::from_millis(100));

// Fibonacci: 100ms, 100ms, 200ms, 300ms, 500ms, 800ms, ...
let fibonacci = Schedule::fibonacci(Duration::from_millis(100));

// Forever: repeat indefinitely with no delay
let forever = Schedule::forever();

// Once: run exactly once (useful for testing)
let once = Schedule::once();
}

Combining Schedules

Schedules compose:

#![allow(unused)]
fn main() {
// Retry up to 5 times — 100.ms() is shorthand for Duration::from_millis(100)
let max_5 = Schedule::exponential(100.ms()).take(5);

// But stop after 30 seconds total
let bounded = Schedule::exponential(100.ms()).until_total_duration(Duration::from_secs(30));

// Combine with &&: both conditions must agree to continue
let safe = Schedule::exponential(100.ms())
    .take(5)
    .until_total_duration(Duration::from_secs(30));
}

Schedule as a Value

Like effects, schedules are values. You can define them once and reuse them:

#![allow(unused)]
fn main() {
const DEFAULT_RETRY: Schedule = Schedule::exponential(Duration::from_millis(100))
    .take(5)
    .with_jitter(Duration::from_millis(50));

fn call_external_api() -> Effect<Response, ApiError, HttpClient> {
    make_request().retry(DEFAULT_RETRY)
}
}

Jitter (random delay variation) reduces thundering-herd problems when many processes retry simultaneously. .with_jitter(d) adds a random delay in [0, d) to each wait.

Built-in Schedules — exponential, fibonacci, forever

id_effect provides a library of common schedule types. This section catalogs them with typical use cases.

exponential

#![allow(unused)]
fn main() {
Schedule::exponential(Duration::from_millis(100))
// Delays: 100ms, 200ms, 400ms, 800ms, 1600ms, ...
}

The standard backoff pattern. Each delay doubles. Use for most network retry scenarios — it backs off quickly and gives the downstream service time to recover.

#![allow(unused)]
fn main() {
// With cap: 100ms, 200ms, 400ms, 800ms, 800ms, 800ms (capped at 800ms)
Schedule::exponential(Duration::from_millis(100))
    .with_max_delay(Duration::from_millis(800))
}

fibonacci

#![allow(unused)]
fn main() {
Schedule::fibonacci(Duration::from_millis(100))
// Delays: 100ms, 100ms, 200ms, 300ms, 500ms, 800ms, 1300ms, ...
}

Fibonacci backoff grows more gradually than exponential — useful when you want more retries in the early attempts before backing off significantly.

spaced

#![allow(unused)]
fn main() {
Schedule::spaced(Duration::from_secs(5))
// Delays: 5s, 5s, 5s, 5s, ... (constant)
}

Fixed interval. Use for polling (checking status every N seconds), heartbeats, or scenarios where the delay should be predictable.

forever

#![allow(unused)]
fn main() {
Schedule::forever()
// No delay between repetitions; runs indefinitely
}

Run an effect as fast as possible, forever. Use with repeat for continuous background jobs (e.g., metrics collection, background syncs). Almost always combined with .take(n) or .until_total_duration(d).

Limiters

Limiters constrain any schedule:

#![allow(unused)]
fn main() {
// Stop after N attempts
schedule.take(5)

// Stop after N successes (for repeat)
schedule.take_successes(3)

// Stop after total elapsed time
schedule.until_total_duration(Duration::from_secs(30))

// Stop after N total attempts, including the initial one
schedule.take_with_initial(6)  // 1 initial + 5 retries
}

Jitter

#![allow(unused)]
fn main() {
// Add random delay in [0, jitter_max) to each wait
schedule.with_jitter(Duration::from_millis(50))

// Alternatively, full jitter (random in [0, delay])
schedule.with_full_jitter()
}

Jitter prevents retry storms. When 1000 clients all hit a failing service and all retry at the same time, they create a "thundering herd." Jitter spreads the retries out.

Combining with &&

#![allow(unused)]
fn main() {
let bounded_exponential = Schedule::exponential(Duration::from_millis(100))
    .take(5)
    .with_jitter(Duration::from_millis(20))
    .until_total_duration(Duration::from_secs(10));
}

Chaining limiters composes schedules with AND semantics: the run continues only while every constraint agrees to continue. The first schedule that says "done" wins.

retry and repeat — Applying Policies

Schedule is a policy description. retry and repeat are the two operations that apply it.

retry: On Failure, Try Again

#![allow(unused)]
fn main() {
use id_effect::Schedule;

let result = flaky_api_call()
    .retry(Schedule::exponential(Duration::from_millis(100)).take(3));
}

retry runs the effect. If it fails, it checks the schedule. If the schedule says "continue", it waits the indicated delay and tries again. When the schedule says "done" or the effect succeeds, retry returns.

Return value: the success value on success, or the last error if all retries are exhausted.

retry_while: Conditional Retry

Not all errors are retriable. Retry only when the error matches a condition:

#![allow(unused)]
fn main() {
let result = api_call()
    .retry_while(
        Schedule::exponential(Duration::from_millis(100)).take(5),
        |error| error.is_transient(),  // only retry transient errors
    );
}

Permanent errors (e.g., 404 Not Found, permission denied) shouldn't be retried — they won't go away. .retry_while lets you distinguish them.

repeat: On Success, Run Again

#![allow(unused)]
fn main() {
let polling = check_job_status()
    .repeat(Schedule::spaced(Duration::from_secs(5)));
}

repeat runs the effect. When it succeeds, it checks the schedule. If the schedule says "continue", it waits and runs again. This is the complement of retry: same mechanism, triggered by success instead of failure.

Use cases:

  • Poll for job completion every 5 seconds
  • Send heartbeats every 30 seconds
  • Refresh a cache on a fixed interval

repeat_until: Stop When Condition Met

#![allow(unused)]
fn main() {
let waiting_for_ready = poll_service()
    .repeat_until(
        Schedule::spaced(Duration::from_secs(1)),
        |status| status == ServiceStatus::Ready,
    );
}

repeat_until repeats until the success value satisfies a predicate. When the condition is met, it stops and returns the value.

Composition with Other Operations

retry and repeat return effects — they compose like everything else:

#![allow(unused)]
fn main() {
// Retry the individual call, then repeat the whole batch
let batch = process_single_item(item)
    .retry(Schedule::exponential(100.ms()).take(3));

let continuous = batch
    .repeat(Schedule::spaced(Duration::from_secs(60)));
}

Error Information in Retry

If you need to inspect errors during retry (for logging, metrics, etc.):

#![allow(unused)]
fn main() {
let instrumented = risky_call()
    .retry_with_feedback(
        Schedule::exponential(100.ms()).take(3),
        |attempt, error| {
            // Called before each retry
            println!("Attempt {attempt} failed: {error:?}");
        },
    );
}

retry_with_feedback passes the attempt number and the error to a side-effectful callback before each retry. Useful for structured logging of retry behaviour.

Clock Injection — Testable Time

Schedule::exponential(100.ms()).take(3) with real retries takes ~700ms to test. Multiply by hundreds of tests and your test suite takes minutes. Clock injection solves this.

The Clock Trait

#![allow(unused)]
fn main() {
use id_effect::Clock;

// The Clock trait abstracts time
trait Clock: Send + Sync {
    fn now(&self) -> UtcDateTime;
    fn sleep(&self, duration: Duration) -> Effect<(), Never, ()>;
}
}

All time-related operations in id_effect go through the current fiber's Clock. Replace the clock, and "time" moves as fast as you drive it.

Production: LiveClock

#![allow(unused)]
fn main() {
use id_effect::LiveClock;

// Uses real system time and tokio::time::sleep
let live_clock = LiveClock::new();
}

In production, inject LiveClock through the environment. Effect code never calls std::time::SystemTime::now() directly — it uses the injected clock.

Testing: TestClock

#![allow(unused)]
fn main() {
use id_effect::TestClock;

let clock = TestClock::new();

// The clock starts at epoch and doesn't advance on its own
assert_eq!(clock.now(), EPOCH);

// Advance time instantly
clock.advance(Duration::from_secs(60));
assert_eq!(clock.now(), EPOCH + Duration::from_secs(60));
}

TestClock is deterministic. It advances only when you tell it to. Sleep effects don't wait — they check the clock, and if the clock is past their wake time, they return immediately.

Test Example

#![allow(unused)]
fn main() {
#[test]
fn exponential_retry_makes_three_attempts() {
    let clock = TestClock::new();
    let attempts = Arc::new(AtomicU32::new(0));

    let effect = {
        let attempts = attempts.clone();
        failing_operation(attempts.clone())
            .retry(Schedule::exponential(Duration::from_secs(1)).take(3))
    };

    // Fork the effect with the test clock
    let handle = effect.fork_with_clock(&clock);

    // Advance time to trigger each retry
    clock.advance(Duration::from_secs(1));   // retry 1
    clock.advance(Duration::from_secs(2));   // retry 2
    clock.advance(Duration::from_secs(4));   // retry 3 (exhausted)

    let exit = handle.join_blocking();
    assert!(matches!(exit, Exit::Failure(_)));
    assert_eq!(attempts.load(Ordering::Relaxed), 4);  // initial + 3 retries
}
}

The test runs in microseconds despite testing multi-second retry behaviour. No tokio::time::pause() hacks. No sleep(Duration::ZERO) workarounds.

Clock in the Environment

Like all services, Clock lives in the effect environment:

#![allow(unused)]
fn main() {
service_key!(ClockKey: Arc<dyn Clock>);

fn now() -> Effect<UtcDateTime, Never, impl NeedsClock> {
    effect! {
        let clock = ~ ClockKey;
        clock.now()
    }
}
}

The production Layer provides LiveClock; test code provides TestClock. Business logic is identical in both contexts.

Software Transactional Memory — Optimistic Concurrency

Shared mutable state is hard. Mutexes work but compose poorly: lock two mutexes in the wrong order and you deadlock. Lock them separately and you get torn reads. Lock the whole world and you serialise unnecessarily.

Software Transactional Memory (STM) takes a different approach: every operation on shared state runs inside a transaction. Transactions commit atomically or roll back and retry. No explicit locks. No deadlocks. No torn reads.

This chapter covers id_effect's STM implementation: Stm, TRef, commit, and the transactional collection types.

Why STM? — The Shared State Problem

Consider transferring money between two accounts. With mutexes:

#![allow(unused)]
fn main() {
fn transfer(from: &Mutex<Account>, to: &Mutex<Account>, amount: u64) {
    let mut from_guard = from.lock().unwrap();
    // Thread B might be doing transfer(to, from, ...) right here
    let mut to_guard = to.lock().unwrap();
    // DEADLOCK: Thread A holds from, waiting for to.
    //           Thread B holds to, waiting for from.
    from_guard.balance -= amount;
    to_guard.balance += amount;
}
}

The standard fix (always lock in a consistent order) requires global coordination across your codebase. Add a third account and you need to sort three locks. It doesn't compose.

STM: Optimistic Concurrency

STM operates on the assumption that conflicts are rare. Instead of locking, it:

  1. Reads current values into a local transaction log
  2. Computes new values based on those reads
  3. Attempts to commit: checks that nothing changed since the reads, then atomically writes

If anything changed between step 1 and step 3, the transaction retries automatically from step 1.

#![allow(unused)]
fn main() {
use id_effect::{TRef, stm, commit};

fn transfer(from: &TRef<Account>, to: &TRef<Account>, amount: u64)
-> Effect<(), TransferError, ()>
{
    commit(stm! {
        let from_acct = ~ from.read_stm();
        let to_acct   = ~ to.read_stm();

        if from_acct.balance < amount {
            ~ stm::fail(TransferError::InsufficientFunds);
        }

        ~ from.write_stm(Account { balance: from_acct.balance - amount, ..from_acct });
        ~ to.write_stm(Account { balance: to_acct.balance + amount, ..to_acct });
        ()
    })
}
}

No locks. No deadlock risk. The transaction retries automatically if another transaction modified either account between our read and our write.

When STM Wins

| Situation                         | Mutex                        | STM                  |
|-----------------------------------|------------------------------|----------------------|
| Single shared value               | ✓ simple                     | ✓ fine               |
| Multiple related values           | ✗ deadlock risk              | ✓ composable         |
| Read-heavy workloads              | ✗ blocks writers             | ✓ reads never block  |
| Composing two existing operations | ✗ requires coordination      | ✓ just nest in stm!  |
| Long operations with I/O          | ✓ (STM would retry too much) | ✗ wrong tool         |

STM shines when:

  • You need to update multiple values atomically
  • You're composing smaller transactional operations into larger ones
  • Contention is low (retries are cheap)

Avoid STM for long-running operations that do I/O — transactions should be short and pure. The stm! macro is for read-modify-write, not for network calls.

TRef — Transactional References

TRef<T> is the fundamental mutable cell in id_effect's STM system. It wraps a value that can be read and written inside transactions.

Creating a TRef

#![allow(unused)]
fn main() {
use id_effect::TRef;

let counter: TRef<i32> = TRef::new(0);
let balance: TRef<f64> = TRef::new(1000.0);
}

TRef::new(value) creates a transactional reference with an initial value. TRefs are typically created once (at startup or when initialising shared state) and then shared across fibers via Arc.

Transactional Operations

All TRef operations return Stm<_> — transactional descriptions, not effects. They only work inside stm! (or when run through commit/atomically):

#![allow(unused)]
fn main() {
use id_effect::{TRef, stm};

let counter = TRef::new(0);

// Read inside a transaction
let read_op: Stm<i32> = counter.read_stm();

// Write inside a transaction
let write_op: Stm<()> = counter.write_stm(42);

// Modify (read-write atomically)
let modify_op: Stm<()> = counter.modify_stm(|n| n + 1);
}

These are descriptions. Nothing happens until they're committed.

Running Inside stm!

The stm! macro provides do-notation for composing Stm operations, exactly like effect! does for Effect:

#![allow(unused)]
fn main() {
use id_effect::{stm, TRef};

let counter = TRef::new(0_i32);
let total   = TRef::new(0_i32);

let transaction: Stm<()> = stm! {
    let count = ~ counter.read_stm();
    let sum   = ~ total.read_stm();
    ~ counter.write_stm(count + 1);
    ~ total.write_stm(sum + count);
    ()
};
}

Sharing TRefs

TRefs are Clone + Send + Sync. Wrap in Arc to share across fibers:

#![allow(unused)]
fn main() {
use std::sync::Arc;

let shared: Arc<TRef<i32>> = Arc::new(TRef::new(0));

// Clone for each fiber
let clone1 = Arc::clone(&shared);
let clone2 = Arc::clone(&shared);

fiber_all(vec![
    increment_n_times(clone1, 1000),
    increment_n_times(clone2, 1000),
])
// Result: counter = 2000 (atomically, without locks)
}

TRef vs. Mutex

| Property                  | TRef       | Mutex            |
|---------------------------|------------|------------------|
| Composable across updates | ✓          | ✗                |
| Deadlock-free             | ✓          | ✗                |
| Blocking read             | ✗ never    | ✓ blocks writers |
| Works with I/O            | ✗          | ✓                |
| Overhead                  | retry cost | lock/unlock cost |

Use TRef for short, composable state mutations. Use Mutex when you need to hold a lock across I/O (though ideally you redesign to avoid that).

Stm and commit — Building Transactions

The stm! macro produces Stm<A> values — descriptions of transactional computations. To execute them, you use commit or atomically.

commit: Lift Stm into Effect

#![allow(unused)]
fn main() {
use id_effect::{commit, Stm, Effect};

let transaction: Stm<i32> = stm! {
    let a = ~ ref_a.read_stm();
    let b = ~ ref_b.read_stm();
    a + b
};

// Lift into Effect
let effect: Effect<i32, Never, ()> = commit(transaction);

// Now run it
let result = run_blocking(effect)?;
}

commit wraps a Stm in an effect that, when run, executes the transaction and retries if there's a conflict. The E type of commit(stm) is Never unless the Stm can fail (see stm::fail).

atomically: Direct Execution

#![allow(unused)]
fn main() {
use id_effect::atomically;

// Run a transaction immediately in the current context
let value: i32 = atomically(stm! {
    ~ counter.modify_stm(|n| n + 1);
    ~ counter.read_stm()
});
}

atomically is the synchronous equivalent of commit + run_blocking. Use it when you're already outside the effect system and need a quick transactional update.

stm::fail: Transactional Errors

Transactions can fail with typed errors:

#![allow(unused)]
fn main() {
use id_effect::stm;

fn withdraw(account: &TRef<u64>, amount: u64) -> Stm<u64> {
    stm! {
        let balance = ~ account.read_stm();
        if balance < amount {
            ~ stm::fail(InsufficientFunds);  // abort the transaction
        }
        ~ account.write_stm(balance - amount);
        balance - amount
    }
}

// commit propagates the error into E
let effect: Effect<u64, InsufficientFunds, ()> = commit(withdraw(&account, 100));
}

stm::fail(e) aborts the current transaction with error e. The transaction is not retried — it fails immediately with the given error.

stm::retry: Block Until Condition

Sometimes a transaction should wait until a condition is true rather than failing:

#![allow(unused)]
fn main() {
// Block (retry) until the queue has items
fn dequeue(queue: &TRef<Vec<Item>>) -> Stm<Item> {
    stm! {
        let items = ~ queue.read_stm();
        if items.is_empty() {
            ~ stm::retry();  // block until queue changes, then retry
        }
        let item = items[0].clone();
        ~ queue.write_stm(items[1..].to_vec());
        item
    }
}
}

stm::retry() doesn't mean "try again immediately." It means "block until any TRef I read has changed, then try again." This is how TQueue implements blocking dequeue without busy-waiting.

Composing Transactions

Transactions compose by sequencing stm! blocks:

#![allow(unused)]
fn main() {
let big_transaction: Stm<()> = stm! {
    // Sub-transaction 1
    let _ = ~ transfer_funds(&from, &to, amount);
    // Sub-transaction 2
    let _ = ~ record_audit_log(&from, &to, amount);
    ()
};

// Both operations commit atomically or neither does
let effect = commit(big_transaction);
}

The composed transaction retries as a unit — if either sub-operation sees a conflict, the whole thing restarts from the beginning.

TQueue, TMap, TSemaphore — Transactional Collections

id_effect provides STM-aware versions of common collection types. They compose with other STM operations and integrate with stm!.

TQueue: Bounded Transactional Queue

#![allow(unused)]
fn main() {
use id_effect::TQueue;

let queue: TQueue<Job> = TQueue::bounded(100);

// Enqueue (blocks/retries if full)
let offer: Stm<()> = queue.offer_stm(job);

// Dequeue (blocks/retries if empty)
let take: Stm<Job> = queue.take_stm();

// Peek without removing
let peek: Stm<Option<Job>> = queue.peek_stm();

// Non-blocking try
let try_take: Stm<Option<Job>> = queue.try_take_stm();
}

TQueue::bounded(n) creates a queue with capacity n. offer_stm blocks (via stm::retry) when full; take_stm blocks when empty. Both integrate naturally with stm!.

Producer-Consumer Pattern

#![allow(unused)]
fn main() {
fn producer(queue: Arc<TQueue<Job>>, jobs: Vec<Job>) -> Effect<(), Never, ()> {
    effect! {
        for job in jobs {
            ~ commit(queue.offer_stm(job));
        }
        ()
    }
}

fn consumer(queue: Arc<TQueue<Job>>) -> Effect<Never, Never, ()> {
    effect! {
        loop {
            let job = ~ commit(queue.take_stm());  // blocks if empty
            ~ process_job(job);
        }
    }
}
}

TMap: Transactional Hash Map

#![allow(unused)]
fn main() {
use id_effect::TMap;

let map: TMap<String, User> = TMap::new();

// Inside stm!:
let insert: Stm<()> = map.insert_stm("alice".into(), alice_user);
let get:    Stm<Option<User>> = map.get_stm("alice");
let remove: Stm<Option<User>> = map.remove_stm("alice");
let update: Stm<()> = map.modify_stm("alice", |mut u| { u.name = "ALICE".into(); u });
}

TMap is a concurrent hash map where all operations participate in STM transactions. Reading from TMap and TRef in the same transaction is atomic:

#![allow(unused)]
fn main() {
commit(stm! {
    let user = ~ user_map.get_stm("alice");
    let count = ~ access_counter.read_stm();
    ~ access_counter.write_stm(count + 1);
    user
})
// Either the map read AND the counter increment happen, or neither does
}

TSemaphore: Transactional Semaphore

#![allow(unused)]
fn main() {
use id_effect::TSemaphore;

// Create a semaphore with 10 permits
let sem: TSemaphore = TSemaphore::new(10);

// Acquire 1 permit (blocks if none available)
let acquire: Stm<()> = sem.acquire_stm(1);

// Release 1 permit
let release: Stm<()> = sem.release_stm(1);
}

TSemaphore limits concurrent access to a resource. Use it with acquire_release for resource pools where you want transactional semantics:

#![allow(unused)]
fn main() {
commit(stm! {
    ~ sem.acquire_stm(1);  // blocks until a permit is available
    ()
}).flat_map(|()| {
    do_limited_work().flat_map(|result| {
        commit(sem.release_stm(1)).map(|()| result)
    })
})
// Caveat: written this way, the permit leaks if do_limited_work fails.
// Pair the acquire with acquire_release to guarantee the release runs.
}

Summary

| Type       | Purpose              |
|------------|----------------------|
| TRef<T>    | Single mutable value |
| TQueue<T>  | Blocking FIFO queue  |
| TMap<K, V> | Concurrent hash map  |
| TSemaphore | Concurrency limiter  |

All compose inside stm! and commit atomically with other STM operations.

Streams — Backpressure and Chunked Processing

An Effect produces one value. A Stream produces many values over time. When you need to process a potentially infinite or very large sequence — database result sets, event logs, file lines, sensor readings — Stream is the right abstraction.

This chapter covers when to use Stream vs Effect, how streams process data in Chunks for efficiency, how to control flow with backpressure policies, and how to consume streams with Sink.

Stream vs Effect — When to Use Each

The choice is simple: how many values does your computation produce?

Effect<A, E, R>  → produces exactly one A (or fails)
Stream<A, E, R>  → produces zero or more A values over time (or fails)

Concrete Examples

#![allow(unused)]
fn main() {
// Effect: get one user
fn get_user(id: u64) -> Effect<User, DbError, Db>

// Stream: all users, one at a time
fn all_users() -> Stream<User, DbError, Db>

// Effect: count rows
fn count_orders() -> Effect<u64, DbError, Db>

// Stream: export all orders for a report
fn export_orders() -> Stream<Order, DbError, Db>
}

If you fetch 10 million rows into a Vec and return it as an Effect, you'll run out of memory. A Stream loads and processes them incrementally.
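The same incremental-vs-materialised distinction exists in std Rust iterators, which makes a handy mental model (this sketch uses only std, not id_effect):

```rust
fn main() {
    // Like a Stream: the range is lazy, elements are processed one by one
    // in constant memory — nothing is materialised up front.
    let evens = (0..10_000_000u64).filter(|n| n % 2 == 0).count();
    assert_eq!(evens, 5_000_000);

    // Like collecting into an Effect<Vec<_>>: collect() materialises
    // everything at once — fine for small results, fatal for 10 million rows.
    let small: Vec<u64> = (0..10u64).collect();
    assert_eq!(small.len(), 10);

    println!("{evens}");
}
```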

Stream Transformations

Stream has the same transformation API as Effect:

#![allow(unused)]
fn main() {
all_users()
    .filter(|u| u.is_active())
    .map(|u| UserSummary::from(u))
    .take(100)
}

.map, .filter, .flat_map, .take, .drop, .zip — all work on streams. None of them load the whole stream into memory; they process elements as they arrive.

Collecting a Stream into an Effect

When you do need all the results:

#![allow(unused)]
fn main() {
let users: Effect<Vec<User>, DbError, Db> = all_users().collect();
}

.collect() consumes the stream and accumulates into a Vec. Use only when the full result fits in memory.

For large results, prefer a fold or a sink:

#![allow(unused)]
fn main() {
let count: Effect<usize, DbError, Db> = all_users().fold(0, |acc, _| acc + 1);
}

Converting Effect to Stream

Wrap an Effect in a single-element stream when you need to compose with streaming operators:

#![allow(unused)]
fn main() {
use id_effect::Stream;

let single_user_stream: Stream<User, DbError, Db> = Stream::from_effect(get_user(1));

// Now compose with other streams
let combined = single_user_stream.chain(all_users());
}

The Rule

  • Need one result: Effect
  • Need to process many results without loading all at once: Stream
  • Need to compose multiple streams: Stream with chain, zip, merge
  • Need all results in memory: Stream + .collect() (with appropriate size caution)

Chunks — Efficient Batched Processing

A Stream doesn't actually move elements through the pipeline one at a time. Internally it emits them in Chunks — contiguous, fixed-capacity batches. Most of the time you don't interact with Chunk directly; the stream API works element-wise and handles chunking internally. But understanding chunks helps you tune performance.

What Is a Chunk

#![allow(unused)]
fn main() {
use id_effect::Chunk;

// A Chunk is a fixed-capacity contiguous sequence
let chunk: Chunk<i32> = Chunk::from_vec(vec![1, 2, 3, 4, 5]);

// Access elements
let first: Option<&i32> = chunk.first();
let len: usize = chunk.len();

// Iterate
for item in &chunk {
    println!("{item}");
}
}

A Chunk<A> is essentially a smart Arc<[A]> slice: cheap to clone (reference-counted), cache-friendly (contiguous layout), and zero-copy when slicing.
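The "smart Arc<[A]> slice" description can be illustrated with std alone: cloning an Arc<[T]> bumps a reference count rather than copying elements (a sketch of the representation, not id_effect's actual Chunk):

```rust
use std::sync::Arc;

fn main() {
    // Contiguous, heap-allocated, reference-counted slice.
    let chunk: Arc<[i32]> = Arc::from(vec![1, 2, 3, 4, 5]);

    // Cloning is O(1): both handles share one allocation.
    let alias = Arc::clone(&chunk);
    assert_eq!(Arc::strong_count(&chunk), 2);
    assert_eq!(&alias[..], &[1, 2, 3, 4, 5]);

    println!("len = {}", chunk.len());
}
```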

Why Chunks Exist

Processing elements one at a time through a chain of .map and .filter calls has overhead: each step is a separate allocation or indirection. Chunks amortize that cost:

Single-element model:
  elem1 → map → filter → emit → elem2 → map → filter → emit → ...
  (N function calls for N elements through each operator)

Chunk model:
  chunk[1..64] → map_chunk → filter_chunk → emit_chunk → ...
  (N/64 overhead calls; SIMD-friendly layout)
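The amortisation argument can be demonstrated with std's slice::chunks: per-batch overhead is paid once per 64 elements instead of once per element (an illustrative std-only sketch, not the library's internals):

```rust
// Sum a slice batch-by-batch: one "operator step" per chunk, with a
// tight contiguous inner loop that the compiler can vectorise.
fn sum_batched(data: &[i64], batch: usize) -> i64 {
    let mut total = 0;
    for chunk in data.chunks(batch) {
        total += chunk.iter().sum::<i64>();
    }
    total
}

fn main() {
    let data: Vec<i64> = (1..=100).collect();
    assert_eq!(sum_batched(&data, 64), 5050);
    println!("{}", sum_batched(&data, 64));
}
```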

The default chunk size is 64 elements. You can change it when constructing a stream:

#![allow(unused)]
fn main() {
let stream = Stream::from_iter(0..1_000_000)
    .with_chunk_size(256);
}

Larger chunks improve throughput for CPU-bound map/filter operations. Smaller chunks reduce latency when downstream consumers need to act quickly.

Working with Chunks Directly

Most operators are element-wise, but a few operate at the chunk level for efficiency:

#![allow(unused)]
fn main() {
// map_chunks: apply a function to entire chunks at once
stream.map_chunks(|chunk| {
    chunk.map(|x| x * 2)  // vectorizable
})

// flat_map_chunks: emit a new chunk per input chunk
stream.flat_map_chunks(|chunk| {
    Chunk::from_iter(chunk.iter().flat_map(expand))
})
}

Use map_chunks when your transformation is pure and benefits from batching (e.g., numeric processing, serialisation).

Chunk in Sinks and Collectors

When a Sink receives data, it receives Chunks. Custom sinks that write to a file or network socket often want to write whole chunks at once:

#![allow(unused)]
fn main() {
impl Sink<Bytes> for FileSink {
    type Error = IoError;
    type Env   = ();

    fn on_chunk(&mut self, chunk: Chunk<Bytes>) -> Effect<(), IoError, ()> {
        // write every buffer in the chunk before yielding control back to the stream
        effect! {
            for bytes in &chunk {
                ~ self.write(bytes);
            }
            ()
        }
    }
}
}

Building Chunks

#![allow(unused)]
fn main() {
// From an iterator
let chunk = Chunk::from_iter([1, 2, 3]);

// From a Vec (no copy if Vec capacity matches)
let chunk = Chunk::from_vec(v);

// Empty chunk
let empty: Chunk<i32> = Chunk::empty();

// Single element
let one = Chunk::single(42);

// Concatenate two chunks (zero-copy if they're adjacent)
let combined = Chunk::concat(chunk_a, chunk_b);
}

Summary

You rarely construct Chunk by hand in application code. The stream runtime handles chunking for you. Understand chunks when:

  • You're writing a custom Sink and want efficient writes
  • You're tuning throughput with .with_chunk_size(n)
  • You're implementing a library operator with map_chunks

Backpressure Policies — Controlling Flow

A stream is a pipeline: a producer emits data, operators transform it, a consumer processes it. Problems arise when the producer is faster than the consumer. Backpressure is the mechanism that handles this mismatch.

The Problem

Producer: emits 10,000 events/sec
Consumer: processes 1,000 events/sec

What happens to the 9,000 surplus events per second?

Your options are: block the producer, drop events, or buffer them. Each is correct in different contexts. id_effect makes the choice explicit via BackpressurePolicy.
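A DropOldest buffer is easy to sketch with a std VecDeque: when full, evict the front (oldest) before pushing, and count the evictions (illustrative only; id_effect's channel bridge handles this internally):

```rust
use std::collections::VecDeque;

// Minimal DropOldest sketch: bounded buffer that keeps the freshest data.
struct DropOldestBuffer<T> {
    buf: VecDeque<T>,
    capacity: usize,
    dropped: usize,
}

impl<T> DropOldestBuffer<T> {
    fn new(capacity: usize) -> Self {
        Self { buf: VecDeque::new(), capacity, dropped: 0 }
    }

    fn push(&mut self, item: T) {
        if self.buf.len() == self.capacity {
            self.buf.pop_front(); // evict the oldest element
            self.dropped += 1;
        }
        self.buf.push_back(item);
    }
}

fn main() {
    let mut b = DropOldestBuffer::new(3);
    for i in 0..5 {
        b.push(i);
    }
    // Capacity 3, five pushes: the two oldest (0 and 1) were dropped.
    assert_eq!(b.dropped, 2);
    assert_eq!(b.buf.iter().copied().collect::<Vec<_>>(), vec![2, 3, 4]);
    println!("{:?} dropped={}", b.buf, b.dropped);
}
```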

BackpressurePolicy

#![allow(unused)]
fn main() {
use id_effect::BackpressurePolicy;

// Block the producer until the consumer catches up (default for bounded channels)
BackpressurePolicy::Block

// Drop the newest events when the buffer is full
BackpressurePolicy::DropLatest

// Drop the oldest events when the buffer is full (keep the freshest data)
BackpressurePolicy::DropOldest

// Unbounded buffering — never drop, never block (use carefully)
BackpressurePolicy::Unbounded
}

Applying a Policy to a Channel-Backed Stream

The most common place to specify backpressure is when bridging from a channel to a Stream:

#![allow(unused)]
fn main() {
use id_effect::{stream_from_channel_with_policy, BackpressurePolicy};
use std::sync::mpsc;

let (tx, rx) = mpsc::channel::<Event>();

// Drop old events; always reflect the latest state
let stream = stream_from_channel_with_policy(rx, 1024, BackpressurePolicy::DropOldest);
}

Contrast with stream_from_channel, which uses Block by default. If you don't think about backpressure at this point, Block is the safe choice — you won't lose data, but a slow consumer will slow down the producer.

Choosing a Policy

Scenario                                            Policy
Financial transactions — no data loss acceptable    Block
Real-time sensor readings — only latest matters     DropOldest
Log pipeline — drop excess if overwhelmed           DropLatest
Batch import — control memory, halt on overflow     Block
Dashboard metrics — fresh data over completeness    DropOldest

Stream-Level Backpressure

Streams composed with flat_map or merge also have implicit backpressure: downstream operators signal upstream when they can accept more work. This happens automatically and doesn't require a policy setting — the Stream runtime handles it.

For explicit control over concurrency in flat_map:

#![allow(unused)]
fn main() {
stream
    .flat_map_with_concurrency(4, |id| fetch_record(id))
    // Only 4 fetch_record effects run concurrently
    // Others wait until a slot frees — natural backpressure
}

Monitoring Drops

When using DropLatest or DropOldest, you often want to know how many events were dropped:

#![allow(unused)]
fn main() {
let (stream, dropped_counter) = stream_from_channel_with_policy_and_counter(
    rx,
    1024,
    BackpressurePolicy::DropOldest,
);

// Periodically log the counter
effect! {
    loop {
        let n = dropped_counter.load(Ordering::Relaxed);
        if n > 0 {
            ~ log.warn(format!("Dropped {n} events due to backpressure"));
        }
        ~ sleep(Duration::from_secs(10));
    }
}
}

Summary

Always choose a backpressure policy explicitly. The default (Block) is safe but can stall producers. DropOldest is often right for real-time data. DropLatest is right when order matters but throughput doesn't. Unbounded is only acceptable when the rate is truly bounded by the domain.

Sinks — Consuming Streams

A Stream describes a sequence of values. A Sink describes how to consume them. Together they form a complete pipeline: Stream → operators → Sink.

Built-in Sinks

Most of the time you don't write a Sink explicitly — you use one of the consuming methods on Stream:

#![allow(unused)]
fn main() {
// Collect all elements into a Vec
let users: Effect<Vec<User>, DbError, Db> = all_users().collect();

// Fold into a single value
let total: Effect<u64, DbError, Db> = orders()
    .fold(0u64, |acc, order| acc + order.amount);

// Run a side-effecting action for each element
let logged: Effect<(), DbError, Db> = events()
    .for_each(|event| log_event(event));

// Drain (discard all values, run for side effects only)
let drained: Effect<(), DbError, Db> = events()
    .map(|e| emit_metric(e))
    .drain();

// Take the first N elements
let first_ten: Effect<Vec<User>, DbError, Db> = all_users()
    .take(10)
    .collect();
}

Each of these methods turns a Stream<A, E, R> into an Effect<B, E, R>, which you can then run with run_blocking or compose further.

The Sink Trait

When the built-in consumers aren't enough, implement Sink:

#![allow(unused)]
fn main() {
use id_effect::{Sink, Chunk, Effect};

struct CsvWriter {
    path: PathBuf,
    written: usize,
}

impl Sink<Record> for CsvWriter {
    type Error = IoError;
    type Env   = ();

    fn on_chunk(
        &mut self,
        chunk: Chunk<Record>,
    ) -> Effect<(), IoError, ()> {
        effect! {
            for record in &chunk {
                ~ self.write_csv_line(record);
            }
            ()
        }
    }

    fn on_done(&mut self) -> Effect<(), IoError, ()> {
        effect! {
            ~ self.flush();
            ()
        }
    }
}
}

on_chunk is called for each chunk of elements. on_done is called once when the stream ends — use it to flush buffers or close handles.

Running a Stream into a Sink

#![allow(unused)]
fn main() {
let writer = CsvWriter::new("output.csv");

let effect: Effect<(), IoError, Db> = all_records()
    .run_into_sink(writer);

run_blocking(effect, env)?;
}

run_into_sink drives the stream and feeds each chunk to the sink. If the stream fails, on_done is not called — use resource scopes around the sink when cleanup is unconditionally required.

Sink Composition

Sinks can be composed: a ZipSink feeds the same stream to two sinks simultaneously:

#![allow(unused)]
fn main() {
let count_sink  = CountSink::new();
let csv_sink    = CsvWriter::new("out.csv");

// Both sinks receive every element
let combined = ZipSink::new(count_sink, csv_sink);

all_records().run_into_sink(combined)
}

Each element is delivered to both sinks in order. If either sink fails, the whole pipeline fails.

Finite vs Infinite Streams and Sinks

A Sink doesn't know whether its stream is finite or infinite. Combine with take, take_while, or take_until to bound an infinite stream before running it into a sink:

#![allow(unused)]
fn main() {
// Process at most 1 hour of events
let one_hour = Duration::from_secs(3600);

event_stream()
    .take_until(sleep(one_hour))
    .for_each(|event| process(event))
}

Summary

Method             Returns             Use when
.collect()         Effect<Vec<A>, …>   Small result sets that fit in memory
.fold(init, f)     Effect<B, …>        Single aggregated value
.for_each(f)       Effect<(), …>       Side effects per element
.drain()           Effect<(), …>       Discard results, keep side effects
.run_into_sink(s)  Effect<(), …>       Custom consumption logic

Schema — Parse, Don't Validate

Data enters your program from the outside world: HTTP request bodies, database rows, configuration files, message queue payloads. All of it is untrusted. All of it needs to be checked.

The naive approach is to deserialise first and validate later — accept a User struct via serde, then check that email is non-empty and age is positive in a separate step. The problem: your type says User but your program has a User that might have an empty email. The type lies.

The better approach is parse, don't validate: transform untrusted input into trusted types in one step. If the parse succeeds, you have a valid User. If it fails, you have a structured ParseError that tells you exactly what was wrong.
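The principle works even without a schema library. In this std-only sketch, an Email value can only be obtained through a checked constructor, so holding one is proof the check ran (Email::parse here is a hypothetical helper, not id_effect API):

```rust
// Parse, don't validate: the only way to get an Email is through parse(),
// so every Email in the program is known-valid by construction.
#[derive(Debug, PartialEq)]
struct Email(String);

impl Email {
    fn parse(s: &str) -> Result<Email, String> {
        if !s.is_empty() && s.contains('@') {
            Ok(Email(s.to_string()))
        } else {
            Err(format!("invalid email: {s:?}"))
        }
    }
}

fn main() {
    assert!(Email::parse("alice@example.com").is_ok());
    assert!(Email::parse("not-an-email").is_err());
    println!("ok");
}
```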

id_effect's schema module is built on this principle.

What This Chapter Covers

  • Unknown — the type for unvalidated wire data (next section)
  • Schema combinators — the building blocks for describing data shapes (ch14-02)
  • Validation and refinement — refine, filter, and Brand for domain constraints (ch14-03)
  • ParseErrors — structured, accumulating error reports (ch14-04)

The Unknown Type — Unvalidated Wire Data

Unknown is the type for data that hasn't been validated yet. Think of it as a typed serde_json::Value — it can hold any shape of data, but you can't do anything useful with it until you run it through a schema.

Creating Unknown Values

#![allow(unused)]
fn main() {
use id_effect::schema::Unknown;

// From a JSON string
let u: Unknown = Unknown::from_json_str(r#"{"name": "Alice", "age": 30}"#)?;

// From a serde_json Value
let v: serde_json::Value = serde_json::json!({ "name": "Alice" });
let u: Unknown = Unknown::from_serde_json(v);

// From raw parts
let u: Unknown = Unknown::object([
    ("name", Unknown::string("Alice")),
    ("age",  Unknown::integer(30)),
]);

// Primitives
let s: Unknown = Unknown::string("hello");
let n: Unknown = Unknown::integer(42);
let b: Unknown = Unknown::boolean(true);
let null: Unknown = Unknown::null();
let arr: Unknown = Unknown::array([Unknown::integer(1), Unknown::integer(2)]);
}

Why Not serde_json::Value Directly?

serde_json::Value is an excellent data type, but it's stringly typed: value["name"] gives you an Option<&Value> and there's no structure around parse errors, path tracking, or accumulation. Unknown wraps the same idea but integrates with id_effect's schema parser, which gives you:

  • Path tracking — "error at .users[3].email"
  • Accumulated errors — all failures in one parse, not just the first
  • Composable schemas — build complex validators from simple primitives

Inspecting Unknown Values

You don't normally inspect Unknown directly — you run it through a schema. But when debugging:

#![allow(unused)]
fn main() {
// Check what shape the value has
match u.kind() {
    UnknownKind::Object(fields) => { /* … */ }
    UnknownKind::Array(elems)   => { /* … */ }
    UnknownKind::String(s)      => { /* … */ }
    UnknownKind::Integer(n)     => { /* … */ }
    UnknownKind::Float(f)       => { /* … */ }
    UnknownKind::Boolean(b)     => { /* … */ }
    UnknownKind::Null            => { /* … */ }
}

// Access a field without parsing (returns Option<&Unknown>)
let name: Option<&Unknown> = u.field("name");
}

The Parse Boundary

Unknown is your import type. At every IO boundary — HTTP handler, NATS message, config file, database row — convert incoming data to Unknown first, then parse it with a schema:

#![allow(unused)]
fn main() {
fn handle_request(body: Bytes) -> Effect<CreateUserResponse, ApiError, Deps> {
    effect! {
        // Convert raw bytes to Unknown
        let raw = Unknown::from_json_bytes(&body)
            .map_err(ApiError::InvalidJson)?;

        // Parse Unknown into a typed, validated struct
        let req: CreateUserRequest = ~ parse_schema(create_user_schema(), raw);

        // Now req is fully trusted — proceed with domain logic
        ~ create_user(req)
    }
}
}

Nothing beyond the parse boundary sees Unknown. Domain functions only accept validated types.

Unknown and Serde

If you have existing serde-deserializable types, use the serde bridge (requires the schema-serde feature):

#![allow(unused)]
fn main() {
use id_effect::schema::serde_bridge::unknown_from_serde_json;

// Deserialise via serde, then convert to Unknown for schema validation
let value: serde_json::Value = serde_json::from_str(input)?;
let u: Unknown = unknown_from_serde_json(value);
}

This lets you incrementally adopt the schema system without rewriting all your serde impls at once.

Schema Combinators — Describing Data Shapes

A schema is a value that describes how to parse an Unknown into a typed result. Schemas compose: build small schemas for primitive types, then combine them into schemas for complex structures.

Primitive Schemas

#![allow(unused)]
fn main() {
use id_effect::schema::{string, integer, i64, f64, boolean, null};

// Parse a string
let name_schema = string();

// Parse an integer (i64)
let age_schema = i64();

// Parse a float
let price_schema = f64();

// Parse a boolean
let active_schema = boolean();
}

Each schema has type Schema<T> — string() is a Schema<String>, i64() is a Schema<i64>, and so on.

Struct Schemas

#![allow(unused)]
fn main() {
use id_effect::schema::struct_;

#[derive(Debug)]
struct User {
    name: String,
    age:  i64,
}

let user_schema = struct_!(User {
    name: string(),
    age:  i64(),
});
}

struct_! maps field names to their schemas and constructs the target type. If any field is missing or has the wrong type, parsing fails with a ParseError that includes the field path.

For schemas without a derive macro, use object:

#![allow(unused)]
fn main() {
use id_effect::schema::object;

let user_schema = object([
    ("name", string().map(|s| s)),
    ("age",  i64()),
]).map(|(name, age)| User { name, age });
}

Optional Fields

#![allow(unused)]
fn main() {
use id_effect::schema::optional;

struct Config {
    host:    String,
    port:    Option<u16>,
    timeout: Option<Duration>,
}

let config_schema = struct_!(Config {
    host:    string(),
    port:    optional(u16()),
    timeout: optional(duration_ms()),
});
}

optional(schema) produces Schema<Option<T>>. Both a missing field and an explicit null parse as None.

Array Schemas

#![allow(unused)]
fn main() {
use id_effect::schema::array;

// Vec of strings
let tags_schema: Schema<Vec<String>> = array(string());

// Vec of User
let users_schema: Schema<Vec<User>> = array(user_schema);
}

array(item_schema) parses a JSON array where each element is validated by item_schema. Errors include the index: "[2].email: expected string, got null".

Union Schemas

#![allow(unused)]
fn main() {
use id_effect::schema::{union_, literal_string};

#[derive(Debug)]
enum Status { Active, Inactive, Pending }

let status_schema = union_![
    literal_string("active")   => Status::Active,
    literal_string("inactive") => Status::Inactive,
    literal_string("pending")  => Status::Pending,
];
}

union_! tries each branch in order and returns the first that succeeds. Errors report all branches that failed.

Transforming Schemas

Schemas are values — you can .map them:

#![allow(unused)]
fn main() {
// Parse a string and convert it to uppercase
let upper_schema: Schema<String> = string().map(|s| s.to_uppercase());

// Parse a string and try to convert to a domain type
let email_schema: Schema<Email> = string().try_map(|s| {
    Email::parse(s).map_err(ParseError::custom)
});
}

.map transforms on success. .try_map can fail and produce a ParseError.

Running a Schema

#![allow(unused)]
fn main() {
use id_effect::schema::parse;

let raw: Unknown = Unknown::from_json_str(r#"{"name":"Alice","age":30}"#)?;

match parse(user_schema, raw) {
    Ok(user)  => println!("Got: {user:?}"),
    Err(errs) => println!("Errors: {errs}"),
}
}

parse returns Result<T, ParseErrors>. ParseErrors accumulates all errors — not just the first — so a caller gets the complete picture of what's wrong.

Schema as a Type Contract

A schema is documentation. Where you use Schema<CreateUserRequest>, readers know: this function requires exactly this shape of data, checked at runtime. The schema is the spec.

#![allow(unused)]
fn main() {
pub fn create_user_handler() -> impl Fn(Unknown) -> Effect<User, ApiError, Db> {
    let schema = create_user_schema();
    move |raw| {
        effect! {
            let req = parse(schema.clone(), raw)
                .map_err(ApiError::Validation)?;
            ~ db_create_user(req)
        }
    }
}
}

Validation and Refinement — Constrained Types

Schemas parse structure. Validation adds constraints: an age must be positive, an email must contain @, a price must have at most two decimal places. Refinement goes further: a validated Email is a different type from a raw String, so you can never accidentally pass an unvalidated string where an email is expected.

refine: Attach a Predicate

refine takes a schema and a predicate. Parsing succeeds only if both the schema's parse and the predicate pass:

#![allow(unused)]
fn main() {
use id_effect::schema::{string, i64, refine};

// Age must be between 0 and 150
let age_schema = refine(
    i64(),
    |n| (0..=150).contains(n),
    "age must be between 0 and 150",
);

// Non-empty string
let non_empty = refine(
    string(),
    |s: &String| !s.is_empty(),
    "must not be empty",
);
}

If the predicate returns false, parsing fails with a ParseError containing the message you provided.

filter: Same as refine, Different Style

filter is a method-form alias for refine that takes the predicate first, matching Rust iterator conventions:

#![allow(unused)]
fn main() {
let positive = i64().filter(|n| *n > 0, "must be positive");
let trimmed  = string().filter(|s| s == s.trim(), "must not have leading/trailing whitespace");
}

Use whichever reads more naturally.

try_map: Fallible Transformation

When conversion logic can fail — parsing a date, constructing a URL, validating an email — use .try_map:

#![allow(unused)]
fn main() {
use id_effect::schema::ParseError;

let url_schema = string().try_map(|s| {
    url::Url::parse(&s).map_err(|e| ParseError::custom(format!("invalid URL: {e}")))
});

let date_schema = string().try_map(|s| {
    chrono::NaiveDate::parse_from_str(&s, "%Y-%m-%d")
        .map_err(|e| ParseError::custom(format!("invalid date: {e}")))
});
}

.try_map runs after the base schema succeeds. The closure returns Result<NewType, ParseError>.

Brand — Newtypes with Zero Cost

A Brand is a newtype wrapper that exists only at the type level. At runtime it's transparent. At compile time it prevents mixing up bare primitives with domain values:

#![allow(unused)]
fn main() {
use id_effect::schema::Brand;

// Define branded types
type UserId   = Brand<i64,    UserIdMarker>;
type Email    = Brand<String, EmailMarker>;
type PosPrice = Brand<f64,    PosPriceMarker>;

struct UserIdMarker;
struct EmailMarker;
struct PosPriceMarker;
}

Build schemas that produce branded types:

#![allow(unused)]
fn main() {
let user_id_schema: Schema<UserId> = i64()
    .filter(|n| *n > 0, "user id must be positive")
    .map(Brand::new);

let email_schema: Schema<Email> = string()
    .try_map(|s| {
        if s.contains('@') {
            Ok(Brand::new(s))
        } else {
            Err(ParseError::custom("invalid email"))
        }
    });
}

Now functions that need an Email won't compile with a bare String:

#![allow(unused)]
fn main() {
fn send_welcome(to: Email) -> Effect<(), MailError, Mailer> { /* … */ }

// This compiles:
send_welcome(parsed_email);

// This doesn't:
send_welcome("alice@example.com".to_string()); // type error: expected Email, found String
}
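The zero-cost claim rests on a standard Rust technique: a phantom type parameter that exists only at compile time. A std-only sketch of the idea (not id_effect's actual Brand implementation):

```rust
use std::marker::PhantomData;

// A branded value: same runtime representation as the inner value,
// but a distinct type per marker at compile time.
struct Branded<T, M> {
    value: T,
    _marker: PhantomData<M>,
}

impl<T, M> Branded<T, M> {
    fn new(value: T) -> Self {
        Branded { value, _marker: PhantomData }
    }
}

struct UserIdTag;
struct OrderIdTag;

type UserId = Branded<i64, UserIdTag>;
type OrderId = Branded<i64, OrderIdTag>;

fn lookup_user(id: UserId) -> i64 {
    id.value
}

fn main() {
    let uid: UserId = Branded::new(7);
    let _oid: OrderId = Branded::new(7);
    // lookup_user(_oid); // would not compile: expected UserId, found OrderId
    assert_eq!(lookup_user(uid), 7);
    // Zero cost: PhantomData adds no bytes to the representation.
    assert_eq!(std::mem::size_of::<UserId>(), std::mem::size_of::<i64>());
    println!("ok");
}
```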

HasSchema — Attaching Schemas to Types

When a type always has the same schema, implement HasSchema:

#![allow(unused)]
fn main() {
use id_effect::schema::HasSchema;

impl HasSchema for User {
    fn schema() -> Schema<Self> {
        struct_!(User {
            id:    user_id_schema(),
            email: email_schema(),
            name:  non_empty_string_schema(),
        })
    }
}

// Now parse using the impl
let user: User = User::schema().run(raw)?;
}

HasSchema types work with generic tooling (exporters, documentation generators, UI scaffolding) that needs to know a type's schema without being parameterised over it.

Summary

Tool              When to use
refine / filter   Predicate on a successfully-parsed value
try_map           Fallible conversion after parse
Brand             Newtypes that prevent mixing domain values
HasSchema         Attach the canonical schema to a type

ParseErrors — Structured Error Accumulation

When a user submits a form with five invalid fields, they deserve to know about all five — not just the first one you found. ParseErrors is id_effect's solution: errors accumulate across an entire parse, and you report them all at once.

ParseError vs ParseErrors

#![allow(unused)]
fn main() {
use id_effect::schema::{ParseError, ParseErrors};

// One error
let e: ParseError = ParseError::custom("age must be positive");

// Many errors
let es: ParseErrors = ParseErrors::single(e);
}

ParseError is a single failure. ParseErrors is a non-empty collection of failures with path information.

What a ParseError Contains

#![allow(unused)]
fn main() {
// A parse error has:
// - a message
// - a path (where in the data structure it occurred)
// - optionally, the value that failed

let err = ParseError::builder()
    .message("expected integer, got string")
    .path(["users", "0", "age"])
    .received(Unknown::string("thirty"))
    .build();

println!("{err}");
// → users[0].age: expected integer, got string (received: "thirty")
}

Path Tracking

Paths are built automatically as schemas descend into nested structures. You don't need to set them manually:

#![allow(unused)]
fn main() {
let raw = Unknown::from_json_str(r#"
  {
    "users": [
      { "name": "Alice", "age": 30 },
      { "name": "Bob",   "age": "thirty" }
    ]
  }
"#)?;

let result = parse(users_schema, raw);
// Err(ParseErrors {
//   errors: [
//     ParseError { path: "users[1].age", message: "expected integer" }
//   ]
// })
}

The struct_! macro and array combinator push path segments automatically. Custom schemas using .try_map or .filter inherit the current path.

Accumulation

The key property of ParseErrors is accumulation. When parsing a struct with multiple fields, failures from different fields are collected, not short-circuited:

#![allow(unused)]
fn main() {
let raw = Unknown::from_json_str(r#"
  { "name": "", "age": -5, "email": "not-an-email" }
"#)?;

let result: Result<User, ParseErrors> = parse(user_schema, raw);
// Err(ParseErrors {
//   errors: [
//     ParseError { path: "name",  message: "must not be empty" },
//     ParseError { path: "age",   message: "age must be between 0 and 150" },
//     ParseError { path: "email", message: "invalid email" },
//   ]
// })
}

All three errors reported in one call. No round-trips.
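Accumulation itself is a simple pattern: check every field, push failures into a collection, and fail only at the end. A std-only sketch of the same behaviour:

```rust
// Validate all fields and report every failure, not just the first.
fn validate(name: &str, age: i64, email: &str) -> Result<(), Vec<String>> {
    let mut errors = Vec::new();
    if name.is_empty() {
        errors.push("name: must not be empty".to_string());
    }
    if !(0..=150).contains(&age) {
        errors.push("age: must be between 0 and 150".to_string());
    }
    if !email.contains('@') {
        errors.push("email: invalid email".to_string());
    }
    if errors.is_empty() { Ok(()) } else { Err(errors) }
}

fn main() {
    // All three problems surface in a single pass.
    let errs = validate("", -5, "not-an-email").unwrap_err();
    assert_eq!(errs.len(), 3);
    assert!(validate("Alice", 30, "alice@example.com").is_ok());
    println!("{errs:?}");
}
```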

Using ParseErrors at API Boundaries

Convert ParseErrors to your API's error type:

#![allow(unused)]
fn main() {
#[derive(Debug)]
enum ApiError {
    Validation(Vec<FieldError>),
    Internal(String),
}

#[derive(Debug)]
struct FieldError {
    field:   String,
    message: String,
}

fn to_api_errors(errs: ParseErrors) -> ApiError {
    ApiError::Validation(
        errs.into_iter()
            .map(|e| FieldError {
                field:   e.path().to_string(),
                message: e.message().to_string(),
            })
            .collect()
    )
}
}

ParseErrors in Effects

parse returns a plain Result. To lift into an Effect:

#![allow(unused)]
fn main() {
effect! {
    let raw = Unknown::from_json_bytes(&body)
        .map_err(ApiError::InvalidJson)?;

    let req = parse(create_user_schema(), raw)
        .map_err(to_api_errors)?;

    ~ create_user(req)
}
}

The ? operator on a Result<T, ParseErrors> inside effect! maps the error into E via From. Define impl From<ParseErrors> for YourError to make this ergonomic.
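The From-based conversion that ? relies on is plain std Rust. A minimal sketch with hypothetical ParseErrors and AppError types (not the library's own):

```rust
#[derive(Debug, PartialEq)]
struct ParseErrors(Vec<String>);

#[derive(Debug, PartialEq)]
enum AppError {
    Validation(Vec<String>),
}

// This impl is what makes `?` convert ParseErrors into AppError.
impl From<ParseErrors> for AppError {
    fn from(e: ParseErrors) -> Self {
        AppError::Validation(e.0)
    }
}

fn parse_age(s: &str) -> Result<i64, ParseErrors> {
    s.parse()
        .map_err(|_| ParseErrors(vec![format!("not a number: {s}")]))
}

fn handler(input: &str) -> Result<i64, AppError> {
    let age = parse_age(input)?; // ParseErrors → AppError via From
    Ok(age)
}

fn main() {
    assert_eq!(handler("30"), Ok(30));
    assert!(matches!(handler("x"), Err(AppError::Validation(_))));
    println!("ok");
}
```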

Displaying ParseErrors

ParseErrors implements Display with a human-readable multiline format:

Validation failed (3 errors):
  name: must not be empty
  age: age must be between 0 and 150
  email: invalid email

And Debug for the raw structure when inspecting in tests.

Summary

  • ParseError = one failure with a message and a path
  • ParseErrors = all failures from a complete parse attempt
  • Paths are tracked automatically by schema combinators
  • Accumulation means the user sees all problems at once
  • Convert to your API error type at the boundary; keep the path information

Testing — Effects Are Easy to Test

Testing async code often means standing up infrastructure, dealing with timing, and writing wide mocks — then chasing occasional flakes in CI.

Effect programs can be tested differently. Because an Effect is a description of what to do — not the doing itself — you control everything about how it runs. Swap in a test clock. Provide fake services through the Layer system. Detect fiber leaks automatically. Run in microseconds instead of seconds.

This chapter covers the testing tools id_effect provides.

What Makes Effects Testable

Three properties make effect programs easy to test:

1. Services are injected, not ambient. Your code doesn't call DatabaseClient::global(). It declares R: NeedsDb and gets its database from the environment. In tests, you provide a different environment — one with a fake database.

2. Time is injectable. Code that uses Clock instead of std::time::SystemTime::now() can be tested with TestClock, which advances only when you tell it to.

3. Effects don't run until you run them. An Effect is inert. You can inspect, compose, and modify it before running. run_test runs it in a harness that adds leak detection and deterministic scheduling.
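Property 2 — injectable time — is worth seeing concretely. This std-only sketch shows the shape of the idea: code depends on a Clock trait, and the test advances time manually instead of sleeping (ManualClock is a hand-rolled stand-in, not id_effect's TestClock):

```rust
use std::cell::Cell;
use std::time::Duration;

// Code under test depends on an abstract clock, never on real wall time.
trait Clock {
    fn now(&self) -> Duration; // elapsed time since some fixed epoch
}

// Test double: time moves only when the test says so.
struct ManualClock {
    now: Cell<Duration>,
}

impl ManualClock {
    fn new() -> Self {
        Self { now: Cell::new(Duration::ZERO) }
    }
    fn advance(&self, by: Duration) {
        self.now.set(self.now.get() + by);
    }
}

impl Clock for ManualClock {
    fn now(&self) -> Duration {
        self.now.get()
    }
}

fn is_expired(clock: &dyn Clock, deadline: Duration) -> bool {
    clock.now() >= deadline
}

fn main() {
    let clock = ManualClock::new();
    let deadline = Duration::from_secs(60);
    assert!(!is_expired(&clock, deadline));
    clock.advance(Duration::from_secs(61)); // instant — no real waiting
    assert!(is_expired(&clock, deadline));
    println!("ok");
}
```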

What This Chapter Covers

  • run_test — the test harness that replaces run_blocking in tests (next section)
  • TestClock — deterministic time control in tests (ch15-02)
  • Mocking services — injecting test doubles via layers (ch15-03)
  • Property testing — generating inputs and checking invariants (ch15-04)

run_test — The Test Harness

run_test is the test equivalent of run_blocking. Use it in every #[test] that runs an effect.

Basic Usage

#![allow(unused)]
fn main() {
use id_effect::{run_test, succeed};

#[test]
fn simple_effect_succeeds() {
    let result = run_test(succeed(42));
    assert_eq!(result, Exit::Success(42));
}
}

run_test returns an Exit<A, E> rather than a Result<A, E>. This lets you assert on the exact exit reason — success, typed failure, defect, or cancellation.

Why Not run_blocking in Tests?

run_blocking is correct but missing test-specific guarantees:

Feature                     run_blocking   run_test
Runs the effect             ✓              ✓
Detects fiber leaks         ✗              ✓
Deterministic scheduling    ✗              ✓
Reports leaked resources    ✗              ✓

Fiber leaks — effects that spawn children and don't join them — are silent in production but become test failures under run_test. This catches a class of resource leak bugs at unit-test time.

Asserting on Exit

#![allow(unused)]
fn main() {
#[test]
fn division_by_zero_fails() {
    let eff = divide(10, 0);
    let exit = run_test(eff);

    // Assert specific failure
    assert!(matches!(exit, Exit::Failure(Cause::Fail(DivError::DivisionByZero))));
}

#[test]
fn effect_that_panics_is_a_defect() {
    let eff = effect!(|_r: &mut ()| {
        panic!("oops");
    });
    let exit = run_test(eff);

    assert!(matches!(exit, Exit::Failure(Cause::Die(_))));
}
}

  • Exit::Success(a) — the effect succeeded with value a
  • Exit::Failure(Cause::Fail(e)) — the effect failed with typed error e
  • Exit::Failure(Cause::Die(s)) — the effect panicked or encountered a defect
  • Exit::Failure(Cause::Interrupt) — the effect was cancelled

run_test with an Environment

When your effect needs services, provide a test environment:

#![allow(unused)]
fn main() {
#[test]
fn create_user_inserts_into_db() {
    let fake_db = FakeDatabase::new();
    let env = ctx!(DbKey => Arc::new(fake_db.clone()));

    let eff = create_user(NewUser { name: "Alice".into(), age: 30 });
    let exit = run_test_with_env(eff, env);

    assert!(matches!(exit, Exit::Success(_)));
    assert_eq!(fake_db.users().len(), 1);
}
}

run_test_with_env(effect, env) is the full version. run_test(effect) is shorthand for run_test_with_env(effect, ()) when the effect requires no environment.

run_test_and_unwrap

When you're confident an effect succeeds and just want the value:

#![allow(unused)]
fn main() {
#[test]
fn addition_works() {
    let result: i32 = run_test_and_unwrap(succeed(1 + 1));
    assert_eq!(result, 2);
}
}

run_test_and_unwrap panics on any non-success Exit, with a descriptive message. Use it for happy-path tests where a failure is a bug in the test setup.

Fiber Leak Detection

#![allow(unused)]
fn main() {
#[test]
fn this_test_will_fail_due_to_leak() {
    let eff = effect!(|_r: &mut ()| {
        // Spawns a fiber but never joins it
        run_fork(/* … */);
        ()
    });

    // run_test detects the leaked fiber and fails the test
    let exit = run_test(eff);
    // exit: Exit::Failure(Cause::Die("fiber leak detected: 1 fiber(s) not joined"))
}
}

Fix leaks by joining fibers or explicitly cancelling them before the effect completes.

TestClock — Deterministic Time in Tests

TestClock was introduced in Clock Injection from a scheduling perspective. This section focuses on how to use it in tests — specifically with run_test_with_clock and multi-step scenarios.

The Problem with Real Time in Tests

#![allow(unused)]
fn main() {
// This test takes 7 seconds to run
#[test]
fn retry_exhaustion_slow() {
    let eff = failing_call()
        .retry(Schedule::exponential(Duration::from_secs(1)).take(3));
    let exit = run_blocking(eff, ());
    assert!(matches!(exit, Exit::Failure(_)));
}
}

Multiply this by dozens of tests and your suite is unusable. TestClock makes it instant.

run_test_with_clock

#![allow(unused)]
fn main() {
use id_effect::{run_test_with_clock, TestClock};

#[test]
fn retry_exhaustion_fast() {
    let result = run_test_with_clock(|clock| {
        let eff = failing_call()
            .retry(Schedule::exponential(Duration::from_secs(1)).take(3));

        // Fork the effect
        let handle = eff.fork();

        // Drive time forward to trigger each retry
        clock.advance(Duration::from_secs(1));
        clock.advance(Duration::from_secs(2));
        clock.advance(Duration::from_secs(4));

        // Collect the result
        handle.join()
    });

    assert!(matches!(result, Exit::Failure(_)));
}
}

run_test_with_clock creates a TestClock, injects it into the effect environment, and calls your closure with a handle to the clock. You advance time; the runtime processes sleep effects that become due.

TestClock API

#![allow(unused)]
fn main() {
let clock = TestClock::new();

// Read the current (fake) time — starts at Unix epoch
let now: UtcDateTime = clock.now();

// Advance by a duration
clock.advance(Duration::from_millis(500));

// Jump to an absolute time
clock.set_time(UtcDateTime::from_unix_secs(1_700_000_000));

// How many sleeps are currently waiting?
let pending: usize = clock.pending_sleeps();
}

pending_sleeps() is useful in tests to assert that an effect is blocked on a timer rather than having silently completed or failed.

Testing Scheduled Work

#![allow(unused)]
fn main() {
#[test]
fn cron_job_runs_every_minute() {
    let exit = run_test_with_clock(|clock| {
        let counter = Arc::new(AtomicU32::new(0));
        let c = counter.clone();

        let job = effect!(|_r: &mut ()| {
            c.fetch_add(1, Ordering::Relaxed);
        })
        .repeat(Schedule::fixed(Duration::from_secs(60)));

        let _handle = job.fork();

        // Advance through 3 minutes
        clock.advance(Duration::from_secs(60));
        clock.advance(Duration::from_secs(60));
        clock.advance(Duration::from_secs(60));

        succeed(counter.load(Ordering::Relaxed))
    });

    // Verify 3 executions happened
    assert!(matches!(exit, Exit::Success(3)));
}
}

Time and Race Conditions

TestClock is deterministic — time moves only when you call advance. This means tests that use TestClock have no time-based race conditions: the scheduler runs wake-up callbacks synchronously when you advance.

If your effect spawns multiple fibers that all sleep, advancing time wakes all fibers whose sleep deadline has passed, in a consistent order.

Combining TestClock with Fake Services

#![allow(unused)]
fn main() {
#[test]
fn rate_limiter_enforces_window() {
    let fake_store = InMemoryRateLimitStore::new();
    let env = ctx!(RateLimitStoreKey => Arc::new(fake_store));

    run_test_with_clock_and_env(env, |clock| {
        let eff = effect!(|_r: &mut Deps| {
            // Should succeed (first request in window)
            ~ check_rate_limit("alice");

            // Exhaust the limit
            for _ in 0..9 {
                ~ check_rate_limit("alice");
            }

            // Advance past the window
            // (clock advance happens outside the effect, so we fork here)
        });

        let handle = eff.fork();
        clock.advance(Duration::from_secs(61));
        handle.join()
    });
}
}

run_test_with_clock_and_env combines both: a controlled clock and a custom service environment.

Mocking Services — Test Doubles via Layers

In id_effect, "mocking" isn't a special testing concept — it's just providing a different Layer. Production code gets a PostgresDb layer. Test code gets an InMemoryDb layer. Business logic never knows the difference.

No mock frameworks. No #[automock]. No vi.mock() equivalent. Just layers.

The Pattern

Define a service trait (or use a service key with a trait object):

#![allow(unused)]
fn main() {
service_key!(DbKey: Arc<dyn Db>);

trait Db: Send + Sync {
    fn get_user(&self, id: UserId) -> Effect<User, DbError, ()>;
    fn save_user(&self, user: User) -> Effect<(), DbError, ()>;
}
}

Provide two implementations — one for production, one for tests:

#![allow(unused)]
fn main() {
// Production
struct PostgresDb { pool: PgPool }
impl Db for PostgresDb { /* real SQL queries */ }

// Test double
struct InMemoryDb { users: Mutex<HashMap<UserId, User>> }
impl Db for InMemoryDb {
    fn get_user(&self, id: UserId) -> Effect<User, DbError, ()> {
        let users = self.users.lock().unwrap();
        match users.get(&id) {
            Some(u) => succeed(u.clone()),
            None    => fail(DbError::NotFound(id)),
        }
    }

    fn save_user(&self, user: User) -> Effect<(), DbError, ()> {
        self.users.lock().unwrap().insert(user.id, user);
        succeed(())
    }
}
}

Injecting the Test Double

#![allow(unused)]
fn main() {
#[test]
fn get_user_returns_saved_user() {
    let db = Arc::new(InMemoryDb::new());
    let env = ctx!(DbKey => db.clone() as Arc<dyn Db>);

    let eff = effect!(|r: &mut Deps| {
        let db = ~ DbKey;
        ~ db.save_user(User { id: UserId::new(1), name: "Alice".into() });
        ~ db.get_user(UserId::new(1))
    });

    let exit = run_test_with_env(eff, env);
    let user = exit.unwrap_success();
    assert_eq!(user.name, "Alice");
}
}

The business logic (save_user then get_user) is identical to production. Only the environment differs.

Asserting on Calls

When you need to verify that a service was called with specific arguments, add tracking to the test double:

#![allow(unused)]
fn main() {
struct SpyMailer {
    sent: Mutex<Vec<Email>>,
}

impl Mailer for SpyMailer {
    fn send(&self, email: Email) -> Effect<(), MailError, ()> {
        self.sent.lock().unwrap().push(email.clone());
        succeed(())
    }
}

#[test]
fn registration_sends_welcome_email() {
    let spy = Arc::new(SpyMailer::new());
    let env = ctx!(MailerKey => spy.clone() as Arc<dyn Mailer>);

    let exit = run_test_with_env(register_user("alice@example.com"), env);
    assert!(matches!(exit, Exit::Success(_)));

    let sent = spy.sent.lock().unwrap();
    assert_eq!(sent.len(), 1);
    assert_eq!(sent[0].to, "alice@example.com");
}
}

Failing Services

Test that your code handles service failures correctly by providing a failing test double:

#![allow(unused)]
fn main() {
struct FailingDb;
impl Db for FailingDb {
    fn get_user(&self, _id: UserId) -> Effect<User, DbError, ()> {
        fail(DbError::ConnectionLost)
    }
    fn save_user(&self, _user: User) -> Effect<(), DbError, ()> {
        fail(DbError::ConnectionLost)
    }
}

#[test]
fn get_user_propagates_db_errors() {
    let env = ctx!(DbKey => Arc::new(FailingDb) as Arc<dyn Db>);
    let exit = run_test_with_env(get_user(UserId::new(1)), env);
    assert!(matches!(exit, Exit::Failure(Cause::Fail(DbError::ConnectionLost))));
}
}

Layer-Based Test Setup

For more complex scenarios, build a test layer:

#![allow(unused)]
fn main() {
fn test_layer() -> Layer<Deps, (), ()> {
    Layer::provide(DbKey, Arc::new(InMemoryDb::new()) as Arc<dyn Db>)
        .stack(Layer::provide(MailerKey, Arc::new(SpyMailer::new()) as Arc<dyn Mailer>))
        .stack(Layer::provide(ClockKey, Arc::new(TestClock::new()) as Arc<dyn Clock>))
}

#[test]
fn full_registration_flow() {
    let env = test_layer().build().unwrap();
    let exit = run_test_with_env(full_registration_flow(), env);
    assert!(matches!(exit, Exit::Success(_)));
}
}

The test layer mirrors your production layer in structure but with test implementations. Add new services in one place and all tests pick them up.

What You Don't Need

  • No mockall, no mock! macros
  • No #[cfg(test)] on business logic
  • No Box<dyn Fn(…)> callback injection patterns
  • No global state reset between tests

The Layer system is the mock framework.

Property Testing — Invariants over Inputs

Unit tests check specific cases. Property tests check invariants: statements that must be true for any valid input. Effect programs are excellent targets for property testing because their inputs and outputs are well-typed, their schemas define exactly what's valid, and the layer system makes it easy to run thousands of executions cheaply.

Setup

id_effect works with both proptest and quickcheck. The examples below use proptest.

[dev-dependencies]
proptest = "1"

Testing Pure Effects

#![allow(unused)]
fn main() {
use proptest::prelude::*;

proptest! {
    #[test]
    fn addition_is_commutative(a: i64, b: i64) {
        let eff_ab = add(a, b);
        let eff_ba = add(b, a);

        let r_ab = run_test_and_unwrap(eff_ab);
        let r_ba = run_test_and_unwrap(eff_ba);

        prop_assert_eq!(r_ab, r_ba);
    }
}
}

proptest! generates hundreds of (a, b) pairs. Each iteration calls run_test_and_unwrap, which is cheap for pure effects.

Testing Schema Round-Trips

Schemas have a round-trip property: if you serialise a valid value and re-parse it, you get the same value back.

#![allow(unused)]
fn main() {
proptest! {
    #[test]
    fn user_schema_round_trips(
        name in "[a-zA-Z]{1,50}",
        age in 0i64..=120,
    ) {
        let original = User {
            name: name.clone(),
            age,
        };

        // Serialise to Unknown
        let raw = User::schema().encode(&original);

        // Re-parse
        let parsed = User::schema().run(raw);

        prop_assert!(parsed.is_ok());
        prop_assert_eq!(parsed.unwrap(), original);
    }
}
}

Round-trip tests catch asymmetries between your serialiser and parser that unit tests often miss.
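If you want the flavour of a round-trip property without pulling in proptest, a stdlib-only loop with a tiny deterministic generator has the same shape. The encode/parse pair below is hypothetical (not id_effect's schema API), and the generator is a simple linear congruential step rather than a real Strategy.

```rust
// Hypothetical encode/parse pair for a (name, age) record. The round-trip
// property: parse(encode(v)) == v for every valid v.
fn encode(name: &str, age: i64) -> String {
    format!("{}\t{}", name, age) // tab-separated; generated names have no tabs
}

fn parse(raw: &str) -> Option<(String, i64)> {
    let (name, age) = raw.split_once('\t')?;
    Some((name.to_string(), age.parse().ok()?))
}

// Check the property over many pseudo-random inputs (LCG, no crates).
fn round_trips_for_all(iterations: u64) -> bool {
    let mut seed: u64 = 0x9E37_79B9_7F4A_7C15;
    for _ in 0..iterations {
        seed = seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let age = (seed % 121) as i64; // 0..=120
        let len = 1 + (seed >> 8) % 10; // 1..=10 lowercase letters
        let name: String = (0..len)
            .map(|i| (b'a' + ((seed >> i) % 26) as u8) as char)
            .collect();
        if parse(&encode(&name, age)) != Some((name, age)) {
            return false; // property violated: serialiser and parser disagree
        }
    }
    true
}
```

proptest does the same loop for you, plus input strategies and shrinking.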

Testing Error Invariants

Property tests are excellent for verifying that your error handling is consistent:

#![allow(unused)]
fn main() {
proptest! {
    #[test]
    fn withdraw_never_goes_negative(
        balance in 0u64..=1_000_000,
        amount  in 0u64..=1_000_000,
    ) {
        let account = TRef::new(balance);
        let exit = run_test(commit(withdraw(&account, amount)));

        if amount <= balance {
            // Should succeed and balance should be reduced
            assert!(matches!(exit, Exit::Success(_)));
            let new_balance = atomically(account.read_stm());
            assert_eq!(new_balance, balance - amount);
        } else {
            // Should fail — balance must not go negative
            assert!(matches!(exit, Exit::Failure(Cause::Fail(InsufficientFunds))));
            let new_balance = atomically(account.read_stm());
            assert_eq!(new_balance, balance);  // unchanged
        }
    }
}
}

Generating Arbitrary Service Environments

For integration-style property tests, generate random state in the fake service:

#![allow(unused)]
fn main() {
proptest! {
    #[test]
    fn get_user_returns_what_was_saved(user in arbitrary_user()) {
        let db = Arc::new(InMemoryDb::new());
        let env = ctx!(DbKey => db.clone() as Arc<dyn Db>);

        // Save
        run_test_with_env(
            save_user(user.clone()),
            env.clone(),
        );

        // Retrieve
        let exit = run_test_with_env(get_user(user.id), env);
        let retrieved = exit.unwrap_success();

        prop_assert_eq!(retrieved, user);
    }
}
}

Define arbitrary_user() as a proptest Strategy:

#![allow(unused)]
fn main() {
fn arbitrary_user() -> impl Strategy<Value = User> {
    (
        "[a-zA-Z ]{1,50}",
        0i64..=120,
        any::<u64>().prop_map(UserId::new),
    ).prop_map(|(name, age, id)| User { id, name, age })
}
}

Schema-Driven Generation

When a type has HasSchema, you can derive a generator that always produces valid inputs:

#![allow(unused)]
fn main() {
// generate_valid::<User>() produces Users that would pass User::schema()
let strategy = generate_valid::<User>();

proptest! {
    #[test]
    fn valid_users_are_always_accepted(user in generate_valid::<User>()) {
        let raw = User::schema().encode(&user);
        prop_assert!(User::schema().run(raw).is_ok());
    }
}
}

This ensures the generator and schema stay in sync: if you tighten a refine constraint, generate_valid starts producing inputs that satisfy the new constraint.

Shrinking

proptest automatically shrinks failing inputs to the smallest example that still fails. Since run_test is fast (no I/O, no real timers), shrinking runs quickly even with hundreds of iterations.

When a property fails, you'll see the minimal failing case:

Test failed. Minimal failing input:
  name = ""
  age = -1
Reason: must not be empty (path: name)

This is far more actionable than a raw failure trace from a specific hand-chosen test case.
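The shrinking idea itself is simple enough to sketch in plain Rust: starting from a known failing input, repeatedly try smaller candidates that still fail and keep the smallest. This toy integer shrinker is not proptest's algorithm, just the core loop.

```rust
// Toy shrinker: given a property (true = passes) and a failing input,
// walk toward zero and return the smallest value that still fails.
fn shrink_int(property: impl Fn(i64) -> bool, mut failing: i64) -> i64 {
    loop {
        // Candidate smaller inputs: zero, halving, and one step toward zero.
        let candidates = [0, failing / 2, failing - failing.signum()];
        match candidates.iter().find(|&&c| c != failing && !property(c)) {
            Some(&smaller) => failing = smaller, // still fails; keep shrinking
            None => return failing,             // minimal: all smaller inputs pass
        }
    }
}
```

For the property "x < 100" with an initial failing input of 1000, the loop halves down to 125, then steps to 100: the boundary where the property first breaks.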

API Quick Reference

A condensed reference for the most commonly used types and functions in id_effect. For full documentation, use cargo doc --open -p id_effect.

Core Types

Effect<A, E, R> — A computation that produces A, can fail with E, and requires environment R
Stream<A, E, R> — A sequence of A values that can fail with E and requires environment R
Stm<A> — A transactional computation that produces A
Exit<A, E> — The result of running an effect: Success(A) or Failure(Cause<E>)
Cause<E> — Fail(E), Die(Box<dyn Any>), or Interrupt
Context<R> — A heterogeneous map of services; the R at runtime
Layer<Out, In, E> — A recipe for building Out from In; can fail with E
Chunk<A> — A contiguous, reference-counted batch of A values
Unknown — Unvalidated wire data; input type for schemas
ParseErrors — Accumulated parse failures with paths

Constructors

succeed(a) : Effect<A, E, R> — Always succeeds with a
fail(e) : Effect<A, E, R> — Always fails with typed error e
pure(a) : Effect<A, Never, ()> — Alias for succeed; E = Never
from_async(f) : Effect<A, E, R> — Lift an async closure
effect!(…) : Effect<A, E, R> — Do-notation macro
commit(stm) : Effect<A, Never, ()> — Run an STM transaction
Stream::from_iter(i) : Stream<A, Never, ()> — Stream from an iterator
Stream::from_effect(e) : Stream<A, E, R> — Single-element stream
Stream::unfold_effect(s, f) : Stream<A, E, R> — Generate stream from state

Effect Combinators

.map(f) — Transform success value
.flat_map(f) — Chain effects
.map_err(f) — Transform error
.catch(f) — Handle typed failure
.catch_all(f) — Handle any Cause
.fold(on_e, on_a) — Both paths to success
.or_else(f) — Try alternative on failure
.ignore_error() — Convert failure to Option
.zip(other) — Run two effects, tuple result
.zip_left(other) — Run two effects, keep left
.zip_right(other) — Run two effects, keep right
.retry(schedule) — Retry on failure
.repeat(schedule) — Repeat on success
.timeout(dur) — Fail with Timeout if too slow

Concurrency

run_fork(rt, f) — Spawn a fiber
handle.join() — Effect that waits for the fiber
handle.interrupt() — Cancel a fiber
FiberRef::new(initial) — Fiber-scoped dynamic variable
fiber_ref.get() — Read current fiber's value
fiber_ref.set(v) — Set current fiber's value
with_fiber_id(id, f) — Run f with a specific fiber id

STM

TRef::new(v) — Create a transactional cell
tref.read_stm() — Read inside stm!
tref.write_stm(v) — Write inside stm!
tref.modify_stm(f) — Modify inside stm!
commit(stm) — Lift Stm<A> into Effect<A, Never, ()>
atomically(stm) — Execute Stm synchronously
stm::retry() — Block until any read TRef changes
stm::fail(e) — Abort transaction with error
TQueue::bounded(n) — Transactional FIFO queue
TMap::new() — Transactional hash map
TSemaphore::new(n) — Transactional semaphore

Resources

scope.acquire(res, f) — Use a resource, run finalizer on exit
acquire_release(acq, rel) — Bracket-style resource management
Pool::new(size, factory) — Reusable resource pool
pool.get() — Effect that borrows one resource
Cache::new(loader) — Cache backed by an effect

Scheduling

Schedule::fixed(d) — Repeat every d
Schedule::exponential(base) — Exponential backoff
Schedule::linear(step) — Linear backoff
Schedule::immediate() — No delay
.take(n) — At most n repetitions
.until(pred) — Stop when predicate holds
eff.retry(sched) — Retry with a schedule
eff.repeat(sched) — Repeat with a schedule

Running Effects

run_blocking(eff, env) — Synchronous runner (main/binaries)
run_async(eff, env) — Async runner (tokio integration)
run_test(eff) — Test harness; detects leaks
run_test_and_unwrap(eff) — Test harness; panics on failure
run_test_with_env(eff, env) — Test with custom environment
run_test_with_clock(f) — Test with controlled TestClock

Schema

string() — Schema<String>
i64() — Schema<i64>
f64() — Schema<f64>
boolean() — Schema<bool>
optional(s) — Schema<Option<T>>
array(s) — Schema<Vec<T>>
struct_!(Type { … }) — Struct schema via macro
refine(s, pred, msg) — Add a predicate constraint
parse(schema, unknown) — Run schema; returns Result<T, ParseErrors>
Unknown::from_json_str(s) — Parse JSON into Unknown
Unknown::from_serde_json(v) — Convert serde_json::Value

Macros

effect!(…) — Do-notation for effects; use ~expr to bind
ctx!(Key => value, …) — Build a Context from key-value pairs
service_key!(Name: Type) — Declare a service key
pipe!(v, f, g, …) — Pipeline for pure values

Migrating from async fn to effects

This appendix is a practical guide for converting existing async Rust code to id_effect. It covers common patterns and their id_effect equivalents, with migration steps for each.

The Mental Model Shift

In typical async Rust, a function returns a Future; when that future is awaited, the work runs:

#![allow(unused)]
fn main() {
async fn get_user(id: u64, db: &DbClient) -> Result<User, DbError> {
    db.query_one("SELECT * FROM users WHERE id = $1", &[&id]).await
}
}

In id_effect, many domain functions return an Effect—a description you run later with an environment:

#![allow(unused)]
fn main() {
fn get_user<A, E, R>(id: u64) -> Effect<A, E, R>
where
    A: From<User> + 'static,
    E: From<DbError> + 'static,
    R: NeedsDb + 'static,
{
    effect!(|r: &mut R| {
        let db = ~ DbKey;
        let user = ~ db.get_user(id);
        A::from(user)
    })
}
}

The database client is no longer a function parameter. It's declared in R and retrieved by the runtime. The business logic is identical; what changes is how dependencies are supplied.
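The description-versus-execution split can be seen in miniature without any library. This is a hypothetical ten-line sketch, not id_effect's Effect: a value that wraps a closure and runs nothing until you hand it an environment.

```rust
// A description of work: nothing runs until `run` is called with an env.
struct MiniEffect<A, R> {
    program: Box<dyn FnOnce(&R) -> A>,
}

impl<A, R> MiniEffect<A, R> {
    fn new(program: impl FnOnce(&R) -> A + 'static) -> Self {
        MiniEffect { program: Box::new(program) }
    }

    // Choosing the environment at run time is the dependency-injection move.
    fn run(self, env: &R) -> A {
        (self.program)(env)
    }
}
```

Constructing a MiniEffect touches no dependencies; only run does. Everything else in this appendix is that idea with errors, typed requirements, and concurrency layered on top.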

Pattern 1: async fn → fn returning Effect

Before

#![allow(unused)]
fn main() {
pub async fn process_order(
    order_id: OrderId,
    db: &DbClient,
    mailer: &MailClient,
) -> Result<Receipt, AppError> {
    let order = db.get_order(order_id).await?;
    let receipt = db.complete_order(order).await?;
    mailer.send_receipt(&receipt).await?;
    Ok(receipt)
}
}

After

#![allow(unused)]
fn main() {
pub fn process_order<A, E, R>(order_id: OrderId) -> Effect<A, E, R>
where
    A: From<Receipt> + 'static,
    E: From<AppError> + 'static,
    R: NeedsDb + NeedsMailer + 'static,
{
    effect!(|r: &mut R| {
        let db     = ~ DbKey;
        let mailer = ~ MailerKey;
        let order   = ~ db.get_order(order_id);
        let receipt = ~ db.complete_order(order);
        ~ mailer.send_receipt(&receipt);
        A::from(receipt)
    })
}
}

Migration steps:

  1. Remove the dependency parameters (db, mailer)
  2. Add <A, E, R> generic parameters
  3. Add where bounds for each removed dependency
  4. Replace async move { … } with effect!(|r: &mut R| { … })
  5. Replace .await? with ~ prefix
  6. Wrap the return value with A::from(…)

Pattern 2: Wrapping Third-Party Async

Third-party libraries return Futures, not Effects. Use from_async to wrap them:

Before

#![allow(unused)]
fn main() {
async fn fetch_price(symbol: &str) -> Result<f64, reqwest::Error> {
    reqwest::get(format!("https://api.example.com/price/{symbol}"))
        .await?
        .json::<PriceResponse>()
        .await
        .map(|r| r.price)
}
}

After

#![allow(unused)]
fn main() {
fn fetch_price<A, E, R>(symbol: String) -> Effect<A, E, R>
where
    A: From<f64> + 'static,
    E: From<reqwest::Error> + 'static,
    R: 'static,
{
    from_async(move |_r| async move {
        let price = reqwest::get(format!("https://api.example.com/price/{symbol}"))
            .await?
            .json::<PriceResponse>()
            .await
            .map(|r| r.price)?;
        Ok(A::from(price))
    })
}
}

The from_async closure still uses .await internally. Only the outermost function signature changes.

Pattern 3: Error Types

Before — single monolithic error enum

#![allow(unused)]
fn main() {
#[derive(Debug)]
enum AppError {
    DbError(DbError),
    MailError(MailError),
    NotFound(String),
}
}

After — effects propagate errors through From bounds

#![allow(unused)]
fn main() {
// Keep domain errors as-is
#[derive(Debug)] struct NotFoundError(String);

// Effect signatures declare what they can fail with:
fn get_user<A, E, R>(id: u64) -> Effect<A, E, R>
where
    E: From<DbError> + From<NotFoundError> + 'static, // …
}

You still need an AppError at the top level (in main or your HTTP handler), but individual functions no longer need to know about unrelated error variants.
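The mechanics here are ordinary Rust: each leaf widens its local error into whatever E the caller chose, the same way ? widens errors through From. A stdlib-only sketch with hypothetical error types:

```rust
// Leaf errors stay small and local.
#[derive(Debug, PartialEq)]
struct DbError(String);
#[derive(Debug, PartialEq)]
struct NotFoundError(String);

// Only the top level composes them into one enum.
#[derive(Debug, PartialEq)]
enum AppError {
    Db(DbError),
    NotFound(NotFoundError),
}

impl From<DbError> for AppError {
    fn from(e: DbError) -> Self { AppError::Db(e) }
}
impl From<NotFoundError> for AppError {
    fn from(e: NotFoundError) -> Self { AppError::NotFound(e) }
}

// A leaf function is generic over any E that can absorb its local error,
// so it never mentions AppError or unrelated variants.
fn lookup<E: From<NotFoundError>>(id: u64) -> Result<u64, E> {
    if id == 0 {
        Err(NotFoundError("user 0".into()).into())
    } else {
        Ok(id)
    }
}
```

The E: From<…> bounds on effect signatures play exactly the role the From impls play here: the caller picks the concrete error type once, at the top.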

Pattern 4: Shared State

Before — Arc<Mutex<T>> passed through function calls

#![allow(unused)]
fn main() {
async fn handler(state: Arc<Mutex<AppState>>) -> Response {
    let mut s = state.lock().unwrap();
    s.request_count += 1;
    // …
}
}

After — shared state in a service, accessed via R

#![allow(unused)]
fn main() {
service_key!(AppStateKey: Arc<Mutex<AppState>>);

fn handler<A, E, R>() -> Effect<A, E, R>
where
    R: NeedsAppState + 'static,
    // …
{
    effect!(|r: &mut R| {
        let state = ~ AppStateKey;
        let mut s = state.lock().unwrap();
        s.request_count += 1;
        // …
    })
}
}

Or, for mutable state that needs transactional semantics across fibers, use TRef:

#![allow(unused)]
fn main() {
// Replace Arc<Mutex<Counter>> with TRef<u64>
service_key!(CounterKey: TRef<u64>);

fn increment_counter<E, R>() -> Effect<u64, E, R>
where
    R: NeedsCounter + 'static,
    E: 'static,
{
    effect!(|r: &mut R| {
        let counter = ~ CounterKey;
        ~ commit(counter.modify_stm(|n| n + 1));
        ~ commit(counter.read_stm())
    })
}
}

Pattern 5: Resource Cleanup

Before — manual drop or relying on Drop impls

#![allow(unused)]
fn main() {
async fn with_connection<F, T>(pool: &Pool, f: F) -> Result<T, DbError>
where F: AsyncFnOnce(&Connection) -> Result<T, DbError>
{
    let conn = pool.get().await?;
    let result = f(&conn).await;
    // conn is dropped here — relies on Drop
    result
}
}

After — explicit Scope

#![allow(unused)]
fn main() {
fn with_connection<A, E, R, F>(f: F) -> Effect<A, E, R>
where
    F: FnOnce(&Connection) -> Effect<A, E, R> + 'static,
    R: NeedsPool + 'static,
    E: From<DbError> + 'static,
    A: 'static,
{
    effect!(|r: &mut R| {
        let pool = ~ PoolKey;
        ~ scope.acquire(
            pool.get(),           // acquire
            |conn| pool.release(conn),  // release (always runs)
            |conn| f(conn),       // use
        )
    })
}
}

The Scope finalizer runs whether the inner effect succeeds, fails, or is cancelled. Drop doesn't give you that guarantee for async code.
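The guarantee being described is the bracket pattern: release runs on every exit path. In plain Rust you can approximate it by capturing the use-step's result before releasing. This is a sketch; id_effect's Scope also covers cancellation, which this toy version cannot.

```rust
// Bracket: acquire, use, release. Release runs whether `use_res`
// returns Ok or Err. (A panic in `use_res` would still skip release
// here; handling that, plus cancellation, is what the real Scope adds.)
fn bracket<Res, A, E>(
    acquire: impl FnOnce() -> Res,
    release: impl FnOnce(Res),
    use_res: impl FnOnce(&Res) -> Result<A, E>,
) -> Result<A, E> {
    let res = acquire();
    let outcome = use_res(&res); // may be Ok or Err
    release(res);                // reached on both paths
    outcome
}
```

Returning the captured outcome after release is what makes the error path symmetrical with the success path, the property async Drop cannot promise.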

Migration Strategy

Migrate gradually, one module at a time:

  1. Start with leaf functions (those with no id_effect dependencies yet) — convert them first.
  2. Move up the call graph. Functions that call converted leaf functions become easy to convert.
  3. Push the run_blocking call to main or the request handler entry point.
  4. Convert tests last — once business logic is effect-based, tests become simple layer swaps.

You can mix old-style async functions and effect functions during the transition: wrap async functions with from_async and call effect functions with run_blocking in async contexts when needed.

Glossary

Key terms used throughout this book, in alphabetical order.


~ (bind operator) The prefix operator inside effect! that runs an inner effect and binds its success value to a variable. let x = ~eff runs eff and assigns the result to x; ~eff runs eff and discards the result.


Backpressure The mechanism by which a slow consumer signals to a fast producer to slow down or drop data. In id_effect, expressed via BackpressurePolicy: Block, DropLatest, DropOldest, or Unbounded. See Backpressure Policies.


Brand A zero-cost newtype wrapper that creates a distinct type from a primitive. Brand<String, EmailMarker> and Brand<String, NameMarker> are different types even though both wrap String, preventing accidental mixing. See Validation and Refinement.


Cause<E> The reason an effect failed. Three variants: Cause::Fail(E) (expected error), Cause::Die(Box<dyn Any>) (panic or defect), Cause::Interrupt (cancelled). See Exit.


Chunk<A> A contiguous, reference-counted batch of A values. The unit of data in Stream pipelines. Cheap to clone; efficient to process in bulk. See Chunks.


Clock A trait abstracting time. LiveClock uses real system time; TestClock advances only when told to. Inject Clock through the environment so scheduling logic is testable. See Clock Injection.


commit The function that lifts a Stm<A> into an Effect<A, Never, ()>. Executing the effect runs the STM transaction and retries on conflict. See Stm and commit.


Context<R> The runtime representation of the environment R — a heterogeneous map from service keys to service values. Built with ctx! or assembled by the Layer system. See Context and HLists.


Effect<A, E, R> The central type. A description of a computation that: succeeds with a value of type A, can fail with a typed error of type E, and requires environment R. Effects are lazy: nothing runs until you call a runtime function. See What Even Is an Effect?.


effect! macro The do-notation macro for writing effect programs. Converts ~expr into flat bind chains so you can write sequential effect code without nested closures. See The effect! Macro.


Exit<A, E> The result of running an effect: Exit::Success(A) or Exit::Failure(Cause<E>). Returned by run_test and accessible via FiberHandle::join. See Exit.


Fiber A lightweight, independently-scheduled unit of concurrent work. Fibers are cheaper than OS threads and support structured cancellation. Spawn with run_fork; join with handle.join(). See What Are Fibers?.


FiberRef A fiber-scoped dynamic variable. Each fiber has its own copy; changes don't leak to parent or sibling fibers. Use for request IDs, trace contexts, and other per-fiber state. See FiberRef.


from_async A constructor that lifts an async closure into an Effect. Use when wrapping third-party library futures that return Future rather than Effect. See Creating Effects.


HList (heterogeneous list) The compile-time linked list Cons<Head, Tail> / Nil that represents the environment R. Each Cons cell holds one tagged service. You usually don't write HList types manually — use NeedsX traits and ctx!. See Context and HLists.


HasSchema A trait that attaches a canonical Schema<Self> to a type. Implement it when a type should always be parsed the same way and you want schema-driven tooling to work automatically. See Validation and Refinement.


Layer A recipe for constructing one or more services from a set of dependencies. Layers compose with .stack() and form a DAG that the runtime resolves automatically. See What Is a Layer?.


NeedsX trait A supertrait bound on R that expresses "this environment must contain service X." Prefer NeedsDb over Get<DbKey, Here, Target = DbClient> for readability. See Widening and Narrowing.


Never The uninhabited type. Effect<A, Never, R> cannot fail with a typed error (but may still Die or Interrupt). Eliminate Err(never) branches with absurd(never). See Error Handling.


ParseErrors An accumulated collection of ParseError values, each with a path and message. Returned by parse(schema, unknown). Reports all validation failures at once, not just the first. See ParseErrors.


R (environment type parameter) The third type parameter of Effect<A, E, R>. Encodes which services the computation needs. Library functions stay generic over R; binaries and tests supply a concrete Context. See The R Parameter.


run_blocking The synchronous effect runner. Use in main and integration tests where you want a blocking call. Do not call from within library functions — return Effect instead. See Laziness as a Superpower.


run_test The test-aware effect runner. Like run_blocking but also detects fiber leaks and uses deterministic scheduling. Use in all #[test] functions. See run_test.


Schedule A value describing how to space out repeated or retried operations. Combinators: fixed, exponential, linear, .take(n), .until(pred). Used with .retry() and .repeat(). See Schedule.


Schema A value of type Schema<T> that describes how to parse an Unknown into a T. Schemas are composable: build complex schemas from primitive ones. See Schema Combinators.


Scope A resource lifetime boundary. Finalizers registered with a Scope run when the scope exits, whether by success, failure, or cancellation. Use acquire_release for the common bracket pattern. See Scopes and Finalizers.


service_key! A macro that declares a typed service key: service_key!(DbKey: Arc<dyn Db>). The key type-indexes into the environment so services are looked up by type, not by string. See Tags.


Sink A consumer of Stream elements. Receives Chunks via on_chunk and a completion signal via on_done. Built-in sinks: collect, fold, for_each, drain. See Sinks.


Stm<A> A transactional computation over TRef values. Compose with stm!; execute with commit or atomically. Retries automatically on conflict; aborts on stm::fail. See Stm and commit.


Stream A lazy, potentially infinite sequence of values of type A. Processes elements in Chunks. Supports all the combinators of Effect plus streaming-specific operators like flat_map, merge, and take_until. See Streams.


Tag / Tagged The mechanism for keying services in the environment. A Tag<K, V> associates key type K with value type V. service_key! generates tags and their associated types. See Tags.


TestClock A Clock implementation for tests. Starts at Unix epoch and advances only when you call .advance(dur) or .set_time(t). Sleep effects complete instantly when the clock passes their wake time. See TestClock.


TRef<T> A transactional cell: a mutable T that can be read and written inside Stm transactions. Multiple TRefs can be read and written atomically. See TRef.


Unknown The type for unvalidated wire data. All external data enters your program as Unknown and is converted to typed values by running it through a Schema. See The Unknown Type.