Worker Pool Scaling
Problem
Maintain a fixed number of workers in a distributed system.
Challenges:
- workers crash unexpectedly
- external changes (manual intervention, infrastructure failures) alter the worker count
- scaling logic is prone to race conditions
Ved Approach
Define the desired state:
    goal workers == 5
Implementation
    domain WorkerPool {
        state {
            workers: int
        }
    }

    goal workers == 5

    transition scale_up {
        step {
            if workers < 5 {
                workers += 1
            }
        }
    }

    transition scale_down {
        step {
            if workers > 5 {
                workers -= 1
            }
        }
    }
How it Executes (The Control Loop)
Imagine you deploy this code. The system boots up with workers = 0.
- Tick 1: The runtime evaluates the goal (workers == 5). It is false. The scheduler looks at the available transitions and sees that scale_up is valid because workers < 5. It executes scale_up, incrementing the count to 1.
- Ticks 2-5: The system continuously loops, executing scale_up until workers == 5.
- Stability: The goal is met. The Domain goes to sleep.
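To make the tick-by-tick behavior concrete, here is a minimal Python simulation of the same convergence loop. It is an illustration of the semantics described above, not Ved's actual runtime; goal_met, scale_up, and scale_down mirror the Ved code.

    # Minimal simulation of the control loop described above.
    # Illustrative only: this mimics the semantics, not Ved's runtime.
    GOAL = 5

    def goal_met(state):
        return state["workers"] == GOAL

    def scale_up(state):
        if state["workers"] < GOAL:
            state["workers"] += 1
            return True
        return False

    def scale_down(state):
        if state["workers"] > GOAL:
            state["workers"] -= 1
            return True
        return False

    def tick(state):
        # One scheduler tick: fire the first valid transition.
        for transition in (scale_up, scale_down):
            if transition(state):
                return

    state = {"workers": 0}
    ticks = 0
    while not goal_met(state):
        tick(state)
        ticks += 1
    print(f"converged to {state['workers']} workers in {ticks} ticks")
    # -> converged to 5 workers in 5 ticks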
The Chaos Scenario:
Suddenly, an AWS availability zone goes down, and 2 of your workers crash. The external Effect Adapter notifies the Domain, updating the state so workers = 3.
- The goal (workers == 5) is instantly broken.
- The runtime wakes up, evaluates the state, and automatically triggers scale_up twice to restore the pool.
Alternatively, if a junior engineer manually boots up 3 extra servers in the AWS console (making workers = 8), the runtime wakes up, sees the goal is broken, and triggers scale_down three times to terminate the rogue servers.
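Both failure modes fall out of the same loop. Reusing goal_met and tick from the simulation sketched earlier:

    # External drift in either direction converges back to the goal.
    for drifted in (3, 8):        # AZ outage vs. manual over-provisioning
        state = {"workers": drifted}
        while not goal_met(state):
            tick(state)
        print(state["workers"])   # 5 in both cases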
Behavior
- If workers drop below 5 → the system scales up
- If workers exceed 5 → the system scales down
- The system continuously stabilizes on the goal
Why This Matters
Imperative approach:
- requires manual checks
- prone to race conditions
Ved approach:
- self-correcting
- deterministic
- continuously converging
Key Takeaways
1. The Death of the "Reconciliation Loop" Boilerplate
If you want to build an auto-scaler in Go or Python today, you have to write a massive while(true) loop yourself: logic to poll the API, calculate the difference between desired and actual state, if/else branches for scaling up versus scaling down, and thread locks so two loops don't accidentally scale up at the same time.
Ved completely eliminates the loop. You just define the mathematical "fixed point" (the Goal) and the rules of movement (the Transitions). The runtime handles the looping, the polling, and the thread safety.
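For contrast, here is a rough Python sketch of the boilerplate being eliminated. The get_worker_count, add_worker, and remove_worker calls stand in for a hypothetical cloud API, not any real SDK:

    import threading
    import time

    DESIRED = 5
    lock = threading.Lock()   # so two loops can't scale at the same time

    def reconcile_forever(api):
        # The poll / diff / branch loop that Ved generates for you.
        while True:
            with lock:
                actual = api.get_worker_count()   # poll (hypothetical API)
                if actual < DESIRED:              # diff + branch
                    api.add_worker()
                elif actual > DESIRED:
                    api.remove_worker()
            time.sleep(1)                         # back off between polls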
2. Bidirectional Healing (Self-Correcting)
In imperative scripts, developers usually only write the "happy path" (e.g., if workers < 5: add worker). They often forget to write the logic for what happens if there are too many workers.
Because Ved is driven by a strict mathematical Goal (== 5), it forces the system to be bidirectionally self-healing. It doesn't just recover from crashes; it recovers from human tampering and configuration drift.
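The one-sided imperative version usually looks like the fragment below (same hypothetical api object as above). It heals crashes but silently tolerates an over-provisioned pool:

    def reconcile_happy_path_only(api):
        # Common imperative bug: only scale-up is handled, so an
        # over-provisioned pool (workers = 8) is never corrected.
        if api.get_worker_count() < 5:
            api.add_worker()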
3. Immunity to Race Conditions
In a traditional system, if two different monitoring alerts fire at the exact same millisecond saying "worker died," a standard script might accidentally boot up two replacement workers instead of one, overshooting the target. Because Ved executes Transitions in strict, deterministic, single-threaded slices, race conditions are mathematically impossible. It will process the first alert, update the state, evaluate the goal, and then process the second alert.
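One way to picture those single-threaded slices is a lone consumer draining an alert queue, applying each event idempotently and re-evaluating the goal before touching the next one. This is a conceptual Python model under assumed semantics, not Ved's implementation:

    from itertools import count
    from queue import Empty, Queue

    GOAL = 5
    ids = count(6)                                # fresh ids for replacements
    state = {"workers": {"w1", "w2", "w3", "w4", "w5"}}
    alerts = Queue()

    # Two monitors report the same crash at the "same" millisecond.
    alerts.put(("died", "w3"))
    alerts.put(("died", "w3"))

    while True:                                   # one event per slice
        try:
            _, worker = alerts.get_nowait()
        except Empty:
            break
        state["workers"].discard(worker)          # idempotent state update
        while len(state["workers"]) < GOAL:       # re-check the goal now,
            state["workers"].add(f"w{next(ids)}") # before the next event

    print(len(state["workers"]))  # 5, not 6: the duplicate alert is a no-op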
4. Extreme Readability
You don't need to be a senior distributed systems architect to understand what that short block of code does. It reads almost like plain English. Yet, under the hood, it compiles down to a highly resilient, crash-safe, deterministic state machine.
Summary
Worker scaling becomes a convergence problem, not a control-flow problem.