Rate-Limited Processing
The Problem: The "Thread-Blocking" API Trap
Imagine you are building a system that needs to send 100,000 emails via a third-party service like SendGrid, but SendGrid strictly limits you to 10 requests per second.
In standard programming (Node.js, Go, Python), developers usually solve this by looping through the emails and calling sleep() to artificially slow down the loop.
The fatal flaw? If 100,000 emails are in memory and the thread is sleeping, your server is holding thousands of network sockets open and consuming massive amounts of RAM just to "wait." If a traffic spike hits, your server runs out of memory and crashes. The alternative is setting up a complex, external Redis cluster just to manage a "Token Bucket" rate limiter.
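The sleep-based pattern being criticized looks roughly like this in Python (a sketch; `send_email` is a hypothetical stand-in for the real SendGrid call):

```python
import time

def send_email(email):
    # Hypothetical stand-in for a real SendGrid network call.
    pass

def send_all(emails, rate_per_sec=10):
    # Naive throttling: the worker thread blocks in time.sleep()
    # between batches, holding the entire email list in memory
    # the whole time it "waits".
    sent = 0
    for email in emails:
        send_email(email)
        sent += 1
        if sent % rate_per_sec == 0:
            time.sleep(1)  # the thread (and its memory) is pinned here
    return sent
```

With 100,000 emails this loop occupies a thread for nearly three hours, which is exactly the resource profile the paragraph above objects to.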
The Code: Rate Limiting in Ved
In Ved, because you cannot use sleep() (as it breaks determinism and blocks the CPU), rate limiting is modeled purely as a mathematical state constraint.
Here is how Ved implements a natural Token Bucket rate limiter:
domain EmailProcessor {
  state {
    pending_emails: list<EmailPayload>
    available_tokens: int = 10
  }

  // The domain only rests when the queue is entirely empty
  goal InboxCleared {
    predicate pending_emails.length == 0
  }

  // This transition is MATHEMATICALLY BLOCKED if tokens reach 0
  transition SendEmail {
    step {
      if pending_emails.length > 0 && available_tokens > 0 {
        let email = pending_emails.pop()
        emit SendGrid.Dispatch(email)
        available_tokens -= 1
      }
    }
  }

  // The runtime's Effect Adapter injects time deterministically
  on Event::TimerTick_1Sec {
    // Replenish the tokens every second, capped at 10
    if available_tokens < 10 {
      available_tokens = 10
    }
  }
}
How it Executes (The Control Loop)
- The Flood: A microservice drops 100,000 emails into the pending_emails queue.
- The Sprint: The Ved runtime wakes up. The goal is false, so it triggers the SendEmail transition. Because available_tokens starts at 10, it rapidly processes 10 emails in the span of a few milliseconds.
- The Natural Brake: On the 11th evaluation, available_tokens == 0. The SendEmail transition is now invalid. The goal is still false, but the Domain has no valid moves left.
- Zero-Cost Waiting: The Domain elegantly goes to sleep, releasing 100% of the CPU back to the scheduler. It is not "sleeping" on a thread; it is completely dormant.
- The Replenish: One second later, the impure Effect Adapter (which tracks real-world wall-clock time) fires a TimerTick_1Sec event into the Mailbox.
- The Resume: The Domain wakes up, processes the mailbox, resets available_tokens to 10, and instantly burns through 10 more emails.
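The same sprint/brake/replenish cycle can be mirrored in ordinary Python as a pure state machine, with the timer tick injected as an event rather than read from a clock (class and method names are illustrative, not part of Ved):

```python
class EmailProcessor:
    """Pure token-bucket state machine mirroring the Ved domain."""
    CAP = 10  # tokens replenished per simulated second

    def __init__(self, pending):
        self.pending = list(pending)
        self.tokens = self.CAP

    def goal_reached(self):
        # Mirrors goal InboxCleared: done when the queue is empty.
        return len(self.pending) == 0

    def step(self):
        # Mirrors the SendEmail transition: valid only while both
        # emails and tokens remain. Returns the batch "dispatched".
        sent = []
        while self.pending and self.tokens > 0:
            sent.append(self.pending.pop())
            self.tokens -= 1
        return sent

    def on_timer_tick(self):
        # Mirrors Event::TimerTick_1Sec: refill the bucket to its cap.
        self.tokens = self.CAP
```

Driving it with 25 emails yields batches of 10, 10, and 5 across three simulated seconds, matching the sprint-and-brake behavior described above.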
Behavior
- Processing stops the moment the token budget is exhausted
- The system remains stable regardless of queue depth
Key Takeaways
1. Algorithmic Backpressure (No Redis Required)
In standard microservices, to achieve this level of cross-server rate limiting without dropping data during a crash, you have to spin up a Redis instance, write Lua scripts for atomicity, and maintain a separate database.
Ved gives you distributed rate limiting for free. Because the memory state (available_tokens and pending_emails) is continuously flushed to disk via COW snapshots, the rate limiter survives server reboots perfectly intact. You don't need Redis; the Ved runtime is your state manager.
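The durability claim can be approximated in plain Python by checkpointing the two state fields with an atomic write (a sketch; Ved's COW snapshots are a runtime feature, not application code, and the file layout here is invented):

```python
import json
import os
import tempfile

def snapshot(state, path):
    # Atomic checkpoint: write to a temp file, then rename over the
    # target, so a crash mid-write never leaves a corrupt snapshot.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)

def restore(path, default):
    # On reboot, the rate limiter resumes from the last snapshot.
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return default
```

After a restart, `restore` hands back the last durable `available_tokens` and queue contents, which is the property that makes an external Redis store unnecessary in the author's argument.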
2. Deterministic Time
Notice how the Domain never checks System.now(); it waits for an Event::TimerTick. If your code reads the system clock directly, you cannot test it deterministically. By injecting time as a Mailbox event, you can use Ved's Time-Travel Simulator to verify this logic: a unit test can instantly fire 10,000 "TimerTicks" to prove that your code drains the 100,000-email queue in roughly 2.8 hours of simulated time, while the test itself runs locally in about 50 milliseconds.
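Because the logic is pure state, the time-travel test reduces to firing simulated ticks and counting them (a Python sketch of the idea, not the Ved simulator API):

```python
def ticks_to_drain(n_emails, rate=10):
    # Pure simulation: each "tick" refills the bucket and sends up
    # to `rate` emails. No wall-clock time is involved, so 10,000
    # simulated seconds evaluate instantly.
    pending, ticks = n_emails, 0
    while pending > 0:
        pending -= min(rate, pending)
        ticks += 1
    return ticks
```

At 10 emails per second, 100,000 emails take 10,000 simulated ticks, i.e. roughly 2.8 hours of simulated time computed in a fraction of a second.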
3. CPU Liberation
The concept of "Zero-Cost Waiting" cannot be overstated. Standard infrastructure spends an ungodly amount of compute power simply waiting for network I/O. By turning rate limits into state conditions, Ved allows a single CPU core to orchestrate millions of pending tasks simultaneously without thread starvation.
4. Safe Degradation under Load
If SendGrid goes down and starts throwing HTTP 500 errors, those errors get routed back to the domain (similar to the Retry Reconciliation example). The queue simply grows. But because Ved domains are strictly bounded and memory-snapshotted, a massive queue won't crash the program. The backpressure naturally bubbles up, safely buffering the system until the API comes back online.
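The degradation behavior described here amounts to re-enqueueing failures instead of dropping them, which can be sketched as follows (names are hypothetical; the routing of errors through the domain Mailbox is Ved-specific):

```python
from collections import deque

def drain(pending, tokens, dispatch):
    # Try to send while tokens last. A failed dispatch (e.g. an
    # HTTP 500 from the provider) puts the email back on the queue
    # rather than dropping it, so the queue simply grows while the
    # API is down. Failed attempts still consume a token, keeping
    # the retry traffic inside the provider's rate limit.
    while pending and tokens > 0:
        email = pending.popleft()
        tokens -= 1
        if not dispatch(email):
            pending.append(email)
    return tokens
```

When every dispatch fails, tokens run out with the queue intact; when the API recovers, the same loop drains it, which is the "safe buffering" behavior the paragraph describes.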
Summary
Rate limiting becomes:
a state constraint, not a timing hack