Distributed Queue Simulation

The Problem: Distributed Locks and "Double Processing"

In standard architectures, if you have a queue of 10,000 video files that need to be processed by 50 different worker servers, you run into the Race Condition Trap. If Worker A and Worker B both ask the queue for a task at the exact same millisecond, they might both receive Video #1. They both process it, wasting CPU and potentially corrupting the database.
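The trap comes from a non-atomic "peek, then remove" sequence. Here is a minimal sketch in plain Python (not Ved) with illustrative names; the unlucky interleaving is forced explicitly here, whereas in production it happens nondeterministically:

```python
# Hypothetical sketch: a queue whose "peek" and "remove" are separate steps,
# so two workers can observe the same head task before either removes it.

class NaiveQueue:
    def __init__(self, tasks):
        self.tasks = list(tasks)

    def peek(self):
        return self.tasks[0] if self.tasks else None

    def remove(self, task):
        if task in self.tasks:
            self.tasks.remove(task)

queue = NaiveQueue(["video-1", "video-2"])

# Worker A and Worker B both peek before either removes:
task_a = queue.peek()    # "video-1"
task_b = queue.peek()    # "video-1" again -- the race
queue.remove(task_a)
queue.remove(task_b)     # no-op: the task is already gone

print(task_a == task_b)  # True: both workers received video-1
```

The fix in lock-based systems is to make claim-and-remove a single atomic operation, which is exactly what the external infrastructure below is recruited to do.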

To fix this today, engineers have to introduce heavy external dependencies like Redis or Apache Kafka, implementing complex "Distributed Mutex Locks" and "Visibility Timeouts." If a worker crashes mid-processing, the lock has to magically expire so another worker can pick it up. It is a fragile, infrastructure-heavy nightmare.
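The "visibility timeout" pattern can be sketched in a few lines of plain Python (no real Redis or SQS API is used here; all names are illustrative): a worker's claim on a task expires automatically, so a crashed worker's task eventually becomes visible to someone else.

```python
# Sketch of a visibility timeout: a claim on a task is a deadline, and
# expired claims are returned to the pending queue on the next access.

class VisibilityQueue:
    def __init__(self, tasks, timeout):
        self.timeout = timeout
        self.pending = list(tasks)   # tasks nobody has claimed
        self.in_flight = {}          # task -> deadline for its claim

    def claim(self, now):
        # Rescue any task whose claim has expired
        for task, deadline in list(self.in_flight.items()):
            if now >= deadline:
                del self.in_flight[task]
                self.pending.append(task)
        if not self.pending:
            return None
        task = self.pending.pop(0)
        self.in_flight[task] = now + self.timeout
        return task

    def ack(self, task):
        self.in_flight.pop(task, None)  # worker finished successfully

q = VisibilityQueue(["video-1"], timeout=30)
assert q.claim(now=0) == "video-1"   # Worker A claims, then crashes
assert q.claim(now=10) is None       # the claim has not yet expired
assert q.claim(now=31) == "video-1"  # timeout elapsed: Worker B rescues it
```

Note the fragility the text describes: the timeout must be tuned longer than the slowest legitimate job but short enough that crashed work is rescued promptly.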

The Code: Lock-Free Queues in Ved

Because Ved strictly enforces ADR-002: Absolute Rejection of Shared Mutable State, workers cannot share a memory lock. Instead, Ved solves this with the Actor Model: isolated state machines that coordinate through asynchronous mailboxes.

Here is how a distributed queue is modeled in Ved:

// DOMAIN 1: The Central Broker
domain QueueBroker {
  state {
    tasks: list<TaskPayload>
    idle_workers: list<WorkerID>
  }

  // Broker rests only when all tasks are assigned
  goal QueueEmpty {
    predicate tasks.length == 0
  }

  // Mailbox: Listen for workers announcing they are free
  on Event::WorkerIdle(worker_id) {
    idle_workers = idle_workers.add(worker_id)
  }

  // Deterministically match a task to a worker
  transition DispatchTask {
    step {
      if tasks.length > 0 && idle_workers.length > 0 {
        let task = tasks.pop()
        let worker = idle_workers.pop()

        // Push the task to the specific worker's mailbox
        emit Network.RouteToWorker(worker, task)
      }
    }
  }
}

// DOMAIN 2: The Worker Node (Deployed on 50 different servers)
domain WorkerNode {
  state {
    current_task: TaskPayload?
    status: string = "idle"
  }

  goal AwaitingWork {
    predicate status == "idle"
  }

  // Mailbox: Receive a task from the Broker
  on Event::ReceiveTask(task) {
    current_task = task
    status = "working"
  }

  transition ProcessTask {
    step {
      if status == "working" && current_task != null {
        // Hand off to the Impure Zone to actually encode the video
        emit Compute.Execute(current_task)

        // Reset state and broadcast availability back to the Broker
        current_task = null
        status = "idle"
        emit Broadcast(Event::WorkerIdle(self.id))
      }
    }
  }
}

How it Executes (The Control Loop)

  1. The Announcement: 50 WorkerNode domains boot up. They reach their goal (status == "idle"). They emit WorkerIdle events to the message broker.
  2. The Registry: The QueueBroker receives these events in its mailbox and updates its idle_workers state list.
  3. The Influx: 10,000 tasks are dropped into the QueueBroker's state. The QueueEmpty goal is broken.
  4. The Dispatch: The Broker wakes up. It deterministically pops one task and one worker ID from its state and emits the payload. Because the Broker mutates its own state in a strict, single-threaded slice, no interleaving exists in which the same task reaches two workers: the race condition is impossible by construction.
  5. The Processing: The assigned WorkerNode receives the task in its mailbox, updates its state to "working", emits the heavy compute intent to the external adapter, and once finished, announces it is idle again.
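The control loop above can be sketched as a single-process simulation in plain Python (not Ved; all names are illustrative). Each actor owns private state and receives messages one at a time from a mailbox, so no two actors ever touch shared memory:

```python
# Minimal actor-model sketch of the Broker/Worker choreography: the
# "network" is a deque of messages, drained one at a time.

from collections import deque

class Broker:
    def __init__(self):
        self.tasks = deque()
        self.idle_workers = deque()

    def on_worker_idle(self, worker_id, network):
        self.idle_workers.append(worker_id)
        self.dispatch(network)

    def on_tasks(self, tasks, network):
        self.tasks.extend(tasks)
        self.dispatch(network)

    def dispatch(self, network):
        # Single-threaded slice: pop one task and one worker, atomically
        while self.tasks and self.idle_workers:
            task = self.tasks.popleft()
            worker = self.idle_workers.popleft()
            network.append(("task", worker, task))

class Worker:
    def __init__(self, wid):
        self.wid = wid
        self.done = []

    def on_task(self, task, network):
        self.done.append(task)                    # "encode the video"
        network.append(("idle", self.wid, None))  # announce availability

broker = Broker()
workers = {i: Worker(i) for i in range(3)}
network = deque(("idle", i, None) for i in workers)   # the Announcement
broker.on_tasks([f"video-{n}" for n in range(10)], network)  # the Influx

while network:  # the Control Loop: deliver one message per step
    kind, wid, payload = network.popleft()
    if kind == "idle":
        broker.on_worker_idle(wid, network)
    else:
        workers[wid].on_task(payload, network)

processed = sorted(t for w in workers.values() for t in w.done)
print(len(processed), len(set(processed)))  # every task delivered exactly once
```

Because dispatch pops the task and the worker in one step of a single-threaded loop, no task can ever be handed to two workers, which is the property the five steps above rely on.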

Behavior

  • Workers consume tasks until the backlog drains
  • The queue stabilizes: the QueueEmpty goal holds again

Why This Matters

1. The Elimination of Redis & Kafka

To achieve this level of safe, distributed queueing today, you must run external infrastructure (like Redis for locking or Kafka for streaming). Ved makes these dependencies obsolete for orchestration. The runtime itself acts as the deterministic message broker. Because the memory of the QueueBroker is protected by COW (Copy-On-Write) snapshots, if the Broker server crashes, it reboots, loads the snapshot, and instantly remembers exactly which tasks were in the queue and which workers were idle. You get Kafka-level resilience with zero infrastructure overhead.
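The recovery claim rests on the Broker's state being plain data. A rough sketch of the idea in Python (this is not Ved's COW runtime; it stands in for snapshotting with ordinary serialization):

```python
# Sketch of snapshot-based recovery: because broker state is plain data,
# persisting and restoring it is a copy, not a coordination protocol.

import pickle

broker_state = {"tasks": ["video-7", "video-8"], "idle_workers": [3, 14]}

snapshot = pickle.dumps(broker_state)  # persisted before the crash

broker_state = None                    # the Broker server crashes

recovered = pickle.loads(snapshot)     # reboot: load the snapshot
print(recovered["tasks"], recovered["idle_workers"])
```

On reboot the Broker knows exactly which tasks were queued and which workers were idle, which is the resilience property the paragraph describes.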

2. The "Blast Radius" is Perfectly Contained

If WorkerNode #42 encounters a corrupted video file and physically crashes, what happens to the system?

  • In a conventional script: the loop crashes, and the remaining 9,999 videos never get processed.
  • In Ved: Nothing happens to the system. The QueueBroker doesn't care. Worker #42 simply stops sending WorkerIdle events. The Broker stops routing tasks to it. The remaining 49 workers continue pulling tasks. The failure is completely isolated to the hardware it occurred on.
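The isolation claim can be checked with a small simulation in plain Python (not Ved; names are illustrative). A crashed worker simply never re-announces itself idle, so the broker stops routing to it and the survivors drain the queue:

```python
# Sketch of blast-radius containment: one worker crashes on its task and
# drops out of the idle pool; the rest keep pulling tasks.

from collections import deque

def run(tasks, worker_ids, crashing):
    idle = deque(worker_ids)
    queue = deque(tasks)
    done = {w: 0 for w in worker_ids}
    while queue and idle:
        worker, task = idle.popleft(), queue.popleft()
        if worker in crashing:
            continue          # crashed: never announces WorkerIdle again
        done[worker] += 1
        idle.append(worker)   # WorkerIdle: back into the pool

    return done

done = run([f"video-{n}" for n in range(100)],
           worker_ids=range(5), crashing={2})
print(done[2], sum(done.values()))  # crashed worker did 0; survivors did 99
```

Note one honest caveat visible in the sketch: the task that was in flight on the crashed worker dies with it unless something like the visibility-timeout rescue described earlier re-queues it.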

3. Time-Travel Testing for Distributed Systems

How do you test a distributed queue with 50 workers and 10,000 tasks? Normally, you have to spin up 50 AWS instances and run a massive integration test. Because of Ved's strictly Pure deterministic logic and simulated mailboxes, you can use ved simulate on your laptop. You can tell the compiler: "Simulate 1 Broker and 50 Workers. Inject 10,000 tasks. Randomly drop 5% of network packets and simulate 3 hardware crashes." The Ved runtime will execute the entire distributed choreography locally in seconds, proving mathematically whether your code drops tasks or deadlocks.
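The key enabler is determinism: with a seeded random source, a lossy-network run is exactly reproducible, so "drop 5% of packets" becomes an assertable test. A toy version of that idea in plain Python (this is not the `ved simulate` CLI; the harness and names are illustrative):

```python
# Sketch of deterministic fault injection: a seeded RNG decides which
# deliveries are "dropped", and dropped tasks are re-sent until delivered.

import random
from collections import deque

def simulate(seed, n_tasks, drop_rate):
    rng = random.Random(seed)
    pending = deque(range(n_tasks))
    delivered, attempts = set(), 0
    while pending:
        task = pending.popleft()
        attempts += 1
        if rng.random() < drop_rate:
            pending.append(task)   # packet dropped: broker re-sends
        else:
            delivered.add(task)
    return delivered, attempts

a = simulate(seed=42, n_tasks=10_000, drop_rate=0.05)
b = simulate(seed=42, n_tasks=10_000, drop_rate=0.05)
print(len(a[0]), a == b)  # no task lost, and both runs are identical
```

Because the two runs with the same seed are bit-for-bit identical, a failure found in simulation can be replayed exactly, which is what makes "prove it drops no tasks" a laptop-scale exercise.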


Summary

Queue processing becomes:

coordinated state evolution