
Offline Support in Web Apps: Foreground Queue for Offline Mutations — Part 2


This is the fifth post in my series about offline support in web applications and the second one focused specifically on the foreground queue. In the previous article, I introduced a foreground queue for pending offline mutations and outlined its basic shape. In this post, I want to focus on the parts that are easy to get wrong in practice.

We’ll look at how to make the queue correct and predictable under concurrent access: guaranteeing atomic read–modify–write operations, handling deduplication and bounded queue size at enqueue time, and running an explicit, single-flight sync loop that can be paused safely.

Atomicity and correctness

IndexedDB is a good fit for queue storage because its reads and writes are asynchronous and never block the UI, so users can keep interacting with the interface. That same asynchrony, however, creates room for race conditions when reads and writes overlap.

Imagine that during a long-running sync, a user might want to add another item to the queue. If we're not careful about how we read and write to the async storage, we could accidentally overwrite some user actions, which we definitely want to avoid.

To avoid race conditions, every persistence operation has to run inside a mutex-protected critical section. Here's a simple example of the code needed to make this work.

export class Queue<T> {
  private lock: Promise<void> = Promise.resolve();

  private async withLock<R>(fn: () => Promise<R>): Promise<R> {
    let release!: () => void;

    const previousLock = this.lock;
    // Create a new lock and hold on to its `resolve` function...
    this.lock = new Promise<void>((resolve) => {
      release = resolve;
    });

    // Wait for the previous lock to be released...
    await previousLock;

    try {
      // Execute the wrapped function...
      return await fn();
    } finally {
      // Release the lock once the wrapped function finishes...
      release();
    }
  }
}

Here's how the queue might use this mechanism: we read and write modified items in a single, atomic operation (at the queue abstraction level) to ensure that no other queue method can overwrite items during this process.

await this.withLock(async () => {
  const items = await this.read();
  // Modify items...
  await this.write(items);
});

This is sufficient for a single JS execution context; it won’t protect against multiple tabs or workers. For cross-tab coordination, the Web Locks API is a natural next step.

Enqueueing with deduplication and bounded size

Next, we'll talk about deduplication and bounded size together because they happen at the same time during the enqueue operation. First, let's update the config by adding two more fields:

  • identityKey returns a unique identifier for each item (needed for deduplication).

  • maxDepth (optional) caps the number of items the queue may hold; when the cap is exceeded, the oldest items are dropped.

Here is a simplified version of the enqueue method. Since we need to read all items to decide whether to replace or add an item, the entire operation is atomic, starting with a read and ending with a write.

export interface QueueConfig<T> {
  name: string;
  storageKey: string;
  processor: (item: T) => Promise<void>;
  identityKey: (item: T) => string;
  maxDepth?: number;
  // Other configuration fields...
}

export class Queue<T> {
  async enqueue(item: T): Promise<void> {
    return this.withLock(async () => {
      const items = await this.read();
      const identityKey = this.config.identityKey(item);
      const existingIndex = items.findIndex((i) => i.id === identityKey);

      const queueItem: QueueItem<T> = {
        id: identityKey,
        // Other queue item fields...
      };

      if (existingIndex >= 0) {
        // An item with the same key already exists - replace it...
        items[existingIndex] = queueItem;
      } else {
        // No item with the same key found - push it to the queue...
        items.push(queueItem);

        // If exceeding the max depth, remove the oldest items...
        // (maxDepth is optional, so check it's set before comparing)
        if (this.config.maxDepth !== undefined && items.length > this.config.maxDepth) {
          items.splice(0, items.length - this.config.maxDepth);
        }
      }

      await this.write(items);
    });
  }
}

Whether you want to add deduplication or reorder the items depends on the use case. In our todo list example, if a user changes the status of a single todo item multiple times, we only want to keep the last change, as this reflects the user’s intent. The order of the operations doesn't really matter.

A rule of thumb: deduplication works best for idempotent or “last-write-wins” mutations; for append-only or causal operations, replacing earlier entries can silently drop work the user intended to keep, so this approach may be wrong.

Manual sync with guardrails

As mentioned in the previous article, syncing is explicit. I decided that the queue should have two public methods—one to start the sync and the other to pause it. This allows the clients of the queue to toggle syncing when the user's network status changes.

if (isOnline) {
  await queue.startSync();
} else {
  queue.pauseSync();
}

Before we sketch out the implementation, here are a few design decisions that matter a lot:

  1. Items are processed sequentially. There's no right or wrong choice here, but processing items one at a time might be a bit easier on the server.

  2. Only one sync can run at a time. Each queue holds a single list, so it only makes sense for one sync process to happen at a time.

  3. Sync can be paused mid-flight. This is something we already discussed above.

Retries are also an important consideration, and we will discuss them in future posts. However, we will design the methods so that adding retry behaviour is straightforward.

export interface SyncResult<T> {
  status: 'completed' | 'paused';
  success: Array<QueueItem<T>>;
  failure: Array<QueueItem<T>>;
}

export class Queue<T> {
  // Fields for controlling the state (could be represented as a single field)...
  private isProcessing = false;
  private isPaused = false;

  pauseSync(): void {
    // Setting a flag that will be read during sync before every item is processed...
    this.isPaused = true;
  }

  async startSync(): Promise<SyncResult<T>> {
    // Only allow a single sync to take place...
    if (this.isProcessing) {
      return {
        status: 'completed',
        success: [],
        failure: [],
      };
    }

    // Claim the slot before the first `await`, otherwise a concurrent
    // call could slip past the guard above...
    this.isProcessing = true;

    // Reset the flag if the sync has previously been paused...
    this.isPaused = false;

    const result: SyncResult<T> = {
      status: 'completed',
      success: [],
      failure: [],
    };

    try {
      // Single ongoing processing loop (to accommodate retries later)...
      while (true) {
        // Read the queue items...
        const items = await this.withLock(async () => {
          return await this.read();
        });

        // If the queue is empty, finish processing...
        if (items.length === 0) {
          break;
        }

        // If the sync has been paused, finish processing...
        if (this.isPaused) {
          result.status = 'paused';
          break;
        }

        // Keep track of items that are to be removed...
        const itemsToRemove = new Set<string>();

        for (const item of items) {
          // If the sync has been paused, finish processing...
          if (this.isPaused) {
            result.status = 'paused';
            break;
          }

          try {
            await this.process(item);
            // The item has been processed...
            // Add it to `success` array and mark to be removed...
            result.success.push(item);
            itemsToRemove.add(item.id);
          } catch (error) {
            // The item has failed...
            // Add it to `failure` array...
            result.failure.push(item);
          }
        }

        // All items have been processed...
        // Read the fresh items from the queue and save changes as one operation...
        await this.withLock(async () => {
          const items = await this.read();
          const newItems = items.filter((item) => !itemsToRemove.has(item.id));
          await this.write(newItems);
        });
      }
    } finally {
      this.isProcessing = false;
    }

    return result;
  }
}

This simple processing takes care of the happy path — when every item is processed exactly once, without retries (again, we’ll add them in a later post). There are a few areas worth highlighting in this implementation.

  • Items are read once at the start of each pass of the loop, and the inner loop operates on that in-memory snapshot. Items enqueued while a pass is in flight are not picked up immediately; they’ll be handled on the next pass. This is mainly for simplicity and determinism: you always know which items you’re attempting to process, and you’re not mixing in reads mid-loop.

  • Pausing only affects the next item after the queue is paused. This means the currently running item is always allowed to finish. Pausing stops future work, not in-flight work. Again, this is done for simplicity and predictability — interrupting an active mutation would require request cancellation and make error handling significantly more complex.

  • Changes are saved after reading from queue again in a single operation. This avoids partial updates. Either all successfully processed items are removed together, or none are. It also ensures the queue state reflects any changes that may have happened while processing was ongoing.

Summary

That’s it for today! This article walks through the core implementation details of a foreground queue for offline mutations:

  • Atomicity and correctness are enforced by wrapping every read–modify–write cycle in a mutex. This prevents race conditions when users enqueue items during an ongoing sync.

  • Deduplication and bounded size happen at enqueue time, using a stable identity key and optional maximum depth. This lets the queue reflect user intent rather than raw action history.

  • Explicit sync control keeps behaviour predictable. Only one sync runs at a time, items are processed sequentially, and pausing affects future work—not in-flight requests.

  • Deterministic processing comes from operating on a snapshot of the queue and committing changes in a single atomic write after processing completes.

This gives us a robust foundation that handles the happy path and leaves room for retries, backoff, and more advanced error handling later. Next, we’ll introduce state awareness and show how to integrate the queue with React, so the UI can respond to sync progress and failures without tight coupling.

If you enjoyed the article or have a question, feel free to reach out on Bluesky! 👋

Tomasz Gil - Software Engineer | Blog


I help product teams build quality software and lead engineering efforts. Currently working at OpenSpace as a Senior Software Engineer.