Offline Support in Web Apps: Foreground Queue for Offline Mutations — Part 1

Deciding how to handle user actions when the network disappears is one of the trickiest parts of offline support. Reads are usually manageable once you have persistence in place. Writes are where things get interesting.
This is the fourth post in my series about offline support in web applications. In the previous articles, I covered general approaches, app shell loading, and data persistence. In this post, I want to focus on a very practical piece of the puzzle: implementing a foreground queue for pending mutations.
In my experience, a well-designed foreground queue is the backbone of reliable offline writes. It keeps your app honest about what happened, what still needs syncing, and what went wrong.
Introducing: queue
Before going any further, it’s worth touching on what I mean by a queue in this context. Formally, a queue is a data structure used to hold items that need to be processed later, in a defined order. You add items to the end, and you process them one by one from the front.
Most queues follow the FIFO rule — first in, first out. The first item you enqueue is the first one you process. In practice, the queue I describe in this article behaves this way implicitly: items are processed sequentially in the order they were added, unless deduplication replaces an older entry with a newer one.
This model turns out to be a very natural fit for offline mutations:
- User actions become explicit units of work
- Each mutation is processed exactly once, in a predictable order
- Failures are isolated to individual items instead of breaking the whole flow
To be clear, this is simply an application-level queue, not a message broker or background job system.
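To make the FIFO model concrete, here is a minimal in-memory sketch of the data structure itself. This is just the abstract queue, not yet the persistent one built in this article:

```typescript
// A minimal FIFO queue: items leave in the order they arrived.
class SimpleQueue<T> {
  private items: T[] = [];

  enqueue(item: T): void {
    this.items.push(item); // add to the back
  }

  dequeue(): T | undefined {
    return this.items.shift(); // take from the front
  }

  get size(): number {
    return this.items.length;
  }
}

const q = new SimpleQueue<string>();
q.enqueue("first");
q.enqueue("second");
const head = q.dequeue(); // "first": processed in arrival order
```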
Why a foreground queue?
Once you allow users to mutate data offline, you need a place to put those mutations. You can’t just “fire and forget” API calls anymore.
Saving these pending mutations in a queue gives you:
- Explicit control over when syncing happens
- Clear visibility into pending and failed operations
- Predictable behaviour when connectivity changes
As mentioned in the first article, choosing between background sync and a foreground queue mainly depends on the user experience you want and the level of complexity you can handle. Compared to background sync, a foreground queue is less complex: it is simpler to reason about and easier to debug.
I won't go into the exact details of the queue implementation I was working on, but I'll focus on the key principles and the most important parts of the implementation. Let’s dive in!
What I wanted from the queue
Before writing any code, I wrote down a short list of requirements and characteristics I wanted my queue implementation to have. Here’s what I considered:
- Persistence: queued mutations must survive reloads, crashes, and restarts (the same principles for data persistence apply as we discussed in the previous article)
- Retryability: failures should be retried automatically, using exponential backoff to avoid hammering the network or backend
- Observability: every important operation should be trackable, so I can log, measure, and reason about what the queue is doing
- State awareness: the application should be able to subscribe to queue state changes and react in real time
- Deduplication: only the latest intent for a given entity should be kept
- Bounded size: the queue must have a maximum depth to prevent unbounded growth
These constraints strongly shaped the design, which you’ll see throughout this and the following posts.
Queue class
At its core, the queue is just a class. There’s no magic here — you create an instance, configure it, and interact with it through a small, explicit API.
Here's a simplified example. We'll add more details as we proceed.
```typescript
export interface QueueConfig<T> {
  name: string;
  storageKey: string;
  processor: (item: T) => Promise<void>;
  // Other configuration fields...
}

export class Queue<T> {
  private config: QueueConfig<T>;

  constructor(config: QueueConfig<T>) {
    this.config = config;
  }

  // Public and private methods...
}
```
I’ve found this approach works quite nicely because it keeps responsibilities clear. The queue owns persistence, ordering, and retries. The rest of the application just tells it what to do (such as passing the processor function to be called on each item during syncing).
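As an illustration of that division of responsibilities, here is a hypothetical wiring example. The TodoMutation type and the recording sendMutation function are invented for this sketch; a real processor would perform the actual API call:

```typescript
// Repeated from above so this snippet is self-contained;
// the queue's methods are omitted here.
interface QueueConfig<T> {
  name: string;
  storageKey: string;
  processor: (item: T) => Promise<void>;
}

class Queue<T> {
  constructor(public readonly config: QueueConfig<T>) {}
}

// Invented example domain type: a status change on a todo.
interface TodoMutation {
  todoId: string;
  status: "open" | "done";
}

// Stand-in for a real API call; here we just record what was sent.
const sent: TodoMutation[] = [];
async function sendMutation(item: TodoMutation): Promise<void> {
  sent.push(item);
}

const todoQueue = new Queue<TodoMutation>({
  name: "todo-mutations",
  storageKey: "todo-mutations-v1",
  processor: sendMutation,
});

// During syncing, the queue hands each item to the processor.
todoQueue.config.processor({ todoId: "42", status: "done" });
```

The key point is that the queue never knows how a mutation is performed; that knowledge lives entirely in the processor supplied by the client.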
Managing items
There are a handful of public methods the queue needs. The first set handles the lifecycle of items in the queue, reading from and writing to IndexedDB using the storageKey property.
- enqueue: adds a new item to the queue and persists it immediately
- dequeue: removes a specific item by identity, permanently discarding it
- clear: removes all items from the queue (useful for resets and tests)
Controlling syncing
The second group controls when and how items are processed:
- startSync: begins processing queued items sequentially
- pauseSync: stops processing after the current item finishes
Syncing is always a foreground action. Only one sync operation can run at a time, and additional calls are ignored while a sync is in progress. This keeps behaviour predictable and avoids subtle race conditions.
The trade-off is that the client needs to decide when syncing should occur. The queue implementation doesn't rely on the network state (whether the user is online or offline); this is managed externally by the queue users. In practice, startSync is called when the user goes online, and pauseSync is called when the user goes offline.
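The single-flight behaviour described above can be captured with a simple flag. This is a sketch of the guard only, not the full sync loop; the processNext callback is an assumption made for the example:

```typescript
// processNext handles one queued item and resolves to true
// while there are more items left to process.
class SyncController {
  private syncing = false;
  private paused = false;

  constructor(private processNext: () => Promise<boolean>) {}

  async startSync(): Promise<void> {
    if (this.syncing) return; // ignore calls while a sync is in progress
    this.syncing = true;
    this.paused = false;
    try {
      // Process items one at a time, in order, until drained or paused.
      while (!this.paused && (await this.processNext())) {
        // keep going
      }
    } finally {
      this.syncing = false;
    }
  }

  pauseSync(): void {
    // Takes effect after the current item finishes.
    this.paused = true;
  }
}
```

In practice the app would call startSync from an online event listener and pauseSync from an offline one.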
Queue item shape
To wrap up this section, let's examine the structure of a queue item. This is how entries are stored in IndexedDB. Each entry in the queue is a small, self-contained record that represents a single user action. I keep this structure intentionally simple, but every field is there for a specific reason.
```typescript
export interface QueueItem<T> {
  id: string;
  payload: T;
  enqueuedAt: number;
  attemptCount: number;
  lastAttemptAt: number | null;
}
```
Here’s how I think about each field:
- id: a unique identifier for the item. This is the foundation for deduplication we’ll touch on later — if a new item is enqueued with the same id, it replaces the old one. In practice, this usually maps to the domain entity being mutated (like changing a status on a todo twice while offline).
- payload: the actual data needed to perform the mutation. The type here is configured by the client, which makes the whole implementation generic. The queue doesn’t care what’s inside, only that it can hand it to the processor.
- enqueuedAt: a timestamp of when the item was added to the queue, useful for monitoring.
- attemptCount: tracks how many times processing has been attempted for the item. This drives retry behaviour and determines when an item should be considered permanently failed.
- lastAttemptAt: records when the item was last processed. Combined with attemptCount, this allows exponential backoff and makes retry timing explicit and observable.
This structure strikes a balance for me: it’s rich enough to support retries, backoff, and monitoring, while staying fairly compact and easy to manage (item statuses can be derived from existing fields).
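As a concrete illustration of deriving status and retry timing from these fields, here is a small sketch. The attempt threshold and base delay are arbitrary example values, not prescriptions from this series:

```typescript
// Same shape as above, repeated so this snippet is self-contained.
interface QueueItem<T> {
  id: string;
  payload: T;
  enqueuedAt: number;
  attemptCount: number;
  lastAttemptAt: number | null;
}

type ItemStatus = "pending" | "retrying" | "failed";

const MAX_ATTEMPTS = 5; // example threshold
const BASE_DELAY_MS = 1_000; // example base delay

// Status is derived from existing fields; no explicit status field is stored.
function statusOf(item: QueueItem<unknown>): ItemStatus {
  if (item.attemptCount === 0) return "pending";
  if (item.attemptCount >= MAX_ATTEMPTS) return "failed";
  return "retrying";
}

// Exponential backoff: wait BASE_DELAY_MS * 2^(attemptCount - 1)
// after the last attempt before trying again.
function nextAttemptAt(item: QueueItem<unknown>): number | null {
  if (item.lastAttemptAt === null) return null; // ready immediately
  return item.lastAttemptAt + BASE_DELAY_MS * 2 ** (item.attemptCount - 1);
}
```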
Summary
That’s it for this article! We introduced the idea of a foreground queue as a practical foundation for handling offline mutations in web apps.
We covered the core responsibilities of such a queue: persistence, retries, observability, state awareness, deduplication, and bounded growth.
We outlined a simple class-based design and the public API for managing items and controlling syncing.
We described the shape of a queue item and the fields that enable retries, backoff, and monitoring.
To sum it up in one sentence: in a foreground queue, treat every offline mutation as durable work that must be explicitly tracked.
In the next articles, I’ll dive into the most important implementation details and show how to actually satisfy the characteristics outlined at the beginning — from persistence mechanics and retry strategies to deduplication and state subscriptions in practice.
If you enjoyed the article or have a question, feel free to reach out on Bluesky! 👋
Further reading and references
Photo by freestocks on Unsplash
Queue (abstract data type) on Wikipedia