<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Tomasz Gil - Software Engineer | Blog]]></title><description><![CDATA[I help product teams build quality software and lead engineering efforts. Currently working at OpenSpace as a Senior Software Engineer.]]></description><link>https://blog.tomaszgil.me</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1620041017495/atOmUhbUE.png</url><title>Tomasz Gil - Software Engineer | Blog</title><link>https://blog.tomaszgil.me</link></image><generator>RSS for Node</generator><lastBuildDate>Sat, 18 Apr 2026 05:10:57 GMT</lastBuildDate><atom:link href="https://blog.tomaszgil.me/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Offline Support in Web Apps: Data Prefetching Strategies]]></title><description><![CDATA[Deciding how to approach offline support can be challenging, especially once you move beyond the basics. So far, I focused on persisting data that was already fetched as a result of user actions. This]]></description><link>https://blog.tomaszgil.me/offline-support-in-web-apps-data-prefetching-strategies</link><guid isPermaLink="true">https://blog.tomaszgil.me/offline-support-in-web-apps-data-prefetching-strategies</guid><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[offline]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Wed, 01 Apr 2026 08:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/608ed7efdc886b4318006917/adf48d3f-99da-4778-bcf9-c8f8bb5b265c.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Deciding how to approach offline support can be challenging, especially once you move beyond the basics. So far, I focused on persisting data that was already fetched as a result of user actions. This works well for many cases. But in my experience, it quickly breaks down when users expect the app to “just work” offline — without thinking ahead.</p>
<p>This is the ninth post in my series about offline support in web applications. Today, I want to cover prefetching strategies: how to proactively prepare data for offline use, and more importantly, how to do it without hurting the user experience.</p>
<h2>Why Prefetching Matters</h2>
<p>If you only persist data that the user explicitly fetched, your offline support is technically correct — but practically limited.</p>
<p>The core issue is: <strong>users don’t know what will be available offline</strong>.</p>
<p>Imagine a todo app where a user opens a board while online, so the app fetches and stores its data. Later, they go offline and try to open an item from that board — something that, from their perspective, should just work. But since that specific item was never fetched before, it isn’t available: the board is there, but the item isn’t.</p>
<p>Users don’t typically plan their navigation around connectivity, so in our example the app becomes unpredictable when offline. This leads to an important UX problem: a lack of trust. The user doesn’t know what will work. This inconsistency is probably worse than a full failure. At least with a full failure, the user understands the limitation.</p>
<p>Prefetching is a way to fix that. Instead of relying purely on user-driven fetching, we proactively prepare data ahead of time. Offline support becomes much more useful when the system, not the user, does the planning.</p>
<h2>Step 1 — Decide When Prefetching Happens</h2>
<p>First, we need to choose how we want to handle data prefetching for users.</p>
<h3><strong>Automatic Prefetching</strong></h3>
<p>The app automatically gets data in the background without the user doing anything. At first, this sounds ideal — and in many cases, it is. But there’s a catch. If you’re not careful, prefetching competes for resources in a critical moment: <em>the first interaction</em>.</p>
<p>Imagine this:</p>
<ol>
<li><p>The user opens the app.</p>
</li>
<li><p>The UI renders.</p>
</li>
<li><p>The user starts interacting with the application.</p>
</li>
<li><p>At the same time, 20 background requests start firing.</p>
</li>
</ol>
<p>The user’s network requests are effectively competing for resources with prefetching requests. Even if everything is <em>working</em>, the app will feel sluggish. Interactions become less responsive. This doesn’t feel like a great first experience.</p>
<p>Automatic prefetching optimises for readiness, but often at the cost of responsiveness. In my experience, naive automatic prefetching often does more harm than good.</p>
<h3>User-controlled Prefetching</h3>
<p>The alternative is to give control to the user. This could mean a “Prepare for offline” button, a settings toggle or a specific flow (e.g. downloading a project). This avoids the performance problem entirely. But it introduces another one: <em>discoverability</em>.</p>
<p>Users need to:</p>
<ol>
<li><p>Know the feature exists.</p>
</li>
<li><p>Understand when to use it.</p>
</li>
<li><p>Remember to trigger it.</p>
</li>
</ol>
<p>User-controlled prefetching optimises for performance, but often at the cost of adoption. In practice, this often means the feature is underused — or not used at all.</p>
<h3>Hybrid Approach</h3>
<p>In most cases, I find that a hybrid approach works best: some level of automatic prefetching combined with user-triggered actions for larger datasets.</p>
<p>This gives you a decent default behavior, with an escape hatch for power users, leading to more predictable performance characteristics. The key is to <strong>keep automatic prefetching conservative</strong>.</p>
<h2>Step 2 — Introduce Prefetching in Your Data Layer</h2>
<p>Once you’ve decided when to prefetch, the next step is making it possible in your data layer.</p>
<p>Here's an example using React Query, which makes this quite straightforward. Typically, this is how you'd fetch a single todo.</p>
<pre><code class="language-typescript">const fetchTodo = async (id: string) =&gt; {
  return request.get(`/todos/${id}`);
};

export const useTodo = (id: string) =&gt; {
  return useQuery({
    queryKey: queryKeys.todo(id),
    queryFn: () =&gt; fetchTodo(id),
  });
}; 
</code></pre>
<p>Here's what will allow us to prefetch the same data.</p>
<pre><code class="language-typescript">const fetchTodo = async (id: string) =&gt; {
  return request.get(`/todos/${id}`);
};

const todoQueryOptions = (id: string) =&gt; {
  return queryOptions({
    queryKey: queryKeys.todo(id),
    queryFn: () =&gt; fetchTodo(id),
  });
};

export const prefetchTodo = (
  queryClient: QueryClient,
  id: string,
) =&gt; queryClient.prefetchQuery(todoQueryOptions(id));

export const useTodo = (id: string) =&gt; {
  return useQuery(todoQueryOptions(id));
};
</code></pre>
<p>You now have a reusable query definition, a function to fetch data without rendering a component, and a clear separation between reading and preparing data.</p>
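<p>For example, once a board loads, we could prefetch each of its items so they're available offline later. Here's a minimal sketch — the <code>BoardPrefetcher</code> component and its <code>todoIds</code> prop are hypothetical names for a list of the board's item IDs:</p>
<pre><code class="language-typescript">import { useEffect } from 'react';
import { useQueryClient } from '@tanstack/react-query';

// Prefetches every item of a board as soon as the board renders.
export function BoardPrefetcher({ todoIds }: { todoIds: string[] }) {
  const queryClient = useQueryClient();

  useEffect(() =&gt; {
    todoIds.forEach((id) =&gt; prefetchTodo(queryClient, id));
  }, [queryClient, todoIds]);

  return null;
}
</code></pre>
<p>Note that this naive version fires every request at once — exactly the problem Step 3 addresses.</p>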
<h2>Step 3 — Control the Impact of Prefetching</h2>
<p>This is where most nuanced work happens. Prefetching is relatively easy to add — but hard to do well. If you take one thing from this post, it’s this:</p>
<blockquote>
<p>Prefetching should never compete with the user.</p>
</blockquote>
<p>Here are a few practical rules I try to follow.</p>
<h3>Limit Concurrency</h3>
<p>Firing dozens of requests at once is rarely a good idea. Instead, I recommend introducing a simple concurrency limit. Even a small cap (1–2 concurrent requests, possibly with a delay in between) can significantly improve perceived performance.</p>
<p>Here’s a minimal example:</p>
<pre><code class="language-typescript">const MAX_CONCURRENT_PREFETCH = 3;

let active = 0;
const queue: (() =&gt; Promise&lt;unknown&gt;)[] = [];

const runNext = () =&gt; {
  if (active &gt;= MAX_CONCURRENT_PREFETCH || queue.length === 0)
    return;

  const task = queue.shift();
  if (!task) return;

  active++;
  
  task().finally(() =&gt; {
    active--;
    runNext();
  });
};

export const schedulePrefetch = (task: () =&gt; Promise&lt;unknown&gt;) =&gt; {
  queue.push(task);
  runNext();
};
</code></pre>
<p>You can then wrap your prefetch calls:</p>
<pre><code class="language-typescript">schedulePrefetch(() =&gt;
  queryClient.prefetchQuery(todoQueryOptions(id))
);
</code></pre>
<p>This is intentionally simple, but it already gives you much better control.</p>
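<p>The “delay in between” mentioned earlier takes one small change: instead of starting the next task immediately after one finishes, wait briefly first. Here’s the relevant part of <code>runNext</code> adjusted — the gap value is an assumption to tune per app:</p>
<pre><code class="language-typescript">const PREFETCH_GAP_MS = 250; // assumption: tune per app

task().finally(() =&gt; {
  active--;
  // Give user-initiated requests a window before the next prefetch.
  setTimeout(runNext, PREFETCH_GAP_MS);
});
</code></pre>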
<h3>Prefetch When the App Is Idle</h3>
<p>One of the simplest and most effective strategies is to only prefetch when the app isn’t busy. The browser gives you a useful primitive for this: <code>requestIdleCallback</code>.</p>
<pre><code class="language-typescript">const scheduleIdlePrefetch = (task: () =&gt; void) =&gt; {
  if ('requestIdleCallback' in window) {
    requestIdleCallback(() =&gt; task());
  } else {
    // Fallback
    setTimeout(() =&gt; task(), 1000);
  }
};
</code></pre>
<p>Usage:</p>
<pre><code class="language-typescript">scheduleIdlePrefetch(() =&gt;
  schedulePrefetch(() =&gt;
    queryClient.prefetchQuery(todoQueryOptions(id))
  )
);
</code></pre>
<p>This ensures that prefetching happens when the main thread is less busy, which helps preserve responsiveness.</p>
<h3>Respect Network Conditions</h3>
<p>Not all connections are equal, and prefetching blindly can be wasteful — or even harmful. You can use the <code>navigator.connection</code> API to make basic decisions:</p>
<pre><code class="language-typescript">const shouldPrefetch = () =&gt; {
  const connection =
    (navigator as any).connection ||
    (navigator as any).mozConnection ||
    (navigator as any).webkitConnection;

  if (!connection) return true;

  const slowTypes = ['slow-2g', '2g'];
  if (slowTypes.includes(connection.effectiveType)) {
    return false;
  }

  if (connection.saveData) {
    return false;
  }

  return true;
};
</code></pre>
<p>Usage:</p>
<pre><code class="language-typescript">if (shouldPrefetch()) {
  scheduleIdlePrefetch(() =&gt;
    schedulePrefetch(() =&gt;
      queryClient.prefetchQuery(todoQueryOptions(id))
    )
  );
}
</code></pre>
<p>This doesn’t need to be perfect — even basic checks can make a noticeable difference.</p>
<h3>Be Selective About What You Prefetch</h3>
<p>Not all data is worth prefetching. Focus on high-probability user paths, small to medium payloads, and data required for core flows. You can skip rarely accessed data and large datasets.</p>
<p>A good rule of thumb:</p>
<blockquote>
<p>Prefetch what the user is likely to need next — not everything they might need.</p>
</blockquote>
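<p>In code, this selectivity can be as simple as capping what you schedule. A sketch building on the helpers above — the limit is an assumption to tune per app:</p>
<pre><code class="language-typescript">const PREFETCH_LIMIT = 10; // assumption: tune per app

// Prefetch only the first items of the board the user has open,
// not every board in the account.
const prefetchActiveBoard = (
  queryClient: QueryClient,
  visibleTodoIds: string[],
) =&gt; {
  visibleTodoIds.slice(0, PREFETCH_LIMIT).forEach((id) =&gt;
    schedulePrefetch(() =&gt;
      queryClient.prefetchQuery(todoQueryOptions(id))
    )
  );
};
</code></pre>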
<h3>Prioritise User-Initiated Requests</h3>
<p>This is the last point I'll bring up. At the beginning of this section, I mentioned that user actions should always win. That means not saturating the network, not blocking important requests, and being ready to pause or deprioritise prefetching.</p>
<p>In practice, this is harder than it sounds. Doing this properly usually requires a centralised request handling mechanism, awareness of in-flight requests and the ability to cancel or deprioritise prefetch requests.</p>
<p>I won’t go deep into this here, but it’s worth calling out — this is where simple setups often hit their limits. If you reach this point, it’s usually a sign that prefetching should be treated as a first-class concern in your data layer — not just an add-on.</p>
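<p>As a rough illustration only, even the simple queue from earlier could be taught to step aside — a sketch, not a complete solution:</p>
<pre><code class="language-typescript">let paused = false;

// User-initiated code flips this while its own requests are in flight.
export const pausePrefetch = () =&gt; {
  paused = true;
};

export const resumePrefetch = () =&gt; {
  paused = false;
  runNext();
};

// runNext would additionally check the flag:
// if (paused || active &gt;= MAX_CONCURRENT_PREFETCH || queue.length === 0)
//   return;
</code></pre>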
<h2>Takeaways</h2>
<p>Prefetching is one of those things that sounds simple but has a lot of nuance.</p>
<p>Done well, it makes your offline experience feel seamless and reliable. Done poorly, it slows down your app for everyone and wastes resources.</p>
<p>To summarise:</p>
<ul>
<li><p>Persisting fetched data is not enough for great offline UX</p>
</li>
<li><p>Prefetching improves predictability and trust</p>
</li>
<li><p>A hybrid approach usually works best</p>
</li>
<li><p>Control when and how prefetching happens — carefully</p>
</li>
</ul>
<p>In the next part, I’ll focus on a more concrete (and often overlooked) piece of the puzzle — how to reliably serve images when there’s no network at all.</p>
<p>If you enjoyed the article or have a question, feel free to reach out on <a href="https://bsky.app/profile/tomaszgil.me">Bluesky</a>! 👋</p>
<h3>Further Reading and References</h3>
<ul>
<li><p>MDN — <a href="https://developer.mozilla.org/en-US/docs/Web/API/Window/requestIdleCallback"><code>requestIdleCallback</code></a></p>
</li>
<li><p>Photo by <a href="https://unsplash.com/@aenniways?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Anna Maria Gnadl</a> on <a href="https://unsplash.com/photos/a-single-frosted-rosebud-on-a-branch-JFDjpXDL80Q?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[The Agent-First Workflow: Building Software with AI]]></title><description><![CDATA[Software engineering is changing faster than most of us can comfortably adapt to. The capabilities of LLM models change almost monthly. New tools appear every week. Advice that felt solid three months]]></description><link>https://blog.tomaszgil.me/the-agent-first-workflow-building-software-with-ai</link><guid isPermaLink="true">https://blog.tomaszgil.me/the-agent-first-workflow-building-software-with-ai</guid><category><![CDATA[AI]]></category><category><![CDATA[ai agents]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Mon, 23 Mar 2026 10:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/608ed7efdc886b4318006917/8860a3ad-b79b-4fd6-830c-d0213870b097.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Software engineering is changing faster than most of us can comfortably adapt to. The capabilities of LLM models change almost monthly. New tools appear every week. Advice that felt solid three months ago already feels outdated.</p>
<p>This post is an attempt to capture <strong>how I currently work with AI agents</strong> when building software. Think of it as a snapshot in time rather than a definitive guide.</p>
<p>About six months ago I <a href="https://blog.tomaszgil.me/enhancing-software-engineering-workflow-with-cursor-background-agents">wrote</a> that I was getting roughly 50% of my working code from AI agents. A lot has changed since then — and so has my approach.</p>
<h2>An AI Agent is Just Another Engineer</h2>
<p>As AI tools have advanced, my workflow has evolved accordingly. Now, I believe that in most cases, <strong>there is no legitimate reason to write code by hand anymore</strong>.</p>
<p>I will only type code manually when I’m confident it’s faster than prompting an agent — which usually means a few lines at most. If the change requires thinking about structure, edge cases, or multiple files, I involve an agent. In practice, this means 95–100% of the code I deliver is AI-generated. My role shifts more toward guiding the work than writing the implementation.</p>
<p>More importantly, agents are no longer just a code generation tool for me. They are part of <strong>the entire development workflow</strong>. Instead of thinking about them as autocomplete on steroids, I treat them more like collaborators that help with:</p>
<ol>
<li><p>Exploring possible approaches.</p>
</li>
<li><p>Structuring work.</p>
</li>
<li><p>Implementing changes.</p>
</li>
<li><p>Reviewing code.</p>
</li>
<li><p>Refining the final solution.</p>
</li>
</ol>
<p>What helped me the most was adopting a simple mental model:</p>
<blockquote>
<p>Work with AI as if you're guiding another engineer through a task.</p>
</blockquote>
<p>In that situation, you would typically explain the problem clearly, share the relevant context, discuss constraints, agree on a rough plan, review the implementation and refine the result together.</p>
<p>Working with AI agents turns out to be surprisingly (or perhaps unsurprisingly) similar. The quality of the result is largely determined by how well the work is prepared and guided.</p>
<p>Once I started thinking about agents as collaborators rather than tools, my workflow naturally evolved around that idea. What emerged is a simple process that I now follow for most non-trivial changes.</p>
<h2>The Workflow</h2>
<p>The process I currently follow looks roughly like this.</p>
<h3>Step 1 — Describe the Problem Clearly</h3>
<p>Everything starts with a clear problem description.</p>
<p>This step sounds trivial, but it's where many failures start. You have to know what the problem is. Even better if you roughly know what the solution should look like. If the problem description is vague, the agent will produce vague solutions.</p>
<p>I try to ensure the description covers four elements:</p>
<ol>
<li><p><strong>The exact problem</strong>.</p>
</li>
<li><p><strong>Current behavior</strong>.</p>
</li>
<li><p><strong>Desired behavior</strong>.</p>
</li>
<li><p><strong>Constraints</strong>.</p>
</li>
</ol>
<p>Here's a simplified example of this pattern.</p>
<pre><code class="language-plaintext">Problem:
Offline mutations sometimes disappear when the page reloads.

Current behavior:
The foreground queue stores mutations in memory.

Desired behavior:
Mutations should persist across reloads using IndexedDB.

Constraints:
- Must not block UI rendering
- Must support retry logic
- Should work with React Query mutations
</code></pre>
<p>This does two things — it forces <strong>clarity of thinking</strong> and it gives the agent <strong>a precise target</strong>. In my experience, spending an extra minute here saves several iterations later.</p>
<h3>Step 2 — Prepare the Context</h3>
<p>Before touching the codebase, I usually expand the context around the problem. This step is essentially <strong>research with the help of agents</strong>, either outside or within the codebase, depending on how much I want the agent to recognize existing patterns.</p>
<p>Typical questions I ask include:</p>
<ul>
<li><p>What are common solutions to this problem?</p>
</li>
<li><p>What patterns are used in similar systems?</p>
</li>
<li><p>What trade-offs should I consider?</p>
</li>
<li><p>What implementation approaches might work here?</p>
</li>
</ul>
<p>At this stage I’m not asking the agent to plan or write code yet. Instead, I’m gathering ideas, architectural options, potential pitfalls and possible implementation patterns.</p>
<p>The goal is to <strong>enter the codebase with a rough map in mind</strong>.</p>
<h3>Step 3 — Plan the Implementation</h3>
<p>Once the problem is clear and the context is prepared, I ask the agent to help produce an implementation plan. I do this for almost every task and I highly recommend it, even if it feels like overkill. In my experience, it drastically reduces the number of iterations needed after the initial implementation attempt.</p>
<p>When creating a plan, the most important rule here is:</p>
<blockquote>
<p>Encourage the agent to ask questions.</p>
</blockquote>
<p>You probably haven't thought of everything up front, and you certainly didn't specify everything in your prompt. Have the agent discuss edge cases, simplify the approach, or describe assumptions. Keep the discussion going as the agent reads relevant files, identifies affected areas of the codebase, proposes structural changes, and outlines the implementation steps.</p>
<p>In my experience, a good plan has two characteristics:</p>
<ol>
<li><p><strong>Concrete enough to visualise the solution</strong>. You should be able to roughly imagine how the system will look after the change.</p>
</li>
<li><p><strong>Flexible enough to benefit from delegation</strong>. Agents are most useful when they can still adjust implementation details while working.</p>
</li>
</ol>
<p>The result should feel like a clear but adaptable roadmap.</p>
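<p>For the offline mutation problem from Step 1, a plan meeting these criteria might look something like this (a hypothetical, abbreviated example):</p>
<pre><code class="language-plaintext">Plan: Persist offline mutations in IndexedDB

1. Add a small IndexedDB wrapper for reading and writing the queue.
2. Load persisted mutations on app start, before the first sync.
3. Write to storage on every enqueue and dequeue.
4. Keep retry metadata (attempt count, last attempt) with each item.

Open questions:
- Should the stored queue be versioned for future schema changes?
- What should happen if IndexedDB is unavailable?
</code></pre>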
<h3>Step 4 — Implementation and Iteration</h3>
<p>Now the agent begins implementing the plan, and what follows is highly iterative.</p>
<p>Even with a good plan, I expect several things to happen — imperfect structure, missed edge cases, minor architectural drift or suboptimal abstractions.</p>
<p>That’s normal. Instead of trying to get everything perfect in one prompt, I treat this phase as <strong>guided iteration</strong>. The loop usually looks like this — the agent implements a change, I briefly review the diff and check the behavior and request further refinements.</p>
<p>During this phase I typically focus on three things in this order.</p>
<ol>
<li><p><strong>Behavior</strong>. Does the feature actually work how it should?</p>
</li>
<li><p><strong>Structure</strong>. Is the code organised in a way that makes sense? Is it going to be easy to maintain and change over time?</p>
</li>
<li><p><strong>Readability</strong>. Would another engineer (or agent) understand this code quickly?</p>
</li>
</ol>
<p>I usually continue this process until I'm happy with the functionality and the code structure.</p>
<h3>Step 5 — Full Review Pass</h3>
<p>Once the feature works, I always run a final review pass. This step is important because agent-generated code can accumulate small issues during iteration.</p>
<p>There are multiple layers to it. First, I read through the code myself. Second, I usually ask the agent to do the same for me. Finally, I add additional tools on top, like CodeRabbit or Cursor Bugbot, to cover the change holistically, double check alignment with project rules, and find subtle bugs.</p>
<h3>Step 6 — Ship 🚀</h3>
<p>At this point the feature should be ready to go.</p>
<hr />
<p>The important part is that the implementation has already gone through multiple structured passes — planning, iteration and review. Not every change requires every step in this workflow, but in my experience, most meaningful changes benefit from the full process.</p>
<p>The key idea is not the exact sequence — it's the <strong>process discipline</strong>.</p>
<h2>Conclusion</h2>
<p>To sum it up in one sentence:</p>
<blockquote>
<p>Agentic software engineering is mostly about <strong>structured problem decomposition</strong> and <strong>process discipline</strong>.</p>
</blockquote>
<p>The better the process, the more effective the agents become.</p>
<p>This is just a snapshot in time. Six months ago my workflow looked different, and six months from now it probably will again.</p>
<p>For now, most of my work still involves a single agent at a time. One direction I'm already exploring is parallel agent workflows — orchestrating multiple agents working on different parts of a problem simultaneously.</p>
<p>Either way, it’s a fascinating time to be building software.</p>
<p>If you enjoyed the article or have a question, feel free to reach out on <a href="https://bsky.app/profile/tomaszgil.me">Bluesky</a>! 👋</p>
<h3>Further Reading and References</h3>
<ul>
<li><p><a href="https://ddewhurst.com/blog/structured-workflow-for-building-features-with-ai-coding-agents/">A structured workflow for building features with AI coding agents</a> by <a href="https://ddewhurst.com/">Daniel Dewhurst</a></p>
</li>
<li><p><a href="https://harper.blog/2025/04/17/an-llm-codegen-heros-journey/">An LLM Codegen Hero's Journey</a> by <a href="https://harperreed.com/">Harper Reed</a></p>
</li>
<li><p>Photo by <a href="https://unsplash.com/@tine999?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Tine Ivanič</a> on <a href="https://unsplash.com/photos/spiral-concrete-staircase-u2d0BPZFXOY?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Offline Support in Web Apps: Foreground Queue for Offline Mutations — Part 5]]></title><description><![CDATA[This is the eighth post in my series about offline support in web applications and the fifth focused on the foreground queue. In the previous article, we covered error handling and retry strategies. A]]></description><link>https://blog.tomaszgil.me/offline-support-in-web-apps-foreground-queue-for-offline-mutations-part-5</link><guid isPermaLink="true">https://blog.tomaszgil.me/offline-support-in-web-apps-foreground-queue-for-offline-mutations-part-5</guid><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[offline]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Tue, 17 Mar 2026 09:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/608ed7efdc886b4318006917/ca4b4318-8685-4584-86d7-2e3952ab478b.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is the eighth post in my series about offline support in web applications and the fifth focused on the foreground queue. In the previous article, we covered error handling and retry strategies. At this point, the queue implementation itself is complete.</p>
<p>In this post, I want to show how the queue is actually used inside a React application — how we safely store offline mutations and sync them when connectivity returns. In other words, we’re moving from <strong>queue implementation</strong> to <strong>application integration</strong>.</p>
<h2>Accessing the Queue from React</h2>
<p>In the third article in this series, we integrated the queue with React using a small <strong>context factory</strong>. The goal was simple: ensure there is exactly <strong>one queue instance per persisted queue</strong>, while still making it easily accessible throughout the component tree.</p>
<p>The factory creates three things — a <code>QueueProvider</code>, a <code>useQueue</code> hook and a configured queue instance.</p>
<p>Here’s the configuration snippet from that article as a quick reminder:</p>
<pre><code class="language-typescript">const { QueueProvider, useQueue } = createQueueContext(() =&gt; ({
  name: 'todo-mutations',
  storageKey: 'todo-mutation-queue-v1',
  identityKey: (item) =&gt; item.id,
  processor: async (item) =&gt; api.post('/todos', item),
}));
</code></pre>
<p>The important part is that the <strong>queue instance is created once and shared</strong>. Every component interacting with the queue talks to the same object, which prevents race conditions and keeps the persisted state consistent.</p>
<p>In practice, integration is straightforward. We wrap the application (or a feature subtree) with the provider:</p>
<pre><code class="language-typescript">export default function App() {
  return (
    &lt;QueueProvider&gt;
      &lt;AppContent /&gt;
    &lt;/QueueProvider&gt;
  )
}
</code></pre>
<p>From this point on, any component can call <code>useQueue()</code> and interact with the queue directly.</p>
<p>With the queue accessible throughout the application, we can now focus on the part that actually matters: <strong>using it to capture offline mutations</strong>. Let’s start with the most common scenario: executing mutations conditionally depending on the network state.</p>
<h2>Capturing Offline Mutations</h2>
<p>We need the queue for our mutation logic. In this example, we’re updating the status of a todo item. The key idea is simple:</p>
<ul>
<li><p><strong>Offline:</strong> store the intent in the queue.</p>
</li>
<li><p><strong>Online:</strong> execute the request immediately.</p>
</li>
</ul>
<p>In other words, mutations should behave the same from the UI perspective, regardless of network state. The mutation itself decides whether to execute immediately or store the action in the queue.</p>
<pre><code class="language-typescript">export const useTodoStatusUpdate = (id: string) =&gt; {
  const queue = useQueue();
  const isOnline = useNetworkStatus();

  return useMutation({
    networkMode: 'always',
    mutationFn: async (status: string) =&gt; {
      if (!isOnline) {
        await queue.enqueue({ id, status });
        return { queued: true };
      }

      await api.post(`/todos/${id}`, { status });
      return { queued: false };
    },
  });
};
</code></pre>
<p>When the user is offline, we store the user’s intent in the queue. This means the action is not lost. The application can safely retry it later. When the user is online, we call the API immediately and complete the mutation normally.</p>
<p>This approach keeps the mutation API consistent while transparently supporting offline scenarios. With this pattern, <strong>the UI doesn’t need to know about the queue directly</strong>. It simply calls the mutation, and the mutation decides whether to enqueue or execute.</p>
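<p>Here's a hypothetical usage example — the component stays completely network-agnostic:</p>
<pre><code class="language-typescript">export function TodoStatusButton({ id }: { id: string }) {
  const { mutate, isPending } = useTodoStatusUpdate(id);

  return (
    &lt;Button disabled={isPending} onClick={() =&gt; mutate('done')}&gt;
      Mark as done
    &lt;/Button&gt;
  );
}
</code></pre>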
<h3>Interlude: Knowing if the User Is Online</h3>
<p>The code above assumes we can check whether the user is online. Detecting this sounds trivial, but it turns out to be surprisingly nuanced.</p>
<p>Browsers expose a property called <code>navigator.onLine</code>. Based on the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Navigator/onLine">MDN documentation</a>:</p>
<blockquote>
<p>The <code>onLine</code> property of the <code>Navigator</code> interface returns whether the device is connected to a network.</p>
</blockquote>
<pre><code class="language-typescript">if (!navigator.onLine) {
  // Definitely no network
}
</code></pre>
<p>At first glance this looks like exactly what we need. Unfortunately, it's not always reliable enough for application logic.</p>
<p>The property returns a boolean: <code>true</code> when the browser detects the device has some network connection, <code>false</code> when the browser detects there is no connection. Browsers typically determine this using OS-level heuristics such as whether a network interface is active or whether the device is connected to Wi-Fi or Ethernet.</p>
<p>There are several practical issues with this approach.</p>
<ol>
<li><p><strong>It does not guarantee Internet access</strong>. A device might be connected to a Wi-Fi network that has no Internet connectivity. In that situation, <code>navigator.onLine</code> still returns <code>true</code>.</p>
</li>
<li><p><strong>Browser and OS differences</strong>. Some browsers treat <strong>any local network connection</strong> as "online", even if external connectivity is unavailable. Others behave differently.</p>
</li>
<li><p><strong>Delays and false positives</strong>. The <code>online</code> / <code>offline</code> events can lag behind real connectivity changes. Short network interruptions may also go undetected.</p>
</li>
</ol>
<p>In practice, your best bet is to use a two-step approach — use <code>navigator.onLine</code> as a quick heuristic and verify connectivity with a <em>real network request</em>.</p>
<pre><code class="language-typescript">fetch('/health-check', { method: 'HEAD' })
  .then(() =&gt; {
    // Internet reachable
  })
  .catch(() =&gt; {
    // Still offline or server unreachable
  });
</code></pre>
<p>This ensures that the browser is connected to a network, that the network has access to the internet, and that the app can actually reach your backend. In our example hook (<code>useNetworkStatus</code>), this logic can be encapsulated so the rest of the application doesn't need to worry about the details.</p>
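<p>Here's a minimal sketch of what that hook could look like, assuming the <code>/health-check</code> endpoint from above. A real implementation might also poll periodically or debounce flapping connections.</p>
<pre><code class="language-typescript">import { useEffect, useState } from 'react';

export const useNetworkStatus = () =&gt; {
  // Start from the browser's quick heuristic...
  const [isOnline, setIsOnline] = useState(navigator.onLine);

  useEffect(() =&gt; {
    // ...and verify with a real network request.
    const verify = () =&gt;
      fetch('/health-check', { method: 'HEAD' })
        .then(() =&gt; setIsOnline(true))
        .catch(() =&gt; setIsOnline(false));

    const handleOnline = () =&gt; verify();
    const handleOffline = () =&gt; setIsOnline(false);

    window.addEventListener('online', handleOnline);
    window.addEventListener('offline', handleOffline);

    return () =&gt; {
      window.removeEventListener('online', handleOnline);
      window.removeEventListener('offline', handleOffline);
    };
  }, []);

  return isOnline;
};
</code></pre>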
<h2>Showing Pending Work</h2>
<p>When users work offline, actions are queued but not immediately executed. This means the UI should communicate that <strong>there is pending work waiting to sync</strong>. Our approach is to subscribe to the queue state and check whether the current item appears in the queue.</p>
<p>Here is a simplified example component.</p>
<pre><code class="language-typescript">type TodoCardProps = {
  id: string;
  title: string;
  description?: string;
}

type TodoQueueState = {
  isPending: boolean;
  isProcessing: boolean;
}

export function TodoCard({ id, title, description }: TodoCardProps) {
  const queue = useQueue();
  const [state, setState] = useState&lt;TodoQueueState&gt;({
    isPending: false,
    isProcessing: false,
  });

  useEffect(() =&gt; {
    const unsubscribe = queue.subscribe((s) =&gt; {
      setState({
        isProcessing: s.isProcessing,
        isPending: s.items.some((item) =&gt; item.id === id),
      });
    });

    return unsubscribe;
  }, [queue, id]);

  return (
    &lt;Card&gt;
      &lt;CardTitle&gt;
        {title}
        {state.isPending &amp;&amp; !state.isProcessing &amp;&amp; (
          &lt;Badge&gt;
            Pending sync
          &lt;/Badge&gt;
        )}
        {state.isProcessing &amp;&amp; (
          &lt;Spinner size="sm" /&gt;
        )}
      &lt;/CardTitle&gt;
      &lt;CardDescription&gt;{description}&lt;/CardDescription&gt;
    &lt;/Card&gt;
  );
}
</code></pre>
<p>When the queue state changes, we update the component state to reflect whether the queue is processing and whether this item is still waiting in the queue.</p>
<p>From there, the UI can show helpful indicators such as a “pending sync” badge and a spinner while processing. The exact UI depends on the application, but the important part is that <strong>users can see that their action was captured</strong>, even if it hasn’t reached the server yet.</p>
<p>In my experience, this greatly improves user trust when working offline.</p>
<h2>Running the Sync</h2>
<p>Finally, we need to decide when the queued work should be processed.</p>
<p>In this series, I intentionally keep the sync model simple: with the foreground queue model, the queue runs only <strong>when the user transitions from offline to online</strong>. This keeps the implementation predictable and avoids overlapping sync processes.</p>
<p>Here's a simplified component example that handles this automatically.</p>
<pre><code class="language-typescript">export function QueueAutoSync() {
  const queue = useQueue();
  const isOnline = useNetworkStatus();

  useEffect(() =&gt; {
    // Run when status changes to online...
    if (isOnline) {
      queue.sync()
    }
  }, [isOnline, queue]);

  return null;
}
</code></pre>
<p>This component doesn't render anything. Its only responsibility is to observe the network status and trigger a sync when connectivity is restored.</p>
<p>One small caveat here: depending on your application, you may also want additional triggers such as manual “Sync now” actions, sync pausing when the user goes back offline, periodic background sync or syncing on app focus. Starting with “sync when connection is restored” is often a good default.</p>
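<p>A manual trigger, for instance, can be a one-liner on top of the same hooks (a hypothetical sketch):</p>
<pre><code class="language-typescript">export function SyncNowButton() {
  const queue = useQueue();
  const isOnline = useNetworkStatus();

  return (
    &lt;Button disabled={!isOnline} onClick={() =&gt; queue.sync()}&gt;
      Sync now
    &lt;/Button&gt;
  );
}
</code></pre>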
<h2>Conclusion</h2>
<p>That's it for today! In this post, we moved from the queue implementation to <strong>how it integrates with a React application</strong>.</p>
<p>Here's what we've covered:</p>
<ul>
<li><p>expose and consume the queue through React Context</p>
</li>
<li><p>enqueue mutations when offline and execute them immediately when online</p>
</li>
<li><p>show pending work in the UI using state listeners</p>
</li>
<li><p>run a sync when connectivity returns</p>
</li>
</ul>
<p>The queue captures <strong>user intent immediately</strong> and executes it <strong>when the network allows</strong>, making offline interactions reliable without complicating the UI.</p>
<p>Now that we know how mutations behave offline, the next step is improving the <strong>read side</strong> of the application — starting with data prefetching strategies.</p>
<p>If you enjoyed the article or have a question, feel free to reach out on <a href="https://bsky.app/profile/tomaszgil.me"><strong>Bluesky</strong></a>! 👋</p>
<h3>Further Reading and References</h3>
<ul>
<li>Photo by <a href="https://unsplash.com/@johnprice?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">John Price</a> on <a href="https://unsplash.com/photos/shallow-focus-photography-of-trees-filled-of-snow-UdgvzNom0Xs?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Offline Support in Web Apps: Foreground Queue for Offline Mutations — Part 4]]></title><description><![CDATA[This is the seventh post in the series about offline support in web applications and the fourth one focused specifically on the foreground queue. In the previous article, we exposed queue state, how UI components subscribe to it, and how to wire ever...]]></description><link>https://blog.tomaszgil.me/offline-support-in-web-apps-foreground-queue-for-offline-mutations-part-4</link><guid isPermaLink="true">https://blog.tomaszgil.me/offline-support-in-web-apps-foreground-queue-for-offline-mutations-part-4</guid><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[offline]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Thu, 12 Feb 2026 07:34:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770209125629/2663d772-289f-4c16-a5f2-1077618935c1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is the seventh post in the series about offline support in web applications and the fourth one focused specifically on the foreground queue. In the previous article, we exposed queue state, how UI components subscribe to it, and how to wire everything into React without turning the queue into yet another state store.</p>
<p>So far, the queue processes each mutation exactly once. If it succeeds, we’re done. If it fails, we mark it as failed and move on. That’s a reasonable baseline, but in real-world offline scenarios it breaks down quickly.</p>
<p>In this post, I want to focus on error handling and retry strategies. In particular, how to handle transient failures automatically, without pushing that complexity or frustration onto the user.</p>
<h2 id="heading-why-a-single-attempt-isnt-enough">Why a single attempt isn’t enough</h2>
<p>As a reminder: the queue currently attempts each item exactly once, marking it as failed on the first error. That’s a reasonable starting point, but in practice it’s not enough.</p>
<p>The main issue is that failures during sync can be transient — the network might briefly drop, the backend could be restarting, or unavailable for a few seconds. None of these mean the mutation itself is invalid — they just mean <em>now is a bad time</em> to process it.</p>
<p>When everything happens online, this is usually fine. If a form submission fails, the user sees an error and can retry immediately. The context is still fresh, the UI is still on screen, and retrying is typically one click away. Placing that burden on the user is acceptable, and in many cases expected.</p>
<p>Offline workflows are very different.</p>
<p>An offline mutation may have been recorded minutes or hours ago. By the time the app comes back online and starts syncing, the user may be doing something entirely unrelated. Asking them to manually retry a background failure at that point is both disruptive and unrealistic. From the user’s perspective, the action already <em>happened</em>. They filled in the form, pressed submit, and moved on. Any failure during sync is an implementation detail they shouldn’t have to care about unless it truly cannot be resolved.</p>
<p>This is where retries become essential. We can smooth over temporary issues without involving the user at all by adding automated retries. Only after multiple attempts should we consider the mutation genuinely failed and surface that state to the user. To put it in one sentence: <strong>assume failures are temporary until proven otherwise</strong>.</p>
<p>With all of this in mind, let’s zoom in and talk about retry strategies.</p>
<h2 id="heading-exponential-backoff-for-foreground-retries">Exponential backoff for foreground retries</h2>
<p>There are different strategies you can choose from. Linear retries, fixed delays, adaptive algorithms, circuit breakers — it’s a deep topic. I won’t be diving into all of them here. If you want a broader overview, Sam Who wrote a <a target="_blank" href="https://encore.dev/blog/retries">great article</a> that’s well worth reading.</p>
<p>For foreground queues in offline-capable apps, I find <strong>exponential backoff</strong> hits a good balance between effectiveness and simplicity.</p>
<h3 id="heading-what-exponential-backoff-actually-means">What exponential backoff actually means</h3>
<p>Exponential backoff is a retry strategy where the delay between retry attempts increases exponentially after each failure, usually by doubling, until it reaches a maximum cap. Instead of retrying immediately or waiting a fixed amount of time, each failed attempt waits longer than the previous one.</p>
<p>A typical sequence might look like this:</p>
<ul>
<li><p>1st retry after 500ms</p>
</li>
<li><p>2nd retry after 1s</p>
</li>
<li><p>3rd retry after 2s</p>
</li>
<li><p>4th retry after 4s</p>
</li>
<li><p>…until a retry limit is reached</p>
</li>
</ul>
<p>This simple change in timing has surprisingly large effects on system behaviour:</p>
<ul>
<li><p><strong>Reduced load on struggling services</strong>. When many clients fail at once, immediate retries can create a thundering herd. Backoff spreads retries out over time instead of amplifying the problem.</p>
</li>
<li><p><strong>Better odds of success for errors</strong>. Short outages often resolve themselves quickly. Waiting a bit before retrying dramatically increases the chance that the next attempt succeeds.</p>
</li>
</ul>
<p>Finally, a few guardrails are essential to keep retries from turning into hidden problems.</p>
<ul>
<li><p><strong>Cap the delay</strong>. Without a cap, delays can grow so large that retries become effectively useless or block important resources for too long.</p>
</li>
<li><p><strong>Limit the number of retries</strong>. If something has persistently failed six times, continuing might no longer make sense.</p>
</li>
<li><p><strong>Add jitter</strong>. If many clients retry with the same timing, they can still synchronise. Adding a small random offset (jitter) to each delay helps spread retries out and avoids retry spikes.</p>
</li>
</ul>
<p>This usually translates into fewer visible errors and less aggressive retry noise while the user is waiting.</p>
<p>With the strategy chosen, the remaining work is the implementation: capture retry state, compute delays, and teach the sync loop how to wait. Everything that follows assumes retries are safe — meaning the mutation is idempotent or can tolerate last-write-wins semantics. I’ll come back to cases where this isn’t true at the end.</p>
<h2 id="heading-retry-configuration">Retry configuration</h2>
<p>We'll start by adding configuration for the retries for our queue. The shape below captures all the controls needed to implement the exponential backoff strategy.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> BackoffConfig {
  baseDelay?: <span class="hljs-built_in">number</span>; <span class="hljs-comment">// Default: 500 ms</span>
  maxDelay?: <span class="hljs-built_in">number</span>; <span class="hljs-comment">// Default: 30 s </span>
  maxRetries?: <span class="hljs-built_in">number</span>; <span class="hljs-comment">// Default: 5</span>
  jitter?: <span class="hljs-built_in">boolean</span>; <span class="hljs-comment">// Default: true</span>
}
</code></pre>
<h2 id="heading-implementing-backoff-logic">Implementing backoff logic</h2>
<p>Let’s start by drafting a simple module specifically for backoff. We’ll need a couple of utility functions to help us track and manage items in backoff. I’ll briefly explain what each function does without going into the implementation details, as they are mostly straightforward.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// Checks if an item should be retried based on attempt count.</span>
<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">shouldRetry</span>(<span class="hljs-params">
  attemptCount: <span class="hljs-built_in">number</span>,
  config: BackoffConfig = {}
</span>): <span class="hljs-title">boolean</span> </span>{}

<span class="hljs-comment">// Calculates the remaining delay until an item is ready for retry.</span>
<span class="hljs-comment">// Accounts for elapsed time since last attempt.</span>
<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">getRetryDelay</span>(<span class="hljs-params">
  attemptCount: <span class="hljs-built_in">number</span>,
  lastAttemptAt: <span class="hljs-built_in">number</span> | <span class="hljs-literal">null</span>,
  config: BackoffConfig = {}
</span>): <span class="hljs-title">number</span> </span>{}
</code></pre>
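<p>For completeness, here's a minimal sketch of how these two functions could be implemented with the <code>BackoffConfig</code> defaults above — treat it as one possible version, not the only one:</p>
<pre><code class="lang-typescript">const DEFAULTS = {
  baseDelay: 500,
  maxDelay: 30_000,
  maxRetries: 5,
  jitter: true,
};

export function shouldRetry(
  attemptCount: number,
  config: BackoffConfig = {}
): boolean {
  const { maxRetries = DEFAULTS.maxRetries } = config;
  return attemptCount &lt; maxRetries;
}

export function getRetryDelay(
  attemptCount: number,
  lastAttemptAt: number | null,
  config: BackoffConfig = {}
): number {
  const {
    baseDelay = DEFAULTS.baseDelay,
    maxDelay = DEFAULTS.maxDelay,
    jitter = DEFAULTS.jitter,
  } = config;

  // Before the first attempt there's nothing to wait for.
  if (attemptCount === 0 || lastAttemptAt === null) return 0;

  // Exponential backoff: base * 2^(attempts - 1), capped at maxDelay.
  let delay = Math.min(baseDelay * 2 ** (attemptCount - 1), maxDelay);

  // Jitter: randomise within roughly ±25% to spread retries out.
  if (jitter) {
    delay = delay * (0.75 + Math.random() * 0.5);
  }

  // Account for time already elapsed since the last attempt.
  const elapsed = Date.now() - lastAttemptAt;
  return Math.max(0, delay - elapsed);
}
</code></pre>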
<p>These two functions will be needed to implement the retries in our <code>startSync</code> function in the <code>Queue</code> class. I won't include the entire function implementation, but I'll add the important parts here (I'll indicate unchanged sections with comments).</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> Queue&lt;T&gt; {
  <span class="hljs-keyword">async</span> startSync(): <span class="hljs-built_in">Promise</span>&lt;SyncResult&lt;T&gt;&gt; {
    <span class="hljs-comment">// Block unchanged compared to the previous posts...</span>

    <span class="hljs-comment">// Track items already reported as failures...</span>
    <span class="hljs-keyword">const</span> failureIds = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Set</span>&lt;<span class="hljs-built_in">string</span>&gt;();

    <span class="hljs-keyword">try</span> {
      <span class="hljs-comment">// Single ongoing processing loop (now with retries)</span>
      <span class="hljs-keyword">while</span> (<span class="hljs-literal">true</span>) {
        <span class="hljs-comment">// Block unchanged compared to the previous posts...</span>

        <span class="hljs-comment">// Keep track of items that are to be removed...</span>
        <span class="hljs-keyword">const</span> itemsToRemove = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Set</span>&lt;<span class="hljs-built_in">string</span>&gt;();
        <span class="hljs-comment">// Keep track of items that are to be updated (we need to update attempt info)...</span>
        <span class="hljs-keyword">const</span> itemsToUpdate = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Map</span>&lt;<span class="hljs-built_in">string</span>, QueueItem&lt;T&gt;&gt;();
        <span class="hljs-comment">// Keep track of items in backoff...</span>
        <span class="hljs-keyword">const</span> itemsInBackoff: <span class="hljs-built_in">Array</span>&lt;{ item: QueueItem&lt;T&gt;; delay: <span class="hljs-built_in">number</span> }&gt; = [];

        <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> item <span class="hljs-keyword">of</span> items) {
          <span class="hljs-comment">// Block unchanged compared to the previous posts...</span>

          <span class="hljs-comment">// Check if we can still retry...</span>
          <span class="hljs-keyword">if</span> (!shouldRetry(item.attemptCount, <span class="hljs-built_in">this</span>.config.backoff)) {
            <span class="hljs-comment">// Check if we haven't reported this item yet...</span>
            <span class="hljs-keyword">if</span> (!failureIds.has(item.id)) {
              failureIds.add(item.id);
              result.failure.push(item);
            }
            <span class="hljs-keyword">continue</span>;
          }

          <span class="hljs-comment">// Caclucate the backoff delay for an item and the current attempt...</span>
          <span class="hljs-keyword">const</span> delay = getRetryDelay(
            item.attemptCount,
            item.lastAttemptAt,
            <span class="hljs-built_in">this</span>.config.backoff
          );

          <span class="hljs-comment">// Check if we still need to wait...</span>
          <span class="hljs-keyword">if</span> (delay &gt; <span class="hljs-number">0</span>) {
            itemsInBackoff.push({ item, delay });
            <span class="hljs-keyword">continue</span>;
          }

          <span class="hljs-comment">// Otherwise, the item is ready to be processed...</span>
          <span class="hljs-keyword">try</span> {
            <span class="hljs-comment">// Block unchanged compared to the previous posts...</span>
          } <span class="hljs-keyword">catch</span> (error) {
            <span class="hljs-comment">// The item processing has failed...</span>
            <span class="hljs-comment">// We no longer mark it as failure, but update the information,</span>
            <span class="hljs-comment">// so that we can continue retries in the next loop...</span>
            itemsToUpdate.set(item.id, {
              ...item,
              attemptCount: item.attemptCount + <span class="hljs-number">1</span>,
              lastAttemptAt: <span class="hljs-built_in">Date</span>.now(),
            });
          }
        }

        <span class="hljs-comment">// All items have been processed...</span>
        <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.withLock(<span class="hljs-keyword">async</span> () =&gt; {
          <span class="hljs-keyword">const</span> currentItems = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.loadItems();

          <span class="hljs-keyword">const</span> updatedItems = currentItems
            <span class="hljs-comment">// Remove the items marked to be removed (success)...</span>
            .filter(<span class="hljs-function">(<span class="hljs-params">item</span>) =&gt;</span> !itemsToRemove.has(item.id))
            <span class="hljs-comment">// Update the items in backoff...</span>
            .map(<span class="hljs-function">(<span class="hljs-params">item</span>) =&gt;</span> itemsToUpdate.get(item.id) ?? item);

          <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.saveItems(updatedItems);
        });

        <span class="hljs-comment">// Check if there are items in backoff...</span>
        <span class="hljs-keyword">if</span> (itemsInBackoff.length &gt; <span class="hljs-number">0</span>) {
          <span class="hljs-comment">// Find the shortest delay...</span>
          <span class="hljs-keyword">const</span> minDelay = <span class="hljs-built_in">Math</span>.min(...itemsInBackoff.map(<span class="hljs-function">(<span class="hljs-params">i</span>) =&gt;</span> i.delay));
          <span class="hljs-comment">// ...and wait</span>
          <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.sleep(minDelay);
          <span class="hljs-keyword">continue</span>;
        }

        <span class="hljs-comment">// No items processed successfully or failed...</span>
        <span class="hljs-comment">// There's nothing else to process...</span>
        <span class="hljs-keyword">if</span> (itemsToRemove.size === <span class="hljs-number">0</span> &amp;&amp; itemsToUpdate.size === <span class="hljs-number">0</span>) {
          <span class="hljs-keyword">break</span>;
        }
      }
    } <span class="hljs-keyword">finally</span> {
      <span class="hljs-comment">// Block unchanged compared to the previous posts...</span>
    }

    <span class="hljs-keyword">return</span> result;
  }
}
</code></pre>
<p>Here’s what we’ve effectively implemented in the sync loop:</p>
<ol>
<li><p><strong>We run an infinite loop with an inner loop over all queued items</strong>. This is where the “loop-in-a-loop” structure becomes useful. Previously, we processed each item exactly once. Now, we may process the same item multiple times — up to the maximum number of attempts allowed by the backoff configuration. The outer loop only exits when two conditions are met: no items were processed in a full pass and no items are currently waiting in backoff. At that point, there’s nothing left to do and the sync is finished.</p>
</li>
<li><p><strong>Before processing an item, we first check whether it’s still eligible for retries</strong>. This is where we respect the backoff configuration and the current attempt count. If the item has already exhausted all allowed attempts, we mark it as failed and exclude it from further processing. Importantly, we ensure each failed item is reported exactly once, even if the loop continues for other items.</p>
</li>
<li><p><strong>We compute the retry delay for the item</strong>. This determines whether enough time has passed since the last attempt, based on the current attempt count and the item’s last processing timestamp. If the item isn’t ready yet, we don’t try to process it. Instead, we add it to a list of items that are currently in backoff and move on to the next item.</p>
</li>
<li><p><strong>Once we’ve looped over all items, we check whether any are waiting in backoff</strong>. If so, we find the smallest remaining delay across them and wait for exactly that duration (we’ll expand on this idea in the next section). This ensures the sync loop wakes up only when the next item becomes eligible for processing, avoiding unnecessary polling or busy waiting.</p>
</li>
</ol>
<p>There's one more interesting aspect to discuss: how waiting for items in backoff interacts with sync pausing.</p>
<h2 id="heading-making-sure-syncing-can-still-be-paused">Making sure syncing can still be paused</h2>
<p>You might remember that our queue supports pausing via the <code>pauseSync</code> method. Before retries were introduced, this worked well enough. Processing was always asynchronous, but calling <code>pauseSync</code> would take effect immediately after the current attempt finished.</p>
<p>Retries change that behaviour.</p>
<p>With exponential backoff in place, the syncing function may now sit idle for several seconds, waiting for the next item to become ready for processing. If we do nothing, the effects of calling <code>pauseSync</code> would only become visible <em>after</em> that wait finishes. That’s not great — pausing should be observable immediately, even if the sync loop is currently waiting.</p>
<p>The root problem here isn’t retries themselves, but time-based waiting. Once we start awaiting delays, those waits need to be cancelable. That’s why we’ve introduced an <em>interruptible</em> <code>sleep</code> function.</p>
<p>Here’s a simplified version of how this can be implemented.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> Queue&lt;T&gt; {
  <span class="hljs-comment">// Calling results in interrupting sleep function...</span>
  <span class="hljs-keyword">private</span> interruptSleep: (<span class="hljs-function">() =&gt;</span> <span class="hljs-built_in">void</span>) | <span class="hljs-literal">null</span> = <span class="hljs-literal">null</span>;

  <span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> sleep(ms: <span class="hljs-built_in">number</span>): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
    <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt;(<span class="hljs-function">(<span class="hljs-params">resolve</span>) =&gt;</span> {
      <span class="hljs-comment">// When `interruptSleep`, the promise resolved immediately...</span>
      <span class="hljs-built_in">this</span>.interruptSleep = resolve;
      <span class="hljs-built_in">setTimeout</span>(<span class="hljs-function">() =&gt;</span> {
        <span class="hljs-comment">// Otherwise, it resolves when the timeout function is called...</span>
        <span class="hljs-built_in">this</span>.interruptSleep = <span class="hljs-literal">null</span>;
        resolve();
      }, ms);
    });
  }

  pauseSync(): <span class="hljs-built_in">void</span> {
    <span class="hljs-built_in">this</span>.isPaused = <span class="hljs-literal">true</span>;
    <span class="hljs-comment">// We interrupt sleep when syncing is paused...</span>
    <span class="hljs-built_in">this</span>.interruptSleep?.();
    <span class="hljs-built_in">this</span>.interruptSleep = <span class="hljs-literal">null</span>;
  }
}
</code></pre>
<p>Conceptually, this turns time-based waiting into a cancelable operation.</p>
<p>When the sync loop is waiting for the next retry window, calling <code>pauseSync</code> now immediately resolves the pending sleep. This allows the loop to observe the paused state right away instead of being stuck waiting for a timeout to expire. This implementation can still leave a pending timeout behind, which isn’t ideal but acceptable for a simplified, illustrative example.</p>
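<p>Here’s a hypothetical fragment showing where this fits in the sync loop, following the shape of the <code>startSync</code> implementation from the earlier queue posts. The <code>backoffDelays</code> variable is an assumption: the list of remaining delays collected while looping over items.</p>
<pre><code class="language-typescript">// At the end of a pass, when some items are still waiting in backoff...
if (backoffDelays.length &gt; 0) {
  // Sleep only until the earliest item becomes eligible again...
  await this.sleep(Math.min(...backoffDelays));

  // The sleep may have been cut short by pauseSync, so re-check the
  // flag before starting another pass...
  if (this.isPaused) {
    result.status = 'paused';
    break;
  }
}
</code></pre>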
<h2 id="heading-when-retries-get-complicated">When retries get complicated</h2>
<p>I want to close with a few points about edge cases and broader system considerations. Our offline todo status example has several properties that make retries and backoff much simpler than they might be in other scenarios.</p>
<p>Most importantly, we can accept <strong>last-write-wins</strong> semantics. That immediately removes the need for explicit conflict resolution. On top of that, the status update itself is <strong>idempotent</strong> — calling the mutation twice has the same effect as calling it once. Given a single client (or last-write-wins), retries are safe by default.</p>
<p>These properties are convenient, but they don’t generalize to all systems. In more complex setups, there are a few additional factors you’ll likely need to account for. Most of them require explicit back-end support.</p>
<ul>
<li><p><strong>Request idempotency</strong>. For non-idempotent operations (for example, “charge a credit card” or “append to a log”), retries require extra safeguards. Common approaches include transaction tokens, client-generated IDs, or server-side deduplication (see the sketch after this list).</p>
</li>
<li><p><strong>Non-retryable errors</strong>. Not all failures should be retried. Client-side errors (such as certain 4xx responses) often indicate permanent failure. Your sync logic should be able to recognize these cases and stop retrying early.</p>
</li>
<li><p><strong>Conflict resolution</strong>. Once multiple clients or concurrent updates are involved, conflict resolution becomes unavoidable. A simple approach is to refetch the latest state before mutating, which can work but is still prone to race conditions. More robust solutions require explicit conflict resolution on the back end, often combined with sending the initial state from the client so the server can detect and resolve conflicts deterministically.</p>
</li>
</ul>
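<p>To illustrate the first point, here’s a hedged sketch of a client-generated idempotency key. The <code>Idempotency-Key</code> header is a common convention rather than a standard, the endpoint is hypothetical, and the server still needs matching deduplication logic for this to work.</p>
<pre><code class="language-typescript">interface IdempotentRequest&lt;T&gt; {
  idempotencyKey: string; // Generated once, stored alongside the queue item
  payload: T;
}

async function submitOnce&lt;T&gt;(
  url: string,
  request: IdempotentRequest&lt;T&gt;
): Promise&lt;Response&gt; {
  return fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // The server can use this key to detect and drop duplicate attempts...
      'Idempotency-Key': request.idempotencyKey,
    },
    body: JSON.stringify(request.payload),
  });
}

// crypto.randomUUID() yields a stable, client-generated ID per action.
const request = {
  idempotencyKey: crypto.randomUUID(),
  payload: { todoId: 'todo-42', status: 'done' },
};
</code></pre>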
<p>The right approach to retries ultimately depends on the type of mutation. If you can accept last-write-wins, retries are relatively simple — no special handling is required around the edges. If you can’t, retries quickly turn into a coordination problem that spans both the client and the back end.</p>
<h2 id="heading-summary">Summary</h2>
<p>That’s it for today. We’ve covered why retries are a core requirement for offline-first syncing, not just a nice-to-have, and how to go about implementing them. Here are the key takeaways:</p>
<ul>
<li><p><strong>Submission failures can be temporary.</strong> Network hiccups and brief backend outages shouldn’t surface as user-facing errors. Automated retries let the system absorb these issues quietly.</p>
</li>
<li><p><strong>Exponential backoff is a good default.</strong> It’s simple to implement, reduces load on struggling services, and significantly improves success rates compared to immediate or fixed retries.</p>
</li>
<li><p><strong>Guardrails matter.</strong> Capping delays, limiting retries, and adding jitter prevent retries from turning into hidden performance or reliability problems.</p>
</li>
<li><p><strong>Retries change control flow.</strong> Once you introduce waiting, you must handle time explicitly.</p>
</li>
<li><p><strong>Retry safety depends on semantics.</strong> Idempotent, last-write-wins mutations make retries easy. Non-idempotent operations, permanent errors, and conflicts require explicit backend support and more careful coordination.</p>
</li>
</ul>
<p>In the next article, I’ll shift gears and look at <strong>data prefetching</strong>, and how proactive data access fits into an offline-capable architecture.</p>
<p>If you enjoyed the article or have a question, feel free to reach out on <a target="_blank" href="https://bsky.app/profile/tomaszgil.me/post/3md3zh6cvss2n">Bluesky</a>! 👋</p>
<h3 id="heading-future-reading-and-references">Future reading and references</h3>
<ul>
<li><p>Photo by <a target="_blank" href="https://unsplash.com/@mariashanina?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Maria Shanina</a> on <a target="_blank" href="https://unsplash.com/photos/selective-focus-photo-of-red-fruits-HYDL8uARCN8?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>
</li>
<li><p><a target="_blank" href="https://unsplash.com/photos/selective-focus-photo-of-red-fruits-HYDL8uARCN8?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Retr</a><a target="_blank" href="https://encore.dev/blog/retries">ies — An interactive study of common retry methods</a> by <a target="_blank" href="https://bsky.app/profile/samwho.dev">Sam Rose</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Offline Support in Web Apps: Foreground Queue for Offline Mutations — Part 3]]></title><description><![CDATA[This is the sixth post in the series about offline support in web applications and the third one focused specifically on the foreground queue. In the previous article, we introduced the concept of a f]]></description><link>https://blog.tomaszgil.me/offline-support-in-web-apps-foreground-queue-for-offline-mutations-part-3</link><guid isPermaLink="true">https://blog.tomaszgil.me/offline-support-in-web-apps-foreground-queue-for-offline-mutations-part-3</guid><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[offline]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Tue, 03 Feb 2026 12:27:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768130768925/e9088613-274c-4dcc-91bf-9c8e46e9375c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is the sixth post in the series about offline support in web applications and the third one focused specifically on the foreground queue. In the previous article, we introduced the concept of a foreground queue, discussed the characteristics we want to achieve, and covered the key parts of the implementation that help us reach those goals.</p>
<p>Today, we’re switching gears a little and moving closer to the user interface layer. We’ll cover how to expose queue state, how UI components subscribe to it, and how to wire everything into React without turning the queue into yet another state store.</p>
<h2>Subscribing to queue state</h2>
<p>Let’s start with exposing the queue state. I want React components to react to changes in a way that doesn’t involve sharing all of the queue state with the whole React component tree.</p>
<h3>State as a derived view</h3>
<p>The source of truth for the queue is the list of items stored in IndexedDB. In practice, the UI usually needs a bit more context than just “what’s in the database”, so I expose a small, read-only snapshot. I treat this snapshot as a <em>derived view</em> — it’s mainly for display and indicators, not for driving business logic.</p>
<pre><code class="language-typescript">export interface QueueState&lt;T&gt; {
  items: Array&lt;QueueItem&lt;T&gt;&gt;;
  isProcessing: boolean;
  isPaused: boolean;
}
</code></pre>
<p>A few important constraints:</p>
<ul>
<li><p>The queue itself is the only place that reads and writes this state.</p>
</li>
<li><p>I don’t need full, reactive synchronisation like you’d expect from a state management library.</p>
</li>
<li><p>IndexedDB is async, so any snapshot can technically be stale.</p>
</li>
</ul>
<p>That last point is worth calling out. Because all reads go through async APIs, there’s always a chance the state you’re looking at is slightly out of date. For this use case—showing indicators or counts in the UI—that trade-off is most likely acceptable. Just keep it in mind and avoid relying on this state for critical logic.</p>
<h3>A simple subscription pattern</h3>
<p>I decided to go with a simple subscription pattern for publishing the queue state updates. Instead of pushing state continuously, I let the queue decide when listeners should be notified.</p>
<p>From the consumer’s point of view, it looks like so:</p>
<pre><code class="language-typescript">queue.subscribe((state) =&gt; {
  console.log(state.items.length);
});
</code></pre>
<p>The UI subscribes, reacts to updates, and doesn’t worry about <em>how</em> or <em>when</em> the state is produced.</p>
<h3>Implementing subscriptions in the queue</h3>
<p>Under the hood, this is implemented with a lightweight listener registry.</p>
<pre><code class="language-typescript">export type QueueListener&lt;T&gt; = (state: QueueState&lt;T&gt;) =&gt; void;

export class Queue&lt;T&gt; {
  private listeners = new Set&lt;QueueListener&lt;T&gt;&gt;();

  subscribe(listener: QueueListener&lt;T&gt;) {
    // Store `listener` function...
    this.listeners.add(listener);
    this.notifyListener(listener);
    // Allow to unsubscribe...
    return () =&gt; {
      this.listeners.delete(listener);
    };
  }

  private async notifyAllListeners() {
    this.listeners.forEach((listener) =&gt; this.notifyListener(listener));
  }

  private async notifyListener(listener: QueueListener&lt;T&gt;) {
    const state = await this.getState();
    // Notify a single listener with the current state
    listener(state);
  }

  private async getState() {
    const items = await this.read();
    
    return {
      items: [...items],
      isProcessing: this.isProcessing,
      isPaused: this.isPaused,
    };
  }
}
</code></pre>
<p>A few things are worth highlighting here:</p>
<ul>
<li><p><code>subscribe</code> is a public method that registers a listener and returns an unsubscribe function.</p>
</li>
<li><p>New listeners are notified immediately with the current state.</p>
</li>
<li><p>Listeners are only notified when the queue explicitly calls <code>notifyAllListeners</code>.</p>
</li>
</ul>
<p>That last point is intentional. Whenever the queue processes an item, pauses, resumes, or mutates storage, I manually trigger notifications by calling <code>notifyAllListeners</code>. Be aware that with this pattern, state updates won’t be sent out automatically.</p>
<p>Additionally, because <code>getState()</code> is async, rapid successive notifications may resolve out of order. In my case, this hasn’t been an issue, but it’s something to be aware of if you extend this pattern.</p>
<p>This pattern makes it straightforward to build small UI features like showing “3 actions pending” or “queue is currently syncing”. In my experience, this level of state awareness strikes a good balance. It's simple and clear, yet it provides a good separation between the queue's internals and the UI.</p>
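<p>For example, a small hook can bridge the subscription into React state. This is a sketch: <code>useQueueState</code> is a name I’m introducing here, and the <code>Queue</code> and <code>QueueState</code> types are the ones defined above.</p>
<pre><code class="language-typescript">import { useEffect, useState } from 'react';

export function useQueueState&lt;T&gt;(queue: Queue&lt;T&gt;): QueueState&lt;T&gt; | null {
  const [state, setState] = useState&lt;QueueState&lt;T&gt; | null&gt;(null);

  useEffect(() =&gt; {
    // subscribe() notifies immediately and returns an unsubscribe
    // function, which doubles as the effect cleanup...
    return queue.subscribe(setState);
  }, [queue]);

  return state;
}
</code></pre>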
<h2>React integration via context</h2>
<p>Now we have everything we need to connect the queue to the UI layer. As mentioned in previous articles, my example will focus on React, but the same or similar principles can be applied to any other framework.</p>
<p>First of all, the queue is stateful, long-lived, and tied to a single IndexedDB key. Creating multiple instances is a fast path to race conditions and corrupted state. To eliminate this, the safest approach is to treat the queue as a singleton per feature and make it available once, for example through React context.</p>
<h3>One queue, one configuration</h3>
<p>In practice, this translates to the following requirement: there must be exactly one queue instance for any one persisted queue. That means:</p>
<ul>
<li><p>the configuration is defined once</p>
</li>
<li><p>the queue instance is created once</p>
</li>
<li><p>every component talks to the same object</p>
</li>
</ul>
<p>This doesn’t mean there can only ever be one queue in the app—only that there should be one queue <em>per storage key and feature</em>. The solution I chose was a small factory that creates a context, along with a provider component and a hook for accessing a specific queue.</p>
<pre><code class="language-typescript">const { QueueProvider, useQueue } = createQueueContext(() =&gt; ({
  name: 'todo-mutations',
  storageKey: 'todo-mutation-queue-v1',
  identityKey: (item) =&gt; item.id,
  processor: async (item) =&gt; api.post(`/todos/${item.id}`, item.status),
}));
</code></pre>
<p>This keeps feature code clean. I define the config once, and from that point on I just use <code>QueueProvider</code> and <code>useQueue</code>. With this setup, integration is straightforward. I wrap the application—or just a feature subtree—with the provider.</p>
<p>Anywhere below that, I can call <code>useQueue()</code> and interact with the queue directly:</p>
<ul>
<li><p>enqueue new items</p>
</li>
<li><p>start sync processing</p>
</li>
<li><p>subscribe to state changes</p>
</li>
</ul>
<p>There’s no extra indirection or proxy state. Components talk to the queue object itself.</p>
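<p>For instance, a component might enqueue a mutation directly. This is a sketch; <code>TodoStatusButton</code> and the payload shape are assumptions.</p>
<pre><code class="language-typescript">function TodoStatusButton({ todo }: { todo: { id: string; status: string } }) {
  const queue = useQueue();

  return (
    &lt;button onClick={() =&gt; queue.enqueue({ ...todo, status: 'done' })}&gt;
      Mark as done
    &lt;/button&gt;
  );
}
</code></pre>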
<h3>The context factory</h3>
<p>Here’s the factory implementation that makes this work.</p>
<pre><code class="language-typescript">export function createQueueContext&lt;T&gt;(configFactory: () =&gt; QueueConfig&lt;T&gt;) {
  // Define a context for the queue...
  const QueueContext = createContext&lt;Queue&lt;T&gt; | null&gt;(null);

  type QueueProviderProps = {
    children: ReactNode;
  }

  // Define a provider for the queue
  function QueueProvider({ children }: QueueProviderProps) {
    // Create the config using the factory function...
    const config = configFactory();
    // Create a ref to store the queue...
    const queueRef = useRef&lt;Queue&lt;T&gt; | null&gt;(null);

    if (!queueRef.current) {
      // Only create the queue once, so the reference to the object is stable....
      queueRef.current = new Queue(config);
    }

    return (
      &lt;QueueContext.Provider value={queueRef.current}&gt;
        {children}
      &lt;/QueueContext.Provider&gt;
    );
  }

  // Define a hook to access the queue...
  function useQueue() {
    const ctx = useContext(QueueContext);
    if (!ctx) {
      throw new Error('useQueue must be used within QueueProvider');
    }
    return ctx;
  }

  return { QueueProvider, QueueContext, useQueue };
}
</code></pre>
<p>A few details here are worth calling out.</p>
<ul>
<li><p>Factory owns the configuration. That makes the queue explicit and self-contained, instead of relying on global setup or hidden dependencies.</p>
</li>
<li><p>Queue instance is created exactly once. Using a ref ensures that React re-renders won’t recreate it.</p>
</li>
<li><p>The provider’s context value never changes. It always provides a stable reference to the same queue object, <em>even if the state inside the queue changes</em>. Using context is perfectly fine here because we’re not pushing changing values through it. From React’s perspective, the context value is stable; all the changes happen <em>inside</em> the queue.</p>
</li>
<li><p>The hook gives direct access to the queue. From React’s point of view, the queue is just an external object. Components can call <code>queue.enqueue()</code> or <code>queue.subscribe()</code> without coupling UI renders to internal queue state.</p>
</li>
</ul>
<h2>Summary</h2>
<p>That’s it! We covered the UI-facing side of a foreground queue and how it fits into a React codebase.</p>
<ul>
<li><p>The queue remains the single source of truth, backed by IndexedDB.</p>
</li>
<li><p>The UI consumes a small, read-only snapshot of derived state.</p>
</li>
<li><p>State updates are delivered through an explicit subscription mechanism.</p>
</li>
<li><p>React context is used purely to share a single, long-lived queue instance.</p>
</li>
<li><p>Components interact directly with the queue object, keeping concerns clearly separated.</p>
</li>
</ul>
<p>This setup has worked well for me in practice. It’s simple, explicit, and avoids accidental complexity while still enabling useful UI feedback like pending counts or sync indicators.</p>
<p>In the next article, I’ll build the final piece that the queue implementation is still missing, diving into error handling and retry strategies—arguably the most subtle parts of making offline queues reliable.</p>
<p>If you enjoyed the article or have a question, feel free to reach out on <a href="https://bsky.app/profile/tomaszgil.me">Bluesky</a>! 👋</p>
<h3>Further reading and references</h3>
<ul>
<li>Photo by <a href="https://unsplash.com/@sorinetzu?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Sorina Bindea</a> on <a href="https://unsplash.com/photos/snow-covered-branch-Dpt0dkTrZrs?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Offline Support in Web Apps: Foreground Queue for Offline Mutations — Part 2]]></title><description><![CDATA[This is the fifth post in my series about offline support in web applications and the second one focused specifically on the foreground queue. In the previous article, I introduced a foreground queue for pending offline mutations and outlined its bas...]]></description><link>https://blog.tomaszgil.me/offline-support-in-web-apps-foreground-queue-for-offline-mutations-part-2</link><guid isPermaLink="true">https://blog.tomaszgil.me/offline-support-in-web-apps-foreground-queue-for-offline-mutations-part-2</guid><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[offline]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Mon, 26 Jan 2026 11:49:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767904885964/9ed5e983-71ab-431c-98b6-e0771e1f0b57.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is the fifth post in my series about offline support in web applications and the second one focused specifically on the foreground queue. In the previous article, I introduced a foreground queue for pending offline mutations and outlined its basic shape. In this post, I want to focus on the parts that are easy to get wrong in practice.</p>
<p>We’ll look at how to make the queue correct and predictable under concurrent access: guaranteeing atomic read–modify–write operations, handling deduplication and bounded queue size at enqueue time, and running an explicit, single-flight sync loop that can be paused safely.</p>
<h2 id="heading-atomicity-and-correctness">Atomicity and correctness</h2>
<p>Using IndexedDB for queue storage is great because it doesn't stop users from interacting with the interface. Reads and writes to IndexedDB are asynchronous. However, this can lead to potential issues when <em>reads and writes occur almost simultaneously</em>.</p>
<p>Imagine that during a long-running sync, a user might want to add another item to the queue. If we're not careful about how we read and write to the async storage, we could accidentally overwrite some user actions, which we definitely want to avoid.</p>
<p>To avoid race conditions, <strong>every persistence operation has to run inside a mutex-protected critical section</strong>. Here's a simple example of the code needed to make this work.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> Queue&lt;T&gt; {
  <span class="hljs-keyword">private</span> lock: <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; = <span class="hljs-built_in">Promise</span>.resolve();

  <span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> withLock&lt;T&gt;(fn: <span class="hljs-function">() =&gt;</span> <span class="hljs-built_in">Promise</span>&lt;T&gt;): <span class="hljs-built_in">Promise</span>&lt;T&gt; {
    <span class="hljs-keyword">let</span> release!: <span class="hljs-function">() =&gt;</span> <span class="hljs-built_in">void</span>;

    <span class="hljs-keyword">const</span> previousLock = <span class="hljs-built_in">this</span>.lock;
    <span class="hljs-comment">// Create a new lock and hold on to it's `resolve` function...</span>
    <span class="hljs-built_in">this</span>.lock = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt;(<span class="hljs-function">(<span class="hljs-params">resolve</span>) =&gt;</span> {
      release = resolve;
    });

    <span class="hljs-comment">// Wait for the previous lock to be released...</span>
    <span class="hljs-keyword">await</span> previousLock;

    <span class="hljs-keyword">try</span> {
      <span class="hljs-comment">// Execute the wrapped function...</span>
      <span class="hljs-keyword">return</span> <span class="hljs-keyword">await</span> fn();
    } <span class="hljs-keyword">finally</span> {
      <span class="hljs-comment">// Release the lock once the wrapped function finishes...</span>
      release();
    }
  }
}
</code></pre>
<p>Here's how the queue might use this mechanism: we read and write modified items in a single, atomic operation (at the queue abstraction level) to ensure that no other queue method can overwrite items during this process.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.withLock(<span class="hljs-keyword">async</span> () =&gt; {
  <span class="hljs-keyword">const</span> items = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.read();
  <span class="hljs-comment">// Modify items...</span>
  <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.write(items);
});
</code></pre>
<p>Note that this mutex is sufficient for a single JS execution context, but it won’t protect against multiple tabs or workers.</p>
<h2 id="heading-enqueueing-with-deduplication-and-bounded-size">Enqueueing with deduplication and bounded size</h2>
<p>Next, we'll talk about deduplication and bounded size together because they happen at the same time during the enqueue operation. First, let's update the config by adding two more fields:</p>
<ul>
<li><p><code>identityKey</code> returns a unique identifier for each item (needed for deduplication).</p>
</li>
<li><p><code>maxDepth</code> defines the maximum depth of the queue.</p>
</li>
</ul>
<p>Here is a simplified version of the <code>enqueue</code> method. Since we need to read all items to decide whether to replace or add an item, the entire operation is atomic, starting with a read and ending with a write.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> QueueConfig&lt;T&gt; {
  name: <span class="hljs-built_in">string</span>;
  storageKey: <span class="hljs-built_in">string</span>;
  processor: <span class="hljs-function">(<span class="hljs-params">item: T</span>) =&gt;</span> <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt;;
  identityKey: <span class="hljs-function">(<span class="hljs-params">item: T</span>) =&gt;</span> <span class="hljs-built_in">string</span>;
  maxDepth?: <span class="hljs-built_in">number</span>;
  <span class="hljs-comment">// Other configuration fields...</span>
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> Queue&lt;T&gt; {
  <span class="hljs-keyword">async</span> enqueue(item: T): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.withLock(<span class="hljs-keyword">async</span> () =&gt; {
      <span class="hljs-keyword">const</span> items = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.read();
      <span class="hljs-keyword">const</span> identityKey = <span class="hljs-built_in">this</span>.config.identityKey(item);
      <span class="hljs-keyword">const</span> existingIndex = items.findIndex(<span class="hljs-function">(<span class="hljs-params">i</span>) =&gt;</span> i.id === identityKey);

      <span class="hljs-keyword">const</span> queueItem: QueueItem&lt;T&gt; = {
        id: identityKey,
        <span class="hljs-comment">// Other queue item fields...</span>
      };

      <span class="hljs-keyword">if</span> (existingIndex &gt;= <span class="hljs-number">0</span>) {
        <span class="hljs-comment">// An item with the same key already exists - replace it...</span>
        items[existingIndex] = queueItem;
      } <span class="hljs-keyword">else</span> {
        <span class="hljs-comment">// No item with the same key found - push it to the queue...</span>
        items.push(queueItem);

        <span class="hljs-comment">// If exceeding the max depth, remove oldest items...</span>
        <span class="hljs-keyword">if</span> (items.length &gt; <span class="hljs-built_in">this</span>.config.maxDepth) {
          items.splice(<span class="hljs-number">0</span>, items.length - <span class="hljs-built_in">this</span>.config.maxDepth);
        }
      }

      <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.write(items);
    });
  }
}
</code></pre>
<p>Whether you want to add deduplication or reorder the items depends on the use case. In our todo list example, if a user changes the status of a single todo item multiple times, we only want to keep the last change, as this reflects <em>the user’s intent</em>. The order of the operations doesn't really matter.</p>
<p>A rule of thumb: deduplication works best for idempotent or “last-write-wins” mutations; for append-only or causal operations, this approach may be wrong, as the examples below illustrate.</p>
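<p>To make the rule of thumb concrete, here’s how the choice of <code>identityKey</code> might differ between the two cases (the payload shapes are assumptions):</p>
<pre><code class="language-typescript">// Last-write-wins: two offline status changes to the same todo
// collapse into a single queue entry...
const statusQueueConfig = {
  identityKey: (item: { todoId: string; status: string }) =&gt; item.todoId,
};

// Append-only: every action must be kept, so each item gets a unique
// identity and nothing is ever deduplicated...
const commentQueueConfig = {
  identityKey: () =&gt; crypto.randomUUID(),
};
</code></pre>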
<h2 id="heading-manual-sync-with-guardrails">Manual sync with guardrails</h2>
<p>As mentioned in the previous article, syncing is explicit. I decided that the queue should have two public methods—one to start the sync and the other to pause it. This allows the clients of the queue to toggle syncing when the user's network status changes.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">if</span> (isOnline) {
  <span class="hljs-keyword">await</span> queue.startSync();
} <span class="hljs-keyword">else</span> {
  queue.pauseSync();
}
</code></pre>
<p>Before we sketch out the implementation, here are a few design decisions that matter a lot:</p>
<ol>
<li><p>Items are processed sequentially. There's no right or wrong choice here, but processing items one at a time might be a bit easier on the server.</p>
</li>
<li><p>Only one sync can run at a time. Each queue holds a single list, so it only makes sense for one sync process to happen at a time.</p>
</li>
<li><p>Sync can be paused mid-flight. This is something we already discussed above.</p>
</li>
</ol>
<p>Retries are also an important consideration, and we will discuss them in future posts. However, we will design the methods so that adding retry behaviour is straightforward.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> SyncResult&lt;T&gt; {
  status: <span class="hljs-string">'completed'</span> | <span class="hljs-string">'paused'</span>;
  success: <span class="hljs-built_in">Array</span>&lt;QueueItem&lt;T&gt;&gt;;
  failure: <span class="hljs-built_in">Array</span>&lt;QueueItem&lt;T&gt;&gt;;
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> Queue&lt;T&gt; {  
  <span class="hljs-comment">// Fields for controling the state (could be represented as a single field)...</span>
  <span class="hljs-keyword">private</span> isProcessing = <span class="hljs-literal">false</span>;
  <span class="hljs-keyword">private</span> isPaused = <span class="hljs-literal">false</span>;

  pauseSync(): <span class="hljs-built_in">void</span> {
    <span class="hljs-comment">// Setting a flag that will be read during sync before every item is processed...</span>
    <span class="hljs-built_in">this</span>.isPaused = <span class="hljs-literal">true</span>;
  }

  <span class="hljs-keyword">async</span> startSync(): <span class="hljs-built_in">Promise</span>&lt;SyncResult&lt;T&gt;&gt; {
    <span class="hljs-comment">// Only allow for a single sync to take place...</span>
    <span class="hljs-keyword">if</span> (<span class="hljs-built_in">this</span>.isProcessing) {
      <span class="hljs-keyword">return</span> {
        status: <span class="hljs-string">'completed'</span>,
        success: [],
        failure: [],
      };
    }

    <span class="hljs-comment">// Reset the flag if the sync has previously been paused...</span>
    <span class="hljs-built_in">this</span>.isPaused = <span class="hljs-literal">false</span>;

    <span class="hljs-keyword">const</span> result: SyncResult&lt;T&gt; = {
      status: <span class="hljs-string">'completed'</span>,
      success: [],
      failure: [],
    };

    <span class="hljs-keyword">try</span> {
      <span class="hljs-comment">// Single ongoing processing loop (to accommodate for retries later)...</span>
      <span class="hljs-keyword">while</span> (<span class="hljs-literal">true</span>) {
        <span class="hljs-comment">// Read the queue items...</span>
        <span class="hljs-keyword">const</span> items = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.withLock(<span class="hljs-keyword">async</span> () =&gt; {
          <span class="hljs-keyword">return</span> <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.read();
        });

        <span class="hljs-comment">// If the queue is empty, finish processing...</span>
        <span class="hljs-keyword">if</span> (items.length === <span class="hljs-number">0</span>) {
          <span class="hljs-keyword">break</span>;
        }

        <span class="hljs-comment">// If the sync has been paused, finish processing...</span>
        <span class="hljs-keyword">if</span> (<span class="hljs-built_in">this</span>.isPaused) {
          result.status = <span class="hljs-string">'paused'</span>;
          <span class="hljs-keyword">break</span>;
        }

        <span class="hljs-built_in">this</span>.isProcessing = <span class="hljs-literal">true</span>;

        <span class="hljs-comment">// Keep track of items that are to be removed...</span>
        <span class="hljs-keyword">const</span> itemsToRemove = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Set</span>&lt;<span class="hljs-built_in">string</span>&gt;();

        <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> item <span class="hljs-keyword">of</span> items) {
          <span class="hljs-comment">// If the sync has been paused, finish processing...</span>
          <span class="hljs-keyword">if</span> (<span class="hljs-built_in">this</span>.isPaused) {
            result.status = <span class="hljs-string">'paused'</span>;
            <span class="hljs-keyword">break</span>;
          }

          <span class="hljs-keyword">try</span> {
            <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.process(item);
            <span class="hljs-comment">// The item has been processed...</span>
            <span class="hljs-comment">// Add it to `success` array and mark to be removed...</span>
            result.success.push(item);
            itemsToRemove.add(item.id);
          } <span class="hljs-keyword">catch</span> (error) {
            <span class="hljs-comment">// The item has failed...</span>
            <span class="hljs-comment">// Add it to `failure` array...</span>
            result.failure.push(item);
          }
        }

        <span class="hljs-comment">// All items have been processed...</span>
        <span class="hljs-comment">// Read the fresh items from the queue and save changes as one operation...</span>
        <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.withLock(<span class="hljs-keyword">async</span> () =&gt; {
          <span class="hljs-keyword">const</span> items = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.read();
          <span class="hljs-keyword">const</span> newItems = items.filter(<span class="hljs-function">(<span class="hljs-params">item</span>) =&gt;</span> !itemsToRemove.has(item.id));
          <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.write(newItems);
        });
      }
    } <span class="hljs-keyword">finally</span> {
      <span class="hljs-built_in">this</span>.isProcessing = <span class="hljs-literal">false</span>;
    }

    <span class="hljs-keyword">return</span> result;
  }
}
</code></pre>
<p>This simple processing takes care of the happy path — when every item is processed exactly once, without retries (again, we’ll add them in a later post). There are a few areas worth highlighting in this implementation.</p>
<ul>
<li><p>Items are read once at the beginning of each pass of the loop, which then operates on this in-memory snapshot. Items enqueued during processing are not picked up immediately; they’ll be handled in the next iteration of the loop. This is mainly for simplicity and determinism: you always know which items you’re attempting to process, and you’re not mixing in reads mid-loop.</p>
</li>
<li><p>Pausing only affects the next item after the queue is paused. This means the currently running item is always allowed to finish. Pausing stops <em>future</em> work, not in-flight work. Again, this is done for simplicity and predictability — interrupting an active mutation would require request cancellation and make error handling significantly more complex.</p>
</li>
<li><p>Changes are saved after reading from queue again in a single operation. This avoids partial updates. Either all successfully processed items are removed together, or none are. It also ensures the queue state reflects any changes that may have happened while processing was ongoing.</p>
</li>
</ul>
<h2 id="heading-summary"><strong>Summary</strong></h2>
<p>That’s it for today! This article walks through the core implementation details of a foreground queue for offline mutations:</p>
<ul>
<li><p><strong>Atomicity and correctness</strong> are enforced by wrapping every read–modify–write cycle in a mutex. This prevents race conditions when users enqueue items during an ongoing sync.</p>
</li>
<li><p><strong>Deduplication and bounded size</strong> happen at enqueue time, using a stable identity key and optional maximum depth. This lets the queue reflect user intent rather than raw action history.</p>
</li>
<li><p><strong>Explicit sync control</strong> keeps behaviour predictable. Only one sync runs at a time, items are processed sequentially, and pausing affects future work—not in-flight requests.</p>
</li>
<li><p><strong>Deterministic processing</strong> comes from operating on a snapshot of the queue and committing changes in a single atomic write after processing completes.</p>
</li>
</ul>
<p>This gives us a robust foundation that handles the happy path and leaves room for retries, backoff, and more advanced error handling later. Next, we’ll introduce state awareness and show how to integrate the queue with React, so the UI can respond to sync progress and failures without tight coupling.</p>
<p>If you enjoyed the article or have a question, feel free to reach out on <a target="_blank" href="https://bsky.app/profile/tomaszgil.me">Bluesky</a>! 👋</p>
<h3 id="heading-further-reading-and-references">Further reading and references</h3>
<ul>
<li>Photo by <a target="_blank" href="https://unsplash.com/@aaronburden?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Aaron Burden</a> on <a target="_blank" href="https://unsplash.com/photos/closeup-photo-of-brown-leaf-tree--WIfIvpVXAM?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Offline Support in Web Apps: Foreground Queue for Offline Mutations — Part 1]]></title><description><![CDATA[Deciding how to handle user actions when the network disappears is one of the trickiest parts of offline support. Reads are usually manageable once you have persistence in place. Writes are where things get interesting.
This is the fourth post in my ...]]></description><link>https://blog.tomaszgil.me/offline-support-in-web-apps-foreground-queue-for-offline-mutations-part-1</link><guid isPermaLink="true">https://blog.tomaszgil.me/offline-support-in-web-apps-foreground-queue-for-offline-mutations-part-1</guid><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[offline]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Mon, 19 Jan 2026 08:06:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767437397021/44f7004c-3cc3-46ff-9e60-2232b4c65636.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Deciding how to handle user actions when the network disappears is one of the trickiest parts of offline support. Reads are usually manageable once you have persistence in place. Writes are where things get interesting.</p>
<p>This is the fourth post in my series about offline support in web applications. In the previous articles, I covered general approaches, app shell loading, and data persistence. In this post, I want to focus on a very practical piece of the puzzle: implementing a <strong>foreground queue</strong> for pending mutations.</p>
<p>In my experience, a well-designed foreground queue is the backbone of reliable offline writes. It keeps your app honest about what happened, what still needs syncing, and what went wrong.</p>
<h2 id="heading-introducing-queue">Introducing: <em>queue</em></h2>
<p>Before going any further, it’s worth touching on what I mean by a <em>queue</em> in this context. Formally, a queue is a data structure used to hold items that need to be processed later, in a defined order. You add items to the end, and you process them one by one from the front.</p>
<p>Most queues follow the FIFO rule — <em>first in, first out</em>. The first item you enqueue is the first one you process. In practice, the queue I describe in this article behaves this way implicitly: items are processed sequentially in the order they were added, unless deduplication replaces an older entry with a newer one.</p>
<p>This model turns out to be a very natural fit for offline mutations:</p>
<ul>
<li><p>User actions become explicit units of work</p>
</li>
<li><p>Each mutation is processed exactly once, in a predictable order</p>
</li>
<li><p>Failures are isolated to individual items instead of breaking the whole flow</p>
</li>
</ul>
<p>Finally, this is simply an application-level queue, not a message broker or background job system.</p>
<h2 id="heading-why-a-foreground-queue">Why a foreground queue?</h2>
<p>Once you allow users to mutate data offline, you need a place to put those mutations. You can’t just “fire and forget” API calls anymore.</p>
<p>Saving these pending mutations in a queue gives you:</p>
<ul>
<li><p>Explicit control over when syncing happens</p>
</li>
<li><p>Clear visibility into pending and failed operations</p>
</li>
<li><p>Predictable behaviour when connectivity changes</p>
</li>
</ul>
<p>As mentioned in <a target="_blank" href="https://blog.tomaszgil.me/offline-support-in-web-apps-foreground-queue-vs-background-sync">the first article</a>, choosing between background sync and a foreground queue mainly depends on the user experience you want and the level of complexity you can handle. Compared to background sync, this approach is less complex, simpler to reason about, and easier to debug.</p>
<p>I won't go into the exact details of the queue implementation I was working on, but I'll focus on the key principles and the most important parts of the implementation. Let’s dive in!</p>
<h2 id="heading-what-i-wanted-from-the-queue">What I wanted from the queue</h2>
<p>Before writing any code, I wrote down a short list of requirements and characteristics I wanted my queue implementation to have. Here’s what I considered:</p>
<ul>
<li><p><strong>Persistence</strong>: queued mutations must survive reloads, crashes, and restarts (same principles for data persistence apply as we discussed in the previous article)</p>
</li>
<li><p><strong>Retryability</strong>: failures should be retried automatically, using exponential backoff to avoid hammering the network or backend</p>
</li>
<li><p><strong>Observability</strong>: every important operation should be trackable, so I can log, measure, and reason about what the queue is doing</p>
</li>
<li><p><strong>State awareness</strong>: the application should be able to subscribe to queue state changes and react in real time</p>
</li>
<li><p><strong>Deduplication</strong>: only the latest intent for a given entity should be kept</p>
</li>
<li><p><strong>Bounded size</strong>: the queue must have a maximum depth to prevent unbounded growth</p>
</li>
</ul>
<p>These constraints strongly shaped the design, which you’ll see throughout this and the following posts.</p>
<h2 id="heading-queue-class">Queue class</h2>
<p>At its core, the queue is just a class. There’s no magic here — you create an instance, configure it, and interact with it through a small, explicit API.</p>
<p>Here's a simplified example. We'll add more details as we proceed.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> QueueConfig&lt;T&gt; {
  name: <span class="hljs-built_in">string</span>;
  storageKey: <span class="hljs-built_in">string</span>;
  processor: <span class="hljs-function">(<span class="hljs-params">item: T</span>) =&gt;</span> <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt;;
  <span class="hljs-comment">// Other configuration fields...</span>
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> Queue&lt;T&gt; {
  <span class="hljs-keyword">private</span> config: QueueConfig&lt;T&gt;;

  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">config: QueueConfig&lt;T&gt;</span>) {
    <span class="hljs-built_in">this</span>.config = config;
  }

  <span class="hljs-comment">// Public and private methods...</span>
}
</code></pre>
<p>I’ve found this approach works quite nicely because it keeps responsibilities clear. The queue owns persistence, ordering, and retries. The rest of the application just tells it <em>what</em> to do (such as passing the <code>processor</code> function to be called on each item during syncing).</p>
<h3 id="heading-managing-items">Managing items</h3>
<p>There are a handful of public methods the queue needs. The first set of public methods handles the lifecycle of items in the queue, reading and writing to the IndexedDB using the <code>storageKey</code> property.</p>
<ul>
<li><p><code>enqueue</code> adds a new item to the queue and persists it immediately</p>
</li>
<li><p><code>dequeue</code> removes a specific item by identity, permanently discarding it</p>
</li>
<li><p><code>clear</code> removes all items from the queue (useful for resets and tests)</p>
</li>
</ul>
<h3 id="heading-controlling-syncing">Controlling syncing</h3>
<p>The second group controls when and how items are processed:</p>
<ul>
<li><p><code>startSync</code> begins processing queued items sequentially</p>
</li>
<li><p><code>pauseSync</code> stops processing after the current item finishes</p>
</li>
</ul>
<p>Syncing is always a foreground action. Only one sync operation can run at a time, and additional calls are ignored while a sync is in progress. This keeps behaviour predictable and avoids subtle race conditions.</p>
<p>The trade-off is that the client needs to decide when syncing should occur. The queue implementation doesn't rely on the network state (whether the user is online or offline); this is managed externally by the queue users. In practice, <code>startSync</code> is called when the user goes online, and <code>pauseSync</code> is called when the user goes offline.</p>
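<p>In its simplest form, that wiring might hang off the browser’s connectivity events. This is a sketch; real apps often combine these events with a more reliable reachability check.</p>
<pre><code class="language-typescript">window.addEventListener('online', () =&gt; {
  // Fire and forget; the queue ignores the call if a sync is already running...
  void queue.startSync();
});

window.addEventListener('offline', () =&gt; {
  queue.pauseSync();
});
</code></pre>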
<h2 id="heading-queue-item-shape">Queue item shape</h2>
<p>To wrap up this section, let's examine the structure of a queue item. This is how entries are stored in IndexedDB. Each entry in the queue is a small, self-contained record that represents a single user action. I keep this structure intentionally simple, but every field is there for a specific reason.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> QueueItem&lt;T&gt; {
  id: <span class="hljs-built_in">string</span>;
  payload: T;
  enqueuedAt: <span class="hljs-built_in">number</span>;
  attemptCount: <span class="hljs-built_in">number</span>;
  lastAttemptAt: <span class="hljs-built_in">number</span> | <span class="hljs-literal">null</span>;
}
</code></pre>
<p>Here’s how I think about each field:</p>
<ul>
<li><p><code>id</code> - a unique identifier for the item. This is the foundation for deduplication we’ll touch on later — if a new item is enqueued with the same id, it replaces the old one. In practice, this usually maps to the domain entity being mutated (like changing a status on a todo <em>twice</em> while offline).</p>
</li>
<li><p><code>payload</code> - the actual data needed to perform the mutation. The type here is configured by the client, which makes the whole implementation generic. The queue doesn’t care what’s inside, only that it can hand it to the processor.</p>
</li>
<li><p><code>enqueuedAt</code> - a timestamp of when the item was added to the queue, useful for monitoring.</p>
</li>
<li><p><code>attemptCount</code> - tracks how many times processing has been attempted for the item. This drives retry behaviour and determines when an item should be considered permanently failed.</p>
</li>
<li><p><code>lastAttemptAt</code> - records when the item was last processed. Combined with <code>attemptCount</code>, this allows exponential backoff and makes retry timing explicit and observable.</p>
</li>
</ul>
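<p>To make this concrete, here’s what a single entry might look like for the todo example (the payload shape is an assumption):</p>
<pre><code class="language-typescript">const item: QueueItem&lt;{ todoId: string; status: 'done' | 'open' }&gt; = {
  id: 'todo-42', // Derived from the todo being mutated
  payload: { todoId: 'todo-42', status: 'done' },
  enqueuedAt: Date.now(),
  attemptCount: 0, // No processing attempts yet
  lastAttemptAt: null,
};
</code></pre>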
<p>This structure strikes a balance for me: it’s rich enough to support retries, backoff, and monitoring, while keeping it fairly compact and easy to manage (item statuses can be derived from existing fields).</p>
<h2 id="heading-summary">Summary</h2>
<p>That’s it for this article! We introduced the idea of a foreground queue as a practical foundation for handling offline mutations in web apps.</p>
<ul>
<li><p>We covered the core responsibilities of such a queue: persistence, retries, observability, state awareness, deduplication, and bounded growth.</p>
</li>
<li><p>We outlined a simple class-based design, the public API for managing items and controlling syncing,</p>
</li>
<li><p>We described the shape of a queue item — to enable retries, backoff, and monitoring.</p>
</li>
</ul>
<p>To sum it up in one sentence: in a foreground queue, treat every offline mutation as durable work that must be explicitly tracked.</p>
<p>In the next articles, I’ll dive into the most important implementation details and show how to actually satisfy the characteristics outlined at the beginning — from persistence mechanics and retry strategies to deduplication and state subscriptions in practice.</p>
<p>If you enjoyed the article or have a question, feel free to reach out on <a target="_blank" href="https://bsky.app/profile/tomaszgil.me">Bluesky</a>! 👋</p>
<h3 id="heading-further-reading-and-references">Further reading and references</h3>
<ul>
<li><p>Photo by <a target="_blank" href="https://unsplash.com/@freestocks?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">freestocks</a> on <a target="_blank" href="https://unsplash.com/photos/selective-focus-of-red-flowers-e6KOcZGA9Zk?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>
</li>
<li><p><a target="_blank" href="https://unsplash.com/photos/selective-focus-of-red-flowers-e6KOcZGA9Zk?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Qu</a><a target="_blank" href="https://en.wikipedia.org/wiki/Queue_\(abstract_data_type\)">eue (abstract data type)</a> on Wikipedia</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Offline Support in Web Apps: Data Persistence]]></title><description><![CDATA[When building web applications, it’s easy to think of the client state as disposable. Refresh the page, refetch the data, and move on. That mental model works well — right up until you start caring about offline behaviour.
This is the third post in a...]]></description><link>https://blog.tomaszgil.me/offline-support-in-web-apps-data-persistence</link><guid isPermaLink="true">https://blog.tomaszgil.me/offline-support-in-web-apps-data-persistence</guid><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[offline]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Mon, 12 Jan 2026 07:52:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767042115523/a0abb99e-3ceb-44e6-a023-2cdaf524a73c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When building web applications, it’s easy to think of the client state as disposable. Refresh the page, refetch the data, and move on. That mental model works well — right up until you start caring about offline behaviour.</p>
<p>This is the third post in a short series on building offline-capable applications. Today we discuss a fundamental aspect of offline-ready software — <strong>data persistence</strong>.</p>
<p>Regardless of whether you use a foreground queue, background sync, or a hybrid approach, you need a place to store data locally. In my experience, getting data persistence right from the start saves a lot of friction later when you start adding the actual features.</p>
<h2 id="heading-why-persistence-matters">Why persistence matters</h2>
<p>From the user's perspective, offline support begins with being able to <em>see something meaningful</em>. Ideally, it should be exactly what they saw when they were still online, like a list of to-dos, a previously opened document, a screen they were just interacting with. If the app can't display any data when the network is unavailable, there's not much the user can do.</p>
<p>This post is entirely about one very specific thing: <strong>caching data fetched from the server so it’s still available offline</strong>. Without persisted server data, there is no foundation for offline support.</p>
<p>The key decision, and the one I find most important early on, is establishing what data is critical to cache. Not everything needs to be persisted, but some queries are essential: global context, core dynamic configuration, and the data used by offline-capable features.</p>
<p>This decision becomes the basis for all offline functionality:</p>
<ul>
<li><p>which screens should be rendered without a network,</p>
</li>
<li><p>which actions are going to be possible offline,</p>
</li>
<li><p>and how much complexity you’ll need later for syncing and recovery.</p>
</li>
</ul>
<p>Once this boundary is clearly defined, the offline story becomes much easier.</p>
<h2 id="heading-storage-options-in-the-browser">Storage options in the browser</h2>
<p>Browsers give us a few options for storing data for offline sessions, each with very different trade-offs.</p>
<p>The simplest ones are <code>localStorage</code> and <code>sessionStorage</code>. They’re synchronous, easy to use, and available everywhere. Unfortunately, that simplicity comes with hard limits: small storage quotas, blocking APIs, and no support for structured or large data. For anything beyond feature flags or tiny bits of state, they’re not really suitable.</p>
<p>Cookies are even less attractive for this use case. They’re size-constrained, sent with every request, and optimised for server communication rather than client-side persistence.</p>
<p>That leaves <strong>IndexedDB</strong>. It’s asynchronous, built for larger datasets, and designed to work well with structured data. The API itself is not particularly friendly, which is why I recommend using a small wrapper library, such as <code>idb-keyval</code>, as the interface.</p>
<p>In my experience, IndexedDB is the most practical choice for offline persistence because:</p>
<ul>
<li><p>it doesn’t block the main thread,</p>
</li>
<li><p>it scales well as your cache grows,</p>
</li>
<li><p>and it works reliably across modern browsers.</p>
</li>
</ul>
<p>There are edge cases where other storage options make sense, but for persisted data caches and offline-first features, IndexedDB is usually the right default.</p>
<h2 id="heading-react-query-persistence">React Query persistence</h2>
<p>It's hard to imagine a single-page React application of a decent size without using React Query to manage client-side server state. It's definitely my go-to choice for this purpose. Keeping data persistence close to the library simplifies the entire mental model.</p>
<p>The approach I’ll describe here is:</p>
<ul>
<li><p>based on <code>@tanstack/react-query-persist-client</code>,</p>
</li>
<li><p>backed by IndexedDB,</p>
</li>
<li><p>and intentionally selective about <em>what</em> gets persisted.</p>
</li>
</ul>
<p>This approach is general enough to work across different apps, as long as they use React with React Query as the primary data-caching solution.</p>
<h2 id="heading-the-example-todos-with-offline-status-updates">The example: todos with offline status updates</h2>
<p>I’ll use a small todos app as a concrete example:</p>
<ul>
<li><p>the app displays a list of todos,</p>
</li>
<li><p>each todo has a <code>completed</code> status,</p>
</li>
<li><p>toggling the status should work offline,</p>
</li>
<li><p>and the list should still render after a reload.</p>
</li>
</ul>
<p>That gives us a clear persistence requirement: <em>the todos query must survive reloads and offline sessions</em>.</p>
<h3 id="heading-step-1-choosing-what-to-persist-selective-persistence">Step 1 — Choosing what to persist (selective persistence)</h3>
<p>Persisting <em>everything</em> that goes through React Query is rarely what you want. It unnecessarily increases storage usage. Instead, I recommend <strong>selective persistence</strong>:</p>
<ul>
<li><p>global queries required for the app to function,</p>
</li>
<li><p>and queries directly related to offline-capable features.</p>
</li>
</ul>
<p>In a todos app, that usually means the todos list itself.</p>
<p>A common pattern is to filter queries by key prefixes:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> persistedQueryPrefixes = [
  <span class="hljs-string">'me'</span>, <span class="hljs-comment">// Global - user context</span>
  <span class="hljs-string">'todos'</span>, <span class="hljs-comment">// Feature-specific - list of todos</span>
];

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> shouldPersistQuery = <span class="hljs-function">(<span class="hljs-params">query: Query</span>) =&gt;</span> {
  <span class="hljs-keyword">return</span> persistedQueryPrefixes.includes(query.queryKey[<span class="hljs-number">0</span>]);
};
</code></pre>
<p>In practice, I prefer using <a target="_blank" href="https://tkdodo.eu/blog/effective-react-query-keys">query key factories</a> so this stays type-safe and refactor-friendly.</p>
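<p>A tiny factory in that spirit might look like this (a sketch):</p>
<pre><code class="language-typescript">export const todoKeys = {
  all: ['todos'] as const,
  detail: (id: string) =&gt; ['todos', id] as const,
};

// shouldPersistQuery can then check against todoKeys.all[0]
// instead of a raw string literal.
</code></pre>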
<p>The trade-off here is that you need to think about query ownership. The upside is a much smaller and more predictable cache.</p>
<h3 id="heading-step-2-building-a-persister-with-indexeddb">Step 2 — Building a persister with IndexedDB</h3>
<p>You can use <code>idb-keyval</code> for read and write operations and implement React Query’s <code>Persister</code> interface.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { set, get, del } <span class="hljs-keyword">from</span> <span class="hljs-string">'idb-keyval'</span>

<span class="hljs-keyword">const</span> STORAGE_KEY = <span class="hljs-string">'rq-cache'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> persister: Persister = {
  persistClient: <span class="hljs-keyword">async</span> (client) =&gt; {
    <span class="hljs-keyword">await</span> set(STORAGE_KEY, client);
  },
  restoreClient: <span class="hljs-keyword">async</span> () =&gt; {
    <span class="hljs-keyword">return</span> get(STORAGE_KEY);
  },
  removeClient: <span class="hljs-keyword">async</span> () =&gt; {
    <span class="hljs-keyword">await</span> del(STORAGE_KEY);
  },
};
</code></pre>
<h3 id="heading-step-3-wiring-persistence-into-providers">Step 3 — Wiring persistence into providers</h3>
<p>React Query handles most of the lifecycle for you via <code>PersistQueryClientProvider</code>.</p>
<p>This is where configuration starts to matter:</p>
<ul>
<li><p><strong>maxAge</strong> — how long persisted queries are kept (for example, 3 days),</p>
</li>
<li><p><strong>buster</strong> — a cache-busting string to change whenever you make breaking changes to the persisted data.</p>
</li>
</ul>
<pre><code class="lang-typescript">&lt;PersistQueryClientProvider
  client={queryClient}
  persistOptions={{
    persister,
    maxAge: <span class="hljs-number">1000</span> * <span class="hljs-number">60</span> * <span class="hljs-number">60</span> * <span class="hljs-number">24</span> * <span class="hljs-number">3</span>,
    buster: <span class="hljs-string">'v1'</span>,
    dehydrateOptions: {
      shouldDehydrateQuery: shouldPersistQuery,
    },
  }}
&gt;
  {children}
&lt;/PersistQueryClientProvider&gt;
</code></pre>
<p>One important piece of configuration in your regular query client is <code>staleTime</code> — how long fetched data is considered fresh before it becomes stale (but still usable). Stale queries are refetched from the API when the app is online. Keep in mind that a refresh no longer resets the cache: the client now lives in persisted storage, so queries still within their <code>staleTime</code> won't be refetched after a reload.</p>
<p>Finding the right configuration is a balancing act. In my experience, a longer-lived cache is fine for offline-friendly data, as long as you still refetch aggressively when the app is online.</p>
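<p>A sketch of the related client configuration (the durations are arbitrary; the one real constraint, per the React Query docs, is that <code>gcTime</code> should be at least as long as <code>maxAge</code>, or persisted queries may be garbage-collected before they can be restored):</p>
<pre><code class="lang-typescript">import { QueryClient } from '@tanstack/react-query';

export const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      staleTime: 1000 * 60 * 5,        // data counts as fresh for 5 minutes
      gcTime: 1000 * 60 * 60 * 24 * 3, // keep entries at least as long as maxAge
    },
  },
});
</code></pre>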
<hr />
<p>Once persistence is in place, the next step is to record status changes while offline.</p>
<p>The easiest way is to optimistically update the cached todos query; React Query will then persist that change automatically. The persistence layer doesn't care whether the change came from the server or from a mutation.</p>
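<p>Here’s a minimal sketch of such an optimistic toggle, assuming the list is cached under the <code>['todos']</code> key and a <code>Todo</code> shape with <code>id</code> and <code>completed</code> (the <code>api</code> client is hypothetical):</p>
<pre><code class="lang-typescript">import { useMutation, useQueryClient } from '@tanstack/react-query';

interface Todo {
  id: string;
  completed: boolean;
}

// Hypothetical API client; swap in your own request function.
declare const api: { toggleTodo: (id: string) =&gt; Promise&lt;void&gt; };

export function useToggleTodo() {
  const queryClient = useQueryClient();

  return useMutation({
    mutationFn: (id: string) =&gt; api.toggleTodo(id),
    onMutate: async (id: string) =&gt; {
      await queryClient.cancelQueries({ queryKey: ['todos'] });
      // This local update is exactly what ends up in IndexedDB via the persister.
      queryClient.setQueryData&lt;Todo[]&gt;(['todos'], (todos) =&gt;
        todos?.map((t) =&gt; (t.id === id ? { ...t, completed: !t.completed } : t))
      );
    },
  });
}
</code></pre>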
<p>On its own, this is not enough. The tricky part is replaying the action later, that is, actually saving the state on the server. What happens next—reconciliation, retries, and conflict handling—depends on the offline model you choose, which I’ll discuss later in the series.</p>
<h2 id="heading-a-word-on-migrations">A word on migrations</h2>
<p>One important realisation is that with data stored on the client side, there's one additional place where data exists independently from the rest of your system. Say you change the structure of the API being used and update the application code to match, so server and client work together again. What happens to the data already persisted on users' devices?</p>
<p>When the new version of the application launches, the client loads the stored data, and it <em>breaks</em>: the code now expects the new format, while the stored data is still in the old one. You have two options.</p>
<ul>
<li><p><strong>Run a migration</strong>. This means running a script to transform the already stored client-side data into the new format expected by the updated application code. The benefit is a seamless user experience with preserved data and, most importantly, no data loss. The trade-off is added complexity, careful versioning, and the risk of bugs if migrations fail or are only partially applied.</p>
</li>
<li><p><strong>Invalidate the cache</strong>. This means discarding all previously stored client-side data and forcing the application to fetch fresh data from the server. This approach is way simpler to implement, but users may lose some offline data.</p>
</li>
</ul>
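<p>For pure invalidation, the <code>buster</code> option from the provider setup already does the job: change <code>'v1'</code> to <code>'v2'</code> and the old cache is discarded on restore. A hand-rolled version check only becomes worthwhile when you want to run an actual migration. A sketch of where that logic would live (the version key and number are assumptions):</p>
<pre><code class="lang-typescript">import { get, set, del } from 'idb-keyval';

const STORAGE_KEY = 'rq-cache';
const VERSION_KEY = 'rq-cache-version';
const CACHE_VERSION = 2; // bump on breaking changes to the cached data shape

export const restoreClient = async () =&gt; {
  const storedVersion = await get&lt;number&gt;(VERSION_KEY);
  if (storedVersion !== CACHE_VERSION) {
    // A migration would transform the stored client here before returning it.
    // Shown instead: invalidation, i.e. dropping the old cache entirely.
    await del(STORAGE_KEY);
    await set(VERSION_KEY, CACHE_VERSION);
    return undefined;
  }
  return get(STORAGE_KEY);
};
</code></pre>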
<h2 id="heading-summary">Summary</h2>
<p>Data persistence is the foundation of offline support. With React Query, you can get surprisingly far by:</p>
<ul>
<li><p>persisting only what’s needed,</p>
</li>
<li><p>using IndexedDB for storage behind a small abstraction,</p>
</li>
<li><p>and letting React Query manage cleanup and hydration.</p>
</li>
</ul>
<p>Once this is in place, higher-level offline features become much easier to reason about.</p>
<p>If you enjoyed the article or have a question, feel free to reach out on <a target="_blank" href="https://bsky.app/profile/tomaszgil.me">Bluesky</a>! 👋</p>
<h3 id="heading-further-reading-and-references">Further reading and references</h3>
<ul>
<li><p>Photo by <a target="_blank" href="https://unsplash.com/@erol?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Erol Ahmed on Unspla</a><a target="_blank" href="https://unsplash.com/photos/mountains-covered-with-snow-d3pTF3r_hwY?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">sh</a></p>
</li>
<li><p><a target="_blank" href="https://unsplash.com/photos/mountains-covered-with-snow-d3pTF3r_hwY?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Effe</a><a target="_blank" href="https://tkdodo.eu/blog/effective-react-query-keys">ctive React Query Keys</a> from <a target="_blank" href="https://tkdodo.eu/blog"><strong>TkDodo</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Offline Support in Web Apps: Loading the App Without a Network]]></title><description><![CDATA[Offline support in web applications often gets discussed as an all-or-nothing feature. Either the app is “offline-capable” or it isn’t. In practice, the app must check quite a few boxes to be able to function without access to a network. Before think...]]></description><link>https://blog.tomaszgil.me/offline-support-in-web-apps-loading-the-app-without-a-network</link><guid isPermaLink="true">https://blog.tomaszgil.me/offline-support-in-web-apps-loading-the-app-without-a-network</guid><category><![CDATA[Web Development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[offline]]></category><category><![CDATA[software architecture]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Mon, 05 Jan 2026 09:39:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766924449488/90b4d3b9-cf7e-4a80-91f7-698e5dcf03a0.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Offline support in web applications often gets discussed as an all-or-nothing feature. Either the app is “offline-capable” or it isn’t. In practice, the app must check quite a few boxes to be able to function without access to a network. Before thinking about syncing, conflict resolution, or background retries, there’s a simpler question to answer: <em>can your application even load when the network is gone?</em></p>
<p>This post is the second article in a series about offline support in web apps. Here, we're defining what's needed to make the application available when the network is down, in its most basic form. This is not about full offline functionality just yet. It’s a clear baseline we can build on later.</p>
<h2 id="heading-the-offline-availability">The Offline Availability</h2>
<p>For this article, “offline” availability is intentionally scoped to a very specific and narrow case:</p>
<ol>
<li><p>The user opens the app while online.</p>
</li>
<li><p>The device loses network connectivity.</p>
</li>
<li><p>The user reloads the page.</p>
</li>
</ol>
<p>Without any support, the browser would show its default offline error screen. Instead, we want the application to load successfully. That’s it. If your app can handle this scenario, you’ve cleared the first real offline hurdle.</p>
<p>We’re not attempting to solve data persistence, handling mutations, synchronisation, conflict resolution or retries just yet. We’ll get to them in later posts.</p>
<h2 id="heading-the-default-offline-failure">The Default Offline Failure</h2>
<p>Most websites or web applications fail offline in exactly the same way.</p>
<ol>
<li><p>The user opens the app while online.</p>
</li>
<li><p>The app loads successfully.</p>
</li>
<li><p>The network disappears.</p>
</li>
<li><p>The user refreshes the page.</p>
</li>
<li><p>The browser tries to re-fetch <code>index.html</code>.</p>
</li>
<li><p>The request fails.</p>
</li>
<li><p>The browser shows a generic offline error page.</p>
</li>
</ol>
<p>Notice how nothing “breaks” in your code — the browser simply refuses to load the app at all.</p>
<p>In practice, this is often the first thing users notice — a refresh on a train, plane, or unstable connection that instantly breaks the app. Importantly, this happens even if the app was previously loaded, the JavaScript bundles sit in the HTTP cache, and the app is a SPA with client-side routing.</p>
<p>The key insight here is this: <strong>without explicit offline support, the browser treats your app as unavailable</strong>, regardless of how much code it already has locally.</p>
<h2 id="heading-app-shell-availability">App Shell Availability</h2>
<p>The smallest meaningful offline capability is simple — <strong>the application shell must load without a network connection</strong>. If you achieve that, your app moves from “completely broken offline” to “offline-aware”.</p>
<p>For the purposes of this post, the app shell is the minimal set of HTML, CSS, and JavaScript required for the core user interface of a web application such as navigation, layout, and basic styling. It does not include anything that’s required to handle dynamic content.</p>
<p>If these assets load, the user will at least be able to see <em>something</em> — even if it’s just a clear “You’re offline” state. Showing <em>any</em> intentional UI is the first step towards an offline-capable web app.</p>
<h2 id="heading-introducing-pwa">Introducing: PWA</h2>
<p>Browsers do not guarantee that <code>index.html</code> will be available offline via normal HTTP caching. If you want reliable offline loading of the app shell, you need explicit control over how the browser serves your application’s core files when the network is unavailable.</p>
<p>This is where <strong>Progressive Web Applications</strong> (PWAs) come into play. PWAs are web apps that opt into a set of browser features designed to improve reliability, and the most important one for offline access is the service worker. A service worker allows your app to intercept network requests and serve pre-cached assets instead, making it possible to apply a cache-first strategy for the app shell and reliably load the application even when the network is gone.</p>
<p>Here’s the minimal required PWA feature set to make this happen.</p>
<ul>
<li><p>Service worker registration.</p>
</li>
<li><p>Pre-caching of app shell assets.</p>
</li>
<li><p>Cache-first fetch handling for those assets.</p>
</li>
</ul>
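<p>To make those three pieces less abstract, here’s roughly what they amount to in a hand-written service worker. This is a minimal sketch (the cache name and asset list are placeholders), and the tooling described below generates the equivalent for you:</p>
<pre><code class="lang-typescript">const SHELL_CACHE = 'app-shell-v1';
const SHELL_ASSETS = ['/', '/index.html', '/assets/index.js', '/assets/index.css'];

// Pre-cache the app shell when the service worker installs.
self.addEventListener('install', (event) =&gt; {
  event.waitUntil(
    caches.open(SHELL_CACHE).then((cache) =&gt; cache.addAll(SHELL_ASSETS))
  );
});

// Cache-first: answer from the cache, fall back to the network.
self.addEventListener('fetch', (event) =&gt; {
  event.respondWith(
    caches.match(event.request).then((cached) =&gt; cached ?? fetch(event.request))
  );
});
</code></pre>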
<p>There are other PWA features, such as the Web App Manifest (which defines icons and allows the application to be installed) or push notifications. While these features enhance the offline experience, you can still have offline app-shell loading without install prompts or home screen icons.</p>
<h3 id="heading-using-vite-plugin-pwa">Using <code>vite-plugin-pwa</code></h3>
<p>It’s safe to say Vite is currently the go-to way of building modern single-page web applications, so the example here uses that build tool.</p>
<p>For Vite-based apps, <code>vite-plugin-pwa</code> provides a low-friction way to add just enough PWA capabilities without having to hand-write or deeply understand a service worker. It integrates with the Vite build pipeline, takes care of asset hashing at build time, and gives you pre-caching out of the box.</p>
<p>Most importantly, it lets you focus on deciding <em>what</em> should be cached, instead of worrying about <em>how</em> the service worker is implemented. A conceptual configuration might look like this:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { VitePWA } <span class="hljs-keyword">from</span> <span class="hljs-string">'vite-plugin-pwa'</span>

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> {
  plugins: [
    VitePWA({
      registerType: <span class="hljs-string">'autoUpdate'</span>,
      strategies: <span class="hljs-string">'generateSW'</span>,
      workbox: {
        globPatterns: [<span class="hljs-string">'**/*.{js,css,html,svg,png}'</span>],
      },
    }),
  ],
}
</code></pre>
<p>With this in place, the app shell is pre-cached at build time and served from cache when the network is unavailable, making offline page reloads possible without any deeper offline functionality. The same principles apply if you use other build tools, but Vite makes this particularly straightforward.</p>
<h2 id="heading-lazy-loaded-routes">Lazy-Loaded Routes</h2>
<p>It's important to point out that modern web applications are often split into multiple JavaScript chunks that are only loaded when they are actually needed. When code-splitting is done by route (very common in SPAs) the code responsible for rendering a specific page might only be fetched when the user navigates to that route for the first time. With offline support, this becomes a problem because that code may simply never have been downloaded before the user goes offline.</p>
<p>A typical example looks like this:</p>
<pre><code class="lang-ts"><span class="hljs-keyword">const</span> SettingsPage = lazy(<span class="hljs-function">() =&gt;</span> <span class="hljs-keyword">import</span>(<span class="hljs-string">'./pages/settings'</span>))
</code></pre>
<p>This works perfectly fine online. When the user tries to access the settings page for the first time, a new chunk of code is requested, and the user can see the page. However, if the user goes offline and then tries to navigate to <code>/settings</code>, the browser won't have the JavaScript chunk for that route. If it isn't already cached, the navigation will fail, even though the app shell itself loaded successfully.</p>
<p>The practical implication is straightforward: <strong>any route you expect users to access while offline must not depend on a chunk that was never cached</strong>. If a route must work offline, its code must be available before the network disappears. You can solve this in a couple of ways, sketched after the list.</p>
<ol>
<li><p>Eagerly import critical routes. This increases your initial bundle size but guarantees the code is available offline.</p>
</li>
<li><p>Include route chunks in your pre-cache configuration. This preserves the benefits of code-splitting but requires a bit more awareness of what your build actually produces.</p>
</li>
</ol>
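<p>Both options, side by side (reusing the <code>./pages/settings</code> module from the example above):</p>
<pre><code class="lang-ts">import { lazy } from 'react'

// Option 1 - eager import: the settings code ships with the initial bundle,
// so it is available offline whenever the app shell is.
import SettingsPage from './pages/settings'

// Option 2 - keep code-splitting; the chunk must then be pre-cached.
// With the generateSW setup above, the '**/*.js' part of globPatterns
// already matches the lazy route chunks emitted by the build.
const LazySettingsPage = lazy(() =&gt; import('./pages/settings'))
</code></pre>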
<p>This is not necessarily about making every route work offline. It’s about identifying the offline-critical path — usually landing pages, dashboards, or explicit offline states — and making sure those parts of the application are ready before the network is gone.</p>
<h2 id="heading-summary">Summary</h2>
<p>To wrap it up, for a web app to be accessible offline in the most basic sense:</p>
<ul>
<li><p>A service worker must exist</p>
</li>
<li><p>The app shell must be pre-cached and served cache-first</p>
</li>
<li><p>Critical routes must not rely on uncached lazy chunks</p>
</li>
</ul>
<p>With this baseline in place, the next step is deciding how data behaves offline — which is where things start to get more interesting. If you enjoyed the article or have a question, feel free to reach out on <a target="_blank" href="https://bsky.app/profile/tomaszgil.me">Bluesky</a>! 👋</p>
<h3 id="heading-further-reading-and-references">Further Reading and References</h3>
<ul>
<li><p>Photo by <a target="_blank" href="https://unsplash.com/@galina88?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Galina N on Unsp</a><a target="_blank" href="https://unsplash.com/photos/selective-focus-photo-of-frozen-round-red-fruits-AgWVcQz1bOA?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">lash</a></p>
</li>
<li><p><a target="_blank" href="https://unsplash.com/photos/selective-focus-photo-of-frozen-round-red-fruits-AgWVcQz1bOA?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">MDN</a> <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API">— Service Worker API</a></p>
</li>
<li><p><a target="_blank" href="https://vite-pwa-org.netlify.app/">Vite PWA Plugin documentation</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Offline Support in Web Apps: Foreground Queue vs. Background Sync]]></title><description><![CDATA[Offline support in web applications has been on my mind a lot lately. I’m working on adding it to one of the projects I contribute to, and I quickly learned there's a lot of complexity to this topic. Deciding how to approach this can be challenging, ...]]></description><link>https://blog.tomaszgil.me/offline-support-in-web-apps-foreground-queue-vs-background-sync</link><guid isPermaLink="true">https://blog.tomaszgil.me/offline-support-in-web-apps-foreground-queue-vs-background-sync</guid><category><![CDATA[Web Development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[offline]]></category><category><![CDATA[architecture]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Mon, 29 Dec 2025 11:55:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766320271843/0a38a91e-d446-4cc0-b3df-ec35b207a634.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Offline support in web applications has been on my mind a lot lately. I’m working on adding it to one of the projects I contribute to, and I quickly learned there's a lot of complexity to this topic. Deciding how to approach this can be challenging, especially if you want something that works reliably without jumping straight into PWAs and service-worker-powered background features.</p>
<p>This post kicks off a short series on building offline-capable applications. I want to start with two core architectural patterns that form the foundation of almost every offline strategy: <strong>Foreground Queue</strong> and <strong>Background Sync</strong>.</p>
<p>I’ll explore both approaches from the client perspective—your SPA, Electron shell, or mobile web wrapper. PWAs can come later as a progressive enhancement.</p>
<h2 id="heading-what-problem-are-we-solving">What Problem Are We Solving?</h2>
<p>When the client loses connectivity, the user still wants to do things—create tasks, edit notes, send messages, update settings. At some point, those changes must reach the server. The question is <strong>how</strong> and <strong>when</strong> the sync happens, and what the <strong>user sees</strong> while it’s happening.</p>
<p>Most designs naturally fall into one of two models.</p>
<h2 id="heading-foreground-queue-explicit-traceable-user-visible">Foreground Queue — Explicit, Traceable, User-Visible</h2>
<p>The Foreground Queue model focuses on showing current state, giving clarity and predictability to the user. The idea is simple: whenever the user performs an action that needs to reach the server, the app writes that operation into a durable queue (most often IndexedDB). Those operations sit there until the app decides it’s time to replay them. Foreground Queue is about <strong>recording intent and replaying it later</strong>.</p>
<blockquote>
<p>This is an <em>application-level queue</em>, not a message broker.</p>
</blockquote>
<p>And “decides” really means <em>while the app is running</em>. Nothing magical happens in the background. If the user closes the tab or the desktop client isn’t active, the queue stops moving. Sync only resumes when the user comes back.</p>
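<p>Recording intent can be as small as appending to a list in IndexedDB. Here’s a minimal sketch using <code>idb-keyval</code> (the <code>'outbox'</code> key and the <code>Operation</code> shape are my assumptions, not a standard):</p>
<pre><code class="lang-typescript">import { get, set } from 'idb-keyval';

interface Operation {
  id: string;       // stable, client-generated ID so retries can be deduplicated
  type: string;     // e.g. 'todo/toggle'
  payload: unknown;
  status: 'pending' | 'failed';
}

// Append an operation to the durable outbox.
export async function enqueue(op: Operation) {
  const items = (await get&lt;Operation[]&gt;('outbox')) ?? [];
  await set('outbox', [...items, op]);
}
</code></pre>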
<p>In practice, this makes the logic easy to reason about. You detect that you’re online, you pull pending items out of the queue, and you try sending them one by one. If something fails, you mark it as failed, apply your backoff strategy, and try again later. FIFO ordering works well, but you can get fancy if your domain requires it.</p>
<p>Here’s a simplified sketch of the sync function:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">sync</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> items = <span class="hljs-keyword">await</span> queue.loadItems();

  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> item <span class="hljs-keyword">of</span> items) {
    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">await</span> item.process();
      <span class="hljs-keyword">await</span> queue.markCompleted(item);
    } <span class="hljs-keyword">catch</span> {
      <span class="hljs-keyword">await</span> queue.markFailed(item);
      <span class="hljs-keyword">break</span>;
    }
  }
}
</code></pre>
<p>The UX that this approach brings is what I think of as <em>visibility-first</em>. Users know what’s pending because you show it to them—an Outbox, a little “pending” pill next to items, a count of unsent changes. You don’t pretend the server is up to date—the user understands others won’t yet see the change, and they are less surprised if a conflict appears after submission.</p>
<p>If you want straightforward reasoning and explicit status for a handful of clearly defined actions, and can accept relying on retries and making sure requests are idempotent, Foreground Queue is hard to beat.</p>
<h2 id="heading-background-sync-optimistic-continuous-seamless">Background Sync — Optimistic, Continuous, Seamless</h2>
<p>Background Sync takes a different approach. Instead of queuing operations and waiting for a coordinated replay, the app immediately updates local state and behaves as though the server has already accepted the change. Background Sync is about <strong>maintaining a local copy of server state and continuously reconciling it.</strong></p>
<blockquote>
<p>Despite the name, this isn’t the browser’s Background Sync API. It’s an architectural pattern where the app continuously reconciles local and server state while it’s running.</p>
</blockquote>
<p>This model can be incredibly pleasant to use. The whole app feels fast because the UI never waits on the network. Under the hood, though, you’re doing more work: you label objects as “dirty,” schedule periodic sync attempts, push changes whenever you detect connectivity, and then reconcile any differences between the client and server versions. A lot of things can go wrong here.</p>
<p>Because changes apply locally right away, you also have to be more thoughtful about conflict resolution. If the server disagrees with your optimistic edit, the user may see a correction or merge state later. Handling that gracefully makes or breaks the experience.</p>
<p>The core loop can look something like this:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">backgroundSync</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> dirty = <span class="hljs-keyword">await</span> store.getDirtyEntities();
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> entity <span class="hljs-keyword">of</span> dirty) {
    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> updated = <span class="hljs-keyword">await</span> push(entity);
      <span class="hljs-keyword">await</span> store.applyServerState(updated);
    } <span class="hljs-keyword">catch</span> {
      <span class="hljs-comment">/* retry later */</span>
    }
  }
}
</code></pre>
<p>In a browser-only environment, however, it’s worth remembering that this isn’t real background sync. Nothing runs after the tab closes in the browser. Electron and native shells can give you genuine background execution, which makes this model far more practical.</p>
<p>The trade-off is complexity in maintaining a client-side replica of the server state. That means concurrency issues, periodic synchronisation, retries, and merges. And because users assume everything succeeded instantly, any visible rollback is more noticeable and must be communicated carefully.</p>
<h2 id="heading-practical-recommendations">Practical Recommendations</h2>
<p>Here are a few practical guidelines I’ve found helpful when choosing and implementing an offline strategy.</p>
<ol>
<li><p><strong>Use IndexedDB for persistent storage</strong>. With its generous storage quota, ability to handle complex data (like blobs and files), asynchronous API, and broad browser support, it’s probably the safest bet for client data storage in any approach.</p>
</li>
<li><p><strong>Design for at-least-once delivery</strong>. Use stable operation IDs and idempotent endpoints. Assume every request can be sent twice, and design endpoints accordingly (see the sketch after this list).</p>
</li>
<li><p><strong>Pick UX deliberately</strong>. Users need clarity and control? Foreground Queue. Users expect seamless, fast, “everything just saves”? Background Sync. Decide upfront whether users should ever see ‘pending’ as a first-class state.</p>
</li>
<li><p><strong>Don’t treat the architecture as a binary choice.</strong> Many real-world apps mix both models—for example, using a Foreground Queue for destructive or high-risk actions, and Background Sync for low-risk, high-frequency edits.</p>
</li>
<li><p><strong>Be honest about your runtime</strong>. If your app can’t run code in the background, don’t promise background behavior. If closing the tab stops all work, reflect that in copy and product expectations.</p>
</li>
</ol>
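<p>For the at-least-once point, the client half can be as simple as reusing the operation’s stable ID as an idempotency key. A sketch (the header name and endpoint are assumptions; your API may expect something different):</p>
<pre><code class="lang-typescript">// `op` is a queued operation with a stable, client-generated ID.
await fetch('/api/operations', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Idempotency-Key': op.id, // lets the server deduplicate replays
  },
  body: JSON.stringify(op),
});
</code></pre>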
<h2 id="heading-takeaway">Takeaway</h2>
<p>To sum it up in one rule: <strong>Foreground Queue prioritises transparency; Background Sync prioritises smoothness.</strong> Both are solid approaches, and both can coexist in the same architecture. The right choice depends on your environment, your users, and how much complexity you’re willing to take on. The real decision isn’t only technical—it’s whether you want users to see uncertainty or hide it.</p>
<p>If you have thoughts or want to share your own offline challenges, feel free to reach out on <a target="_blank" href="https://bsky.app/profile/tomaszgil.me">Bluesky</a>! 👋</p>
<h3 id="heading-further-reading-and-references">Further reading and references</h3>
<ul>
<li>Photo by <a target="_blank" href="https://unsplash.com/@rayhennessy?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Ray Hennessy</a> on <a target="_blank" href="https://unsplash.com/photos/selective-focus-photography-of-cardinal-bird-on-tree-branch-6-JIDCnZG2E?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Enhancing Software Engineering Workflow with Cursor Background Agents]]></title><description><![CDATA[Over the past few weeks, I’ve been experimenting with AI—especially Cursor Background Agents—to support my engineering work in a new web application we’re building. Below are some observations and tips that have helped me get better results.
Rules
On...]]></description><link>https://blog.tomaszgil.me/enhancing-software-engineering-workflow-with-cursor-background-agents</link><guid isPermaLink="true">https://blog.tomaszgil.me/enhancing-software-engineering-workflow-with-cursor-background-agents</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[AI]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[ai agents]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Thu, 21 Aug 2025 06:45:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755459336398/fad3b5a8-2f69-45f5-bab2-ba372ac915b2.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Over the past few weeks, I’ve been experimenting with AI—especially <strong>Cursor Background Agents</strong>—to support my engineering work in a new web application we’re building. Below are some observations and tips that have helped me get better results.</p>
<h2 id="heading-rules"><strong>Rules</strong></h2>
<p>One of the most important factors in making agents even directionally correct, especially early on, is establishing clear rules. At the start of a project, agents are almost useless without them.</p>
<p>As with many other things, I’ve found it best to begin with a small, simple set of rules, then expand and organize them as the project grows—first in a single file, then across multiple files, and eventually into directories when needed. A really useful mental model around agent rules is to ask, each time you correct an agent or have a discussion within the team, <em>should this become a new rule?</em></p>
<p>I also now keep most of the documentation about the code in README files within the codebase, rather than in any external services, which makes it easier for both humans and agents to stay aligned.</p>
<p>Finally, it’s worth revisiting and refining the rules periodically. One effective way to do this is by asking a model to evaluate the existing rules, suggest improvements tailored to your tech stack, and identify any gaps:</p>
<blockquote>
<p>Evaluate the rules below and suggest an improved version that works best with my tech stack.<br />If any important rules are missing, suggest adding them.</p>
</blockquote>
<h2 id="heading-prompting"><strong>Prompting</strong></h2>
<p>With the basic rules in place, the next focus should be on prompting. It's been said time and time again, but it's worth mentioning—the quality of the LLM's output directly depends on how you prompt it.</p>
<p>Over time—through trial, error, and digging around online—I’ve collected a handful of instructions that tend to cut down on unnecessary back-and-forth. These tips aren’t quite as critical when working with in-editor agents (since the feedback loop there is much tighter), but for background agents they really help keep things on track.</p>
<p><strong>Emphasize the rules:</strong></p>
<ul>
<li><em>Make sure to read the rules in the repository and follow them when implementing the feature.</em></li>
</ul>
<p><strong>Improve reasoning:</strong></p>
<ul>
<li><p><em>Think hard before starting implementation.</em></p>
</li>
<li><p><em>Plan your steps before writing code.</em></p>
</li>
<li><p><em>Prefer using existing components over creating custom ones, even if designs differ slightly.</em></p>
</li>
</ul>
<p><strong>Final checks:</strong></p>
<ul>
<li><p><em>Double-check requirements and handle any potential edge cases.</em></p>
</li>
<li><p><em>Make sure the added code follows standards defined in the codebase.</em></p>
</li>
<li><p><em>Ensure existing behavior isn’t broken.</em></p>
</li>
<li><p><em>Run formatting and linting before submitting code.</em></p>
</li>
</ul>
<h2 id="heading-workflow"><strong>Workflow</strong></h2>
<p>I’ve heard (and <a target="_blank" href="https://harper.blog/2025/04/17/an-llm-codegen-heros-journey/">read</a>) that some people spin up multiple agents at once and only orchestrate them. I can’t see myself doing that yet, for a few reasons.</p>
<p>Running multiple agents in parallel still requires a fair amount of <strong>mental overhead</strong>, since each one needs preparation and follow-up adjustments. On top of that, at the beginning of any project, you probably don’t have enough distinct areas of work to parallelize effectively. And while agent output is usually a good starting point, it always <strong>requires significant adjustments</strong>—whether that’s making the design closer to spec, improving the user experience, or restructuring the code in a way that fits better.</p>
<p>Some of these issues can be mitigated by writing better rules, but many of them only come up during code review or while actually testing the solution. Still, I think agents can be very effective—even when used one at a time.</p>
<p>Having said all that, here’s my current workflow supported by background agents.</p>
<ol>
<li><p><strong>Prep work for the agent.</strong> I usually work with very short issue descriptions, so I generate richer descriptions first to provide more context to the agent.</p>
<blockquote>
<p>Help me prepare a well-defined issue description to implement the following:<br /><code>&lt;feature_description&gt;</code></p>
</blockquote>
</li>
<li><p><strong>Create a ready-to-use prompt.</strong> With the richer description, I generate a background-agent prompt (including my prompting instructions):</p>
<blockquote>
<p>Create a prompt optimized for Cursor Background Agents based on the feature description below.<br />Additionally: <code>&lt;prompting_instructions&gt;</code><br />Feature description: <code>&lt;richer_feature_description&gt;</code></p>
</blockquote>
</li>
<li><p><strong>Run the agent</strong> and work on something else in parallel (e.g., code reviews, or writing a post like this).</p>
</li>
<li><p><strong>Review results</strong> and adjust in sequence:</p>
<ul>
<li><p>Background agents → biggest follow-up changes (e.g., missing tests, mocks).</p>
</li>
<li><p>In-editor agents → medium-size adjustments.</p>
</li>
<li><p>Inline edits → small changes like styling or readability.</p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-takeaway"><strong>Takeaway</strong></h2>
<p>Right now, background agents give me about the first <strong>50% of a feature</strong>. In theory, in-editor agents could do the same, but I find that the stronger models and the more open environment of background agents produce a better starting point, faster. I hope to slowly increase that initial percentage.</p>
<p>I’m curious: does this align with your experience using similar tools? How does your workflow differ?</p>
<p>If you enjoyed the article or have a question, feel free to reach out to me on <a target="_blank" href="https://bsky.app/profile/tomaszgil.me"><strong>Bluesky</strong></a> or leave a comment here! 👋</p>
<h3 id="heading-further-reading-and-references">Further reading and references</h3>
<ul>
<li>Photo by <a target="_blank" href="https://unsplash.com/@ww_studios?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Joshua Whitney</a> on <a target="_blank" href="https://unsplash.com/photos/a-single-flower-that-is-growing-out-of-the-ground-K_Nayo69CGA?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Introducing Large-Scale Tooling Changes: A Software Engineering Guide]]></title><description><![CDATA[As software engineering organizations evolve, introducing new tooling changes can have a significant impact on long-term productivity, collaboration, and overall code quality. This is also not an easy task in many ways.
Recently, at Salesloft, I had ...]]></description><link>https://blog.tomaszgil.me/introducing-large-scale-tooling-changes-a-software-engineering-guide</link><guid isPermaLink="true">https://blog.tomaszgil.me/introducing-large-scale-tooling-changes-a-software-engineering-guide</guid><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[tools]]></category><category><![CDATA[guide]]></category><category><![CDATA[ci-cd]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Mon, 30 Jun 2025 10:52:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751138684018/67661b85-b034-40b6-8009-bdcd53069516.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As software engineering organizations evolve, introducing new tooling changes can have a significant impact on long-term productivity, collaboration, and overall code quality. This is also not an easy task in many ways.</p>
<p>Recently, at Salesloft, I had the opportunity to lead such an initiative: integrating <a target="_blank" href="https://knip.dev/">Knip</a>, a dependency management and automated unused code removal tool, in our main front-end application monorepo. This guide comes directly from that project. I will walk you through <strong>a step-by-step process for introducing large-scale tooling changes in software engineering projects</strong>.</p>
<p>Before we begin, here's an important note: The approach in this document is quite general and should work with different tools and situations. However, when adding new tools, it's crucial to think about the specific needs of your project and team. While it's usually better to introduce tools gradually, some can fit smoothly into existing workflows all at once.</p>
<h2 id="heading-step-1-evaluate-the-tool">Step 1 — Evaluate the tool</h2>
<p>Before adding any new tool, it's important to check what it can do, what it can't do, and how it might affect your project.</p>
<ul>
<li><p><strong>Assess the tool's main function</strong>. Understand what the tool is meant to do and how it fits with your team's goals. If possible, give it a try.</p>
</li>
<li><p><strong>Research the tool's ecosystem</strong>. Look into the tool's community, documentation, and existing integrations to understand its maturity and the level of maintenance and support.</p>
</li>
<li><p><strong>Evaluate the tool's compatibility</strong>. Check if the tool works well with your project's current setup, including the infrastructure, programming languages, and libraries.</p>
</li>
<li><p><strong>Estimate the costs</strong>. Think about the cost of migration, possible pitfalls, and the eventual sunsetting of the tool.</p>
</li>
<li><p><strong>Look into the license</strong>. Check the tool’s license and to what extent it can be used.</p>
</li>
<li><p><strong>Compare alternatives</strong>. See if there are other tools that can fully or partially solve the same problem. If there are, compare them and decide which one is the best fit for your situation.</p>
</li>
<li><p><strong>Consider integration options</strong>. Describe the different ways you could integrate the tool.</p>
</li>
</ul>
<h2 id="heading-step-2-start-a-discussion">Step 2 — Start a discussion</h2>
<p>Once you evaluate the tool yourself, it's time to see if the team is also interested and gather their feedback — this is the next step in evaluating the tool. You have a few options here—the choice depends on how simple or complex the tool is. As the complexity increases, you might choose a more formal approach.</p>
<p>Here are two example approaches you can take:</p>
<ul>
<li><p><strong>Informal</strong>: Create an open Slack discussion channel to share ideas and get feedback.</p>
</li>
<li><p><strong>Formal</strong>: Create a Request For Change (RFC) document that explains the proposed tool change, its benefits, and possible risks.</p>
</li>
</ul>
<p>No matter which option you choose, you should focus on a few key areas.</p>
<ul>
<li><p><strong>Collect feedback</strong>. Encourage team members to review, comment on, and give their input. Tag the right people and set a deadline for collecting feedback (e.g., two weeks), to make sure you get input on time.</p>
</li>
<li><p><strong>Get approvals</strong>. Get the necessary approvals for the tool (architecture, security, etc.).</p>
</li>
<li><p><strong>Describe the integration path</strong>. Decide how you will integrate the tool and explain the method in detail.</p>
</li>
<li><p><strong>Communicate with the teams involved</strong>. Communicate and discuss what is needed from other delivery teams.</p>
</li>
</ul>
<h2 id="heading-step-3-integrate-the-tool-into-the-repository">Step 3 — Integrate the tool into the repository</h2>
<p>Once the tool change is approved, it's time to add it to your repositories.</p>
<ul>
<li><p><strong>Install the tool</strong>. Install the tool, which may involve creating a new service or module in your repository to accommodate it.</p>
</li>
<li><p><strong>Add minimal configuration</strong>. Add the appropriate configuration and necessary scripts.</p>
</li>
<li><p><strong>Focus on local development experience</strong>. Ensure the tool is available locally for other engineers first, and save any automation for later. This allows engineers to start using the tool if they choose and provide feedback.</p>
</li>
<li><p><strong>Communicate the change</strong>. Communicate the change introduced to the team, along with the next steps, to keep everyone in the loop.</p>
</li>
</ul>
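<p>To ground this with the Knip example: the minimal configuration can be a single file. A sketch of a starting point, using Knip’s TypeScript config support (the entry points are illustrative and depend on your repository layout):</p>
<pre><code class="lang-typescript">// knip.ts
import type { KnipConfig } from 'knip';

const config: KnipConfig = {
  entry: ['src/index.ts'],    // where the application starts
  project: ['src/**/*.ts'],   // the files Knip should analyze
};

export default config;
</code></pre>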
<h2 id="heading-step-4-run-as-a-part-of-automation">Step 4 — Run as a part of automation</h2>
<p>Integrating the new tool with your Continuous Integration (CI) pipeline is the next crucial step to ensure the tool is effective.</p>
<ul>
<li><p><strong>Integrate into a workflow</strong>. Add the tool to the appropriate part of the workflow. Make sure it runs on every pull request, unless that's not practical.</p>
</li>
<li><p><strong>Add a minimal low-severity ruleset</strong>. Start with a basic set of useful rules and add more later. Initially, it might be appropriate to make the workflow step non-blocking by using warnings instead of errors. This approach allows more people to notice the tool without interrupting ongoing development. For pull requests, consider running the tool only on changed files at first, and then expand later.</p>
</li>
<li><p><strong>Highlight results</strong>. For pull requests, make sure the results are highlighted, especially when they are non-blocking. You can add them as comments on pull requests because warnings in a successful pipeline will almost certainly be overlooked.</p>
</li>
<li><p><strong>Communicate the change</strong>. As always, communicate the change to the team.</p>
</li>
</ul>
<h2 id="heading-step-5-add-documentation">Step 5 — Add documentation</h2>
<p>Make sure to include all the important details in the documentation. You can do this during the earlier steps, but it's also fine to create it now, once the basic setup is complete. Any needed context can be understood from the initial discussion.</p>
<ul>
<li><p><strong>Update the changelog</strong>. Add a changelog entry if your repository maintains one.</p>
</li>
<li><p><strong>Create documentation</strong>. Make sure to cover most of the following aspects.</p>
<ul>
<li><p>Short description of the tool and its purpose.</p>
</li>
<li><p>The reason for introducing the tool.</p>
</li>
<li><p>Common CLI commands (if the tool supports them).</p>
</li>
<li><p>Guide to using the tool.</p>
</li>
<li><p>Description of the CI workflow integration.</p>
</li>
<li><p>Links to official documentation or other related resources.</p>
</li>
</ul>
</li>
<li><p><strong>Communicate the change</strong>. At this point, you most likely know what to do.</p>
</li>
</ul>
<h2 id="heading-step-6-spread-the-word">Step 6 — Spread the word</h2>
<p>After introducing the tool change with clear documentation, share your experience and get feedback from the wider team. It's usually a good idea to have a short meeting to discuss the tool with anyone whose work will be directly affected by it.</p>
<ul>
<li><p><strong>Describe the “why”</strong>. Explain the tool's function and the reason for its use.</p>
</li>
<li><p><strong>Show it in action</strong>. Prepare a demo to show engineers how to use the tool. Use the documentation and other relevant resources as needed.</p>
</li>
<li><p><strong>Highlight automation</strong>. Explain how the workflow integration works and how it affects engineers.</p>
</li>
<li><p><strong>Keep a record for future reference</strong>. Record the meeting for future reference. Share the slides and the recorded meeting in the appropriate channels.</p>
</li>
</ul>
<h2 id="heading-step-7-involve-other-teams-in-actively-adopting-the-tool">Step 7 — Involve other teams in actively adopting the tool</h2>
<p>If introducing the tool requires action from other teams (to handle the results or integrate the solution into their areas), be sure to involve them. Depending on what you can realistically handle yourself and how you define code ownership, this step will require varying amounts of persuasion.</p>
<p>Here are two example strategies you might follow.</p>
<ul>
<li><p><strong>Start and hand off</strong>. Consider starting the work for other teams and then passing it on to them.</p>
</li>
<li><p><strong>Do it yourself</strong>. Alternatively, if the change is simple, you can do all the work for other teams and then pass it on to them just for review and testing.</p>
</li>
</ul>
<h2 id="heading-step-8-refine-the-configuration-and-expand-the-rules">Step 8 — Refine the configuration and expand the rules</h2>
<p>As the repository evolves and engineers start using the tool, we should continue to refine the configuration and consider expanding the rules.</p>
<ul>
<li><p><strong>Expand the ruleset</strong>. Consider making the rules stricter or adding new rules.</p>
</li>
<li><p><strong>Run on all files</strong>. Think about changing the workflow to apply to all files, not just the ones that have been changed.</p>
</li>
<li><p><strong>Block CI</strong>. Consider blocking the workflow step by changing warnings to errors (only do this if it won't create too much friction in pull requests).</p>
</li>
</ul>
<blockquote>
<p><strong>Note</strong>: This is the final, ongoing state.</p>
</blockquote>
<h2 id="heading-conclusion">Conclusion</h2>
<p>That's it! Introducing large-scale tooling changes in software projects can be overwhelming. It involves not only doing the work but also getting everyone on the same page. No matter what tool you introduce, there are a few key areas and common themes to focus on.</p>
<ul>
<li><p><strong>Evaluate and research</strong>. Before integrating any tool, carefully check its features, limits, and how it fits with your current systems. Doing this early work helps avoid problems later and makes sure the tool meets your team's goals.</p>
</li>
<li><p><strong>Take small steps</strong>. Make changes gradually, starting with local access and non-blocking automation. This helps engineers slowly get used to the changes and give feedback, reducing interruptions to ongoing development.</p>
</li>
<li><p><strong>Over-communicate</strong>. Consistent and clear communication is very important during the whole process. Keep everyone informed about changes, collect feedback, and explain why the new tool is being used.</p>
</li>
<li><p><strong>Get everyone involved</strong>. Encourage teamwork by including the team in talks and decisions from the start. Their feedback and support are crucial for successful adoption and lasting impact.</p>
</li>
</ul>
<p>If you enjoyed the article or have a question, feel free to reach out to me on <a target="_blank" href="https://bsky.app/profile/tomaszgil.me">Bluesky</a> or leave a comment here! 👋</p>
<h3 id="heading-further-reading-and-references">Further reading and references</h3>
<ul>
<li>Photo by <a target="_blank" href="https://unsplash.com/@ajrobbie?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">AJ Robbie</a> on <a target="_blank" href="https://unsplash.com/photos/photo-of-gray-elephant-on-grass-t5V1rup9DCY?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Repeat the Code, Not the Information]]></title><description><![CDATA[The past year has been pivotal for me and my views on repeated code. "Don't repeat yourself" (DRY) is seen as a good practice, and rightly so. However, like any good practice, it needs context to be applied correctly. Without this context, applying a...]]></description><link>https://blog.tomaszgil.me/repeat-the-code-not-the-information</link><guid isPermaLink="true">https://blog.tomaszgil.me/repeat-the-code-not-the-information</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[React]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[best practices]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Wed, 04 Jun 2025 10:33:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749024959536/ff5f8da3-8ad3-43e7-b44f-7cd9a26a4266.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The past year has been pivotal for me and my views on repeated code. "Don't repeat yourself" (DRY) is seen as a <em>good practice</em>, and rightly so. However, like any good practice, it needs context to be applied correctly. Without this context, applying a good practice can often be <em>harmful</em>.</p>
<p>I find that engineers often rush to create abstractions and reuse code, trying to avoid duplication at almost any cost. There's a major problem with this approach. Just because code looks similar or even identical doesn't mean it should be shared. It might represent fundamentally different information and, as a result, evolve in different directions.</p>
<p>This article is another attempt to explain this concept—from the perspective of types.</p>
<h2 id="heading-when-types-should-be-shared">When types should be shared</h2>
<p>You might have a situation where you have several different but related components. Maybe they operate at the same level or are used together. They share the same props.</p>
<p>Let’s imagine we have a feed showing chat messages. Each message has its own error handling and a preview for extra media content. We use a small set of components to build the feed. We need to pass the <code>id</code> of the message along with its content and the <code>threadId</code> to which the message belongs. All these components also use the current browser <code>location</code> inside their implementation.</p>
<p>Altogether, they all require the same props.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// Example #1 - Chat Messages</span>

<span class="hljs-keyword">const</span> ChatMessage = <span class="hljs-function">(<span class="hljs-params">{ id, threadId, location, data }</span>) =&gt;</span> {}
<span class="hljs-keyword">const</span> ErrorFallback = <span class="hljs-function">(<span class="hljs-params">{ id, threadId, location, data }</span>) =&gt;</span> {}
<span class="hljs-keyword">const</span> MessagePreview = <span class="hljs-function">(<span class="hljs-params">{ id, threadId, location, data }</span>) =&gt;</span> {}
</code></pre>
<p>Here’s the main question I’ll be exploring in this article.</p>
<blockquote>
<p>When we have multiple components with the same props — should we create a common type definition for props or one for each component?</p>
</blockquote>
<p>Technically, we don't need separate interfaces if all components have the same props. However, I recommend having separate interfaces for different components unless the <em>intention</em> is for them to share the same interface.</p>
<p>Here's what I mean by this.</p>
<h2 id="heading-lets-start-with-a-single-prop">Let’s start with a single prop</h2>
<p>Let's consider a different and much simpler example. Imagine we have three components, all in one file.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// Example #2 - UI Components</span>

<span class="hljs-keyword">const</span> Button = <span class="hljs-function">(<span class="hljs-params">{ children }: ButtonProps</span>) =&gt;</span> {}
<span class="hljs-keyword">const</span> Alert = <span class="hljs-function">(<span class="hljs-params">{ children }: AlertProps</span>) =&gt;</span> {}
<span class="hljs-keyword">const</span> Dialog = <span class="hljs-function">(<span class="hljs-params">{ children }: DialogProps</span>) =&gt;</span> {}
</code></pre>
<p>How should we define the interfaces for these components? They all have the same props—should we create a single <code>Props</code> interface? They only accept <code>children</code>, so it seems like a good opportunity to reuse code. DRY, right?</p>
<p>You might feel that this approach is not quite right. Technically, it wouldn't be incorrect, since the code to write an interface for each component is exactly the same. However, <strong>it would be wrong semantically</strong>—a button is not an alert, and an alert is not a dialog. The interfaces are not truly <em>the same</em>, even if they <em>appear to be identical</em>.</p>
<h2 id="heading-lets-add-the-second-prop">Let’s add the second prop</h2>
<p>Let's expand on our example. Ask yourself, how are these components most likely to change over time? As time goes on, we might want to add more props. I asked AI, and it suggested these changes as the next most likely to occur:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// Example #2 - UI Components</span>

<span class="hljs-keyword">const</span> Button = <span class="hljs-function">(<span class="hljs-params">{ children, onClick }: ButtonProps</span>) =&gt;</span> {}
<span class="hljs-keyword">const</span> Alert = <span class="hljs-function">(<span class="hljs-params">{ children, status }: AlertProps</span>) =&gt;</span> {}
<span class="hljs-keyword">const</span> Dialog = <span class="hljs-function">(<span class="hljs-params">{ children, isOpen }: DialogProps</span>) =&gt;</span> {}
</code></pre>
<p>Each component received a new prop, and each prop is unique to the component. As expected, they all evolved differently. If we had used a single <code>Props</code> interface, we would have needed to return to separate interfaces because, fundamentally, they were never a single interface. Each interface carried different information.</p>
<p>To sum it up in one rule: <strong>repeat the code, but not the information</strong>.</p>
<p>Returning to our initial chat message example—since these three components have different purposes, serve different functions, and carry different information, it's better to declare separate prop interfaces for each component. If they accept the same props, those individual props should share a type.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// Example #1 - Chat Messages</span>

<span class="hljs-keyword">interface</span> ChatMessageProps {
  id: <span class="hljs-built_in">string</span>;
  threadId: <span class="hljs-built_in">string</span>;
  location: <span class="hljs-built_in">string</span>;
  data: Message;
}

<span class="hljs-keyword">const</span> ChatMessage = <span class="hljs-function">(<span class="hljs-params">{ id, threadId, location, data }: ChatMessageProps</span>) =&gt;</span> {}

<span class="hljs-comment">// We're creating an interface per component...</span>
<span class="hljs-keyword">interface</span> ErrorFallbackProps {
  id: <span class="hljs-built_in">string</span>;
  threadId: <span class="hljs-built_in">string</span>;
  location: <span class="hljs-built_in">string</span>;
  <span class="hljs-comment">// ...and sharing the type for individual props.</span>
  data: Message;
}

<span class="hljs-keyword">const</span> ErrorFallback = <span class="hljs-function">(<span class="hljs-params">{ id, threadId, location, data }: ErrorFallbackProps</span>) =&gt;</span> {}

<span class="hljs-keyword">interface</span> MessagePreviewProps {
  <span class="hljs-comment">// Same content as in ChatMessageProps and ErrorFallbackProps</span>
}

<span class="hljs-keyword">const</span> MessagePreview = <span class="hljs-function">(<span class="hljs-params">{ id, threadId, location, data }: MessagePreviewProps</span>) =&gt;</span> {}
</code></pre>
<h3 id="heading-horizontal-vs-vertical-slicing">Horizontal vs. vertical slicing</h3>
<p>There's an important point to consider. Here, I'm looking at the repeated information from the perspective of the domain: the vertical separation of concerns. You might argue that if you view this information from a technical level (horizontal), such a reusable type could make sense. We can create a generic type to capture this idea.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">type</span> PropsWithChildren&lt;Props <span class="hljs-keyword">extends</span> <span class="hljs-built_in">object</span>&gt; = {
  children: React.ReactNode;
} &amp; Props;
</code></pre>
<p>React even had this at the component type level—<code>React.FC</code>. This type still exists, but <a target="_blank" href="https://www.totaltypescript.com/you-can-stop-hating-react-fc">it no longer automatically includes children in the props</a>.</p>
<p>It might be useful to have such a type just for utility, especially in lower-level code where there's no business domain, like in your framework. However, when you start incorporating business, design or user experience decisions, I think it's better in the long run to avoid splitting the information horizontally. This approach allows for a clear separation without too many layers of abstraction.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749026467250/3e3dcee4-a2f5-49aa-8d2f-337141b96733.png" alt="Diagram showing properties of Button, Alert, and Dialog components." class="image--center mx-auto" /></p>
<h2 id="heading-the-implications-of-sharing-interfaces">The implications of sharing interfaces</h2>
<p>Let's look at a different example where creating a shared interface is sensible.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// Example #3 - Error Fallback Components</span>

<span class="hljs-keyword">interface</span> ErrorProps {
  title: <span class="hljs-built_in">string</span>;
  message: <span class="hljs-built_in">string</span>;
  error: <span class="hljs-built_in">Error</span>;
}

<span class="hljs-keyword">const</span> DefaultError = <span class="hljs-function">(<span class="hljs-params">{ title, message, error }: ErrorProps</span>) =&gt;</span> {}
<span class="hljs-keyword">const</span> NetworkError = <span class="hljs-function">(<span class="hljs-params">{ title, message, error }: ErrorProps</span>) =&gt;</span> {}
<span class="hljs-keyword">const</span> <span class="hljs-built_in">TypeError</span> = <span class="hljs-function">(<span class="hljs-params">{ title, message, error }: ErrorProps</span>) =&gt;</span> {}
</code></pre>
<p>We also have components that accept the same props, but this time they are using a common interface. This approach communicates an important message: <strong>these components share the same requirements and are likely to evolve together in the future</strong>. Of course, this might not always happen, but at least there's a clear intention behind it.</p>
<p>Let’s consider the changes we might introduce to these components. If we want to control an illustration displayed for an error, we probably want to <em>apply the change to all components</em>. Similarly, if we want to add <code>retry</code> functionality, we likely want to <em>apply the change to all components</em>.</p>
<p>You see my point—sharing an interface actually <em>means</em> that these components share a contract. They are the same now and will likely need to stay the same in the future.</p>
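<p>To make this concrete, here's a sketch of how the shared interface might evolve (the <code>illustration</code> and <code>onRetry</code> props are hypothetical additions):</p>
<pre><code class="lang-typescript">// One change to the shared interface applies to all components at once.
interface ErrorProps {
  title: string;
  message: string;
  error: Error;
  illustration?: React.ReactNode; // hypothetical: control the error illustration
  onRetry?: () =&gt; void;          // hypothetical: retry functionality
}
</code></pre>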
<h2 id="heading-conclusion">Conclusion</h2>
<p>“Don’t repeat yourself” is a more nuanced concept than we might realize. It’s not just about repeated code; it’s about repeated information. The first is easy to notice, but the second is incredibly difficult to spot. My approach is to let patterns emerge, and when it becomes clear that something should be abstracted, then abstract it. Doing it the other way around can be quite painful.</p>
<p>Overall, here's the mental model I find useful when thinking about components that accept similar props.</p>
<ul>
<li><p>Components that are related and used interchangeably should share the same interface.</p>
</li>
<li><p>Components that are unique and specialized should not share the same interface because they serve different purposes.</p>
</li>
</ul>
<p>If you enjoyed the article or have a question, feel free to reach out to me on <a target="_blank" href="https://bsky.app/profile/tomaszgil.me">Bluesky</a> or leave a comment here! 👋</p>
<h3 id="heading-further-reading-and-references">Further reading and references:</h3>
<ul>
<li><p>Recently, I wrote an article about component composition that also discusses this topic: <a target="_blank" href="https://blog.tomaszgil.me/choosing-the-right-path-composable-vs-configurable-components-in-react">Choosing the Right Path: Composable vs. Configurable Components in React</a>.</p>
</li>
<li><p>Photo by <a target="_blank" href="https://unsplash.com/@willianjusten?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Willian Justen de Vasconcellos</a> on <a target="_blank" href="https://unsplash.com/photos/mountains-and-a-lake-reflect-under-a-cloudy-sky-RyoQe3BU8gI?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a>.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Rebuilding My Personal Website: 2025 Edition]]></title><description><![CDATA[Introduction
My old website was made in 2019 and hasn't been updated much since, except for career updates. I'm actually proud of that—I don't want to be someone who rebuilds their personal site every year.
But now, the site is over five years old in...]]></description><link>https://blog.tomaszgil.me/rebuilding-my-personal-website-2025-edition</link><guid isPermaLink="true">https://blog.tomaszgil.me/rebuilding-my-personal-website-2025-edition</guid><category><![CDATA[software development]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[Design]]></category><category><![CDATA[portfolio]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Tue, 27 May 2025 07:46:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748204762267/7429241a-77e6-47b0-8400-9b507b38e3fc.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>My old website was made in 2019 and hasn't been updated much since, except for career updates. I'm actually proud of that—I don't want to be someone who rebuilds their personal site every year.</p>
<p>But now, the site is over five years old in terms of both technology and content. It has become outdated—and it shows. It was built with Gatsby, which relied on an old version of Node. I could still deploy it on Netlify, but getting it to work locally was a hassle. It also had a CMS, which was fun to add back then, but I hardly used it. Simply put, it was a mess.</p>
<p>I finally decided to redesign, rebuild from scratch, and make it much simpler. Here's what I did.</p>
<blockquote>
<p>Go and see it for yourself: <a target="_blank" href="https://tomaszgil.me/">https://tomaszgil.me/</a>.</p>
</blockquote>
<h2 id="heading-collecting-inspiration">Collecting inspiration</h2>
<p>I spent a lot of time searching for great examples of personal websites from both developers and designers, as well as looking at product designs I really liked.</p>
<p>Using design inspiration galleries was really helpful at first. Here are the ones that were most useful to me.</p>
<ul>
<li><p><a target="_blank" href="https://a-fresh.website/">https://a-fresh.website/</a></p>
</li>
<li><p><a target="_blank" href="https://minimal.gallery/">https://minimal.gallery/</a></p>
</li>
<li><p><a target="_blank" href="https://godly.website/">https://godly.website/</a></p>
</li>
</ul>
<p>I found plenty of stunning sites. I’ve picked a handful that were the most interesting to me, each with something I wanted to include in my design.</p>
<ul>
<li><p><a target="_blank" href="https://markhorn.dev/">Mark Horn’s Portfolio</a>. This site focuses on simplicity and balance, with a well-organized content structure. Most of this is achieved through typography. I really like the combination of sans-serif and serif fonts.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745751841992/f3f1bdc9-795e-4884-8e67-7f1e3a88c256.png" alt="Mark Horn’s portfolio" class="image--center mx-auto" /></p>
</li>
<li><p><a target="_blank" href="https://ped.ro/">Pedro Duarte’s Personal Website</a>. It's simple, personal, and focused on storytelling. I really like how the story is divided on the homepage and the overall layout of the site's content.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745752432419/e62179d2-35b1-437e-b6c9-e1b2eaa5f291.png" alt="Pedro Duarte’s Personal Website" class="image--center mx-auto" /></p>
</li>
<li><p><a target="_blank" href="https://yinger.dev/">Max Yinger’s Website</a>. This site combines ultimate simplicity—the homepage serves as the entire portfolio—with interactive design, featuring neat extras like a time clock.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745753077636/5880745a-e591-41f2-b25d-071477f6d939.png" alt="Max Yinger’s Website" class="image--center mx-auto" /></p>
</li>
<li><p><a target="_blank" href="http://alistairshepherd.uk">Alistair Shepherd’s Website</a>. This website has one of the most creative theme switchers I've ever seen. Combined with a fantastic hero illustration that moves as you scroll, it looks truly impressive.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748032090677/74673201-d3b8-423f-9e11-cbccb5434815.png" alt="Alistair Shepherd’s Website" class="image--center mx-auto" /></p>
</li>
<li><p><a target="_blank" href="https://jzhao.xyz/">Jacky Zhao’s Website</a>. Sticking with the theme of theme switchers, Jacky's digital garden offers another great example—a pure CSS implementation that mimics sunlight streaming through a window, which looks absolutely great. The effect is open source, and you can find it <a target="_blank" href="https://github.com/jackyzha0/sunlit/tree/main">here</a>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748032933421/6aaf274b-83b8-466e-a898-73b2e80b2b9a.png" alt="Jacky Zhao’s Website" class="image--center mx-auto" /></p>
</li>
</ul>
<p>Alongside the individual portfolios, there were also products or services where the design resonated with me—I wanted my site to have similar characteristics or aesthetics.</p>
<ul>
<li><p><a target="_blank" href="https://www.raycast.com/">Raycast</a>. My favorite productivity app for macOS. Executed perfectly, with a strong focus on full keyboard navigation. I also really like the homepage design, especially the combination of sans-serif and monospace fonts.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748087621182/9a411814-8bf2-48e4-9bf9-3065e29dbfeb.png" alt="Raycast's Website" class="image--center mx-auto" /></p>
</li>
<li><p><a target="_blank" href="https://stripe.dev/">Stripe</a>. I'm not talking about the product or the main homepage here, but a part of the developer-facing documentation. I don't quite remember how I found it, but once I did, I fell in love right away. It's both modern and technical—thanks mainly to the use of a grayscale and sans-serif-monospace font combination (you might notice a theme here). I also like how the website can be fully navigated using a keyboard, with clear indicators showing how to access other pages.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748087668376/60267ca3-ad35-4f6c-8ace-bb3db771ed29.png" alt="Stripe.dev Website" class="image--center mx-auto" /></p>
</li>
</ul>
<h2 id="heading-the-design">The design</h2>
<p>I started by defining the main high-level characteristics I wanted my design to follow.</p>
<ol>
<li><p><strong>Minimal and elegant</strong>. I wanted the website to be as clean as possible, so users can focus on the content. In practice, this meant using a monochrome color palette, simple fonts, and minimal line icons. I reduced extra elements, relying mainly on white space to create visual hierarchy. Once that was in place, I could add small details like animations and a theme-switching feature around the edges.</p>
</li>
<li><p><strong>Modern and technical</strong>. Doubling down further on simplicity, I wanted the website to have a modern and technical look and feel. This influenced my choice of font families and the design of small components like buttons and links.</p>
</li>
<li><p><strong>Theme-able and keyboard-accessible</strong>. The areas I wanted to explore were theming and full keyboard navigation. Normally, these are not the main focus of design, but I wanted to highlight them on my website. I believe they add value, not only for accessibility but also for user convenience and the overall look and feel.</p>
</li>
</ol>
<p>These characteristics directly influenced the content structure I went with.</p>
<ol>
<li><p><strong>Homepage</strong>. I wanted the homepage to have a one-sentence description of what I do—nothing more, nothing less. The rest of the content can be accessed through navigation.</p>
</li>
<li><p><strong>About</strong>. Separately, I wanted to share a bit more about my personal and professional background for those interested. This part highlights my main interests in engineering and beyond, along with a glimpse of my human side outside of tech.</p>
</li>
<li><p><strong>Work</strong>. I decided to create a separate page to list the teams I've worked with recently, along with a brief description of what I did at each place. This serves as a concise version of my résumé.</p>
</li>
<li><p><strong>Writing</strong>. Writing has become a core part of who I am as an engineer, both within the teams I've worked with and externally. I wanted to create a separate space to highlight a few of my most recent public articles, providing easy access to my blog.</p>
</li>
<li><p><strong>Contact</strong>. I wanted people to have an easy way to reach out, so I included the contact information and links to social platforms in the navigation menu, making them just one click away.</p>
</li>
</ol>
<p>With the content structure in place, I was ready to start designing. I chose two fonts—<a target="_blank" href="https://fonts.google.com/specimen/Geist">Geist</a> as the primary sans-serif font and <a target="_blank" href="https://fonts.google.com/specimen/Geist+Mono">Geist Mono</a> as the secondary monospace font. Both fonts look clean and modern, with the monospace font adding a more technical feel.</p>
<p>I made a few iterations in Figma and settled on the following design. I had a few other pages roughly sketched out, which was enough for me to start the implementation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748156441962/ffa5e3aa-a8e7-433a-985c-e53bf664d83a.png" alt="First iteration of Figma designs" class="image--center mx-auto" /></p>
<h2 id="heading-the-implementation">The implementation</h2>
<p>I had a few goals in mind when starting the project.</p>
<ol>
<li><p><strong>Server-side generation</strong>. I knew the website would be simple, with some interactivity, but mostly focused on the content. Server-side generation was probably going to be the right rendering approach, so I wanted my setup to support it.</p>
</li>
<li><p><strong>React and TypeScript</strong>. This is my bread and butter, the tools I use every day, and they have a great ecosystem built around them. Even though it would have been just as viable to implement this using plain HTML, CSS, and some JavaScript, I wanted to stick with my regular stack purely for ease of development.</p>
</li>
<li><p><strong>Maintainability and support</strong>. I wanted to choose tools that are well-established, have good support and resources, and—as a result—are likely to remain in good condition five years from now.</p>
</li>
</ol>
<h3 id="heading-the-framework">The framework</h3>
<p>After some research, I decided on <a target="_blank" href="https://astro.build/">Astro</a>. My friend, <a target="_blank" href="https://bsky.app/profile/raygesualdo.com">Ray Gesualdo</a>, recommended it to me and even wrote a short blog series about moving his own website and blog to Astro (you can <a target="_blank" href="https://www.raygesualdo.com/series/migrating-to-astro/">read it here</a>).</p>
<p>Even though Astro is a relatively new framework, it has excellent documentation and many integration options, including React. It allowed me to have a statically-built site with almost perfect Lighthouse scores—all by default, without any extra effort on my part.</p>
<h3 id="heading-design-system">Design system</h3>
<p>Even though I didn’t need many components, I knew it would be useful to choose a component library. I've implemented enough buttons and basic typography components in the past to know better. I used to work with Stitches and Radix, and I really enjoyed both.</p>
<p>I noticed that the team behind Radix recently introduced <a target="_blank" href="https://www.radix-ui.com/themes/docs/overview/getting-started">Themes</a>, a component library built on top of Radix's primitive components, offering theming options as the name suggests. This was a no-brainer.</p>
<p>It turned out to be a fantastic choice. Implementing the pages was easy, ensuring all basic design elements were consistent—from colors, typography, and spacing to the smallest components like buttons or links. It also supports a dark theme right out of the box, which I knew would be useful.</p>
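<p>For context, a minimal setup with Themes looks roughly like this (a sketch based on the library's documented usage, not my exact code):</p>
<pre><code class="lang-tsx">import "@radix-ui/themes/styles.css";
import { Theme, Button } from "@radix-ui/themes";

// Wrapping the app in Theme gives every component inside consistent
// colors, typography, and spacing, including the dark appearance.
export default function App() {
  return (
    &lt;Theme appearance="dark"&gt;
      &lt;Button&gt;Say hello&lt;/Button&gt;
    &lt;/Theme&gt;
  );
}
</code></pre>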
<h3 id="heading-theme-switcher">Theme switcher</h3>
<p>This was an area I wanted to explore much deeper. I was impressed with <a target="_blank" href="https://sunlit.pages.dev/">the animated background on Jacky’s website</a>, and since it was open source, I decided to build on it. The effect transitions smoothly from day to night but is actually more granular under the hood, moving through six distinct phases altogether. I wanted to expose this to users, so instead of just two themes (light and dark), I introduced six, based on the phases of the day: dawn, sunrise, day, sunset, dusk, and night.</p>
<p>I needed to make some small adjustments to the colors so each state had enough contrast, and to connect the effect to my own theme-switching controls. The value of the currently selected theme is stored in local storage. I've experimented with it a lot, trying to refine how it’s done with Astro—<s>it's still not perfect with the statically generated pages (even though the value loads before the page renders)</s>.</p>
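<p>The core idea is a tiny inline script that runs before the page renders (a minimal sketch; the <code>theme</code> key, the <code>day</code> default, and the <code>data-theme</code> attribute are assumptions):</p>
<pre><code class="lang-typescript">// Runs before the page renders, so the stored theme applies
// without a flash of the default theme.
const stored = localStorage.getItem("theme") ?? "day";
document.documentElement.dataset.theme = stored;
</code></pre>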
<blockquote>
<p>Update: I just found a version that <strong><em>works</em></strong>—<a target="_blank" href="https://bsky.app/profile/tomaszgil.me/post/3lstw4y6lqc26"><em>read more about it here</em></a>.</p>
</blockquote>
<p>I decided to leave it as it is. The final touch was to add a subtle swoosh animation for the icon when the day changes to night.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748158655611/f042ca9b-a15a-41c8-a9b1-b87a5d3a006b.gif" alt="Theme switcher with an animated background" class="image--center mx-auto" /></p>
<h3 id="heading-keyboard-navigation">Keyboard navigation</h3>
<p>Inspired by Stripe’s developer site, I wanted to add hotkeys for interactive or navigation elements. I used the <a target="_blank" href="https://www.radix-ui.com/themes/docs/components/kbd"><code>Kbd</code></a> component from Radix to display the hotkey next to each interactive element.</p>
<p>I didn't want the hotkey hints to always be visible, so I added a separate layer where the keyboard shortcut hints appear when the user presses <code>C</code>. This is managed through a global context provider whose value is passed to all hotkey components.</p>
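<p>A minimal sketch of that provider (names like <code>HotkeyHintsProvider</code> are assumptions; the actual implementation may differ):</p>
<pre><code class="lang-tsx">import { createContext, useContext, useEffect, useState } from "react";
import type { ReactNode } from "react";

const HotkeyHintsContext = createContext(false);

export const HotkeyHintsProvider = ({ children }: { children: ReactNode }) =&gt; {
  const [visible, setVisible] = useState(false);

  useEffect(() =&gt; {
    // Toggle the hints layer whenever the user presses "C".
    const onKeyDown = (event: KeyboardEvent) =&gt; {
      if (event.key.toLowerCase() === "c") setVisible((value) =&gt; !value);
    };
    window.addEventListener("keydown", onKeyDown);
    return () =&gt; window.removeEventListener("keydown", onKeyDown);
  }, []);

  return (
    &lt;HotkeyHintsContext.Provider value={visible}&gt;
      {children}
    &lt;/HotkeyHintsContext.Provider&gt;
  );
};

// Each hotkey component reads this to decide whether to show its hint.
export const useHotkeyHints = () =&gt; useContext(HotkeyHintsContext);
</code></pre>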
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748201710652/b90eec8b-840f-4ca5-bf8c-87665b818303.gif" alt="Keyboard navigation with hotkeys" class="image--center mx-auto" /></p>
<h3 id="heading-other-tools">Other tools</h3>
<p>Finally, here are some additional tools, packages, or services I used during the process.</p>
<ul>
<li><p>Icons: <a target="_blank" href="https://www.radix-ui.com/icons">Radix Icons</a></p>
</li>
<li><p>Animations: <a target="_blank" href="https://motion.dev/">Motion</a></p>
</li>
<li><p>Hosting: <a target="_blank" href="https://www.netlify.com/">Netlify</a></p>
</li>
<li><p>Analytics: <a target="_blank" href="https://umami.is/">Umami</a></p>
</li>
</ul>
<h2 id="heading-the-final-result">The final result</h2>
<p>That's it! The process took about three months of light evening coding, which was much longer than I expected, but I'm really happy with the result. A big thanks to my friends who helped me make decisions along the way and provided feedback!</p>
<p>Again, you can see the result here: <a target="_blank" href="https://tomaszgil.me/">https://tomaszgil.me/</a>.</p>
<p>If you enjoyed the article or have a question, feel free to reach out to me on <a target="_blank" href="https://bsky.app/profile/tomaszgil.me">Bluesky</a> or leave a comment below!</p>
<h3 id="heading-further-reading-and-references">Further reading and references</h3>
<ul>
<li>Photo by <a target="_blank" href="https://unsplash.com/@matthardy?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Matt Hardy</a> on <a target="_blank" href="https://unsplash.com/photos/body-of-water-under-sky-6ArTTluciuA?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a>.</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Knowing When Enough is Enough: Pull Request Sizing]]></title><description><![CDATA[Deciding the right size for a change in software projects can be challenging. Is a larger change necessarily bad? How do you effectively decide when to stop working on a change? How do you break it down?
In this article, we explore the intricacies of...]]></description><link>https://blog.tomaszgil.me/knowing-when-enough-is-enough-pull-request-sizing</link><guid isPermaLink="true">https://blog.tomaszgil.me/knowing-when-enough-is-enough-pull-request-sizing</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[Pull Requests]]></category><category><![CDATA[code review]]></category><category><![CDATA[change]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Tue, 12 Nov 2024 09:54:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1731270355713/c1ca56da-7658-4b9d-b818-fac42688d37f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Deciding the right size for a change in software projects can be challenging. Is a larger change necessarily bad? How do you effectively decide when to stop working on a change? How do you break it down?</p>
<p>In this article, we explore the intricacies of pull request sizing, focusing on how to balance change size and scope to maintain momentum and reduce complexity.</p>
<h2 id="heading-youre-never-done">You’re never done</h2>
<p>In the world of software engineering, we’re never <em>truly</em> done. There’s always something to address. This applies at the system level, the feature level, and even to a single change introduced to a codebase. Here are a few scenarios that might sound familiar.</p>
<ul>
<li><p><em>Maybe I’ll just go and implement this next bit as well…</em></p>
</li>
<li><p><em>Ugh, this looks ugly - this could use a little refactoring…</em></p>
</li>
<li><p><em>This lacks tests - let me add some while I’m at it…</em></p>
</li>
</ul>
<p>These are all good instincts—I even wrote a <a target="_blank" href="https://blog.tomaszgil.me/leave-the-code-better-than-you-found-it">blog post about this</a>—instincts that, when shared by engineers working on a project, can significantly help maintain that project long-term. However, these instincts can also lead to a loss of focus, causing engineers to become sidetracked by improvements that, while beneficial, may not be immediately necessary and negatively impact the overall delivery.</p>
<p>At some point, we have to call it a day, let others review the change, and merge it. But is a longer change always bad?</p>
<h2 id="heading-change-size-and-change-scope">Change size and change scope</h2>
<p>First off, it’s good to clarify the ways in which we can describe how large a change is. I really like the terms the book "<a target="_blank" href="https://www.oreilly.com/library/view/software-engineering-at/9781492082781/">Software Engineering at Google</a>" uses for this: size and scope.</p>
<ul>
<li><p><strong>Change size</strong> is the quantifiable measure of code modifications, typically expressed in metrics like lines of code changed or the number of files modified.</p>
</li>
<li><p><strong>Change scope</strong> refers to the broader impact and implications of a modification. It considers factors like the number of dependent systems affected, potential performance implications, security considerations, and the extent of testing required.</p>
</li>
</ul>
<h2 id="heading-what-to-focus-on">What to focus on</h2>
<p>Both size and scope are important, but they are not the end goal. What we are ultimately trying to assess is the <strong>change complexity</strong> - and the potential risks related to it. Change size is simple to measure through tooling, so it’s easy to assume that it equals change complexity. Even though it can serve as some indicator, it can be wildly misleading. To give a few examples:</p>
<ul>
<li><p>A one-line change to a critical API could be small in size but have massive implications on the entire system.</p>
</li>
<li><p>Refactoring of documentation might involve many lines but carry minimal risk.</p>
</li>
<li><p>Running an automated change across dozens of files is large in size, but not complex or hard to review.</p>
</li>
<li><p>A small change that covers a few unrelated concerns makes it harder to thoroughly review the code.</p>
</li>
</ul>
<p>Change complexity isn’t equal to change size—it is a <em>combination</em> of size and scope. What we want to aim for is <strong>reducing change complexity</strong>. By doing that, we gain several benefits.</p>
<ul>
<li><p>Easier review - the review process is quicker and, because the change is more focused, it can be more accurate at the same time.</p>
</li>
<li><p>Reduced risk - there is less impact if something goes wrong, making it easier to identify and roll back issues when they occur.</p>
</li>
<li><p>Better testing - test coverage becomes more focused, making it clearer what needs manual testing and what might cause regression.</p>
</li>
<li><p>Team collaboration - more frequent code integration leads to better knowledge sharing across the team and reduces the chances of blocking other team members.</p>
</li>
</ul>
<h2 id="heading-maintaining-momentum">Maintaining momentum</h2>
<p>On a practical level, reducing change complexity is important, but so is maintaining momentum. This applies to both you and those who will review your code. There is a point of diminishing returns—splitting changes too much can slow you down as the <strong>operational costs</strong>, like creating separate branches, managing commits or introducing feature flags, may simply take too much time and, as a result, outweigh the benefits. The same goes for the reviewer—it's helpful to review small changes, but if there are multiple review requests every half hour or so, it leads to constant <strong>context switching</strong>.</p>
<p>Where this point lies depends on many factors, and there is no strict rule for it. Nevertheless, it's important not to spread yourself too thin.</p>
<h2 id="heading-practical-tips">Practical tips</h2>
<p>We discussed the reasons for reducing change complexity; now let's talk about how to do it. There are several ways to simplify changes, and you can apply these steps at any stage—before starting, while working on the change, or even after it's ready. Here are some methods that have worked well for me. The approach you take, however, depends on many factors, so choose what works best for each situation.</p>
<ol>
<li><p><strong>Assess the complexity</strong>. As we've discussed, assessing complexity involves two dimensions: size and scope.</p>
<ol>
<li><p>How many lines of code have you changed or will you change? How many of those lines are <em>effective</em> code (not autogenerated or moved)? How many files are touched? These factors will contribute to your <strong>change size</strong>.</p>
</li>
<li><p>How many different concerns have you covered? Does your change include feature implementation, introducing tests, generating translations, and some related refactoring work? How many other services will depend on this change? These factors will drive up the <strong>change scope</strong>.</p>
</li>
</ol>
</li>
<li><p><strong>Decide how to run your change</strong>. Once we know the change size and scope, we should decide if the change actually needs to be split or not. Here are some guidelines that work for me.</p>
<ol>
<li><p><strong>Small in size, small in scope</strong>. It's the best and probably the most common type of change, like addressing a single issue in one part of the code. It can be handled as a single change and will likely be easy to review and deploy.</p>
</li>
<li><p><strong>Large in size, small in scope</strong>. An example of this type of change might be addressing a specific concern throughout the entire codebase, like updating imports for a reusable package. These changes can often be automated, both in implementation and possibly in review. It's fine to run them as one change, and they are often easier to manage that way.</p>
</li>
<li><p><strong>Small in size, large in scope</strong>. These are focused changes that affect critical functions or handle multiple concerns—this type of change might be worth splitting into smaller parts. It all depends on the context and circumstances. Even if you decide to keep it as a single change, it's important to consider ways to make it easier to review.</p>
</li>
<li><p><strong>Large in size, large in scope</strong>. These are changes that affect multiple concerns across different parts of the codebase—they should always be divided into smaller parts. The approach will vary depending on the specific situation, but breaking these down is always worth the effort.</p>
</li>
</ol>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731269253483/c8306fba-01e4-4f15-8983-66d76b700b84.png" alt class="image--center mx-auto" /></p>
<ol start="3">
<li><p><strong>Document with comments</strong>. Even if you decide to keep it as a single change, it's helpful to document your change with comments. Comments in the code at the right places can be useful, but here I mainly mean comments for reviewing the change. There are two ways you can do this.</p>
<ol>
<li><p><strong>Change description</strong>. It's important to include several elements. Did you provide context for the problem you're addressing and outline the high-level approach to the solution? Did you clearly describe the scope of the change, detailing what areas it covers? Did you make sure to link to any relevant resources that can offer additional insights or background information? Lastly, if you have a recommended way to review the change, have you included that as well? This can all help reviewers to evaluate your work effectively.</p>
</li>
<li><p><strong>Change comments</strong>. Adding context for reviewing in the change description is good, but adding comments is even better—the information is placed exactly where it's most needed. I often comment on my own pull requests before submitting them for review to highlight areas with significant changes, outline the type of change, or indicate that the code has been moved. When writing these comments, it's also helpful to consider if the comment will only be useful during the review or also afterward. If it's the latter, consider moving that information into a code comment.</p>
</li>
</ol>
</li>
<li><p><strong>Think about your commits</strong>. If the comments aren't enough to provide all the necessary context for reviewing a change, it might be worth splitting the change into individual commits. You can create commits based on the files or areas affected, or by concerns, such as separating a commit for implementation from a commit for tests. The individual commits still make up a single change but offer a clear separation.</p>
</li>
<li><p><strong>Consider a follow-up pull request</strong>. Taking it a step further, you can open a follow-up pull request. This can be especially helpful for addressing optional code review feedback or making improvements. It's similar to splitting commits but provides even more separation and clarity.</p>
</li>
<li><p><strong>Split the task</strong>. To give yourself even more flexibility, you can split the task entirely. This allows you to open separate pull requests, similar to the previous point, with the added benefit of getting back to the drawing board, reconsidering requirements, and planning individually.</p>
</li>
</ol>
<h2 id="heading-conclusion">Conclusion</h2>
<p>That’s it! Deciding the right size for changes in software projects involves balancing change size and scope to reduce complexity and maintain momentum. While instincts to refine and improve code are valuable, they can distract from the main goals. Assessing change size and scope helps in managing change complexity, risks, and reviewing efficiency.</p>
<p>If you enjoyed the article or have a question, feel free to reach out to me on <a target="_blank" href="https://x.com/gil_tomasz">X</a> or leave a comment below!</p>
]]></content:encoded></item><item><title><![CDATA[Choosing the Right Path: Composable vs. Configurable Components in React]]></title><description><![CDATA[There's a topic I keep revisiting that I believe is crucial for writing and maintaining React applications: structuring UI components. There are two main approaches: composable and configurable components.
Let's explore the strengths and trade-offs o...]]></description><link>https://blog.tomaszgil.me/choosing-the-right-path-composable-vs-configurable-components-in-react</link><guid isPermaLink="true">https://blog.tomaszgil.me/choosing-the-right-path-composable-vs-configurable-components-in-react</guid><category><![CDATA[React]]></category><category><![CDATA[components]]></category><category><![CDATA[software development]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[APIs]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Tue, 08 Oct 2024 11:25:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1727465626715/4b85066c-69ff-4cd6-a7e8-1e46efce6ea3.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There's a topic I keep revisiting that I believe is crucial for writing and maintaining React applications: structuring UI components. There are two main approaches: composable and configurable components.</p>
<p>Let's explore the strengths and trade-offs of these approaches, and why you might actually prefer to use only one of the two types for most of your components.</p>
<blockquote>
<p>🚨 Be warned, this post is highly opinionated.</p>
</blockquote>
<h2 id="heading-example-alert-component">Example: Alert Component</h2>
<p>For the purpose of this article, we'll take a look at a small alert component with two example implementations. Let's say our application needs an alert component that meets the following requirements.</p>
<ul>
<li><p>It displays a title and a description.</p>
</li>
<li><p>It has one of four statuses: success, error, warning, or info.</p>
</li>
<li><p>Depending on the status, it shows a different icon and color.</p>
</li>
</ul>
<p>Here's what this component might look like.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727378955138/2262b60b-5b4d-4158-a0b4-a54a41a93255.png" alt class="image--center mx-auto" /></p>
<p>Let's compare how our component might be used in both its configurable and composable versions. I'll focus mostly on usage rather than implementation for a crucial reason: for any reusable components, the API is often more important than the underlying code.</p>
<p>You can view the <a target="_blank" href="https://codesandbox.io/p/sandbox/kf3dqf">example implementation here</a>.</p>
<h2 id="heading-configurable-components">Configurable components</h2>
<p>Configurable React components have the following characteristics:</p>
<ul>
<li><p><strong>DRY (Don't Repeat Yourself):</strong> These components often reduce code duplication by encapsulating common patterns and behaviors.</p>
</li>
<li><p><strong>Complex implementation:</strong> As they need to handle various use cases through configuration, their internal logic can become intricate.</p>
</li>
<li><p><strong>Strict output control:</strong> They provide more predictable results, allowing developers to tightly control the component's output. This can be advantageous for consistency but may limit flexibility.</p>
</li>
</ul>
<p>Using such a component might look like so.</p>
<pre><code class="lang-tsx">&lt;Alert
  status="success"
  title="Success"
  description="Your action was completed successfully."
/&gt;
</code></pre>
<h2 id="heading-composable-components">Composable components</h2>
<p>The attributes of composable React components are:</p>
<ul>
<li><p><strong>Simplified implementation</strong>: Composable components tend to be smaller and have a simpler implementation, making them easier to both read and write.</p>
</li>
<li><p><strong>Leveraging React composition mechanism</strong>: These components take advantage of React's built-in composition features, aligning well with React's core design principles.</p>
</li>
<li><p><strong>Less control over the outcome:</strong> This allows for more flexibility in how the component is used, but may also lead to less predictable results in some cases.</p>
</li>
</ul>
<p>Using such a component might look like so.</p>
<pre><code class="lang-tsx">&lt;Alert status="success"&gt;
  &lt;AlertIcon /&gt;
  &lt;AlertContent&gt;
    &lt;AlertTitle&gt;Success&lt;/AlertTitle&gt;
    &lt;AlertDescription&gt;
      Your action was completed successfully.
    &lt;/AlertDescription&gt;
  &lt;/AlertContent&gt;
&lt;/Alert&gt;
</code></pre>
<h2 id="heading-how-components-change-over-time">How components change over time</h2>
<p>We've successfully implemented the alerts and are happily using them in our application. It turns out, however, that we will need to show more alerts in new parts of the application, but they need to be slightly different.</p>
<p>Let's revisit our example. Suppose we need to implement new alerts with additional requirements on top of the existing ones:</p>
<ul>
<li><p><strong>Dismissible alerts</strong>: Alerts display an additional button to dismiss</p>
</li>
<li><p><strong>Optional icons</strong>: Not all alerts require an icon</p>
</li>
<li><p><strong>Action buttons</strong>: Alerts can display an additional action button with a label and an action</p>
</li>
</ul>
<p>Here’s how it might look.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727379136865/58993697-d94e-4a53-9078-7e671e4a16bc.png" alt class="image--center mx-auto" /></p>
<p>Now, let's see how the APIs of our example components need to evolve to accommodate these new requirements.</p>
<h3 id="heading-configurable-component">Configurable component</h3>
<p>Configurable components grow by <strong>expanding their configuration</strong>—introducing more props. Here's how our API might evolve:</p>
<pre><code class="lang-tsx">&lt;AlertConfigurable
  status="success"
  title="Success"
  description="This alert has all features enabled."
  showIcon={true}
  dismissible={true}
  onDismiss={() =&gt; console.log("Alert dismissed")}
  actionLabel="Take Action"
  onAction={() =&gt; console.log("Action clicked")}
/&gt;
</code></pre>
<p>We've added several new props: two for the dismiss button (one to indicate if the alert is dismissible and another for the dismiss click handler), one to control the icon's visibility, and two for the action button (label and click handler).</p>
<h3 id="heading-composable-component">Composable component</h3>
<p>In contrast, composable components most often evolve by having <strong>more components added into the mix</strong> - either by splitting existing ones or creating new ones. Here’s how it might look.</p>
<pre><code class="lang-tsx">&lt;Alert status="success"&gt;
  &lt;AlertIcon /&gt;
  &lt;AlertContent&gt;
    &lt;AlertTitle&gt;Success&lt;/AlertTitle&gt;
    &lt;AlertDescription&gt;This alert has all features enabled.&lt;/AlertDescription&gt;
    &lt;AlertAction onClick={() =&gt; console.log("Action clicked")}&gt;
      Learn More
    &lt;/AlertAction&gt;
  &lt;/AlertContent&gt;
  &lt;AlertDismissButton onDismiss={() =&gt; console.log("Alert dismissed")} /&gt;
&lt;/Alert&gt;
</code></pre>
<p>We need to create two more components - one for the alert dismiss button and one for the alert action button - with APIs covering the requirements relevant to these elements (see the sketch below). The optional icon we get for free: if you don’t want this element, just don’t render it.</p>
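<p>For illustration, the new pieces might look something like this (a sketch, not the sandbox implementation):</p>
<pre><code class="lang-tsx">import type { ReactNode } from "react";

// Each new requirement becomes its own small component with a focused API.
const AlertAction = ({ onClick, children }: { onClick: () =&gt; void; children: ReactNode }) =&gt; (
  &lt;button className="alert-action" onClick={onClick}&gt;
    {children}
  &lt;/button&gt;
);

const AlertDismissButton = ({ onDismiss }: { onDismiss: () =&gt; void }) =&gt; (
  &lt;button className="alert-dismiss" aria-label="Dismiss" onClick={onDismiss}&gt;
    ×
  &lt;/button&gt;
);
</code></pre>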
<p>So, which one should we choose?</p>
<p>Your component implementation doesn't matter as long as existing alerts remain unchanged and new ones follow the initial requirements. However, in large-scale applications, the odds of this happening are, in my estimation, lower than winning the lottery.</p>
<p>I've worked on numerous projects of various sizes. In my experience, regardless of the project's size, scope, or initial assumptions, there's always <em>one more case to handle</em>. There's always that one page that needs to be different, that one component instance we want to tweak, or that one user flow that escapes the initial requirements.</p>
<p>We already went through one round of changes—now imagine we go through a few more iterations. What happens to the configurable component?</p>
<h3 id="heading-apropcalypse">Apropcalypse</h3>
<p>There is a funny term called apropcalypse, coined by <a target="_blank" href="https://twitter.com/gurlcode">Jenn Creighton</a>, to describe components with dozens of props. This happens when a component accumulates props to cover every possible configuration and ensure reusability, but instead becomes inflexible and ends up with a cluttered API.</p>
<p>This is what configurable components often turn into, as you can see in our example—we made only a few changes, and the number of props increased to eight. We tend to have a <em>just one more prop</em> mindset when making changes—it's often easier to add to an existing abstraction than to rethink the original design and potentially break it up into smaller pieces.</p>
<p>Speaking of abstractions...</p>
<h3 id="heading-patterns-patterns-everywhere">Patterns, patterns everywhere</h3>
<p>Configurable components can provide more immediate value due to their opinionated nature. They're often quicker to develop with, at least initially. You can create a single component that encapsulates an observed pattern, use it wherever needed, and adjust props to achieve the desired behavior. It's fast and effective.</p>
<p>However, the question remains: <strong>Is the pattern you've observed truly a pattern</strong>? How do you know? With composable components, you don't need to answer these questions at all. The trade-off is more code—often repeated code—but that's by design. You're allowing each piece of UI using this component to evolve independently in the future. There's value in this flexibility, though it's not immediate. It's the value of adaptability to change.</p>
<h4 id="heading-a-word-on-dry">A word on DRY</h4>
<p>This concept directly relates to DRY—Don't Repeat Yourself. Having two pieces of code look similar isn't sufficient reason to abstract them. You need one more crucial piece of information.</p>
<p>The key question is: <strong>Will these code segments <em>change</em> together in the future</strong>? If you have strong evidence for that, then by all means, create an abstraction that expresses this relationship. This is where more opinionated, configurable components truly shine.</p>
<p>The challenge lies in answering that second question—it's difficult, if not impossible, to predict the future with certainty.</p>
<blockquote>
<p>I highly recommend reading <a target="_blank" href="https://twitter.com/Swizec">Swizec</a>'s <a target="_blank" href="https://swizec.com/blog/dry-the-common-source-of-bad-abstractions/">insightful post on this topic</a>.</p>
</blockquote>
<h3 id="heading-optimize-for-change">Optimize for change</h3>
<p>Composition is at the heart of building applications with React. It's one of the main reasons why React has become so popular.</p>
<p>Design systems and component libraries are a good example of this. Since the exact use cases for the components are fundamentally unknown to the library providers, they need to optimize for extensibility and composability. There is very little certainty about how these components will end up being used and arranged together. As a result, design system libraries tend to be built heavily relying on composable components.</p>
<p>But your application code is not a component library. The number of use cases you need to support is not infinite—it's probably just a few. So, you might wonder, why bother creating smaller, composable components?</p>
<p>To start, these components are fundamentally <strong>optimized for change</strong>, which is <a target="_blank" href="https://overreacted.io/optimized-for-change/">one of the signs of a good API</a>. They utilize React's composition mechanism, aligning with the framework's nature, and demonstrate that JSX is simply the right abstraction for most cases. If it works for most of the industry, it's likely a good fit for the UI you're trying to implement. I would say make sure you have strong reasons if you decide to do something different.</p>
<p>When you build your components to be composable, changes usually require less work (or sometimes come for free, like in our example)—both in implementation and regression testing. Implementation is simpler because you’re changing smaller pieces at a time and don’t have to deal with many dependencies. Regression testing is also simpler because instances of the components are more isolated.</p>
<p>It's perhaps easier to justify composable components for low-level UI elements like buttons, form elements, menus, or—as in our example—alerts. These need to be flexible because they're used frequently. However, change isn't limited to low-level UI elements—it happens across all levels.</p>
<p>With that in mind, I believe <strong>composable components are your best bet</strong>.</p>
<h2 id="heading-what-about-using-both">What about using both?</h2>
<p>There’s a case for having both composable components and configurable versions built on top of them, and using the right one depending on the use case. Here’s how you could approach this (a sketch follows the list).</p>
<ul>
<li><p>Create composable components.</p>
</li>
<li><p>Create opinionated configurable components that use foundational composable components.</p>
</li>
<li><p>Use configurable components for the most common scenarios.</p>
</li>
<li><p>For more complex scenarios, create custom implementations using composable components instead of extending configurable ones.</p>
</li>
</ul>
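<p>Here's a minimal sketch of that layering (the <code>SimpleAlert</code> name is illustrative): the configurable version is just a thin, opinionated arrangement of the composable parts.</p>
<pre><code class="lang-tsx">// An opinionated, configurable component built on the composable foundation.
// It covers the common case; anything unusual composes the parts directly.
interface SimpleAlertProps {
  status: "success" | "error" | "warning" | "info";
  title: string;
  description: string;
  onDismiss?: () =&gt; void;
}

const SimpleAlert = ({ status, title, description, onDismiss }: SimpleAlertProps) =&gt; (
  &lt;Alert status={status}&gt;
    &lt;AlertIcon /&gt;
    &lt;AlertContent&gt;
      &lt;AlertTitle&gt;{title}&lt;/AlertTitle&gt;
      &lt;AlertDescription&gt;{description}&lt;/AlertDescription&gt;
    &lt;/AlertContent&gt;
    {onDismiss &amp;&amp; &lt;AlertDismissButton onDismiss={onDismiss} /&gt;}
  &lt;/Alert&gt;
);
</code></pre>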
<p>Where this approach gets tricky is avoiding the <em>just one more prop</em> tendency and not extending the configurable components' API over time. Additionally, it requires everyone to stick to this approach and have a shared understanding of when to use a configurable component and when to switch to a custom, composable implementation.</p>
<p>When done correctly, <strong>we get the best of both worlds</strong>: composable components provide flexibility, while configurable components offer quick implementation. However, in my experience, this isn't easy to achieve in real projects with many engineers working in parallel.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>That’s it! We discussed how to structure components in React using two approaches: composable and configurable components. We also explored how these components might change over time.</p>
<p>Configurable components provide immediate value and strict control, but they can become cluttered and inflexible as requirements change. Composable components are more flexible and easier to adapt in the long run, especially in large-scale applications.</p>
<p>This flexibility is worth optimizing for, so I recommend choosing composable components for better adaptability, using configurable components only when there's a strong need. Understanding these concepts will help you make better decisions when designing your components.</p>
<h3 id="heading-further-reading-and-references">Further reading and references</h3>
<ul>
<li><p>Full code available in this <a target="_blank" href="https://codesandbox.io/p/sandbox/kf3dqf">CodeSandbox</a><strong>.</strong></p>
</li>
<li><p><a target="_blank" href="https://www.epicreact.dev/soul-crushing-components">Avoid soul-crushing components</a> by <a target="_blank" href="https://twitter.com/kentcdodds">Kent C. Dodds</a>.</p>
</li>
<li><p><a target="_blank" href="https://swizec.com/blog/dry-the-common-source-of-bad-abstractions/">DRY – the common source of bad abstractions</a> by <a target="_blank" href="https://twitter.com/Swizec">Swizec Teller</a>.</p>
</li>
<li><p><a target="_blank" href="https://overreacted.io/optimized-for-change/">Optimized for Change</a> by <a target="_blank" href="https://twitter.com/dan_abramov2">Dan Abramov</a>.</p>
</li>
<li><p>Photo by <a target="_blank" href="https://unsplash.com/@karsten_wuerth?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Karsten Würth</a> on <a target="_blank" href="https://unsplash.com/photos/pathway-between-fence-and-grasses-HiE1bIIoRqQ?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a>.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Navigating the Tech Job Market: Engineer's Insights from Landing a Job in 2024]]></title><description><![CDATA[The tech industry in the last two years has been tough, with more layoffs than we've seen in decades. Large companies, often as a result of excessive hiring sprees in the prior years, were executing cost-cutting initiatives, closing entire department...]]></description><link>https://blog.tomaszgil.me/navigating-the-tech-job-market-engineers-insights-from-landing-a-job-in-2024</link><guid isPermaLink="true">https://blog.tomaszgil.me/navigating-the-tech-job-market-engineers-insights-from-landing-a-job-in-2024</guid><category><![CDATA[interview]]></category><category><![CDATA[job search]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[jobs]]></category><category><![CDATA[recruitment]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Tue, 09 Apr 2024 07:29:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1712483827791/87d7b186-21e7-45ad-9ea4-08cc39883927.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The tech industry in the last two years has been tough, with <a target="_blank" href="https://layoffs.fyi/">more layoffs than we've seen in decades</a>. Large companies, often as a result of excessive hiring sprees in the prior years, were executing cost-cutting initiatives, closing entire departments. Small companies, facing much harder funding conditions due to rising interest rates, often change course, trying to maintain liquidity and wait for better times.</p>
<p>Like many others, I was directly impacted by this situation - I found myself without a job, facing the challenge head-on. It pushed me into the job market again, looking for new opportunities. Throughout the journey, I had some interesting observations and thoughts that I'll share with you in this article. ✍</p>
<h2 id="heading-what-this-article-is-not-about">What this article <em>is not</em> about</h2>
<p>Before we jump to that, I want to outline a bit more what this article is about. I'll do it slightly differently, by first inverting the question - I'll start with what this article is <em>not</em> about.</p>
<p><strong>This is not a definitive guide to landing a job</strong>. There are plenty of articles covering that topic broadly and a plethora of write-ups diving deep into various aspects of interviewing, both technical and non-technical. I'll be linking to some of the resources I used at the end of the article. What you need to successfully go through an interview process varies a lot based on the role you're applying for, the size and maturity of the company, as well as your experience and skills.</p>
<p><strong>This isn't a recipe for dealing with being laid off either</strong>. Even though this event was unexpected and stressful, fortunately, it was manageable. I can only imagine how hard it might be for other people, with a different background, sets of circumstances, or overall life situation.</p>
<p>To give you a complete picture, I'll be sharing some basic stats like the number of positions I applied for or the number of offers I received. <strong>This isn't a way for me to brag</strong> - instead, I want to give you an honest picture of what was needed to increase my chances of success.</p>
<p>OK, so what <em>is</em> this article about?</p>
<h2 id="heading-what-this-article-is-about">What this article <em>is</em> about</h2>
<p>It is merely a recap of what occupied my time over the last few months, alongside some thoughts and observations on the entire process. It is a list of the things that worked in my favor and those that worked against me. Even though this is specific to my situation and professional experience, I suspect some elements might prove generally applicable and you'll find some useful insights - whether you're currently looking for new opportunities or not. 🙌</p>
<h2 id="heading-timeline-and-numbers">Timeline and numbers</h2>
<p>To give you a broad picture, I'll start with the timeline and some numbers, which should illustrate the interview processes I took part in.</p>
<ul>
<li><p><strong>The sad news</strong>. I got the sad news in the second week of September. We all knew the situation in the industry, but it nevertheless came as a surprise. I gave myself a couple of days to rest and then got to work.</p>
</li>
<li><p><strong>Research and interview preparation</strong>. I listed about 60 companies that roughly fit my criteria for places I would want to join. Some of these companies had open positions that matched my experience and skill set, and some only had their talent pools open. I prioritized the list, selecting the half dozen positions I was most interested in to apply for first. As the weeks went by, I went down the list and ultimately applied for a total of 30 positions.</p>
</li>
<li><p><strong>Reaching out to the network</strong>. Outside of cold applications, seeing how responsive companies are in general (or rather unresponsive, more on that later), I decided to tap into my network to increase my luck surface area. Apart from publishing a typical update on LinkedIn and Twitter, a few weeks into the process I decided to send out a message directly to every recruiter who had messaged me within the two years prior. I always reply to recruiters on LinkedIn, kindly declining new offers, so sending this kind of message wasn't completely out of the blue. I was already in contact with them, technically. I felt justified. Altogether, I sent out messages to 500+ recruiters and got responses from about 100 of them - I found 10 open positions that looked interesting to me, but I dropped out of most of these processes after the initial call with the recruiter. Even though most of that didn't have any material results, it contributed a lot to my sense of security. At the end of the day, I saw there were still plenty of opportunities out there.</p>
</li>
<li><p><strong>Interviews</strong>. I ultimately participated in 5 interview processes start to finish. I had the first interview at the very end of September, and the last one in the second week of November - a total of 34 meetings (some were just short calls, some a couple-hour-long interview sessions). Most of these processes had 3-4 steps, some included an extensive async coding exercise, and most had technical interviews with live coding and experience and background interviews.</p>
</li>
<li><p><strong>Offers and decision</strong>: These 5 interview processes resulted in 4 offers I could choose from. I would be fairly happy signing any of them, which made the decision-making process quite comfortable. I made the final decision in the middle of November, making the entire job hunt last about 2 months.</p>
</li>
</ul>
<p>All of that translated to the following numbers.</p>
<ul>
<li><p>number of recruiters that I spoke to: <strong>500+</strong></p>
</li>
<li><p>number of applications filed: <strong>40</strong></p>
</li>
<li><p>number of companies that rejected my application: <strong>16</strong></p>
</li>
<li><p>number of companies I haven't heard back from: <strong>14</strong></p>
</li>
<li><p>number of processes I dropped out of: <strong>5</strong></p>
</li>
<li><p>number of processes I participated in: <strong>5</strong></p>
</li>
<li><p>number of offers received: <strong>4</strong></p>
</li>
<li><p>duration of the entire interviewing process: <strong>~2 months</strong></p>
</li>
</ul>
<p>Looking back, it was a lot within a fairly compressed timeframe. That said, the number of processes I took part in was about right - not enough to wear me down too much, but enough to give me great options to choose from. 🤝</p>
<h2 id="heading-thoughts-on-the-market">Thoughts on the market</h2>
<p>I'll start with general market considerations. <strong>The tech market has changed</strong>. We all know it. We hear about it left, right, and center. Whether or not this change is persistent, only the future will tell.</p>
<p>Could I feel this during the interview process? Definitely. It's no longer truly an employees' market, as it mostly has been for the last decade or so. We have to take into account, however, that we're stepping down from a really high horse. As I mentioned in the beginning, the massive rounds of layoffs and the corresponding change in the hiring market are in substantial part related to the excessive hiring that took place in the last few years and to the outpouring of money that flowed into higher-risk ventures. 💸</p>
<p><strong>Most companies at some point in the interview process referred to the apparent change in the market</strong>. Often that's normal - some of the people I talked with were completely honest and transparent about how they simply don't have as many resources available or are constrained by the uncertainty of their profits going into the future. Some people, however, used the change in the market as a negotiating tactic to subtly nudge me into making a decision - suggesting that at a moment like this I should really consider accepting less, but at a safer place. Yes, that was <em>fun</em>. If you're interviewing, be prepared for that.</p>
<p>Ultimately, your interviewing experience will vary based on what you bring to the table, but there are plenty of good opportunities out there, and it seems to slowly be getting better and better with each month. 📈</p>
<h2 id="heading-know-what-youre-looking-for">Know what you're looking for</h2>
<p>Even though it might seem that nowadays the market conditions overshadow any internal factors when interviewing, I believe what you control is still far more important than what's going on on the outside.</p>
<p>Speaking about things that are in one's control, you have to <strong>know what you're looking for</strong>. Picking the next company to join should never be a unidimensional choice. There are many factors at play. Here are some of the things I paid attention to, in no particular order.</p>
<ul>
<li><p>Challenges you can help solve.</p>
</li>
<li><p>Level of ownership and impact of your role.</p>
</li>
<li><p>Company's mission and values.</p>
</li>
<li><p>Mix of process and flexibility.</p>
</li>
<li><p>Approach to software engineering (code quality, testing, best practices).</p>
</li>
<li><p>Learning opportunities and knowledge sharing.</p>
</li>
<li><p>Technologies used.</p>
</li>
<li><p>Team size and organization structure.</p>
</li>
<li><p>Clear career path and feedback culture.</p>
</li>
<li><p>Company's business model and financial stability.</p>
</li>
<li><p>Compensation (both cash and equity sharing).</p>
</li>
</ul>
<p>Compensation is important, but so are what you'll be working on and who you'll be working with. <strong>No one factor inherently trumps the others</strong>. Figure out what the right mix is for you at this moment and let it be your north star - not only at the very end of the process when you make the final decision, but at every step of the way. 🌃</p>
<h2 id="heading-brutal-honesty">Brutal honesty</h2>
<p>The type of position one is looking for, together with one's skill set and experience, will define the breadth of opportunities available. It is important to be aware of that and to be honest about it. It might sound obvious, but <strong>it's worth checking whether our expectations can be reflected in reality</strong>.</p>
<p>In my case, there were a few overarching themes that largely defined my opportunity set, as they were mostly non-negotiable.</p>
<ul>
<li><p>I was looking for a Frontend Engineer position, working with React and TypeScript.</p>
</li>
<li><p>I wanted to work for a company that creates and maintains its own product - not a software house or consulting agency.</p>
</li>
<li><p>I was looking for a senior role or above, with a high degree of ownership, in a place with a culture of continuous learning.</p>
</li>
<li><p>I was looking for a remote-only position.</p>
</li>
</ul>
<p>All of that combined inevitably meant that my target market was quite broad. React is nowadays the most frequently used frontend framework, there is a plethora of companies building their software systems in-house, and it appears to me that companies prefer hiring more experienced engineers, especially in recent years with the prevalence of remote work. Speaking of working remotely, looking for such positions exposed me to a global market, which is a major leap in quantity compared to the opportunity set available in my local market in Poland.</p>
<p>Even though I consider myself a valuable candidate, with substantial experience and depth of knowledge, there are many other fantastic engineers with similar skills. <strong>There's not much that makes me truly unique within the entire market</strong>, at least at face value. 🤷</p>
<p>To better illustrate what I mean, I'll bring up a family member of mine, who works as a Design Engineer. If you're like me, you'd ask what the heck that means. He works across design and engineering, solving user experience challenges across the entire stack. If your company has a deeply technical product, where you need engineering experience to figure out the best user experience and then design and implement it, he’s probably one of the few candidates available to you around the world. Now, <em>that’s</em> unique. His opportunity set is much more limited than mine, but once he finds a company with problems in his circle of competence - it's instantly a match.</p>
<p>All of that led me to another conclusion: that <strong>the wider your target area is, the larger role luck plays in the mix</strong>. As I mentioned before, I vastly overestimated the responsiveness of companies to my applications. In the first two weeks, I applied for only 10 positions and waited for their responses. As you might guess based on the number of positions I ultimately applied for, that was a mistake. Most companies have so many candidates knocking on their door that the most you can expect is a generic confirmation, letting you know that they will contact you again <em>only if</em> they want to continue the process. Some companies will let you know that they declined your application, but a lot of them will outright ghost you. 👻</p>
<p>A piece of advice I got pretty early on was very simple - <strong>find and apply to more companies</strong>. Many more than you might initially assume. It turned out to be effective, especially when interviewing globally, which the next part is all about.</p>
<h3 id="heading-global-recruitment-considerations">Global recruitment considerations</h3>
<p>I was looking for a remote-only position, which opened me up to a market beyond the companies in my local area or even my home country of Poland. It expanded the opportunities available massively, while at the same time making the interviewing dynamic quite unique in subtle but important ways.</p>
<p>I’m based in Poland. <strong>Many companies with a global presence won’t even consider me for all sorts of reasons</strong>. Sometimes it’s the timezone - for teams located on another continent, this could be an issue. Sometimes it's an internal policy - hiring outside of the company's home country is not straightforward, so they might prefer to hire locally and keep their existing HR process. Sometimes it's regulation - companies creating projects in certain industries or for government entities are often legally limited to only hiring within their home country. Finally, some companies are not even aware that hiring people as contractors is easier and cheaper for them than having full-time employees.</p>
<p>I said that there's not much that's unique in my experience on a global scale, but there's always something you can focus on that differentiates you from other candidates applying for a particular role. For me, in most cases, it was a combination of engineering and design background and deep front-end knowledge and experience. <strong>Figure out and clearly define what your edge is and structure your interviews around that</strong>. 🏗</p>
<h2 id="heading-get-used-to-rejection">Get used to rejection</h2>
<p>Only 20% of recruiters responded to my messages, and 2% had open positions that seemed interesting to me. Only 25% of my applications resulted in interview processes and only 10% resulted in an offer being extended.</p>
<p>There's no other way around it - for most applications and interview processes <strong>the default response is rejection.</strong> You just have to make the math work in your favor - for me, it meant that to have an offer on the table, I had to apply for roughly 10 positions. And I think it's critical to have a few offers to choose from, so the best thing you can give yourself during that process is patience. 🧘‍♂️</p>
<h2 id="heading-the-bar-is-sometimes-surprisingly-low">The bar is sometimes surprisingly low</h2>
<p>Once I was over the initial hurdle, things started falling into place. Given how tough it is sometimes to get your foot in the door, it's almost surprising, in contrast, what companies highlight as the factors that positively differentiate you from other candidates.</p>
<p>Some interviewers pointed to the fact that I wrote a thoughtful cover letter. Some interviewers pointed to the fact that they can clearly tell I've read the job description. Some interviewers pointed to the fact that I came prepared, and had familiarised myself with what the company does, its mission, and values before the interview.</p>
<p>None of this is too hard to do. It does require a bit of time but increases your chances of success. Don't skip these steps during your interview preparation. I always thought of them as obvious preconditions, but it turns out they can sometimes work as a differentiating factor. 🎯</p>
<h2 id="heading-choosing-your-perspective">Choosing your perspective</h2>
<p>Another idea I had in mind when interviewing was the perspective one has on what they're trying to achieve. I believe there are two distinct perspectives here.</p>
<p>The first perspective is that interviewing is <strong>a way to get a job</strong>. This implies that the company has a resource you want to secure. Resources are often scarce and hard to come by - and the fact that something is scarce has a significant influence on human psychology. This perspective might not benefit you during the process.</p>
<p>The second perspective is that you want to <strong>provide a service for a company</strong>. This is the exact opposite perspective, as this implies you possess the resource the company is after. The scarcity principle becomes a tailwind, as you're there to provide value to whichever company ends up being the most interesting. This perspective will likely benefit you during the process.</p>
<p>That said, <strong>neither perspective alone reflects reality</strong>. Of course, you're after the job, and the company is after your expertise. The level of scarcity will depend on how high a bar the company has to meet and, on the other side, on what you offer as an engineer and how broad your competition is. But I still strongly believe it's worth primarily adopting the perspective that works in your favour. 🌄</p>
<h2 id="heading-negotiation-starts-the-minute-you-enter-the-door-or-perhaps-even-sooner">Negotiation starts the minute you enter the door (or perhaps even sooner)</h2>
<p>Much has been written about negotiation, especially job negotiations and even <a target="_blank" href="https://haseebq.com/my-ten-rules-for-negotiating-a-job-offer/">job negotiations in the tech industry</a>. I highly recommend reading the articles linked here and in the resources below - they have been a fantastic guide through offer negotiations.</p>
<p>One surprising aspect of the interview process, however, was <strong>how fast some companies initiated the negotiations</strong>.</p>
<p>There are arguably very few aspects that can give you an edge in negotiations as the interviewee. The company controls the structure of the interview, and - in most cases, almost entirely - whether they extend an offer to you and what it ends up looking like. They are in the driver's seat, but they don't control everything. <strong>Information is your negotiating power</strong>. You can decide what information about your preferences to share with the company, and when. I tend to retain as much of it as possible, mostly because it gives me time to gather more information about each company I'm interviewing at. ⏰</p>
<p>I don't typically share my desired compensation during the initial stages of the interview process. I don't think that makes sense - I know little about the company and the position. I don't know whether I'm the right fit for the company or how much I would want to work there. I also don't know what the company values and how they structure their compensation. And on the other hand, the company has no way of knowing how valuable of a candidate I am. Throwing numbers around at this point seems pointless to me.</p>
<p>To that, some might say - it's a waste of time, it's best to get alignment on compensation as soon as possible. <em>Perhaps</em>. You have to be aware though that once you share any numbers, you immediately narrow the negotiation space, likely to your disadvantage. The company would never share its <em>actual</em> compensation targets, especially during the first steps, and I believe neither should you. 🤷‍♂️</p>
<blockquote>
<p>You should be able to get an idea of what the compensation for any particular position might be online. This won't be precise information, of course, but in most cases, it is <em>good enough</em> - the actual compensation the company can offer is always a range, likely changing throughout the process. You only need to have some directionally correct information.</p>
</blockquote>
<p>I got asked the infamous <em>"So, tell me, Tomasz, what are your salary expectations"</em> multiple times. Often right after the first interview with the recruiter. The reality is that this is a normal practice. To that I often respond, completely honestly, that the opportunity sounds exciting, and that once both parties determine that this is the right fit, I will be willing to explore any package, as long as it's competitive. Some recruiters will push beyond that, but it's infrequent.</p>
<p>However, during this series of interviews, I received hard pushback on this. In one process, the recruiter insisted that I had to give a number so we could move forward after the initial call. In response, I referred to general statistics, like the average total compensation of a senior software engineer in Europe, indicating at the same time that I was very much interested in moving forward. That wasn't enough - they said the company wouldn't be able to meet that expectation (even though I was merely pointing to statistics, not expressing my expectations) and asked how much I would be <em>actually willing to accept</em>.</p>
<p>It was a straight-up attempt to make me commit to specific numbers and close negotiations before the process had even started. It was especially striking to me, as this was a lead engineer role. I can understand that a company might have a good idea of what junior or regular engineers' compensation would be before any individual process, but for senior engineers and above the bands are typically wider and depend much more heavily on what the candidate brings to the table. This is impossible to estimate at the first step of the interview process. I replied that I do agree it's important that we're on "the same page", but that I'm simply unable to provide precise numbers at this point, beyond what I had already mentioned. The recruiter finally accepted my response and we moved on.</p>
<p>This situation would probably have looked different from my side - or I wouldn't have been comfortable engaging in negotiations at all - had I not had other processes lined up. But I had other options. This was extremely important during negotiations, mostly for my own psychological safety. Also, you might be surprised how widely offers differ in their numbers: in base compensation alone, the highest offer I received was nearly twice the lowest. <strong>The stronger your other options are, the more you can afford to risk in any one process</strong>. You're likely not the only candidate the company is interviewing, so the company shouldn't be the only one you're interviewing at either. Relying on a single process is an easy way to force yourself into a terrible negotiating position.</p>
<p>Finally, it's worth remembering that even though negotiating might make it seem as if you and the company are at odds with each other, <strong>you ultimately have the same goal - to reach an agreement that benefits both parties</strong>. You want to find the best place for yourself to thrive and the company seeks quality service from the best engineer they can find. I found this thought helpful when moving through the negotiations.</p>
<h2 id="heading-relationship-with-your-recruiter-is-paramount">Relationship with your recruiter is paramount</h2>
<p>Last but not least, I wanted to touch upon the relationship with probably the most important person in the interview process. <strong>That person, I believe, is your recruiter</strong>.</p>
<p>Picture your recruiter as your backstage pass to any one opportunity - a person who not only opens doors but also has your back when it comes to sealing the deal. This is the person who will be your proxy for getting all of the information, and any offer negotiation will most often be handled through them. As much as they want you in their corner, you want them in yours. Ultimately, this collaboration will directly impact how well you know what you're about to step into and the degree to which your efforts will be rewarded.</p>
<p>This series of interview processes was a stark example of how important this particular relationship is. In three out of the four successful processes, I had close contact with the recruiters. We had a few video calls throughout the journey. I kept them updated on what was going on on my side, and they periodically checked in to see how everything was going and how I felt. By contrast, during the last process, my interactions with recruiters were limited to emails only. Additionally, one recruiter was handling the first part of the process and another one the second part. There was no time or space to build any form of relationship. 🍂</p>
<p>The outcome speaks for itself. When the relationship with the recruiter was good - which was the case in the first three processes - I got all of the information I asked for and then some. The recruiters often went the extra mile to provide me with additional details, reaching out to other teams in the company to get some more context for me. I was also able to improve each of these offers - often meaningfully and virtually without any tension. In the last interview process, I almost literally hit a wall - I received pretty vague strands of information throughout the process, and my attempt to negotiate was turned down in a single email, in a fairly impolite way.</p>
<p>Looking back at the situation, I can share a few simple suggestions that I found helpful for getting on the right path with your recruiter. 🛤️</p>
<ul>
<li><p><strong>Show them that you mean business</strong>. They should be able to tell that you are serious about the opportunity and ready to engage in negotiations.</p>
</li>
<li><p><strong>Express enthusiasm</strong>. Show that you're excited about the opportunity, while moving towards reaching the final decision.</p>
</li>
<li><p><strong>Be cooperative</strong>. If you want a good spouse, deserve one. This rule applies in any relationship, not only in marriage. You want to be as collaborative and helpful to your recruiter as possible - they will most likely return the favor.</p>
</li>
<li><p><strong>Be likable</strong>. No one wants to advocate for someone they don't like, so simply be kind. Positivity and simple kindness towards others are seldom overrated.</p>
</li>
</ul>
<p>I'll mention for the record that all of the above should come from a place of genuine interest and honesty. <strong>If you're not interested in joining a particular company, don't waste anyone's time</strong> - drop out of the process early and look for something that is a better fit.</p>
<p>Making sure your relationship with the recruiter is good is extremely important. It might seem like it's just wasting time on meetings, but it's absolutely not - and it goes both ways. There's as much in it for them as there is for you. 🤝</p>
<h2 id="heading-the-outcome">The outcome</h2>
<p>This was a long journey, which required a lot of preparation, attention, and effort. I can say now, however, already a couple of months into my new position, that it all ended exceptionally well. I'm super happy to report that <strong>I joined Salesloft as a Senior User Interface Engineer</strong>. I'm excited about the challenges that lie ahead and to be a part of a terrific team. 🥳</p>
<p>All of this would have been much more difficult, if not impossible, if it wasn't for the support I received. I'm especially thankful to my wife, for being there for me during the whole process. And to my engineering friends, for their help and multiple pieces of often invaluable advice. Thank you!</p>
<p>If you liked the article or you have a question, feel free to reach out to me on <a target="_blank" href="https://twitter.com/gil_tomasz">Twitter</a>‚ or add a comment below!</p>
<h3 id="heading-further-reading-and-resources">Further reading and resources</h3>
<p>Here are some of the resources I found useful during the entire process.</p>
<ul>
<li><p>What to look for in a company:</p>
<ul>
<li><a target="_blank" href="https://blog.pragmaticengineer.com/pragmatic-engineer-test/">https://blog.pragmaticengineer.com/pragmatic-engineer-test/</a></li>
</ul>
</li>
<li><p>Interview preparation:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/learn-co-curriculum/bootcamp-prep-answering-non-technical-interview-questions">https://github.com/learn-co-curriculum/bootcamp-prep-answering-non-technical-interview-questions</a></p>
</li>
<li><p><a target="_blank" href="https://gist.github.com/nonsie/d07ebc343e23e5b5cd544609d1767f93">https://gist.github.com/nonsie/d07ebc343e23e5b5cd544609d1767f93</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/viraptor/reverse-interview">https://github.com/viraptor/reverse-interview</a></p>
</li>
</ul>
</li>
<li><p>Salary negotiations:</p>
<ul>
<li><p><a target="_blank" href="https://haseebq.com/my-ten-rules-for-negotiating-a-job-offer/">https://haseebq.com/my-ten-rules-for-negotiating-a-job-offer/</a></p>
</li>
<li><p><a target="_blank" href="https://haseebq.com/how-not-to-bomb-your-offer-negotiation/">https://haseebq.com/how-not-to-bomb-your-offer-negotiation/</a></p>
</li>
</ul>
</li>
<li><p>How to value equity:</p>
<ul>
<li><a target="_blank" href="https://blog.pragmaticengineer.com/equity-for-software-engineers/">https://blog.pragmaticengineer.com/equity-for-software-engineers/</a></li>
</ul>
</li>
<li><p>Notes from other engineers' job hunts:</p>
<ul>
<li><a target="_blank" href="https://szymonkaliski.com/newsletter/2023-04-03-q1-2023/">https://szymonkaliski.com/newsletter/2023-04-03-q1-2023/</a></li>
</ul>
</li>
<li><p>General career advice:</p>
<ul>
<li><p><a target="_blank" href="https://kentcdodds.com/blog/business-and-engineering-alignment">https://kentcdodds.com/blog/business-and-engineering-alignment</a></p>
</li>
<li><p><a target="_blank" href="https://swizec.com/collections/seniormindset/">https://swizec.com/collections/seniormindset/</a></p>
</li>
</ul>
</li>
</ul>
<p>Other resources:</p>
<ul>
<li>Photo by <a target="_blank" href="https://unsplash.com/@sakulich?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Sergei A</a> on <a target="_blank" href="https://unsplash.com/photos/pine-trees-field-near-mountain-under-sunset--heLWtuAN3c?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Lessons from Software Engineering at Google: Part 10 - Continuous Integration]]></title><description><![CDATA[This is the tenth and last article in a series where we cover the book Software Engineering at Google by Titus Winters, Tom Manshreck, and Hyrum Wright. 📕 We will go over various aspects of software engineering as a process, including the importance...]]></description><link>https://blog.tomaszgil.me/lessons-from-software-engineering-at-google-part-10-continuous-integration</link><guid isPermaLink="true">https://blog.tomaszgil.me/lessons-from-software-engineering-at-google-part-10-continuous-integration</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[Google]]></category><category><![CDATA[book summary]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[Continuous Integration]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Thu, 18 Jan 2024 12:52:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1705529923915/0f47e669-9c7e-4fe7-a3c5-0d030a398a88.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is the tenth and last article in a series where we cover the book Software Engineering at Google by Titus Winters, Tom Manshreck, and Hyrum Wright. 📕 We will go over various aspects of software engineering as a process, including the importance of communication, iteration and continuous learning, well-thought-out documentation, robust testing, and many more.</p>
<p>Today we cover continuous integration and delivery. These are systems and processes that define how members of engineering teams bring their work together, how the software is built, tested, and finally, delivered to your users. Let's dive in!</p>
<h2 id="heading-shift-left">Shift left</h2>
<p>The fundamental goal of continuous integration systems is to catch problematic changes as early as possible. As with most of the things we've discussed in this series, this becomes virtually impossible to do manually as projects grow. CI systems become progressively <strong>more necessary as your codebase ages and grows in scale</strong>. 📈</p>
<p>Furthermore, finding problems earlier in the developer workflow usually reduces costs. Bugs caught by static analysis and code review before they are committed are much cheaper than bugs that make it to production. Here's where the general rule related to CI systems comes into play - <a target="_blank" href="https://about.gitlab.com/topics/ci-cd/shift-left-devops/">Shift Left</a>. ⏪</p>
<blockquote>
<p><strong>Shift left</strong>: enable faster, more data-driven decision-making earlier on all changes through CI and continuous deployment.</p>
</blockquote>
<p>The purpose of testing is to gather information - information about problems in your systems. Having this information earlier in the workflow allows for shorter iteration cycles, which means fewer bugs introduced and better-quality features.</p>
<h2 id="heading-minimise-human-decisions">Minimise human decisions</h2>
<p>There are certain things humans excel at and there are things humans are inherently bad at. Consistently enforcing rules at scale arguably falls into the latter category. 🙃</p>
<p>One decision you need to make with every new change is which tests should be run against the changes being introduced. These decisions should be made consistently and repeatedly. Because of that, the book makes the case that this should never be up to individual engineers. A <strong>CI system decides which tests to use, and when</strong>. That way we always follow explicit rules, and we can reach the desired balance between deployment confidence and speed of development.</p>
<p>The book also suggests that <strong>CI should optimize for quicker, more reliable tests on presubmit and slower, less deterministic tests on post-submit</strong>. That way we can keep a reasonable pace of development while making sure we don't break things when releasing. 🧘‍♂️</p>
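<p>To make this concrete, here's a minimal TypeScript sketch of what such explicit, rule-based test selection could look like. The suite names and rules are my own hypothetical illustration, not something from the book - the point is only that the decision of which tests run, and when, is encoded as data the CI system applies consistently, rather than left to individual engineers.</p>
<pre><code class="language-typescript">// A minimal sketch of rule-based test selection in a CI system.
// Suite names and rules are hypothetical; the point is that the
// presubmit/post-submit split is encoded as explicit, repeatable rules.

type Stage = "presubmit" | "postsubmit";

interface ChangeInfo {
  touchedPaths: string[];
}

interface SuiteRule {
  suite: string;
  stage: Stage; // fast, deterministic suites gate presubmit
  matches: (change: ChangeInfo) => boolean;
}

const rules: SuiteRule[] = [
  { suite: "unit", stage: "presubmit", matches: () => true },
  { suite: "lint", stage: "presubmit", matches: () => true },
  { suite: "integration", stage: "postsubmit", matches: () => true },
  {
    // Slower and less deterministic, so it runs after submit,
    // and only for changes that touch the checkout flow.
    suite: "e2e-checkout",
    stage: "postsubmit",
    matches: (c) => c.touchedPaths.some((p) => p.startsWith("src/checkout/")),
  },
];

export function suitesFor(change: ChangeInfo, stage: Stage): string[] {
  return rules
    .filter((r) => r.stage === stage &amp;&amp; r.matches(change))
    .map((r) => r.suite);
}

// A change touching the checkout flow:
const change: ChangeInfo = { touchedPaths: ["src/checkout/cart.ts"] };
console.log(suitesFor(change, "presubmit")); // ["unit", "lint"]
console.log(suitesFor(change, "postsubmit")); // ["integration", "e2e-checkout"]
</code></pre>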
<h2 id="heading-ship-often-ship-fast">Ship often, ship fast</h2>
<p>The book makes an interesting observation about how the speed of delivery impacts the safety and confidence in changes being released.</p>
<blockquote>
<p><strong>Faster is safer</strong>: ship early and often, and in small batches to reduce the risk of each release and to minimize time to market.</p>
</blockquote>
<p>There are a few important steps you might want to take to ensure fast and effective releases. 👇</p>
<ul>
<li><p><strong>Optimise for team velocity</strong>. Velocity is a team sport. The optimal workflow for a large team that develops code collaboratively requires modularity of architecture and near-continuous integration.</p>
</li>
<li><p><strong>Evaluate changes in isolation</strong>. The only way to be sure what broke is to isolate changes. A typical way to achieve that is to flag-guard new features so that problems can be isolated early (see the sketch after this list).</p>
</li>
<li><p><strong>Make reality your benchmark</strong>. Use a staged rollout to address device diversity and the breadth of the user base. Release qualification in a synthetic environment that isn't similar to the production environment can lead to late surprises.</p>
</li>
<li><p><strong>Ship only what gets used</strong>. Monitor the cost and value of any feature in the wild to know whether it's still relevant and delivering sufficient user value.</p>
</li>
</ul>
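<p>Below is a loose sketch of flag-guarding in TypeScript. The flag names and the bucketing are hypothetical - production systems usually read rollout percentages from a remote configuration service - but the shape is the point: every new code path sits behind a switch that can be flipped independently of a release.</p>
<pre><code class="language-typescript">// A sketch of flag-guarding a feature so it can be evaluated (and
// reverted) in isolation. Flag names and rollout values are made up;
// real systems typically fetch these from a remote config service.

const flags: { [flag: string]: number } = {
  "new-search-ranking": 0.1, // rolled out to 10% of users
  "redesigned-sidebar": 1.0, // fully rolled out
};

// Deterministic bucketing: the same user always gets the same decision,
// so a staged rollout doesn't flicker between variants.
function isEnabled(flag: string, userId: string): boolean {
  const rollout = flags[flag] ?? 0;
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return (hash % 1000) / 1000 &lt; rollout;
}

export function search(query: string, userId: string): string[] {
  if (isEnabled("new-search-ranking", userId)) {
    return newRanking(query); // one flag flip reverts this path
  }
  return legacyRanking(query);
}

// Hypothetical implementations, stubbed for the sketch.
function newRanking(query: string): string[] { return [query, "v2"]; }
function legacyRanking(query: string): string[] { return [query, "v1"]; }
</code></pre>
<p>The important property is that turning the flag off reverts the system to the old code path without a deploy, which keeps any problem introduced by the new ranking isolated to a single switch.</p>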
<h2 id="heading-conclusion">Conclusion</h2>
<p>That's it for today. CI systems become necessary for growing teams and codebases, making it possible to efficiently and safely integrate work and deliver your applications. Here's a short summary of things we went through:</p>
<ul>
<li><p>CI systems become more necessary as your codebase grows in scale</p>
</li>
<li><p>Shift left: enable faster and data-driven decision-making earlier on all changes</p>
</li>
<li><p>A CI system decides what tests to use, and when</p>
</li>
<li><p>Faster is safer: ship early, often, and in small batches</p>
</li>
<li><p>Optimise for team velocity, evaluate changes in isolation, make reality your benchmark, ship only what gets used</p>
</li>
</ul>
<p><strong>Congratulations</strong>! 🥳 We've just reached the end of the series where we covered lessons learned from the book Software Engineering at Google. We've touched on a lot of aspects of software engineering as a process, but the book still covers a much wider array of topics. I hope you found this series useful and that you learned something that you will use in your work in the future. 🚀</p>
<p>If you liked the article or you have a question, feel free to reach out to me on <a target="_blank" href="https://twitter.com/gil_tomasz">Twitter</a>‚ or add a comment below!</p>
<h3 id="heading-further-reading-and-references">Further reading and references</h3>
<ul>
<li><p>The original <a target="_blank" href="https://twitter.com/gil_tomasz/status/1524477468995997698">Twitter thread</a> with notes from the book.</p>
</li>
<li><p>Link to purchasing <a target="_blank" href="https://www.oreilly.com/library/view/software-engineering-at/9781492082781/">the book</a>.</p>
</li>
<li><p>Photo by <a target="_blank" href="https://unsplash.com/@chuttersnap?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">CHUTTERSNAP</a> on <a target="_blank" href="https://unsplash.com/photos/brown-cardboard-boxes-on-white-metal-rack-BNBA1h-NgdY?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a>.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Lessons from Software Engineering at Google: Part 9 - Dependency Management]]></title><description><![CDATA[This is the ninth article in a series where we cover the book Software Engineering at Google by Titus Winters, Tom Manshreck, and Hyrum Wright. 📕 We will go over various aspects of software engineering as a process, including the importance of commu...]]></description><link>https://blog.tomaszgil.me/lessons-from-software-engineering-at-google-part-9-dependency-management</link><guid isPermaLink="true">https://blog.tomaszgil.me/lessons-from-software-engineering-at-google-part-9-dependency-management</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[Google]]></category><category><![CDATA[book summary]]></category><category><![CDATA[dependency management]]></category><category><![CDATA[Programming Blogs]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Wed, 10 Jan 2024 08:48:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1704368669988/4cadc8a5-74bd-4a56-9318-531ea8f5a9bf.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is the ninth article in a series where we cover the book Software Engineering at Google by Titus Winters, Tom Manshreck, and Hyrum Wright. 📕 We will go over various aspects of software engineering as a process, including the importance of communication, iteration and continuous learning, well-thought-out documentation, robust testing, and many more.</p>
<p>Today we cover dependency management. The management of networks of libraries, packages, and dependencies that we don’t control is one of the most challenging problems in software engineering. We will discuss how we update between versions of external dependencies and how to decide whether to depend on someone else's code. Let's dive in!</p>
<h2 id="heading-hidden-costs-of-dependencies">Hidden costs of dependencies</h2>
<p>One of the best features of the software engineering industry is the availability of open source solutions. For virtually any problem that might creep up in various software applications, there's most likely an open-source solution. You need to format or manipulate dates? There's a library for that. You need to keep track of the state of a form in a web application? There are open solutions for that too. This allows you to focus for the most part on the unique business problems that your software is solving. 🎯</p>
<p>However, as the book mentions, <strong>adding a dependency isn't free for a software engineering project</strong>, and the complexity of establishing an "ongoing" trust relationship is challenging. Importing dependencies into your organization needs to be done carefully, with an understanding of the ongoing support costs.</p>
<p>A <strong>dependency is a contract</strong>: there is a give and take, and both providers and consumers have some rights and responsibilities in that contract. Providers should be clear about what they are trying to promise over time - but that might not always be enough. The book brings up an interesting observation, that goes under the name of <a target="_blank" href="https://www.hyrumslaw.com/">Hyrum's Law</a>. 👇</p>
<blockquote>
<p>With a sufficient number of users of an API, it does not matter what you promised in the contract: all observable behaviors of your system will be depended on by somebody.</p>
</blockquote>
<p>By using external dependencies, your application relies on code you don't control, code that is released on an independent schedule, and can be updated in ways that you don't expect. It doesn't mean you shouldn't depend on it. It does mean, however, that you have to be careful how you use the dependency (and what you promise on the other side) and how you manage the relationship.</p>
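<p>Here's a tiny, contrived TypeScript illustration of Hyrum's Law (my example, not the book's): the contract promises <em>which</em> items come back, yet a consumer quietly starts depending on the order they happen to arrive in.</p>
<pre><code class="language-typescript">// Hyrum's Law in miniature (a contrived example, not the book's).
// The contract only promises *which* todos are returned, not their order.

interface Todo { id: number; title: string; }

// Provider: "returns all todos" - ordering is an implementation detail.
function getTodos(): Todo[] {
  // Today this happens to return items sorted by id...
  return [
    { id: 1, title: "write tests" },
    { id: 2, title: "fix CI" },
    { id: 3, title: "ship" },
  ];
}

// Consumer: silently depends on the observable (but unpromised) order.
const latest = getTodos()[getTodos().length - 1]; // "the newest todo"

// If the provider switches to, say, modification order - entirely within
// its contract - the line above breaks. The defensive version depends
// only on what was actually promised:
const latestSafe = getTodos().reduce((a, b) => (a.id > b.id ? a : b));

console.log(latest.title, latestSafe.title); // "ship ship" (for now)
</code></pre>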
<h2 id="heading-reducing-the-problem-complexity">Reducing the problem complexity</h2>
<p>In a large organization dependency management doesn't only refer to external dependencies. You might have different teams working on distinct parts of your system in separate repositories, which are then used in other parts of your organization. Synchronizing package versions of these solutions across separate repositories is also a dependency management problem.</p>
<p>According to the book, one way to reduce the complexity is to have all the teams work in a single monorepo, effectively <strong>replacing dependency management problems with source control problems</strong>. If you can get more of your organization's code into one place, with better transparency and coordination, those are important simplifications. 📉</p>
<h2 id="heading-the-holy-grail">The Holy Grail</h2>
<p>The overarching goal of dependency management is to be able to use the newest versions of packages and perform upgrades easily. Getting to the point at which you can <strong>reliably stay current when it comes to project dependencies</strong> going forward is the essence of long-term sustainability for your software. 🏆</p>
<p>The book mentions that the only way to achieve this at scale is, as with many other things, through automation. You need to build the processes around your software in a way that <strong>infrastructure upgrades over time can be performed by the same number of engineers</strong>, even as the codebase grows. That's key. Otherwise, the cost of your dependencies increases not only with the growing number of dependencies themselves but also with the overall growth of the codebase. 😬</p>
<p>There are a few elements that are essential to automating the dependency management process (a short sketch tying a couple of them together follows the list).</p>
<ul>
<li><p><strong>Keep track of dependency versions</strong>. The first step of external dependency management is keeping track of the versions in use. Most programming languages and their respective ecosystems have a common way of defining these versions, often using <a target="_blank" href="https://semver.org/">Semantic Versioning</a>.</p>
</li>
<li><p><strong>Use notifications when new versions are released</strong>. Make sure engineers know about new releases of packages their systems depend on. Ideally, when a new version of a library gets released, it should trigger the opening of a merge request in all relevant repositories. That gives engineers the easiest way to act.</p>
</li>
<li><p><strong>Auto-generate changes</strong>. The best and biggest open-source libraries often feature scripts that allow for automatic code modifications (commonly referred to as Codemods) with major version releases. They can speed up the upgrade process drastically and decrease the number of omissions.</p>
</li>
<li><p><strong>Test and release</strong>. In large projects with dozens of dependencies, it's not possible to manually test every upgrade. Instead, your infrastructure should decide, for the most part, if a change is ready to release. Write your tests in a way that allows them to be leveraged in such situations.</p>
</li>
</ul>
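<p>Here's a small TypeScript sketch that classifies an available upgrade by its semantic-versioning jump and maps it to an automation policy. The policy mapping is my own illustration, not a prescription from the book.</p>
<pre><code class="language-typescript">// A sketch of semver-aware upgrade triage. Assumes plain
// MAJOR.MINOR.PATCH versions (no prerelease tags); the policy mapping
// is illustrative, not prescriptive.

type Bump = "major" | "minor" | "patch" | "none";

function parse(version: string): [number, number, number] {
  const [major, minor, patch] = version.split(".").map(Number);
  return [major, minor, patch];
}

export function classify(current: string, latest: string): Bump {
  const [cMaj, cMin, cPat] = parse(current);
  const [lMaj, lMin, lPat] = parse(latest);
  if (lMaj > cMaj) return "major";
  if (lMaj === cMaj &amp;&amp; lMin > cMin) return "minor";
  if (lMaj === cMaj &amp;&amp; lMin === cMin &amp;&amp; lPat > cPat) return "patch";
  return "none";
}

// One possible policy an upgrade bot could apply per dependency:
export function policy(bump: Bump): string {
  switch (bump) {
    case "patch": return "open MR, auto-merge if the test suite passes";
    case "minor": return "open MR, request a quick human review";
    case "major": return "open MR, run codemods if provided, full review";
    case "none": return "already current";
  }
}

console.log(classify("4.2.1", "5.0.0")); // "major"
console.log(policy(classify("4.2.1", "4.2.3"))); // the auto-merge path
</code></pre>
<p>Pairing this kind of triage with the notification and testing machinery above is what lets the same number of engineers keep a growing dependency tree current.</p>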
<h2 id="heading-conclusion">Conclusion</h2>
<p>That's it for today! Dependency management is often an underappreciated aspect of software engineering, however being able to upgrade to the newest versions is a cornerstone of long-term sustainability for your software. Here's a short summary of things we went through:</p>
<ul>
<li><p>adding a dependency isn't free for a software engineering project</p>
</li>
<li><p>a dependency is a contract: there is a give and take, and both providers and consumers have some rights and responsibilities in that contract</p>
</li>
<li><p>Hyrum's Law: all observable behaviors of your system will be depended on by somebody with a sufficient number of users</p>
</li>
<li><p>replacing dependency management problems with source control problems often reduces the complexity</p>
</li>
<li><p>for long-term sustainability, project dependencies need to reliably stay current</p>
</li>
<li><p>infrastructure upgrades over time should be performed by the same number of engineers</p>
</li>
</ul>
<p>In the last part, we will cover continuous integration. We will touch on the inevitable nature of such solutions in growing systems and highlight what's most important for having a successful integration and release process. See you! 👋</p>
<p>If you liked the article or you have a question, feel free to reach out to me on <a target="_blank" href="https://twitter.com/gil_tomasz">Twitter</a>‚ or add a comment below!</p>
<h3 id="heading-further-reading-and-references">Further reading and references</h3>
<ul>
<li><p>The original <a target="_blank" href="https://twitter.com/gil_tomasz/status/1524477468995997698">Twitter thread</a> with notes from the book.</p>
</li>
<li><p>Link to purchasing <a target="_blank" href="https://www.oreilly.com/library/view/software-engineering-at/9781492082781/">the book</a>.</p>
</li>
<li><p>Photo by <a target="_blank" href="https://unsplash.com/@christopher__burns?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Christopher Burns</a> on <a target="_blank" href="https://unsplash.com/photos/person-holding-tool-during-daytime-8KfCR12oeUM?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a>.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Lessons from Software Engineering at Google: Part 8 - Software Maintenance]]></title><description><![CDATA[This is the eighth article in a series where we cover the book Software Engineering at Google by Titus Winters, Tom Manshreck, and Hyrum Wright. 📕 We will go over various aspects of software engineering as a process, including the importance of comm...]]></description><link>https://blog.tomaszgil.me/lessons-from-software-engineering-at-google-part-8-software-maintenance</link><guid isPermaLink="true">https://blog.tomaszgil.me/lessons-from-software-engineering-at-google-part-8-software-maintenance</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[Google]]></category><category><![CDATA[book summary]]></category><category><![CDATA[engineering]]></category><category><![CDATA[maintenance]]></category><dc:creator><![CDATA[Tomasz Gil]]></dc:creator><pubDate>Thu, 21 Dec 2023 15:12:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1703008736480/eb102438-53af-4fdb-a81d-98308a3e3fcb.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is the eighth article in a series where we cover the book Software Engineering at Google by Titus Winters, Tom Manshreck, and Hyrum Wright. 📕 We will go over various aspects of software engineering as a process, including the importance of communication, iteration and continuous learning, well-thought-out documentation, robust testing, and many more.</p>
<p>Today we cover software maintenance. Any successful system will face, sooner rather than later, some form of maintenance burden. Unattended, it might turn into technical debt, which, like any other type of debt, is a double-edged sword. It can be an effective tool as long as it is treated with care. Let's dive in!</p>
<h2 id="heading-assets-and-liabilities">Assets and liabilities</h2>
<p>In the financial realm, the most prevalent categorization of financial resources is into assets and liabilities. Assets are things that you own and that have economic value. Liabilities are the opposite: things that you owe, a drag on economic value.</p>
<p>This distinction is pretty clear. Financial resources are either assets or liabilities. It's also an interesting lens through which we can look at code. However, code escapes this categorization. It's as if it had aspects of both assets and liabilities - almost always directly contributing to creating the former, but eventually highly likely to turn into the latter. 🤷‍♂️</p>
<p>As the book says, <strong>code itself doesn't bring value</strong>: it is the <em>functionality</em> that it provides that brings value. That functionality is an asset if it meets a user need: the code that implements this functionality is simply a means to that end. We use it to create assets.</p>
<p>Once our code no longer provides the functionality users need, causes significant maintenance problems, or is being replaced by a newer solution, it starts creating liabilities instead. We still have to maintain it, yet paying the costs overtakes reaping the rewards, as the code no longer provides the value it was there to create for us. 💸</p>
<p>Scalably maintaining complex software systems over time is more than just building and running software: <strong>we must also be able to recognize and remove systems that are obsolete</strong> or otherwise unused.</p>
<h2 id="heading-system-migration">System migration</h2>
<p>Let's consider a scenario where you observe the first signs of deterioration. Your internal library all of a sudden has many more responsibilities than initially designed for. A key service behind your application is no longer supported. You have a feature on your roadmap that you know will be impossible to implement given the current architecture of your system. 🏗️</p>
<p>One potential answer to these problems might be to migrate the part of the system that causes issues. The book mentions however that <strong>migrating to entirely new systems is extremely expensive</strong> and the costs are frequently underestimated. It is largely the opposite of <a target="_blank" href="https://blog.tomaszgil.me/lessons-from-software-engineering-at-google-part-2-iterating-on-software">the iterative approach to software engineering</a>.</p>
<p>Instead, they suggest a more incremental approach to system migration, relying heavily on <em>deprecation</em>. Incremental deprecation efforts accomplished by in-place refactoring can keep existing systems running while making it easier to deliver value to users. 💎</p>
<p>This won't be easy, as a complete <strong>deprecation process involves managing social and technical challenges through policy and tooling</strong>. You've got to convince people to move away from what they are used to, and such change rarely comes easily. Deprecating in an organized and well-managed fashion is often overlooked as a source of benefit to an organization, yet it is essential for its long-term sustainability.</p>
<h2 id="heading-cost-benefit-analysis">Cost-benefit analysis</h2>
<p>Going back to our assets and liabilities analogy, software systems have continuing maintenance costs that should be weighed against the cost of removing them. We should be <strong>contrasting the value of the assets that our code produces and the liabilities that it creates</strong> for us as often as possible. ⚖️</p>
<p>Some of the best modifications to a codebase are deletions. Getting rid of dead or obsolete code is one of the best ways to improve the overall health of the codebase. You probably know it deep down - it just feels great to remove code. 😌</p>
<p>But as with deprecation, <strong>removing things is often more difficult than building them</strong> to begin with. Your job here is to ensure the code you're about to remove is not used. That's hard. Finding out that a piece of code <em>isn't</em> used is always more expensive than finding that it <em>is</em> used, as you have to search the entire problem space. On top of that, existing users are often using the system beyond its original design. Discovering all of these implicit dependencies makes your job even trickier, so you've got to pay extra attention before hitting the delete button.</p>
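<p>One pragmatic, if partial, way to build confidence before deleting: instrument the suspect code path and let production tell you whether anyone still reaches it. Here's a hedged TypeScript sketch - <code>reportUsage</code> is a stand-in for whatever your observability stack actually provides, and a quiet metric still doesn't rule out the implicit dependencies mentioned above.</p>
<pre><code class="language-typescript">// "Tombstone" instrumentation: before deleting a suspected-dead function,
// log every call for a representative window. Silence is strong (though
// not conclusive) evidence that deletion is safe; implicit dependencies
// can still lurk. `reportUsage` is a stand-in for a real telemetry client.

function reportUsage(symbol: string): void {
  // In a real system: increment a metric or emit a structured log event.
  console.warn(`DEPRECATED symbol still in use: ${symbol}`);
}

/** @deprecated Scheduled for deletion - do not add new callers. */
export function legacyExport(data: object): string {
  reportUsage("legacyExport");
  return JSON.stringify(data); // original behavior, unchanged
}
</code></pre>
<p>Once the metric stays flat over a representative window, the function - and its tombstone - can finally go.</p>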
<h2 id="heading-conclusion">Conclusion</h2>
<p>That's it for today. Maintaining complex software systems at scale requires effort but it is critical for keeping them running. Here's a short summary of things we went through:</p>
<ul>
<li><p>code itself doesn't bring value: it is the functionality that it provides that brings value</p>
</li>
<li><p>migrating to entirely new systems is extremely expensive</p>
</li>
<li><p>incremental deprecation with in-place refactoring can keep existing systems running while making it easier to deliver value to users</p>
</li>
<li><p>some of the best modifications to a codebase are deletions, it is one of the best ways to improve the overall health of the codebase</p>
</li>
<li><p>removing things is often more difficult than building them</p>
</li>
</ul>
<p>Next, we will cover dependency management - the costs, and benefits associated with introducing dependencies, how to manage them the right way, and avoid the most common pitfalls. See you! 👋</p>
<p>If you liked the article or you have a question, feel free to reach out to me on <a target="_blank" href="https://twitter.com/gil_tomasz">Twitter</a>‚ or add a comment below!</p>
<h3 id="heading-further-reading-and-references">Further reading and references</h3>
<ul>
<li><p>The original <a target="_blank" href="https://twitter.com/gil_tomasz/status/1524477468995997698">Twitter thread</a> with notes from the book.</p>
</li>
<li><p>Link to purchasing <a target="_blank" href="https://www.oreilly.com/library/view/software-engineering-at/9781492082781/">the book</a>.</p>
</li>
<li><p>Photo by <a target="_blank" href="https://unsplash.com/@neom?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">NEOM</a> on <a target="_blank" href="https://unsplash.com/photos/the-sun-is-setting-over-a-desert-landscape-va9218QJFAk?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a>.</p>
</li>
</ul>
]]></content:encoded></item></channel></rss>