
Enhancing Software Engineering Workflow with Cursor Background Agents


I help product teams build quality software and lead engineering efforts. Currently working at OpenSpace as a Senior Software Engineer.

Over the past few weeks, I’ve been experimenting with AI—especially Cursor Background Agents—to support my engineering work in a new web application we’re building. Below are some observations and tips that have helped me get better results.

Rules

One of the most important factors in making agents even directionally correct, especially early on, is establishing clear rules. At the start of a project, agents are almost useless without them.

As with many other things, I've found it best to begin with a small, simple set of rules, then expand and organize them as the project grows: first in a single file, then across multiple files, and eventually into directories when needed. A useful mental model is to ask yourself, each time you correct an agent or settle a question within the team: should this become a new rule?
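To make the directory stage concrete, here is a minimal sketch of a single rule file. At the time of writing, Cursor reads project rules from `.cursor/rules/*.mdc` files with a short frontmatter; the file name, glob, and rule contents below are hypothetical examples, not rules from my project:

```markdown
---
description: Conventions for React components
globs: src/components/**/*.tsx
alwaysApply: false
---

- Prefer existing shared components over new one-off ones.
- Co-locate tests next to the component as `Component.test.tsx`.
- Keep components presentational; move data fetching into hooks.
```

Scoping rules with globs like this keeps them out of the agent's context except when it actually touches matching files.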

I also now keep most of the documentation about the code in README files within the codebase, rather than in any external services, which makes it easier for both humans and agents to stay aligned.

Finally, it’s worth revisiting and refining the rules periodically. One effective way to do this is by asking a model to evaluate the existing rules, suggest improvements tailored to your tech stack, and identify any gaps:

Evaluate the rules below and suggest an improved version that works best with my tech stack.
If any important rules are missing, suggest adding them.

Prompting

With the basic rules in place, the next focus should be on prompting. It's been said time and time again, but it bears repeating: the quality of an LLM's output depends directly on how you prompt it.

Over time—through trial, error, and digging around online—I’ve collected a handful of instructions that tend to cut down on unnecessary back-and-forth. These tips aren’t quite as critical when working with in-editor agents (since the feedback loop there is much tighter), but for background agents they really help keep things on track.

Emphasize the rules:

  • Make sure to read the rules in the repository and follow them when implementing the feature.

Improve reasoning:

  • Think hard before starting implementation.

  • Plan your steps before writing code.

  • Prefer using existing components over creating custom ones, even if designs differ slightly.

Final checks:

  • Double-check requirements and handle any potential edge cases.

  • Make sure the added code follows standards defined in the codebase.

  • Ensure existing behavior isn’t broken.

  • Run formatting and linting before submitting code.
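That last instruction is easier for an agent (and a human) to follow when a single command runs everything. A sketch assuming an npm-based project with Prettier, ESLint, and TypeScript; the script names and tool choices are my assumptions, not something prescribed by Cursor:

```json
{
  "scripts": {
    "format": "prettier --write .",
    "check": "prettier --check . && eslint . && tsc --noEmit"
  }
}
```

With this in place, the prompt instruction can simply say "run `npm run check` before submitting code," which leaves less room for the agent to skip a step.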

Workflow

I’ve heard (and read) that some people spin up multiple agents at once and only orchestrate them. I can’t see myself doing that yet, for a few reasons.

Running multiple agents in parallel still requires a fair amount of mental overhead, since each one needs preparation and follow-up adjustments. On top of that, at the beginning of any project, you probably don’t have enough distinct areas of work to parallelize effectively. And while agent output is usually a good starting point, it always requires significant adjustments—whether that’s making the design closer to spec, improving the user experience, or restructuring the code in a way that fits better.

Some of these issues can be mitigated by writing better rules, but many of them only come up during code review or while actually testing the solution. Still, I think agents can be very effective—even when used one at a time.

Having said all that, here’s my current workflow supported by background agents.

  1. Prep work for the agent. I usually work with very short issue descriptions, so I generate richer descriptions first to provide more context to the agent.

    Help me prepare a well-defined issue description to implement the following:
    <feature_description>

  2. Create a ready-to-use prompt. With the richer description, I generate a background-agent prompt (including my prompting instructions):

    Create a prompt optimized for Cursor Background Agents based on the feature description below.
    Additionally: <prompting_instructions>
    Feature description: <richer_feature_description>

  3. Run the agent and work on something else in parallel (e.g., code reviews, or writing a post like this).

  4. Review results and adjust in sequence:

    • Background agents → biggest follow-up changes (e.g., missing tests, mocks).

    • In-editor agents → medium-size adjustments.

    • Inline edits → small changes like styling or readability.

Takeaway

Right now, background agents give me about the first 50% of a feature. In theory, in-editor agents could do the same, but I find that the stronger models and the more open environment of background agents produce a better starting point, faster. I hope to slowly increase that initial percentage.

I’m curious: does this align with your experience using similar tools? How does your workflow differ?

If you enjoyed the article or have a question, feel free to reach out to me on Bluesky or leave a comment here! 👋

