
Run multiple AI coding agents in parallel with Dagger’s containers

In AI-driven development, coding agents have become indispensable collaborators. These autonomous or semi-autonomous tools can write, test, and refactor code, greatly accelerating the development cycle. However, as the number of agents working on a single code base grows, so do the challenges: dependency conflicts, state leaking between agents, and the difficulty of tracking each agent’s behavior. Dagger’s container-use project addresses these challenges by providing a containerized environment for coding agents. By isolating each agent in its own container, developers can run multiple agents simultaneously without interference, observe their activity in real time, and intervene directly when necessary.

Traditionally, when a coding agent performs tasks such as installing dependencies, running a build script, or starting a server, it does so in the developer’s local environment. This approach quickly leads to conflicts: one agent can upgrade a shared library and break another agent’s workflow, or a buggy script may leave behind artifacts that pollute subsequent runs. Containerization solves these problems gracefully by encapsulating each agent’s environment. Every agent can spin up a brand-new environment, experiment safely, and discard failures immediately, while a record of each agent’s actions is preserved.

Additionally, container-use integrates seamlessly into existing workflows because containers can be managed through familiar tools: Docker, Git, and standard CLI utilities. Instead of locking into proprietary solutions, teams can leverage their preferred technology stack, whether it’s a Python virtual environment, a Node.js toolchain, or system-level packages. The result is a flexible architecture that lets developers exploit the full potential of coding agents without sacrificing control or transparency.

Installation and Setup

Getting started with container-use is simple. The project provides a Go-based CLI tool, “cu”, which you can build and install with a plain “make” command. By default the build targets your current platform, but cross-compilation is supported through the standard “TARGETPLATFORM” environment variable.

# Build the CLI tool
make

# (Optional) Install into your PATH
make install && hash -r

After running these commands, the “cu” binary will be available in your shell and can start a container session for any MCP-compatible agent. If you need to compile for a different architecture, for example ARM64 for a Raspberry Pi, just prefix the build with the target platform:

TARGETPLATFORM=linux/arm64 make

This flexibility ensures that whether you develop on any flavor of Linux, macOS, or the Windows Subsystem for Linux, you can easily produce environment-specific binaries.

Integrate with your favorite agents

One of the advantages of container-use is its compatibility with any agent that speaks the Model Context Protocol (MCP). The project provides sample integrations for popular tools such as Claude Code, Cursor, GitHub Copilot, and Goose. Integration usually involves adding container-use as an MCP server in the agent’s configuration and enabling it:

Claude Code uses an npm helper to register the server. You can merge Dagger’s recommended instructions into your “CLAUDE.md” so that running “claude” automatically spawns agents into isolated containers:

  npx @anthropic-ai/claude-code mcp add container-use -- $(which cu) stdio
  curl -o CLAUDE.md 

Goose is a terminal-based agent framework that reads its configuration from ‘~/.config/goose/config.yaml’. Add a container-use section to instruct Goose to start each agent in its own container:

  extensions:
    container-use:
      name: container-use
      type: stdio
      enabled: true
      cmd: cu
      args:
        - stdio
      envs: {}

Cursor, the AI code assistant, can be hooked in by placing a rules file into your project. Using “curl”, you can fetch the recommended rules and save them to ‘.cursor/rules/container-use.mdc’.
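A minimal sketch of that setup; the rules URL below is a placeholder for the file published in the container-use repository:

  mkdir -p .cursor/rules
  curl -o .cursor/rules/container-use.mdc <rules-file-url>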

VS Code and GitHub Copilot users can update their “settings.json” and “.github/copilot-instructions.md” respectively, registering the “cu” command as an MCP server. Copilot then executes its code completions in the encapsulated environment. Kilo Code integrates via a JSON-based settings file, allowing you to specify the “cu” command and any required parameters under ‘mcpServers’. Each of these integrations ensures that no matter which assistant you choose, your agent runs in its own sandbox, eliminating the risk of cross-contamination and simplifying cleanup after each run.
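For the JSON-based configurations, the entry typically follows the common MCP server shape. The sketch below is an assumption based on that convention; check each assistant’s own documentation for the exact key names:

  {
    "mcpServers": {
      "container-use": {
        "command": "cu",
        "args": ["stdio"]
      }
    }
  }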

Hands-on example

To illustrate how container-use can transform your development workflow, the repository includes several ready-made examples. These demonstrate typical use cases and highlight the flexibility of the tool:

  • Hello World: In this minimal example, the agent scaffolds a simple HTTP server, for example using Flask or Node’s “http” module, and boots it in its own container. You can hit “localhost” in your browser to confirm that the agent-generated code is running as expected and is completely isolated from the host system.
  • Parallel Development: Here, two agents work on different variations of the same application, one using Flask and the other FastAPI, each in its own container and on a separate port. This scenario demonstrates how to evaluate multiple approaches side by side without worrying about port collisions or dependency conflicts.
  • Security Scan: In this pipeline, the agent performs routine maintenance: it updates vulnerable dependencies, rebuilds to ensure nothing breaks, and generates patch files that capture all changes. The whole process unfolds in a disposable container, and your repository remains in its original state unless you decide to merge the patches.

Running these examples is as simple as feeding the sample files into your agent commands. For example, with Claude Code:

cat examples/hello_world.md | claude

Or with Goose:

goose run -i examples/hello_world.md -s

After execution, you will see each agent commit its work to a dedicated Git branch representing its container. Check out these branches with “git checkout”, and you can review, test, or merge changes on your own terms.
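Assuming each agent’s branch name embeds its environment ID (the branch name below is a placeholder), the review loop is plain Git:

  git branch -a                    # list all branches, including agent branches
  git checkout <agent-branch>      # inspect an agent’s work locally
  git diff main...<agent-branch>   # see exactly what the agent changed
  git merge <agent-branch>         # accept the changes on your own terms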

A common concern when delegating tasks to agents is knowing what they actually did, not just what they claim. Container-use resolves this with a unified logging interface. When you start a session, the tool records every command, output, and file change in your repository’s Git history under a dedicated container-use remote. You can follow along as containers spin up and agents run commands in their environments.

If an agent encounters an error or goes off track, you do not have to tail logs in a separate window. A single command brings up an interactive view.

This live view shows you which container is active, streams the latest output, and even gives you the option to drop into the agent’s shell. From there you can debug manually: inspect environment variables, run your own commands, or edit files at any time. This ability to intervene directly ensures that the agent remains a collaborator rather than an inscrutable black box.
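As an illustration, a debugging session might look like the following; the subcommand names are assumptions about the “cu” CLI and the environment ID is a placeholder, so verify them against the project’s help output:

  cu list                  # show active environments
  cu log <env-id>          # replay every command the agent ran
  cu terminal <env-id>     # drop into the container’s shell to debug by hand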

Although container-use provides default container images that cover many Node, Python, and system-level use cases, you may have specialized needs such as custom compilers or proprietary libraries. Fortunately, you control the Dockerfile that backs each container. By placing a “Containerfile” (or “Dockerfile”) at the root of the project, the “cu” CLI will build a tailored image before launching the agent. This approach allows you to pre-install system packages, clone private repositories, or configure complex toolchains without affecting the host environment.

A typical custom Dockerfile might start with the official base, add OS-level packages, set environment variables, and install dependencies for a specific language:

FROM ubuntu:22.04
# Install system packages, plus Python and pip so the requirements step below works
RUN apt-get update && apt-get install -y git build-essential python3 python3-pip
WORKDIR /workspace
COPY requirements.txt .
RUN pip3 install -r requirements.txt

Once the container is defined, any agents you invoke will, by default, run in that context, inheriting all the preconfigured tools and libraries you need.

In short, as AI agents take on increasingly complex development tasks, the need for robust isolation and transparency grows in parallel. Dagger’s container-use provides a pragmatic solution: a containerized environment that ensures reliability, repeatability, and real-time visibility. Built on standard tools, including Docker, Git, and shell scripts, and integrating seamlessly with popular MCP-compatible agents, it lowers the barriers to secure, scalable, multi-agent workflows.


Sana Hassan, a consulting intern at Marktechpost and a dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. He is very interested in solving practical problems, and he brings a new perspective to the intersection of AI and real-life solutions.
