Question about Dynamic Build Agents

Hey, I’m new here! I hope everyone is having a wonderful day!

I’ve been researching Jenkins Dynamic Build Agents a little. It seems that most of the recommendations are to use Kubernetes to run them. While running Kubernetes sounds great, I have a rather unique set of requirements that makes it undesirable for my team.

A little context about our setup:

  • Everything must run on-premises due to security restrictions preventing usage of AWS, GCP, etc.
  • We currently build apps that run in Docker containers, so with a Docker dynamic build agent we would presumably be running Docker in Docker. Is this even possible/stable?

I don’t think the team wants to manage a Kubernetes cluster, so that’s really out of the realm of possibilities as a solution.

As far as VM management goes, I guess we could use VMware or libvirt, or some VM plugin for Jenkins. Not really picky here.

I think the main thing we’re looking for is a way to reset the entire environment after each test. We’re having issues with Docker containers hanging and being left running, which becomes challenging when executing multiple builds at the same time. Perhaps we’re just doing Docker wrong and Jenkins has something for managing Docker directly?

Does anyone have any opinions here on if it’s better to run Dynamic Build Agents in docker or VMs?

Hello @mbrooks and welcome to this community :wave:

Running dynamic build agents in either Docker or VMs can be a viable solution depending on your requirements and preferences. Here are some pros and cons of each option:

Docker: Pros:

  • Lightweight: Containers are lighter than VMs, which means they require fewer resources to run.
  • Isolation: Containers are isolated from each other and the host system, which provides an extra layer of security.
  • Easy management: Docker makes it easy to manage and deploy containerized applications.

Cons:

  • Complexity: Running Docker in Docker can be complex and prone to errors.
  • Limited isolation: Containers share the host kernel, which means that there is a possibility of a container breaking out of its confinement.
  • Limited flexibility: Containers are typically designed around a single main process, which means they may not be suitable for some use cases.
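On the Docker-in-Docker complexity point: a common workaround is to mount the host’s Docker socket into the agent container (sometimes called “Docker-outside-of-Docker”), so builds reuse the host daemon instead of nesting a second one. A minimal sketch, assuming a hypothetical agent image name `my-jenkins-agent`:

```shell
# Start a Jenkins agent container that talks to the host's Docker daemon
# instead of running a nested daemon (avoids true Docker-in-Docker).
# "my-jenkins-agent" is a placeholder image name.
docker run -d \
  --name build-agent \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-jenkins-agent

# Inside that container, `docker build` / `docker run` now go to the
# host daemon, so containers are created as siblings, not nested ones.
```

Two caveats: containers started this way are siblings of the agent (not children), so cleanup is still your responsibility, and access to the Docker socket is effectively root on the host, so this is only appropriate for trusted builds.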

VMs: Pros:

  • Strong isolation: VMs provide strong isolation from the host system and other VMs.
  • Flexibility: VMs are flexible and can run multiple processes, operating systems, and architectures.
  • Mature technology: VMs have been around for a long time and are a mature technology.

Cons:

  • Resource-intensive: VMs require more resources than containers to run.
  • Longer boot time: VMs take longer to start and stop than containers.
  • More complex management: Managing and deploying VMs can be more complex than managing containers.

As for resetting the entire environment after each test, both Docker and VMs provide options for doing so. In Docker, you can pass the `--rm` flag to `docker run` so containers are removed automatically when they exit. For VMs, tools like Vagrant or Ansible can provision and destroy machines between builds.
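To make the `--rm` approach concrete, here is a sketch of running a test container so it cleans itself up, plus a sweep for strays left behind by crashed builds (the image name `my-app-tests` and the `ci=jenkins` label are arbitrary placeholders):

```shell
# Run the test container; --rm removes it automatically when it exits,
# and a label lets us find strays from builds that died mid-run.
docker run --rm --label ci=jenkins my-app-tests

# Periodic cleanup (e.g. a cron job on the build host): force-remove
# any labeled container that was left running or stopped.
docker ps -aq --filter "label=ci=jenkins" | xargs -r docker rm -f
```

Labeling the containers keeps the cleanup scoped to CI workloads, so it won’t touch unrelated containers on the same host when builds run concurrently.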

Ultimately, the choice between Docker and VMs comes down to your specific use case and requirements. If you need lightweight and easy-to-manage containers, Docker may be the better choice. If you need strong isolation and the ability to run multiple processes and architectures, VMs may be the better choice.
