Access to workspace with docker build agent?

We have Jenkins installed on a Windows server and the docker host on a separate Linux server.
We have successfully configured Docker cloud templates, and they properly create a new container, execute the job, and tear it down.
I would like to be able to access the workspace (and parser rules) after the container is torn down.
I believe bind mounts or volumes are the proper method, but I’m missing something.

I’m also using Dockerfiles, not Docker Compose at present, so the random instructions out there get confusing.

  1. Is there a proper way to access the workspace in this configuration?
  2. Is there a proper way to mount the workspace to a volume, and how does Jenkins know that it is the workspace?
  3. …or am I totally missing something with how logs and workspaces are expected to be used?

Any help would be greatly appreciated.

Thanks,
Eric


Does anyone else have this problem? I feel like this problem would be more common unless I’ve gone down a different path than most.

Hello Eric, and welcome to this community :wave: .

I’m sorry but I don’t have the answer to your question.
I could test your configuration though…

I’d also recommend explaining why you need to access the workspace of an ephemeral build agent. By nature, they are designed to be thrown away pretty quickly.

For example, if you want to keep a build artifact, you might want to use the archiveArtifacts step.
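
Something like this rough sketch (the agent label, build command, and artifact paths are placeholders, not taken from your setup):

pipeline {
    // 'docker' stands in for whatever label your Docker cloud template provides
    agent { label 'docker' }
    stages {
        stage('Build') {
            steps {
                sh 'make'   // placeholder build step
            }
        }
    }
    post {
        always {
            // Copy anything worth keeping off the ephemeral agent before teardown
            archiveArtifacts artifacts: 'build/**/*.log', allowEmptyArchive: true
        }
    }
}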


The primary reason for this is that we’re using the ‘logParser’ plugin and the project rule option, e.g.:

logParser(projectRulePath: 'concatenated_log_parser_rules.txt', showGraphs: true,
          failBuildOnError: true, unstableOnWarning: false, useProjectRule: true, parsingRulesPath: '')

I could switch to a global rule, but was hoping to still be able to use the project-specific one in some way. There doesn’t seem to be a way to point this at a location outside the workspace…

Secondarily, it has been convenient on occasion to view all the files in the workspace… To your point, I can archive the useful ones as artifacts.

Did you ever get an answer to this? I don’t like the “you aren’t going to need it because it’s an ephemeral agent” answer. So I created a volume and mounted it inside the container which “works”, but has a flaw in that the UI doesn’t allow me to view the workspace.

Is there a way to keep project directories for Docker agents that are launched from a Docker cloud? My Jenkins agent is a Docker container that is launched from a Docker cloud, but once the build has completed, the project directory is vaporized along with the container. Is there a way to persist these project directories and containers for a while?

Maybe what I could try to do is mount a volume in the container and use that as the Remote File System Root (when configuring the Docker Cloud Docker Agent Template). If the container is launched with a docker volume as the Remote File System Root, I guess that’s where the projects are cloned to. That would be persistent across launches of the container, so maybe the project directory structure is maintained. I’ll try that.
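
Roughly what I have in mind (the volume name and path are made up; the destination would double as the Remote File System Root in the Docker Agent Template):

# On the docker host: a named volume that survives container teardown
docker volume create jenkins-agent-home

# Mount specification for the agent container; /home/jenkins/agent would also
# be entered as the Remote File System Root in the Docker Agent Template
type=volume,source=jenkins-agent-home,destination=/home/jenkins/agent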

I meant “project workspace”, of course. “Project directory” may be a confusing misnomer.

Specifying the docker volume to mount in the container; in my case a bind mount would probably work OK:

--mount type=bind,source=/path/on/host,destination=/path/in/container
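
If the container is launched from a Docker cloud template rather than by hand, I believe the same thing can be entered in the template’s container settings (the field is called Mounts in recent versions of the Docker plugin), without the --mount flag:

type=bind,source=/path/on/host,destination=/path/in/container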

IT WORKED! Yeah, just do that. ^

Now my ephemeral workspaces are persistent; however, the GUI says

Error: no workspace

Which is a lie. How do I make it not lie?

I guess the agent.jar that ran that project’s build was on the container. It’s gone, so there’s no way to tell what’s in the workspace from Jenkins’s perspective. Oh well. If anyone has an alternate solution to this, please hit me up!

  1. “you aren’t going to need it” or “please explain why you need it” isn’t the answer I was looking for.

I have legacy projects that I need to retain the workspaces for, because there are so many files in them that I wouldn’t know what to archive. Imagine having a very large project that was cobbled together without such a nice system as Jenkins, where everything gets built at the top level. All the log files are splattered around the file structure. All the build output is literally everywhere, and you can’t know where it came from. You can’t even expect that it all appears on stdout. This is my hell. I don’t want to get too detailed, but suffice it to say that it’s as bad and as confusing as it can be.

I would like to create build agents that are ephemeral (and reproducible) because I think they’re the most flexible and robust option for my build environment. I would like to make incremental improvements to the current build. One improvement would be moving away from the Precious Pet build systems that we have today.

I’m surprised this question isn’t getting more attention, and I’m open to other possibilities. Maybe I should create a docker build agent that is long-lived? Maybe that’s what’s required. At least it’s a docker container that’s long-lived and can be spun up again somewhere else with a little work to add the agent to Jenkins.

I think that is a worthy goal, but I think it is too large of a step from your current situation. Your current situation uses static directories. Those static directories are a reasonably good match for Jenkins static agents. I’d replicate the current environment with static agents first, then use a series of small steps to move from static agents to ephemeral agents.

There are other techniques to retain workspaces, like the External Workspace Manager plugin.
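
From its documentation, usage looks roughly like this (the disk pool ID, node label, and build command are placeholders for whatever is defined in the global configuration and in your job):

// Allocate a directory on a disk pool defined in Jenkins global configuration
def extWorkspace = exwsAllocate 'diskpool1'

node('linux') {
    // Run the build inside the allocated external workspace;
    // that directory outlives the ephemeral agent
    exws(extWorkspace) {
        checkout scm
        sh 'make'
    }
}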

The Clone Workspace SCM plugin might be helpful.

Eventually, I suspect that you will reach a point where you’ll be using archived artifacts and ephemeral agents, but I think that there are steps you can take along the way that will make your life easier.


I’ll give that a go! Cheers!!

We need this for moving off VM or bare-metal agents and onto container agents. When things fail, having access to workspaces for the team is invaluable. This is the only real reason to have it.
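
A partial workaround (only a sketch, not a replacement for browsing the workspace in the UI) is to archive the workspace when a build fails, before the container is destroyed, e.g. at the end of a declarative Pipeline:

post {
    failure {
        // Keep the whole workspace so it can at least be downloaded
        // from the build page after the container is gone
        archiveArtifacts artifacts: '**/*', allowEmptyArchive: true
    }
}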

It looks like CDRO has it built in, but I doubt it will be backported to anything else.