Kubernetes plugin defaults workspace-volume to emptyDir (host node?)

I am running the Kubernetes plugin for Jenkins in k8s on AKS. When I run a build I see that the agent pod defaults the workspace dir to:

name: "my-jenkins-agent"
tty: true
volumeMounts:
- mountPath: "/home/jenkins/agent"
  name: "workspace-volume"
  readOnly: false
env:
- name: "JENKINS_AGENT_WORKDIR"
  value: "/home/jenkins/agent"
volumes:
  - emptyDir:
      medium: ""
    name: "workspace-volume"

As I understand it, that means the files/folders used in the workspace during the build are stored in an emptyDir on the host node, which seems to be confirmed by the plugin documentation (default):

  • workspaceVolume The type of volume to use for the workspace.
    • emptyDirWorkspaceVolume (default): an empty dir allocated on the host machine
    • dynamicPVC() : a persistent volume claim managed dynamically. It is deleted at the same time as the pod.
    • hostPathWorkspaceVolume() : a host path volume
    • nfsWorkspaceVolume() : a nfs volume
    • persistentVolumeClaimWorkspaceVolume() : an existing persistent volume claim by name.

Before experimenting with a PVC or e.g. hostPathWorkspaceVolume, I guess using the default (emptyDir) should be just fine? Or are there some best practices when it comes to configuring the workspace dir?
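For reference, if I do end up switching, my understanding is that the pod template's workspaceVolume can be overridden where the cloud is defined, e.g. via Configuration as Code (this is just my untested reading of the plugin/Helm chart docs, so the exact keys may well differ between plugin versions):

# JCasC sketch: pod template switching the workspace from the default
# emptyDir to a dynamically provisioned PVC (deleted together with the pod)
jenkins:
  clouds:
    - kubernetes:
        name: "kubernetes"
        templates:
          - name: "my-jenkins-agent"
            label: "my-jenkins-agent"
            workspaceVolume:
              dynamicPVC:
                accessModes: "ReadWriteOnce"
                requestsSize: "10Gi"                 # example size
                storageClassName: "managed-premium"  # assumed AKS storage class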

The reason I am asking is that I am seeing an increasing number of pod (Jenkins agent) eviction errors like:

Status:       Failed
Reason:       Evicted
Message:      The node was low on resource: ephemeral-storage. Container jnlp was using 156Ki, which exceeds its request of 0. 
Container docker was using 56Ki, which exceeds its request of 0.

and I assume that might have to do with using emptyDir as the workspace location for all my builds (I regularly purge unused Docker tags/volumes from the host nodes).

I would like to make sure that when a build starts it's using a clean workspace, but I would also like to avoid exhausting the disk capacity on my worker nodes (eventually leading to the pod evictions).
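One thing I am considering regardless of the volume type, since the eviction message above complains about usage exceeding a request of 0, is to declare ephemeral-storage requests/limits on the agent containers so the scheduler actually accounts for the disk a build will use. A sketch (the sizes are just examples):

# under each container in the agent pod spec (example values)
resources:
  requests:
    ephemeral-storage: "1Gi"
  limits:
    ephemeral-storage: "4Gi"   # a runaway build should get its own pod evicted instead of starving the node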

And looking at the defaults:

volumes:
  - emptyDir:
      medium: ""
    name: "workspace-volume"

does the medium field correspond to emptyDir.medium described here (with the empty string "" meaning it is NOT using a RAM-backed filesystem):

Depending on your environment, emptyDir volumes are stored on whatever medium that backs the node such as disk or SSD, or network storage. However, if you set the emptyDir.medium field to "Memory" , Kubernetes mounts a tmpfs (RAM-backed filesystem) for you instead. While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on node reboot and any files you write count against your container’s memory limit.

?
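If I am reading that right, medium: "Memory" would be the tmpfs case, and it looks like emptyDir also supports a sizeLimit that caps how much a single pod can write before it gets evicted, e.g. (sketch):

# emptyDir variants (sketch): RAM-backed vs. disk-backed with a size cap
volumes:
  - name: "workspace-volume-tmpfs"
    emptyDir:
      medium: "Memory"   # tmpfs; writes count against the container memory limit
  - name: "workspace-volume-capped"
    emptyDir:
      medium: ""         # default: backed by the node's disk
      sizeLimit: "2Gi"   # exceeding this evicts the pod rather than pressuring the whole node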

Any input/recommendations?

This is a very long post and I’m not really sure what you’re asking.

The Jenkins controller writes everything to $JENKINS_HOME (which often defaults to /var/lib/jenkins).
Jenkins agents write to whatever directory you specify, I guess $JENKINS_AGENT, though I don’t know if that’s a Helm-specific thing.

Nothing on the agents needs to be kept; it’ll be set up on demand as needed, so you can use whatever filesystem you want and change it as you feel like.

The controller persists all the data. It is possible to run with a non-persistent disk using Configuration as Code + Job DSL, but I’ve never found it works 100%, so I persist my controller’s directory.
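E.g. my controller pod just mounts a PVC at JENKINS_HOME, roughly like this (sketch, the names are made up):

# controller container (sketch): everything under JENKINS_HOME survives pod restarts
volumeMounts:
- mountPath: "/var/jenkins_home"   # JENKINS_HOME in the official container image
  name: "jenkins-home"
volumes:
- name: "jenkins-home"
  persistentVolumeClaim:
    claimName: "jenkins-home-pvc"  # made-up claim name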

Sorry for the long post/read.

So you use a single PVC for everything related to the Jenkins master (controller?), like plugins, the expanded WAR file, jobs?

And just host node storage (emptyDir) for the disk space needed during agent execution, like the workspace dir, Maven cache etc.?

Yea essentially. Though I have plugins baked into my image so I can roll forward and back.

Ok so you have a Docker build of the master image that downloads the necessary plugins in the Dockerfile so they are all ready when the master is started?
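I’m guessing that would look roughly like this (sketch based on the official jenkins/jenkins image docs, not something I have tested):

# Dockerfile sketch: plugins baked into the controller image at build time
FROM jenkins/jenkins:lts
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt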

Right now I have this in the init script of the master:

jenkins-plugin-cli --war "/usr/share/jenkins/jenkins.war" --plugin-file "/app-config/plugins.txt" --latest false --verbose

using: plugin-installation-manager-tool

so when Jenkins starts it will download the plugins if they are not already available in a PVC dedicated to the plugins.

Not sure if there are any benefits to having them as part of the Docker image compared to the above?