Incorrect disk space size for built-in node on Kubernetes

Hi,

I am deploying Jenkins on Kubernetes using the latest version of the official image.

I have an issue with how the /computer route reports the disk size for the built-in node. I provision a 4GB PVC for the deployment; here is the deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-jenkins
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testing-jenkins-server
  template:
    metadata:
      labels:
        app: testing-jenkins-server
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      containers:
        - name: testing-jenkins
          image: jenkins/jenkins:lts
          env:
            - name: JAVA_OPTS
              value: "-Djenkins.install.runSetupWizard=false"
          resources:
            limits:
              memory: "1000Mi"
              cpu: "250m"
            requests:
              memory: "500Mi"
              cpu: "125m"
          ports:
            - name: httpport
              containerPort: 8080
            - name: jnlpport
              containerPort: 50000
          livenessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          volumeMounts:
            - name: testing-jenkins-data
              mountPath: /var/jenkins_home
      volumes:
        - name: testing-jenkins-data
          persistentVolumeClaim:
              claimName: testing-jenkins-pvc

However, this is what I see when I view the nodes.

That is the total free space the Kubernetes node has, but I would rather see the 4GB persistent volume that I assigned to the deployment.

I am not sure if this is a bug, but I would be grateful if someone has a resolution.

Could you run kubectl get pv and share the output? Also kubectl describe pv {the_pv_jenkins_claimed}

Result of “kubectl get pv”

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                STORAGECLASS                    REASON   AGE

testing-jenkins-pv                         4Gi        RWO            Retain           Bound       jenkins/testing-jenkins-pvc                          testing-jenkins-sc                       47s

Result of “kubectl describe pv testing-jenkins-pv”

Name:              testing-jenkins-pv
Labels:            <none>
Annotations:       pv.kubernetes.io/bound-by-controller: yes
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      testing-jenkins-sc
Status:            Bound
Claim:             jenkins/testing-jenkins-pvc
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          4Gi
Node Affinity:     
  Required Terms:  
    Term 0:        kubernetes.io/hostname in [worker1]
Message:           
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /data/jenkins-deployments
Events:    <none>

Hi, I believe the reason for this issue is that your PV type is LocalVolume.

To eliminate this, I set the deployment's storage as ephemeral:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-jenkins
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testing-jenkins-server
  template:
    metadata:
      labels:
        app: testing-jenkins-server
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      containers:
        - name: testing-jenkins
          image: jenkins/jenkins:lts
          env:
            - name: JAVA_OPTS
              value: "-Djenkins.install.runSetupWizard=false"
          resources:
            limits:
              memory: "1000Mi"
              cpu: "250m"
              ephemeral-storage: "4Gi"
            requests:
              memory: "500Mi"
              cpu: "125m"
              ephemeral-storage: "4Gi"
          ports:
            - name: httpport
              containerPort: 8080
            - name: jnlpport
              containerPort: 50000
          livenessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3

However, that didn’t fix it either. It’s not just the disk space: the fact that it also displays the Kubernetes node’s operating system as the built-in node’s architecture tells me there is a place in the code that connects the node’s system information to this page.
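As far as I can tell, part of this is expected for any containerized process: containers share the node's kernel, so generic OS/architecture queries report the node's values. A quick illustration (a generic sketch, not Jenkins' actual code; the controller JVM presumably reads similar system properties):

```python
import os
import platform

# A containerized process shares the host node's kernel, so generic
# OS/architecture queries return the node's values, not anything
# container-specific. The Jenkins built-in node page presumably draws on
# the same kind of system information from the controller JVM.
print(platform.system())   # kernel name, e.g. Linux
print(platform.machine())  # hardware architecture, e.g. x86_64
print(os.uname().release)  # the node's kernel release, not the container's
```

Running this inside the pod would print the Kubernetes node's kernel and architecture, matching what the built-in node page shows.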

I think you're doing the right thing using ephemeral storage, because if you use the local PV type in Kubernetes, it will effectively use all available local storage on the node without being constrained by the size specified in the PV definition. However, I am not sure what you are trying to explain here.

Are you saying that even when you used ephemeral storage, Jenkins still shows 306G of storage?

Also, is it OK to set memory and CPU together with ephemeral-storage? Maybe try leaving only ephemeral-storage? (I am not sure about this, because I have not used ephemeral storage before; I just googled, and all the examples have only ephemeral-storage.)
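To illustrate the local-PV point: free-space queries report the capacity of the filesystem backing a path, and a local PV is just a directory on the node's disk. A minimal sketch using statvfs (roughly what Java's File.getUsableSpace, which disk-space checks like Jenkins' boil down to; the "/" path here is a stand-in for /var/jenkins_home inside the pod):

```python
import os

# statvfs reports the capacity of the filesystem backing a path, not any
# per-directory quota or PV-declared size. For a local PV (a plain
# directory on the node's disk), that is the whole node filesystem.
st = os.statvfs("/")  # stand-in for /var/jenkins_home inside the pod
total_gb = st.f_blocks * st.f_frsize / 1024**3
free_gb = st.f_bavail * st.f_frsize / 1024**3
print(f"total={total_gb:.1f}G free={free_gb:.1f}G")
```

So a 4Gi local PV never shows up as 4Gi: the pod sees the node's full filesystem.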

The initial problem still persists.

Even when using ephemeral storage, the built-in node reports the Kubernetes node's information rather than the Jenkins pod's. I do not want users to see how much free space the Kubernetes node has, because I have already assigned 4GB to the Jenkins pod and it is supposed to be running in a container.

I have even tried assigning the ephemeral storage based on the official instructions here.

          volumeMounts:
            - name: test-hello-jenkins-data
              mountPath: /var/jenkins_home
      volumes:
        - name: test-hello-jenkins-data
          ephemeral:
            volumeClaimTemplate:
              metadata:
                labels:
                  type: testing-jenkins-volume
              spec:
                accessModes: [ "ReadWriteOnce" ]
                storageClassName: "testing-jenkins-sc"
                resources:
                  requests:
                    storage: 4Gi

Still no luck!