Our current build system consists only of static agents running on our local physical machines.
We are now designing a new build system based on dynamic agents, which can be deployed both on local physical machines and on virtual machines in GCP.
We are in a testing phase to determine which plugin (Docker, Docker Swarm, or Kubernetes) best fits our purposes.
However, we have hit a major challenge: scheduling of agents. We still don’t know which component should be responsible for scheduling — the Jenkins controller, or (for example) the Kubernetes scheduler?
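For context, our current understanding (which we would like confirmed) is that with the Kubernetes plugin the Jenkins controller only requests a pod, and the Kubernetes scheduler then decides which node runs it, based on the pod’s resource requests. A minimal sketch of what we are testing — the image and resource values are just examples, not our real configuration:

```groovy
pipeline {
  agent {
    kubernetes {
      // The Jenkins controller provisions this pod on demand;
      // node placement is decided by the Kubernetes scheduler,
      // driven by the resource requests below (example values).
      yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: build
    image: maven:3.9-eclipse-temurin-17
    command: ['sleep']
    args: ['infinity']
    resources:
      requests:
        cpu: "2"
        memory: 4Gi
'''
      defaultContainer 'build'
    }
  }
  stages {
    stage('Build') {
      steps {
        sh 'mvn -B package'
      }
    }
  }
}
```

If that understanding is right, the Jenkins side of scheduling is reduced to matching labels, and everything else moves into Kubernetes.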
In the current build system, we combine the “Scoring Load Balancer” plugin with a Groovy script to schedule builds in the way that suits them best. With dynamic agents in the new build system, this method probably no longer applies, since nodes and agents are created on demand — and we don’t know how to map the Groovy script onto the new scheduling process.
Note that our physical machines have different specs in terms of CPU, memory, and disk. The Groovy script collects information from previous builds (such as the amount of disk storage used), and we use this information when scheduling new builds.
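To make the question concrete, our current heuristic roughly looks like the following — this is a simplified sketch, and `loadPreviousBuildStats` is a placeholder for our own data collection, not the real plugin API:

```groovy
// Simplified sketch of our current static-agent heuristic:
// score each node by expected free disk, using per-job disk
// usage recorded from previous builds (names are placeholders).
def diskUsedByJob = loadPreviousBuildStats()   // e.g. [jobName: bytesUsed]

def scoreNode(node, String jobName) {
    long freeDisk = node.freeDiskBytes
    long expected = diskUsedByJob.get(jobName, 0L)
    // Prefer nodes with enough headroom; a tight fit scores low.
    return freeDisk - expected
}

def pickNode(List nodes, String jobName) {
    nodes.max { scoreNode(it, jobName) }
}
```

With static agents this works well; with dynamically provisioned agents there is no fixed node list to score, which is exactly our problem.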
Could anyone help us with this? Our questions are:
1. Are we headed in the right direction with a combined architecture of physical machines and virtual machines?
2. Which component should be responsible for scheduling?
3. Is there a way to keep using the “Scoring Load Balancer” plugin with dynamic agents running on (for example) Kubernetes?
4. How can we feed the Groovy script’s data about previous builds into the scheduling process of the new build system?
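Regarding the last question, the only mapping we can think of so far is translating the recorded per-job disk usage into the agent pod’s ephemeral-storage request, so that the Kubernetes scheduler accounts for it when placing the pod. A rough sketch — the helper and default values are hypothetical:

```groovy
// Hypothetical: turn recorded disk usage of a job's previous builds
// into a Kubernetes ephemeral-storage request for its agent pod.
long usedBytes = loadPreviousBuildStats()['my-job'] ?: (1L << 30)  // default 1Gi
long requestGi = Math.max(1L, (long) Math.ceil(usedBytes / (double) (1L << 30)))

def podYaml = """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: build
    image: maven:3.9-eclipse-temurin-17
    resources:
      requests:
        ephemeral-storage: ${requestGi}Gi
"""
```

Is this the intended way to carry historical build information into Kubernetes scheduling, or is there a better mechanism (node labels, affinity, etc.)?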