I have a Jenkins server (controller) and some machines that can serve as agents or as independent Jenkins servers.
Jenkins on-premises is a requirement.
I will implement a lot of jobs in this Jenkins. The jobs can be organized into logical groups, like CI jobs, CD jobs, automation tests, etc.
I'm hesitating between putting all jobs in one Jenkins + agents, organized by folders, or building a separate Jenkins server for each group.
The groups are not technically dependent (meaning: CI jobs build artifacts and deploy them to a Maven repository, and CD jobs download and use them, for example).
Advantages of separate servers:
Maintenance time does not paralyze all groups of jobs.
Permissions are easier to manage.
Credentials are defined only where they are used.
A shorter list of plugins makes updates easier.
Reuse via a shared library or external SCM for scripts.
Advantages of one controller + agents:
Jenkins version and plugin versions are managed in one place.
Is there a best practice to answer this question?
Are there other considerations I'm missing?
To your questions: in general, independent of your setup, you should always have agents attached to your controller and offload the work to them. It is not recommended to have executors on the controller.
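For illustration, a minimal declarative pipeline that pins its work to an agent label instead of the built-in node; the 'linux' label and the Maven call are only placeholders:

```groovy
// Minimal declarative pipeline pinned to an agent label so nothing runs on the
// controller. The 'linux' label and the Maven call are only placeholders.
pipeline {
    agent { label 'linux' }
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
    }
}
```

Setting the built-in node's number of executors to 0 in the node configuration enforces that no builds run on the controller at all.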
Then, what do you mean by lots of jobs: are we talking about <100, 100-1000, 1000-10000, >10000?
How many agents do you need to attach?
Who has permissions to create jobs on the controller?
Good pointer! I took the opportunity and configured Discourse to replace the deprecated terms upon typing, to prevent confusion for new Jenkins users, given that these terms have been replaced in our documentation.
In my company we have some Jenkins instances with more than 10,000 jobs, some with over 100 agents. Those are operated by a central team and usually restarted on the weekend. All jobs are generated, and only admins are able to create jobs.
But we also have an offering where each team can get its own Jenkins instance with full control. It's just a matter of requesting an instance, and a couple of minutes later you can start using it.
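To give an idea of what "generated" means here: a minimal Job DSL seed script (assuming the Job DSL plugin; the folder name, job name, and repository URL are made-up examples) could look like this, run by admins from a single seed job so regular users never create jobs by hand:

```groovy
// Seed script for the Job DSL plugin. Creates a folder and a Pipeline job in it.
folder('ci')

pipelineJob('ci/build-my-app') {
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url('https://git.example.com/my-team/my-app.git')  // placeholder URL
                    }
                    branch('main')
                }
            }
            scriptPath('Jenkinsfile')
        }
    }
}
```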
From your advantages of separate servers I only see #1 as valid. For the other points the effort is nearly the same.
2. It depends on which authorization strategy you want to use and how you want to handle permissions. E.g. when you want to give build permissions only to certain people on certain jobs, you must use project-scoped permissions. Then the effort is the same whether you have one Jenkins or many; it might even be less when you use the Role Strategy plugin and make use of permission templates (that's new).
3. Have the different groups in different folders and assign to each folder only the credentials that are for that group (see the credentials sketch after this list). Credentials that are needed everywhere need to be maintained only once when you have a single Jenkins (password rotation).
4. If the jobs of the different teams do more or less the same thing, the set of plugins will not differ much.
5. Shared libraries make no difference whether you run one instance or many. Whether a shared library can be used on many Jenkins instances (e.g. avoid hard-coded credential IDs) or is specific to one instance depends strongly on how it is implemented; see the shared-library sketch after this list.
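Regarding point 3, here is what folder-scoped credentials look like from the job side; the credential ID 'cd-deploy-token' and the upload command are only examples:

```groovy
// Step inside a job that lives in the CD folder. The credential is defined on
// that folder only, so jobs outside the folder cannot resolve this ID.
withCredentials([usernamePassword(credentialsId: 'cd-deploy-token',
                                  usernameVariable: 'DEPLOY_USER',
                                  passwordVariable: 'DEPLOY_PASS')]) {
    sh 'curl -u "$DEPLOY_USER:$DEPLOY_PASS" -T app.war https://repo.example.com/releases/'
}
```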
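And regarding point 5, a sketch of a shared-library step that takes the credential ID as a parameter instead of hard-coding it, so the library stays usable on any instance (vars/deployToRepo.groovy, its parameters, and the upload command are hypothetical):

```groovy
// vars/deployToRepo.groovy -- hypothetical shared-library step.
// The caller passes the credential ID and repository URL, so nothing
// instance-specific is baked into the library itself.
def call(Map args = [:]) {
    def credId  = args.credentialsId ?: error('credentialsId is required')
    def repoUrl = args.repoUrl       ?: error('repoUrl is required')
    withCredentials([usernamePassword(credentialsId: credId,
                                      usernameVariable: 'REPO_USER',
                                      passwordVariable: 'REPO_PASS')]) {
        // Upload the build artifact; the path and curl call are only examples.
        sh 'curl -u "$REPO_USER:$REPO_PASS" -T target/app.jar ' + repoUrl
    }
}
```

A Jenkinsfile on any instance can then call something like `deployToRepo(credentialsId: 'team-repo-creds', repoUrl: 'https://repo.example.com/releases/')` with whatever credential ID exists there.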
I corrected the original post to use the terms controller and agents.
I will have 100-1000 jobs, and 50-500 old builds should be kept for each job.
With the option of several Jenkins instances, I can still use controllers and agents, but I will use some machines as controllers and lose them as executors.
I have 15 machines to play with.
The DevOps team are admins in my Jenkins, other users can read, and some groups have permissions to create certain jobs. I'm using permission templates.
If I use several instances, I will define permissions with basic definitions; otherwise I will use permission templates, which are a little more difficult to define and manage.
I take your point about credentials; I will use that approach either way.
I'm thinking of separating the instances by what they do, not by teams: CI jobs, CD jobs, automation testing, and others. So maybe the plugins will differ.
That is an unusual way of splitting things up. Consider that people then need to remember more than one Jenkins instance. Also, when you want to automate further, e.g. after a CI run you want to trigger the automation tests and, if successful, the CD jobs, this is much easier when all the jobs that belong to the same source project are on the same Jenkins. You can even put all of that in a single pipeline job.
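For example, that chain can be expressed as a single declarative pipeline, with the CD part either inlined or triggered as a downstream job (the agent label, the commands, and the downstream job name are made up):

```groovy
// One pipeline covering CI, automation tests, and CD for the same source project.
pipeline {
    agent { label 'linux' }   // example label
    stages {
        stage('CI: build and publish') {
            steps { sh 'mvn -B clean deploy' }
        }
        stage('Automation tests') {
            steps { sh 'mvn -B verify -Pintegration-tests' }
        }
        stage('CD: deploy') {
            steps {
                // Either inline the deployment here or trigger a downstream job.
                build job: 'cd/deploy-to-staging', wait: true
            }
        }
    }
}
```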