I am using Jenkins Version 2.479.2, and it is not running inside a Docker container.
I want to know how I can change the build result of a completed build from SUCCESS to FAILURE.
Here is my situation:
I have an upstream job that starts a downstream job.
The downstream job depends on the upstream job because it gets the current build number from it.
The downstream job takes about 45 minutes to finish (it is a testing job).
Because of this delay, a queue is building up for my upstream job, even though its own work has already finished.
I thought of two possible solutions:
Using wait: false when starting the downstream job.
Making the downstream job watch the upstream job and then execute.
But in both cases, the upstream job’s final status does not depend on the downstream job’s result. I need a way to change the upstream job’s status after the downstream job finishes.
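For reference, the first option would look roughly like this in a scripted pipeline (the job name downstream stands in for my real job):
build(job: 'downstream', wait: false)   // returns immediately; this build never sees the downstream result
build(job: 'downstream')                // default: waits for completion and propagates a downstream failure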
I’m assuming you run pipeline jobs and use the build step.
By default the step waits for the downstream job to complete and propagates its errors.
So I assume your intention is that no queue builds up. The question is whether you allow concurrent builds for the upstream job or not.
If you allow concurrent builds, your problem might be that you execute the build step inside a node block, or that you use a declarative pipeline with a global agent definition. In both cases the executor for the upstream job on that node stays occupied. If you move the build step outside of the node block, nothing will block.
A sample for a scripted pipeline could be:
node('mynode') {
    // build the application (the actual build command is a placeholder)
    sh 'make build'
}
// test it without blocking an executor
build('downstream')
We do this with declarative pipelines where the parent job calls several child jobs. The parent job does not consume an executor since it only calls other pipelines. The parent job is ‘running’ without occupying a worker, and it will be marked as failed if any of the child jobs fails. I don’t think you can easily do that with a classic job as the parent. The child job could be a classic job, but I have not tried that use case.
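A trimmed-down sketch of such a parent (the job names child-build and child-test are placeholders):
pipeline {
    agent none   // the parent consumes no executor while it waits
    stages {
        stage('Build') {
            steps {
                // blocks until the child finishes; a child failure
                // propagates and marks this parent build as failed
                build job: 'child-build'
            }
        }
        stage('Test') {
            steps {
                build job: 'child-test'
            }
        }
    }
}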
I have a pipeline job where the entire job runs on a different agent, not on the built-in node.
In my pipeline, there is a testing stage after the build process. This testing stage triggers a different job, a freestyle job, that runs on a different agent.
With my current setup, when the downstream job fails, it does fail the upstream job. But since the testing (downstream) job takes a long time, the next build has to wait a lot before starting.
If I don’t wait for the downstream job to finish, then I have no way to change the result of my upstream job afterwards.
I do not have the concurrent job option enabled. I think Jenkins creates a new workspace for a concurrent job because the existing workspace is already in use.
For example, Jenkins would create build-workspace and build-workspace-1. My understanding is that it is something like this. Please correct me if I am wrong.
My concern is that this would use a lot of disk space to store redundant data. That’s why I decided not to enable the concurrent job option for my pipeline.
As I mentioned, my entire pipeline runs on a different node, not the built-in node. Also, the test stage alone runs on a separate agent, where it copies the artifacts from the upstream job.
Do you think there is a way to let the next build start once my current build has reached the test stage?
Whether a new build starts also depends on the number of executors you have on your agents.
Yes, if a job runs in parallel on the same agent (when the agent has 2 or more executors), Jenkins will create a separate directory on the agent for the second run (usually it ends with @2).
Pipeline jobs run concurrently by default; you have to explicitly disable concurrent execution.
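If you do want to disable it, a minimal declarative example looks like this:
pipeline {
    agent any
    options {
        disableConcurrentBuilds()   // serialize runs of this job; extra builds wait in the queue
    }
    stages {
        stage('Build') {
            steps {
                echo 'only one build of this job runs at a time'
            }
        }
    }
}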
We would need more information about how many agents you have in total in your Jenkins and what you want to achieve exactly. Setting up agents and the number of executors you assign them strongly depends on what you are doing.
I have a Jenkins instance with 30 agents, all with a single executor; the agents all share the same label and build one huge C/C++ project. The agents have 120 CPUs and 360GiB memory. We can’t run 2 builds in parallel on them, and a single build usually lasts between 1 and 3 hours. Then I also have 2 agents with 15 executors each on a machine with just 8 CPUs. Those agents run small scripts that don’t take very long and don’t need much memory.
I’m not entirely sure I follow your workflow, or what you mean by the jobs having to wait.
In the example I gave, we have multiple parent and child jobs running in parallel every single day. The parent pipelines do not take any executor (not even on the controller) and just wait in the background until the child jobs finish.
Say the parent job is P and the two children are D and T.
Through the day we will see Bob starting P#42 (build number 42), which in its log will start D#111, wait for it to complete, and when it finishes start T#50. If D#111 had failed, T would not have been started and P#42 would have been marked as failed.
While this was happening, Alice might have started P#43, starting D#112, but it was faster and triggered T#49 before T#50 was triggered for Bob’s workload.
All jobs run in parallel with each other, except that D and T are sequential within a workflow. But you can make them run in parallel with a parallel {} stage block if you want, as sketched below.
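If D and T were independent of each other, that could look roughly like this (same placeholder job names as above):
pipeline {
    agent none
    stages {
        stage('Children') {
            parallel {
                stage('D') {
                    steps { build job: 'D' }
                }
                stage('T') {
                    steps { build job: 'T' }
                }
            }
        }
    }
}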
The child jobs are properly linked to their parent jobs. This kind of tracking is not possible if the parent job is a classic job AFAIK, but it works fine with declarative pipelines.
Yes, starting a declarative pipeline job in parallel will create multiple workspace subfolders (@2, @3, …) if the Jenkinsfile has steps that actually run on the controller. But if all you do is call build(), it should not create a full workspace folder on the controller.
On the other hand, it will create a folder for each pipeline start under the controller’s ~/workspace directory, which can be very inefficient. This stays manageable as long as your pipeline starts are staggered so that a few seconds always pass between each pipeline start (per job); you can enforce that with the Throttle Builds setting (6/minute would enforce a 10 s gap).
If you do not parameterize your branch names, you can also use the “Lightweight checkout” option. If you do use a parameter variable like $BRANCH, there is a git plugin bug that will not expand the variable. Our workaround is to massage the git client on the controller through a shell wrapper that performs a sparse clone, so that the git folder and the expanded workspace contain far fewer files.
I know it sucks, but our controllers ran out of disk space and inodes because of these plugin bugs, and the maintainer said they are unlikely to get fixed. Using a multibranch pipeline would fix “Lightweight checkout”, but at the cost of hundreds, if not thousands, of duplicate workspace folders, which would make the disk space issue much worse.
Jenkins is very clunky in some areas, especially around Git, and especially bad if you have a mono repo. We have a bunch of custom workarounds that make things viable and I understand it might be hard for many.
Unfortunately these under-the-hood behaviors are not clearly documented in the user-facing documentation (I’m sure they are clear to those who read the Java source code), and these tricks have to be learned the hard way.
Remember that Jenkins was initially designed around Perforce, which gave Jenkins its WORKSPACE terminology. So the support for newer SCMs like Subversion and Git did not always translate well, and there are still issues today. I’m digressing beyond your original topic, but I’m clarifying why you end up bending over backward to NOT run a specific job in parallel, just to avoid crashing your controller.
I will explain my situation in more detail. Let’s call my main job “J-main” and the test job “T-job”.
I have two agents:
Agent M (0/3) → It has 3 executors
Agent T (0/1) → It has 1 executor
When I start J-main, it runs on Agent M. After reaching the Test stage, the J-main job calls or triggers the T-job, which runs on Agent T.
What I need:
Once J-main hands off to Agent T to run T-job, my Agent M just sits waiting until T-job is complete. The problem is that J-main takes only 20 minutes before the test stage, but the test stage takes another 20 minutes.
This means that my next J-main build has to wait for T-job to finish, delaying the process. I want to start the next J-main build while the previous T-job is still running.
Yes, you can achieve this with Jenkins, but only with pipeline jobs.
It is not possible with freestyle jobs.
You must ensure that the J-main job allows concurrent execution (this is the default for pipeline jobs).
This is basically what @sodul and I already wrote before.
Then a declarative pipeline along the following lines does what you want (a sketch; the label agent-m, the build command, and the artifact pattern are placeholders for your setup):
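pipeline {
    // concurrent builds are allowed by default for pipeline jobs,
    // so do not add disableConcurrentBuilds() here
    agent none
    stages {
        stage('Build') {
            agent { label 'agent-m' }
            steps {
                sh 'make build'                          // placeholder build command
                archiveArtifacts artifacts: 'build/**'   // whatever T-Job copies later
            }
        }
        stage('Test') {
            // no agent here: the executor on agent-m is already released,
            // so the next J-main build can start while T-Job runs;
            // a T-Job failure still fails this J-main build
            steps {
                build job: 'T-Job'
            }
        }
    }
}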
With 3 executors you can already run up to 3 J-main builds in parallel. Since agent T has only 1 executor, you will get queueing on that agent whenever you start more than one run per 20 minutes over a longer period. This can be OK, but if you start too many, it might delay feedback to developers significantly.
If the problem is that you can’t run the tests in parallel on one machine, you might consider adding a second agent to execute tests. Both agents should then have the same label, and you refer to that label in the T-Job pipeline instead of the agent name itself.
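Referring to the label instead of a node name would look like this (test-agents is an assumed shared label on both test machines):
pipeline {
    agent { label 'test-agents' }   // any agent carrying this label may run the tests
    stages {
        stage('Test') {
            steps {
                sh './run-tests.sh'   // placeholder test command
            }
        }
    }
}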