Community reported issues

What’s new in 2.296 (2021-06-01)

66 sunny · 13 cloudy · 14 storm
Community reported issues: 1×JENKINS-69

One thing that confused me a bit while working on a Jenkins upgrade recently was the list of “community reported issues” on the changelog.

At times, like with the issue that occurred in 2.292, it was extremely useful and helped me guess the possible culprit; but very often it refers to random numbers on the bug tracker.

I hoped there was at least some connection between the JIRA issue and the release, only to find out today that this is basically a garbage-in, garbage-out system.

I think we should improve this to somehow let others remove completely bogus entries - this is a highly visible place, I guess?

Yes, this place is highly visible. We were talking about some kind of moderation tools together with @danielbeck and @timja, but it has never been a top priority. The consensus of the last discussion was that we’d rather rework the rating system completely to provide a better UX and to support plugin releases.


The first thing that came to my mind is to provide a simple search interface, something like

(this uses non-free Algolia indexing and contains an insane amount of trackers)

For the LTS releases, I’m also wondering about the “Community feedback”.

  1. The numbers seemingly exploded.
    2.249.1 to 2.263.4 had ~300 votes.
    Suddenly 2.277.1 has 700 votes, 2.289.1 even 1237!
    Are these numbers realistic? Or is there some sort of spam involved?
    (Meaning: Can I trust these numbers?)
    I understand that there have been BIG changes recently. But why should the sums go up so massively?
    (I checked: It doesn’t look like users are switching from weekly releases to LTS, given the numbers on the weekly releases.)

  2. The ratio of “no issues” to “issues” + “rollback” drastically worsened.
    We had (and partly still have) our problems with 2.277.4. But again: Is the ratio realistic? Compared to older releases?

  3. Lots of votes but no (or nearly no) “Community reported issues:”.
    I would have expected that the “not ok” votes and the reported issues should roughly match?
    I understand that I can “force” my vote without giving an issue - but I would expect that users tell WHY they have a problem.
    This helps me to understand whether I’m affected by the reports or not.

  4. Some of the reported issues are oooooolld.
    I’ve seen that in previous releases as well: Some of the reported issues are years old, referring to a muuuch older release. Can this be?

I’m wondering if the community has a problem here or how to interpret the numbers (and issues) in the future.

Any comments/observations from other users?


I think those numbers only reflect the number of user agents that submitted the form. Nothing more, nothing less…

Maybe someone could look into the database entries for it…

The voting system is one of the oldest parts of Jenkins still running - rating/submit.php at 702e5026064977c6de86f8816896d7e2e5ac03e8 · jenkins-infra/rating · GitHub

There are really no restrictions on it; anyone can submit anything.

Sadly, of the billion things people spend time on, this system isn’t one of them.

If someone were to design a new system, it would be very tricky. If you put restrictions on it, you’ll get little data. Nobody is going to want to create accounts and sign in to report issues. If you don’t put many restrictions on it, you get bad data. It’s a hard problem.

@halkeye can I try to add Algolia search there to let people search and select the issues?

I’m sorry, I don’t understand why you’re asking me. Are you asking because I know Algolia? It’s doable because we could feed all Jira issues into Algolia, either by a custom script, or they might be able to do it via DocSearch or some other thing they have. I don’t think Algolia is required, though. Jira’s partial matching is pretty good too.

So what problem are you trying to solve? Make it easier for people to find the issue they want to report? What’s the current flow look like?

I am asking you because that would require changes to that PHP part you have linked to before.

I tried to describe the problem I am trying to solve in the first post in this thread. I was confused by bogus issue numbers entered by the users and then I realized the numbers are meant to be simply entered by hand.

The suggested improvement is to provide a feature to search for the issue by entering a number and/or text. Two approaches are possible here:

  1. Import issues from Jira into Algolia (quick’n’dirty import already done) and attach the Algolia search interface to the form.

  2. Import issues from Jira into Postgres (or rather use foreign table to keep it automatically in sync) and implement search there. A bit more work, but no cloud subscription needed. Happy to work on this too.
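Whichever backend holds the issues, the search behaviour being proposed is simple. A minimal sketch of the "number and/or text" lookup, using an in-memory list with hypothetical issue data as a stand-in for the Algolia or Postgres index:

```python
def search_issues(issues, query):
    """Search issues by number or free text.

    `issues` is a list of {"key": ..., "summary": ...} dicts. A purely
    numeric query matches the issue number exactly; anything else does
    a case-insensitive substring match on the summary.
    """
    q = query.strip()
    if q.isdigit():
        return [i for i in issues if i["key"] == f"JENKINS-{q}"]
    q = q.lower()
    return [i for i in issues if q in i["summary"].lower()]

# Hypothetical issue data, for illustration only.
ISSUES = [
    {"key": "JENKINS-65000", "summary": "Config form broken after table-to-divs"},
    {"key": "JENKINS-64991", "summary": "Agent disconnects after upgrade"},
]
```

In a real deployment this function would be replaced by an Algolia query or a Postgres full-text search, but the form's UX contract stays the same: a number jumps straight to one issue, text returns candidates to pick from.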

One possibility could also be to build some additional feedback loop from Jira to the form (for those who decided to create an account to submit/comment/vote on issues). But I have only a vague idea of how to do this, so I’d rather start with solution 1 or 2 above.

OH OH OH! I see what you’re asking now. The script isn’t mine, it’s KK’s from like 10+ years ago. I only know about it because at one point I tried to dockerize it and put it in k8s.

I thought the problem was that people were finding it funny to submit JENKINS-69 over and over again. How does one actually legit submit to that form? Is there a form somewhere?

I don’t think we need to import issues; the Jira query is really easy. It could do a simple curl call to (you can log in first, or use basic auth) to see if the issue is a real ticket.
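The check above could look roughly like this. A minimal sketch, assuming the public Jira REST v2 issue endpoint; the hostname and error handling are illustrative, not what the rating script actually does:

```python
import re
import urllib.error
import urllib.request

# Assumed Jira instance URL; adjust for the real deployment.
JIRA_BASE = "https://issues.jenkins.io"

def looks_like_issue_key(key: str) -> bool:
    """Cheap local check: is this shaped like JENKINS-<number>?"""
    return re.fullmatch(r"JENKINS-\d+", key) is not None

def issue_exists(key: str) -> bool:
    """Ask Jira whether the issue is a real ticket (HTTP 200 vs 404)."""
    if not looks_like_issue_key(key):
        return False
    url = f"{JIRA_BASE}/rest/api/2/issue/{key}?fields=status"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # Jira answers 404 for unknown issue keys.
        return False
```

The shape check alone already rejects free-form junk like a bare "69" before any network round-trip; the REST call then filters out well-formed but nonexistent keys.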

We talked about this problem in the governance meeting. I think someone needs to come up with the rules first, before we worry about implementation. E.g.: Is the issue closed? Is it a valid issue? Has the issue been updated in X amount of time? Whatever the rules might be, we just need to figure out the rules.

Here is the code that seems to “validate” it, i.e. add it to the database

I think a valid JIRA issue should be good enough. Closed should be fine (for regressions, maybe). Maybe if one needed to select “View as plain text” to enter their favourite number, it would become less fun? But people will need to be able to enter numbers anyway, so the “69” problem is not solvable this way (if this is THE problem indeed).

Funny enough, it came up on Reddit today -


Good public relations piece 🙂 people care about the changelog.

ok, what do I need to do to actually improve things? pull requests to the rating repo? how can I test those things? can this be deployed relatively quickly locally?

ok, what do I need to do to actually improve things?

@MarkEWaite might have other ideas on how to help, but part of the issue is trying to figure out what to do.

pull requests to the rating repo?

Yeah, to get things deployed that’s a minimum.

how can I test those things?

It’s a PHP script, so it shouldn’t be hard to run locally; apparently there’s a Makefile and a Docker image, according to the README.

can this be deployed relatively quickly locally?

I can’t see why not. It’s not in k8s but deployed to one of the older Puppet servers; still, it shouldn’t be hard to deploy for those with access.

I think the rule should be that we ignore any report that references an issue that is closed or resolved. If it is a regression, it won’t be detected as a regression unless the submitter also reopens the bug.

I’d rather not mislead users into believing that mentioning a closed or resolved bug in the ratings system will cause someone to take action.
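That rule is easy to pin down in code. A sketch of the proposed filter, assuming each report already carries the issue's Jira status (the status names and dict layout are assumptions, not the real data model):

```python
# Assumed terminal status names; real Jira workflows may use others.
IGNORED_STATUSES = {"Closed", "Resolved"}

def actionable_reports(reports):
    """Drop reports whose referenced issue is closed or resolved.

    Each report is a dict like {"issue": "JENKINS-123", "status": "Open"}
    with the status already fetched from Jira. A regression only counts
    again once the submitter reopens the bug, i.e. its status leaves
    Closed/Resolved - which matches the rule proposed above.
    """
    return [r for r in reports if r.get("status") not in IGNORED_STATUSES]
```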

Personally, I would probably just block 69; this person has had their fun.

GitHub now has reactions on their changelog, but there are no negative emojis and no way to add extra data.

I think part of what is overlooked in the discussion is the potential simple misinterpretation by the user of the page.

The page headline says
Community feedback:

  • no major issues
  • notable issues
  • required rollback

Then, if one clicks on the symbol to give feedback, they are suddenly prompted for a JIRA issue. As a user, I am not sure if there is a JIRA issue; I know I had a problem / had to roll back, and did not see anything about the issue in the update guide (which I forgot to read) or the changelog (which I skimmed over). I still want to provide feedback, so I MUST enter something.

I also distinctly recall that a long time ago, when I clicked, I was only presented with a JIRA field and OK, no Cancel; or when I tried to Cancel, it said I must enter a JIRA, so I entered a bogus one.

The page also says “Community reported issues:” and a list. As a user evaluating whether to upgrade, I have no idea if the listed issues (if valid) relate to “rollback required”. If there are 200 notable issues (PS: What does that mean? Where are the mundane issues? Other issues? Perhaps a better descriptor?) and 200 rollback issues, how can I tell what the impact of the 3 listed issues was, to assess whether to proceed?

A better approach: if the user clicks sunny, record a +1 vote.

Click cloudy or rainy: present a list of reported issues w/links, whether notable or rollback, plus vote count, AND then options to:

  • re-report (+1) any one of the above listed issues,
  • search for an existing issue (new tab/popout) and enter #
  • create a new issue, and have it added to list
  • or even report “not sure if there’s a JIRA, but I had issues” and whether they rolled back
  • plus Cancel.

Finally, list issues with rollback reported on one line (or first), then notable issues.
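The flow proposed above can be sketched roughly as follows; every name and structure here is illustrative, not the actual rating service:

```python
def handle_vote(weather, known_issues, votes):
    """Decide the next UI step for a weather-icon click.

    weather: "sunny", "cloudy" or "rainy". known_issues: issue keys
    already reported for this release. votes: mutable tally dict.
    Sunny is recorded immediately; cloudy/rainy go through the
    issue-selection step first.
    """
    if weather == "sunny":
        votes["sunny"] = votes.get("sunny", 0) + 1
        return {"action": "record"}
    # Existing reports plus the escape hatches listed above:
    # re-report one, search Jira, file a new issue, "not sure", Cancel.
    return {
        "action": "choose_issue",
        "options": list(known_issues) + ["search", "new", "not-sure", "cancel"],
    }
```

The key design point is that a negative vote is never recorded in isolation: the user always passes through the issue-selection step, yet "not sure" and Cancel keep bogus issue numbers out of the data.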

Also, the answer is always 42, not 69.


@Ian_W I like what you proposed. Hope we manage to make it very simple.

Do you have any insight into how to integrate this better with JIRA? Something like: if the issue gets reported on JIRA first for the particular version, it gets a vote (or at least it ends up on a “quick candidate list”)?
There is also a version link on the issue, but many people refer to LTS versions there, so we kind of have a duality there.

I have not done any JIRA integration, but I would think the most basic step would be an action which runs whenever a (new) issue is entered on the changelog site: a tag (e.g. “community-reported”) would be added to the chosen JIRA issue, plus an LTS tag if it was also reported there. It could also add the release # in a “version link” reference.

How often an issue is reported is not generally reflective of how widespread, how severe, or how readily fixable it is, so I’m not sure what would constitute a “quick candidate list”. Having the “community-reported” tag would allow the JIRA maintainers to create a Dashboard, similar to the table-to-divs Dashboard, for review.

There’s presently no requirement to log in to provide Jenkins release feedback. A +1 on the JIRA vote, or adding the user as a watcher of the JIRA issue, would both require the user to be logged into JIRA, so that’s probably a non-starter.