Request for advice on accessing a GitHub repo within a declarative pipeline

Hi, I am new to Git and so would appreciate some advice.
I need my declarative pipeline to clone a GitHub repo and to do some work on the files it contains. My Jenkinsfile is stored in Subversion because:

  1. Our company uses Subversion, not Git
  2. I can’t store the Jenkinsfile in GitHub for security reasons.

Currently the pipeline is very simple. It uses the ‘checkout’ step to check out the GitHub repo into the WORKSPACE. The next steps are:

  1. Detect whether there have been changes to the repo since the last time the job ran
  2. If there are changes, run a Python script to act on those changes
  3. Add files generated by the Python script to the GitHub repo.

Please will you give me some guidance on how to perform these 3 steps?

Also, just for my understanding, is the checkout the same thing as a ‘clone’?

  1. Detect whether there have been changes to the repo since the last time the job ran

If you can use a GitHub webhook to trigger the pipeline on a push/merge event, that’s better than polling for changes every X minutes.

Otherwise, you can archive a status file containing the latest commit hash (e.g. commit=3cd0a527), compare it against the current hash on the next build, and update the status file at the end of a successful run.
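As a sketch, the comparison can run in a sh step like the one below. The status file name is an assumption (in a real job you would archive it, or copy it from the previous build); the demo initialises a throwaway repo so it is self-contained.

```shell
# Status-file approach: compare the stored commit hash with HEAD.
# The throwaway repo below stands in for the checked-out workspace.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q .
git -c user.email=ci@example.com -c user.name=jenkins \
    commit -q --allow-empty -m "initial commit"

STATUS_FILE=last_commit.txt                 # assumed name, archived between builds
LAST=$(cat "$STATUS_FILE" 2>/dev/null || echo none)
CURRENT=$(git rev-parse HEAD)
if [ "$LAST" != "$CURRENT" ]; then
    echo "changes detected since $LAST"
    # ... run the Python script on the changes here ...
    echo "$CURRENT" > "$STATUS_FILE"        # update only after a successful run
else
    echo "no changes"
fi
```

On the first build the status file is missing, so the job treats everything as changed; subsequent builds skip the work unless HEAD has moved.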

  1. If there are changes, run a Python script to act on those changes

For a declarative pipeline, run the script from a stage block - pipeline-stage - e.g.:

stage('Build') {
    steps {
        sh 'python /path/to/my/python/script.py'
    }
}
  1. Add files generated by the Python script to the GitHub repo.

You can push with stored credentials and the sshagent step - stackoverflow-example
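Once the credentials are in place, step 3 is a plain add/commit/push. A self-contained sketch (remote name, branch, and file names are assumptions; a local bare repo stands in for GitHub so the demo runs anywhere):

```shell
# Commit files generated by the script and push them back to the remote.
set -e
work=$(mktemp -d)
git init -q --bare "$work/origin.git"        # stands in for the GitHub remote
git clone -q "$work/origin.git" "$work/repo"
cd "$work/repo"
git -c user.email=ci@example.com -c user.name=jenkins \
    commit -q --allow-empty -m "initial commit"
git push -q origin HEAD:main

echo "generated output" > results.txt        # stand-in for the Python script's output
git add results.txt
git -c user.email=ci@example.com -c user.name=jenkins \
    commit -q -m "Add files generated by pipeline"
git push -q origin HEAD:main                 # with sshagent, this would hit GitHub
```

In the real pipeline you would wrap the push in sshagent(['your-credential-id']) { sh '...' } so git can authenticate against GitHub.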

Also, just for my understanding, is the checkout the same thing as a ‘clone’?

Yes - for a Git repo, the checkout step performs a clone on the first build, and a fetch into the existing workspace copy on subsequent builds.