
Bug#973313: lintian: Salsa CI jobs fail for many sources hosted there



On Tue, Nov 02, 2021 at 11:18:34PM +0800, Shengjing Zhu wrote:
> On Tue, Nov 2, 2021 at 10:04 PM Colin Watson <cjwatson@debian.org> wrote:
> > Ah yes, thanks for finding that.  So I guess the plausible choices
> > (without having checked feasibility) are:
> >
> >  * cherry-pick the docker-default profile into buster's docker.io
> >    package as a stable update
> >  * backport the docker.io package wholesale from bullseye to
> >    buster-backports
> >  * ask Salsa admins to upgrade our runners to bullseye
> >
> > Does anyone have opinions on this?  I've CCed the docker.io package
> > maintainers in case they have any preferences.
> 
> For the docker.io package part, I'm not aware of the Salsa
> infrastructure using this package.  The shared runners are created by
> docker-machine, and the base VM is also provisioned by docker-machine,
> which doesn't install the docker.io package.

Interesting, thanks.  Is it possibly just a matter of configuring
docker-machine to use a bullseye image, then?  Something like this:

diff --git a/roles/gitlab-runner/templates/gitlab-runner.toml.j2 b/roles/gitlab-runner/templates/gitlab-runner.toml.j2
index 173975c..adec15e 100644
--- a/roles/gitlab-runner/templates/gitlab-runner.toml.j2
+++ b/roles/gitlab-runner/templates/gitlab-runner.toml.j2
@@ -35,7 +35,7 @@ listen_address = ":9252"
       "google-project={{ instance.machine_google_project }}",
       "google-zone={{ instance.machine_google_zone }}",
       "google-machine-type={{ instance.machine_google_machine_type }}",
-      "google-machine-image=debian-cloud/global/images/family/debian-10",
+      "google-machine-image=debian-cloud/global/images/family/debian-11",
       "google-disk-size=30",
       "google-disk-type=pd-ssd",
       "google-network=build",

(I'm not deluding myself that I know what I'm doing here, though.)
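
One thing that should be cheap to sanity-check first, assuming access to
the gcloud CLI (I haven't run this myself), is whether debian-cloud
publishes a debian-11 image family at all:

  gcloud compute images describe-from-family debian-11 --project debian-cloud

If that prints an image, then at least the image reference in the patch
above should resolve.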

-- 
Colin Watson (he/him)                              [cjwatson@debian.org]

