
Re: Where to upload the Octavia image for Bullseye? Should I continue within the team?

Hi Noah,

Thanks for your answer.

On 4/27/21 6:26 AM, Noah Meyerhans wrote:
> I don't think it was unilateral.  There was never consensus within the
> team that we should be offering application-specific images in the first
> place, and then there were technical issues with the implementation.

As I recall, we decided at MIT to build the Octavia images. Debating
this *again* is highly demotivating and frustrating, and it gives me
more reasons to do things elsewhere, which is exactly what I'm
considering...

Anyway, I am not asking the team for permission to build Octavia
images and publish them. It's going to happen, even if the team doesn't
like it. Nobody has the right to forbid me from doing it, as you wrote
(below).

The question is only how: how much of this will happen within the team,
where it should be built, how I should publish it, etc.

At this point, publishing my work has been denied twice, so I believe it
is very legitimate to ask the team what it thinks I should do.

>> I probably should just leave the team, and do my work on an unofficial
>> debian.net domain. It probably will be a lot more restful, and I'll be
>> free to do what I want. It feels bitter to write that I should move to
>> a debian.net domain, knowing that I was the first person in Debian
>> building cloud images, and that I've been doing so for the last 3
>> releases. Then, will I still have the right to call the generated
>> images "official"? Please let me know, that's important for me. I
>> don't think I'm left with many options. The more I think about it, the
>> more it feels like that's the easy path for me.
> Obviously that's something that you're free to do, but I'd argue
> strongly that doing so would ultimately create more work for you and
> more confusion for our users.

Yes, agreed on the "confusion for our users" part.

I don't agree with the "more work for you" part, at least for the
Octavia images, where I want to build automation that avoids the manual
work.

And then, if I do it for the Octavia images, it would take just a few
minutes to reproduce it for the "normal" OpenStack images as well...
> This team would become "cloud except
> openstack", and your OpenStack work would naturally diverge from what
> we produce. The experience would remain fairly consistent across the
> non-OpenStack environments, but OpenStack users may experience something
> entirely different. Or worse, some OpenStack users will run our images
> (I assume we would continue generating them) and others would run your
> alternative images, resulting in a different experience even for people
> running within ostensibly the same cloud environment.


Which is why I forced myself to continue within the team for so long.

>> I'm also thinking about continuing to generate my own OpenStack images
>> on my own (not only the Octavia ones), as I am strongly of the opinion
>> that everything is becoming *WAY* over-engineered in the current
>> images, to the point that the Python code of the team has become
>> unreadable. Just look at the numbers. We're up to:
>> - 68 .py files
>> - 5k lines of Python
>> - 93 files in the config_space folder, many having cryptic content
>> - I haven't dug into FAI, but it is itself ... big! 254 files...
>> Compare this to the simplicity of a single shell script of 2k lines,
>> out of which 500 are just argument parsing, and another 645 are there
>> just to handle weird networking for bare metal installation...
>> (so basically, 800 lines of shell script).
> I don't think that's remotely a fair comparison.  The code in the
> debian-cloud-images provides extremely useful integration with GitLab,
> allowing us to generate images for multiple Debian releases for multiple
> architectures targeting multiple different cloud environments (including
> OpenStack!) on a daily basis.  It publishes these images to multiple
> commercial clouds, while also making them available for direct download
> via cloud.debian.org.  It provides a simple "1 click" interface for us
> to publish new release images to several commercial clouds, including
> updating marketplace listings where feasible.  It provides a testing
> framework that lets us validate the contents of the images we generate
> before we publish them.  That's more than a shell script's worth of
> work, and I don't think it's over engineered at all.

I have no doubt that what you say above is true.

Though what improvement, compared to the pre-debian-cloud-images world,
do you see for the OpenStack world?

Also, I still don't see myself contributing to what this has become.
I do take it as a huge personal failure, by the way, especially on the
technical level. Though hopefully, some team members will agree that the
source code isn't exactly simple and clear, and can be a cause of
demotivation. And believe me, I've read a lot of Python code while
maintaining OpenStack...

>> I also increasingly doubt that this team is going in the direction of
>> promoting free software, and a free cloud.
> Frankly, I think you're out of line with statements like that.  Everybody
> involved in the Debian cloud team has committed significant portions of
> their lives to Free Software, cloud and otherwise.  There are multiple
> members of the team whose debian.org accounts date to the previous
> century.  Collectively, the people in regular attendance at the team
> meetings probably have 75 years of commitment to Debian and Free
> Software.

I am convinced that individual members of the cloud team have
"committed significant portions of their lives to Free Software", as you
put it, and it's been great to know you all. I never contested that, and
indeed, I'd be "out of line with a statement like that" if that was what
I meant. Though that's not what I wrote. I've been critical of the
team's output, not of individual team members.

What the team produces is something different. The "cloud team" is now a
"big 3 providers" team, where every effort goes in the direction of
making things good for them. I never saw myself continuing in that
direction, and increasingly, I feel like a misfit here. Yes, it's
probably all my fault, as I should have put more effort and time into
this.

Though remember: I was here first, and had everything working on
Casulana with Steve. If nothing had changed since, it would be a better
world for me... Why should I be forced to rethink everything from
scratch, and relearn tooling from others, when everything used to work
so well? It's not motivating at all.

>> I'm not sure why the word
>> OpenStack seems completely banned from our team. A simple fact will
>> probably make the team understand what kind of mindset I am in. If you
>> grep "openstack" in our debian-cloud-images (master), here's what you
>> get:
>> debian-cloud-images (master)$ grep -r -i openstack *
>> debian/control:  * OpenStack
>> doc/details.md:Example 2 genericcloud (OpenStack):
>> The first entry, where "OpenStack" is written, is useless because the
>> package is gone from Bullseye.
> It's really not relevant that the package isn't in bullseye. It never
> made sense there, and was impossible to keep up-to-date.

That's the way the team sees it; we had completely different approaches.
My idea was to provide a build tool within Debian itself, and use that
for the lifetime of the release. I love the fact that the OpenStack
images for Stretch were built with the openstack-debian-images package
from Stretch, the images for Buster with openstack-debian-images from
Buster, and so on. I very much dislike the fact that our tooling may
evolve, and possibly change the way the stable or oldstable images are
built.

So yeah... the way the cloud image team is doing things probably does
make sense. But that's not how I wanted things to be done, and I do not
agree with that way. And since I'm probably the only one in the team
feeling that way, I should just shut up, and ... do things separately,
my way.

By the way, the reason I mentioned the package was just to say that the
occurrence of the word "OpenStack" in the package can be discarded,
because the package isn't in Bullseye. That leaves us with a *SINGLE*
occurrence of the word in the whole repository... And it doesn't even
explain what the Generic images are.

Again, yeah, I probably should have invested time in fixing that. But at
the same time, that's not fair. The OpenStack images were there first,
on cdimage.d.o, under the name OpenStack. That worked out very well for
3 releases. Then someone else decided (for good or bad reasons, I'm not
debating that...) to call the new image "generic". Shouldn't that
someone automatically also be in charge of explaining and documenting
that fact to our users? What's been done instead forces this onto me,
loading me with work that I don't have the time (or the motivation) for.

> People who
> want to build their own images will have a better experience if they use
> the same code and configuration that this team uses, which is available
> on salsa.  This was discussed on debian-release.  

It would have been much nicer if our users could have the Bullseye
version of the tooling within Bullseye, rather than something that may
change over time. Isn't our CD image tooling following this pattern
(Steve should know)?

>> Is the 2nd one published anywhere? We've
>> gone from "cdimage.debian.org/cdimage/openstack" to a thing called
>> http://cdimage.debian.org/cdimage/cloud where the word "OpenStack" is
>> never even mentioned anywhere. OpenStack is *not* a swear word. It's ok
>> to call our OpenStack images what they should be called...
> OpenStack is mentioned in multiple places on
> https://{cloud,cdimage}.debian.org/cdimage/cloud

Please point to where it is explained that OpenStack users should use
the generic image. It is nowhere! Two years into it, and it still
doesn't show. If one goes to
https://{cloud,cdimage}.debian.org/cdimage/cloud, all one sees there is
the link to the "OpenStack" folder for "my" legacy images.

I acknowledge that this probably should have been done by the person
contributing that change. Though again, it's not motivating...

>> Our Debian users do not understand it, and I have to (very often)
>> explain on IRC... An occurrence of this explanation happened just today,
>> and that pushed me to write this mail.
> Is the issue that the image files themselves have "generic" rather than
> "openstack" in their name? If so, this is literally the first time that
> I can recall you raising this as an issue. If it has been a problem for
> so long, why have you not opened a bug, a salsa issue, or (better yet) a
> merge request proposing an alternative name?
> At this point, we've been generating and documenting the "generic"
> images for long enough that I'm not sure changing their names is a great
> idea. It may be best to clarify our documentation around what images
> target OpenStack.

I'm not completely sure what's wrong and how to fix it, but I know
something is wrong, as nobody understands it. Remember MIT in Boston?
Our host himself didn't understand...

I agree that renaming the "generic" and "genericcloud" images shouldn't
happen at this point, but at least the naming must be explained
somewhere.

>> What should I do for Bullseye? Ask Steve to continue generating "my"
>> OpenStack images so they stay in the debian.org domain, and maybe add
>> images for appliances like Octavia? Or open a new debian.net hostname,
>> and do it unofficially? Or generate only unofficial Octavia images? As
>> Bullseye is coming, I'm leaning toward doing everything on my own.
>> Your thoughts? Nothing is written in stone yet, I will try (again)
>> to listen to advice, as I really feel lost in the team at the moment.
> To reiterate what I said earlier, you are of course free to build
> whatever you want.  However, in the interest of providing Debian users
> with the best possible experience, I don't think that's the best course
> of action for you to take.
> Following the most recent team meeting, Bastian implemented one of the
> features that you've long requested for OpenStack users in
> https://salsa.debian.org/cloud-team/debian-cloud-images/-/merge_requests/249/diffs
> (Note that the submodule in the project that actually runs this code
> hasn't been updated yet, so the symlinks aren't being created yet, but
> we can fix that soon).  So I don't think your despair is entirely
> warranted.

Many thanks for that.

I have to admit I looked at the merge request, and didn't understand it.

> Regarding the Octavia images, I remain unconvinced that we need to be
> generating them, but if you truly believe it's important, then I suggest
> that you do two things: 1. Document exactly what is needed in the image
> and why it's necessary to generate an application specific image rather
> than have users install after the fact (or generate their own images,
> which isn't hard). 2. Create a change to the debian-cloud-images
> repository and run the build on salsa to generate the images.  Show that
> it works and that it generates something of reasonable quality that is
> targeted to a particular need.  As I recall, the issues with it in the
> past were that the proposed images ended up including lots of
> unnecessary packages, making them large and complex.  If you can provide
> a compelling reason and a good implementation, then I think you'll find
> it easier to convince people.
> noah

Let me explain.

Octavia images (load-balancer-as-a-service images), like the normal
images, need to be updated for security bugfixes. Typically, it is the
responsibility of the cloud provider to do so.

As a public cloud provider myself, I do not see myself constantly
watching the security fixes in the current release to know what's going
on, and then manually:
- building an Octavia image
- uploading the image
- doing the "openstack loadbalancer failover" dance

What I want (and hopefully will very soon) to implement is:
- automatic builds of the Octavia image in a public place, with rebuilds
*ONLY* when a package is updated
- a script that automatically detects new versions of the image. I
thought about using something like uscan to do the work... When such an
image is detected, it is automatically uploaded to the public cloud
deployment, followed by a failover of the Octavia amphora VMs (that is:
spawn new VMs with the new image, and have them take over the VRRP port
gracefully to avoid any downtime)
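To make the idea above concrete, here is a rough sketch of what that
detect/upload/failover loop could look like as a shell script. The image
URL, the checksum file naming, the state file path and the "amphora"
Glance tag are all my assumptions for illustration; only the
`openstack image create` and `openstack loadbalancer failover` commands
are real client commands.

```shell
#!/bin/sh
# Sketch only: sync a new Octavia amphora image and roll it out.
# IMAGE_URL, the ".sha512" companion file and STATE_FILE are assumptions.
set -eu

IMAGE_URL="${IMAGE_URL:-https://example.org/octavia/debian-11-octavia-amd64.qcow2}"
STATE_FILE="${STATE_FILE:-/var/lib/octavia-sync/last-checksum}"

# True (exit 0) when the published checksum differs from the one we
# recorded at the last upload, i.e. a new image build exists.
is_new_version() {
    [ ! -f "$STATE_FILE" ] || [ "$(cat "$STATE_FILE")" != "$1" ]
}

sync_amphora_image() {
    remote_sum="$(curl -fsSL "${IMAGE_URL}.sha512" | cut -d' ' -f1)"
    is_new_version "$remote_sum" || return 0

    curl -fsSL -o /tmp/amphora.qcow2 "$IMAGE_URL"
    # Upload to Glance; Octavia selects amphora images by tag
    # (the amp_image_tag setting in octavia.conf).
    openstack image create --disk-format qcow2 --container-format bare \
        --tag amphora --file /tmp/amphora.qcow2 "amphora-$(date +%Y%m%d)"

    # Failing over each load balancer respawns its amphora VMs from the
    # new image; the VRRP VIP moves over gracefully, avoiding downtime.
    openstack loadbalancer list -f value -c id | while read -r lb_id; do
        openstack loadbalancer failover "$lb_id"
    done

    # Record the checksum only after a successful rollout.
    printf '%s\n' "$remote_sum" > "$STATE_FILE"
}

# Only run the full sync when explicitly asked, so the helpers above can
# be sourced and reused.
[ "${OCTAVIA_SYNC_RUN:-0}" = 1 ] && sync_amphora_image || true
```

Run with OCTAVIA_SYNC_RUN=1 from cron or a systemd timer; since the
checksum is only recorded after a successful rollout, a failed run would
simply be retried on the next tick.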

This is something that I need *now* (or rather, within a few months from
now, before we declare the Infomaniak public cloud "in production").

This need isn't unique to me. Anyone deploying Octavia will need this.
And I'd like to share all of it as free software.

I already sent a merge request TWICE. The second time, I wasn't told
that the merge request itself was wrong, but that the octavia-agent
package has wrong dependencies. I do not contest that; I'm saying that
the timing is very bad, and that at this point in the Bullseye release
cycle, I will *not* be able to fix it. Apart from the "hey, too many
dependencies", I haven't heard any other criticism.

I cannot afford to wait for another Debian release cycle, so either this
happens within the team (great), or I *must* (as in: my boss wants
this...) fix it another way. I do not want that "other way" to be done
in private, and I will share it as free software.

Again, I feel very uncomfortable writing all the things in this message,
and it feels like admitting personal failure. I had hoped I could figure
out how to fix things, but years are passing, and I haven't. So it's
probably time to investigate taking other actions. Please rest assured
that I have only good intentions, and that the years of things stalling
relative to my objectives show it.


Thomas Goirand (zigo)
