
Re: Debian-based product build system





On Fri, Apr 22, 2016 at 4:00 AM Jeremiah Foster <jeremiah.foster@pelagicore.com> wrote:
On Thu, Apr 21, 2016 at 7:15 PM, Bill Gatliff <bgat@billgatliff.com> wrote:

This is true; it is an extremely complex tool. But it is used to do extremely complex builds on huge code bases that are also modular. For that use case it is actually quite good. That is the area, or one of the areas, that needs to be compared with ISAR.


Agree, we may be assessing merit from different perspectives.

But, if a code base is truly modular, why would you need to rebuild en masse? And, how would you confirm the result? Are you presuming a relatively thorough automated testing environment?

 
Definitely not. In automotive use cases, you can have *routinely* 100 million lines of code.

I worked for five years for a major diesel engine manufacturer, and I routinely consult for a Fortune 50 one, so I understand those issues too. I'm pretty sure we're talking about different things, at this point.  But I'll keep going anyway...  :-)

An automotive ECU might have 100+ million lines of code on it, but it's just a part of a larger, multi-vendor system...the majority of which you don't have build control over. So the system is already "modular" in the sense that there are well-defined information interfaces behind which the details don't (can't) matter to the other side.

Given that you've already got to deal with the "issues" of modularity in the system as a whole, e.g., you code to the protocol, and you can't just change the implementation of that protocol because it's another vendor's deliverable, why break that model inside your own box? Why force developers in five different buildings across three time zones to all stay in lockstep because their code might get snatched up into a product at any moment?

A better way is for them to release in blocks as functionality becomes ready, the same way each developer has a "working" branch on their desk that, when they're happy with it, gets published to the outside world. The outside world doesn't even WANT to see their incremental, ten-times-a-day changes---they want to see the code after they've "finished" it (whatever that means in their context). 
 
In some cases you get up to double that. You get a heterogeneous software stack that starts with an RTOS, moves to Linux, then to virtualization, then to a mix of GNU/Linux with foreign frameworks, and finally a diverse userland often in multiple levels of containment.

You mean, kinda like a Linux distribution?

The virtualization doesn't really throw a wrench into the works, they're just recursive universes with the same issues as the enclosing universe but with a VERY hard boundary between them.

 
When there is a code change anywhere you *often* need to rebuild everything.

So the system isn't modular after all?

The only exception I can think of here is related to things like TZ, where the crypto keys are changing because they're based on something that changes elsewhere in the system: silicon, signatures of other code/images/libraries, etc. I don't have enough experience solving these problems to know how to deal with them well. A complete rebuild may be the only way for all I know.

 
In addition, there is a fair amount of invention, so continuous integration is key and then rebuilding from scratch every day is a huge benefit because it finds bugs early.

It helps you find fail-to-build-from-source bugs early, unless you've got a bitchin' HIL automated test setup alongside.

Continuous integration has its place, for sure. But from what I've seen it's way oversold and, unless carefully managed, lets you idle tens of developers rather than just one or two when a problem sneaks in.  And when you need a five-digit build number and a repo manifest to backtrack to the git commits that tell you the code that's in the build, then you're just brute-forcing modularity at that point.
 
In addition, the layering and recipes provided by OE / Yocto allow you to arbitrarily switch out silicon, which in product development can save millions of dollars. Lastly, you need the modularity of recipes and layers so that you can have multiple, distributed software development teams that each integrate with arbitrary git repos behind firewalls and out on the internet.
 
Yocto is a seriously good tool for this.

For a certain management style and set of engineering problems and objectives, yes---Yocto is a good tool.

I just happen to think those styles are wrong, those objectives are suspicious, and those problems are best solved by avoiding them in the first place---which is the approach Debian takes.

Debian doesn't prevent you from doing a complete rebuild from source code, it just provides a way to avoid it: aim for library-style compatibility instead, and use dpkg's meta tools to sort those libraries out at bootstrap time.
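To make that concrete, here's roughly what "rebuild just the piece you changed" looks like with Debian's tooling (the package name libfoo and version are made up for illustration; commands are echoed through a run() wrapper so the sequence reads without needing a Debian host---swap run()'s body for "$@" to execute for real):

```shell
# Rebuild ONE library from source instead of the whole image.
# Hypothetical package "libfoo"; run() echoes rather than executes.
run() { echo "+ $*"; }

run apt-get source libfoo                # fetch upstream source + Debian packaging
run sh -c 'cd libfoo-1.2 && dpkg-buildpackage -us -uc -b'   # rebuild just this package
run dpkg -i ../libfoo_1.2-3_amd64.deb    # install the result; the rest of
                                         # the system stays exactly as it was
```

Everything else on the box keeps running the binaries it already had; the library's ABI is the modularity boundary.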

 
I agree with you wholeheartedly. And I'm deeply impressed with your work with Pragmatux. But I have come to realize, reluctantly, that very large software projects with a lot of development and hard deadlines don't work well with Debian. There just aren't enough resources in Debian to handle the flow of code, and having to wait for bug fixes, new package uploads, build runs, etc. is not feasible, sadly.

There's a quiet assumption in your prose here that's worth calling out: you seem to be looking to the upstream maintainers to fix problems for you. The whole point of working with community source code is that you can fix problems yourself when the need arises.

Yes, the Debian distribution proper is a slow-moving code base. But just like with Yocto, you've got all the source code and so you can fix bugs and then confer with upstream outside of your development cycle if necessary.  You don't HAVE to wait on Debian.

Viewed another way, Pragmatux is essentially just stealing Debian's dpkg concept. Yeah, about 90% of mkos output is straight-up mainline .debs, because we'd use all that code anyway: sshd, systemd, etc. But we'd rather use the same images that the rest of the community is abusing^Ktesting every day than Yocto-ize that code so that we can sit around, watch it build elsewhere, and be eternally unsure of the results. The other stuff is custom, client-specific code that we've chosen to package up with dpkg (in some cases as just a big, binary blob) to make our lives easier.

"Debian can't handle the flow of code and bug fixes" because Debian is a distribution, not a Linaro-style development house with tons of free and paid support options.  Thankfully, Debian's community guidelines strictly forbid them from getting in your way should the need arise---apt-get the source code, fix the bug, locally-house the modified package, and move on.
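The whole loop fits in a handful of commands. A hedged sketch (package name "foo", version, patch file, and /srv/local-debs path are all hypothetical; commands are echoed through run() so this reads anywhere---swap run()'s body for "$@" on a real Debian box):

```shell
# "Fix the bug yourself, house it locally" workflow sketch.
# All names hypothetical; run() echoes the command instead of executing it.
run() { echo "+ $*"; }

run apt-get source foo                                     # grab source + packaging
run sh -c 'cd foo-1.0 && patch -p1 < ../my-fix.patch'      # apply your fix
run sh -c 'cd foo-1.0 && dpkg-buildpackage -us -uc -b'     # rebuild the .deb
run cp foo_1.0-1+local1_amd64.deb /srv/local-debs/         # house it locally
run sh -c 'cd /srv/local-debs && dpkg-scanpackages . > Packages'  # local repo index
# Point sources.list at /srv/local-debs and apt prefers your fixed package.
```

Confer with upstream on your own schedule; your build never blocked on them.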

Granted, there are probably apps/libs in Yocto that aren't packaged already in Debian. That doesn't mean you can't package them and house the .debs locally. If you can un-Yocto-ize them, I mean.
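For the un-packaged case, a first-cut .deb isn't much work either. Rough sketch with dh_make (app name and version invented; commands echoed via run() rather than executed, since the real thing wants a Debian host and a debian/ directory edited to taste):

```shell
# Packaging an app that isn't in Debian yet, using dh_make. Sketch only;
# names are hypothetical and run() echoes instead of executing.
run() { echo "+ $*"; }

run tar xf someapp-0.9.tar.gz
run sh -c 'cd someapp-0.9 && dh_make --single --yes -f ../someapp-0.9.tar.gz'
# ...edit debian/control and debian/rules by hand, then:
run sh -c 'cd someapp-0.9 && dpkg-buildpackage -us -uc -b'
# House the resulting .deb alongside your other local packages.
```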

 
I think it can work if you pull in something like ISAR behind your firewall, though I worry that changes made behind the corporate firewall become a low priority to push up to Debian. Nonetheless, ISAR seems to be a potentially smart compromise between the "boil the ocean" Yocto approach and the binary-package modularity of Debian, and I hope it succeeds. I think it is needed.

If it scratches someone's itch, then it's cool by me.  But when people risk reinventing wheels because they don't use the old wheel effectively, that's a drain on all of us.  We're all smart people, and there's already too much in this universe to learn.  If Debian is 10% inadequate for someone, why reinvent the other 90% poorly along the way?

And sorry, but...when someone complains that "Debian can't do that" when, in fact, it can but it doesn't do it the way they like it, I get more than a little annoyed.  Call me a fanboy if you want, I don't have the time to listen because I'm using my cross-compilers rather than sitting around waiting for them to rebuild again. Or, maybe I'm boiling water for tea. Darn those watched pots... :-)

</rant>

b.g.
--

Bill Gatliff
Embedded Linux training and consulting
bgat@billgatliff.com
(309) 453-3421

