Agree, we may be assessing merit from different perspectives.
But, if a code base is truly modular, why would you need to rebuild en masse? And, how would you confirm the result? Are you presuming a relatively thorough automated testing environment?
I worked for five years for a major diesel engine manufacturer, and I routinely consult for a Fortune 50 one, so I understand those issues too. I'm pretty sure we're talking about different things, at this point. But I'll keep going anyway... :-)
A modern vehicle might carry 100+ million lines of code spread across its ECUs, and each ECU is just one part of that larger, multi-vendor system...the majority of which you don't have build control over. So the system is already "modular" in the sense that there are well-defined information interfaces behind which the details don't (can't) matter to the other side.
Given that you've already got to deal with the "issues" of modularity in the system as a whole, e.g., you code to the protocol, and you can't just change the implementation of that protocol because it's another vendor's deliverable, why break that model inside your own box? Why force developers in five different buildings across three time zones to all stay in lockstep because their code might get snatched up into a product at any moment?
A better way is for them to release in blocks as functionality becomes ready, the same way each developer has a "working" branch on their desk that, when they're happy with it, gets published to the outside world. The outside world doesn't even WANT to see the incremental, ten-times-a-day changes---they want to see the code after it's "finished" (whatever that means in their context).
You mean, kinda like a Linux distribution?
Virtualization doesn't really throw a wrench into the works: virtual machines are just recursive universes with the same issues as the enclosing universe, but with a VERY hard boundary between them.
So the system isn't modular after all?
The only exception I can think of here is related to things like TZ, where the crypto keys are changing because they're based on something that changes elsewhere in the system: silicon, signatures of other code/images/libraries, etc. I don't have enough experience solving these problems to know how to deal with them well. A complete rebuild may be the only way for all I know.
It helps you find fail-to-build-from-source bugs early, unless you've got a bitchin' HIL automated test setup alongside.
Continuous integration has its place, for sure. But from what I've seen it's way oversold and, unless carefully managed, lets you idle tens of developers rather than just one or two when a problem sneaks in. And when you need a five-digit build number and a repo manifest to backtrack through the git commits just to learn what code is actually in a build, you're brute-forcing modularity at that point.
For a certain management style and set of engineering problems and objectives, yes---Yocto is a good tool.
I just happen to think those styles are wrong, those objectives are suspicious, and those problems are best solved by avoiding them in the first place---which is the approach Debian takes.
Debian doesn't prevent you from doing a complete rebuild from source code, it just provides a way to avoid it: aim for library-style compatibility instead, and use dpkg's meta tools to sort those libraries out at bootstrap time.
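To make the "library-style compatibility" point concrete: dpkg's dependency metadata is what lets apt assemble a consistent set of shared libraries when the image is bootstrapped, instead of rebuilding the world. A hypothetical stanza from a source package's debian/control (the package name, version bounds, and description are made up for illustration):

```
Package: acme-telemetry
Architecture: armhf
Depends: libc6 (>= 2.17), ${shlibs:Depends}, ${misc:Depends}
Description: ACME telemetry daemon
 Uploads engine data. Links against the distro's shared
 libraries, so only ABI-compatible versions satisfy it.
```

The `${shlibs:Depends}` substitution is filled in at package build time from the libraries the binary actually links against, which is exactly the "sort it out at bootstrap time" machinery.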
There's a quiet assumption in your prose here that's worth calling out: you seem to be looking to the upstream maintainers to fix problems for you. The whole point of working with community source code is that you can fix problems yourself when the need arises.
Yes, the Debian distribution proper is a slow-moving code base. But just like with Yocto, you've got all the source code and so you can fix bugs and then confer with upstream outside of your development cycle if necessary. You don't HAVE to wait on Debian.
Viewed another way, Pragmatux essentially just steals Debian's dpkg concept. Yeah, about 90% of mkos output is straight-up mainline .debs, because we'd use all that code anyway: sshd, systemd, etc. We'd rather ship the same binaries that the rest of the community is abusing^Ktesting every day than Yocto-ize that code so we can sit around watching it rebuild elsewhere, eternally unsure of the results. The rest is custom, client-specific code that we've chosen to package up with dpkg (in some cases as just a big, binary blob) to make our lives easier.
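The "big, binary blob" case really is just a few lines of shell: wrap the vendor deliverable in a .deb so dpkg can install, version, and remove it like everything else. A minimal sketch---the package name "example-blob", the install path, and the maintainer address are all made up here:

```shell
#!/bin/sh
# Wrap an opaque binary deliverable as a trivial Debian package.
set -e

pkg=example-blob_1.0_all
mkdir -p "$pkg/DEBIAN" "$pkg/opt/example"

# Stand-in for the vendor's actual binary blob.
printf 'pretend this is firmware\n' > "$pkg/opt/example/blob.bin"

# The bare minimum control metadata dpkg-deb requires.
cat > "$pkg/DEBIAN/control" <<EOF
Package: example-blob
Version: 1.0
Architecture: all
Maintainer: Example Dev <dev@example.invalid>
Description: vendor blob wrapped for dpkg
 Opaque binary deliverable, packaged so apt/dpkg can track it.
EOF

dpkg-deb --build "$pkg"     # produces example-blob_1.0_all.deb
dpkg-deb --info "$pkg.deb"  # sanity-check the metadata
```

From there it goes into the same local repository as everything else, and the image-build tooling never has to treat it specially.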
"Debian can't handle the flow of code and bug fixes" because Debian is a distribution, not a Linaro-style development house with tons of free and paid support options. Thankfully, Debian's community guidelines strictly forbid them from getting in your way should the need arise---apt-get the source code, fix the bug, locally-house the modified package, and move on.
Granted, there are probably apps/libs in Yocto that aren't packaged already in Debian. That doesn't mean you can't package them and house the .debs locally. If you can un-Yocto-ize them, I mean.
If it scratches someone's itch, then it's cool by me. But when people risk reinventing wheels because they don't use the old wheel effectively, that's a drain on all of us. We're all smart people, and there's already too much in this universe to learn. If Debian is 10% inadequate for someone, why reinvent the other 90% poorly along the way?
And sorry, but...when someone complains that "Debian can't do that" when, in fact, it can but it doesn't do it the way they like it, I get more than a little annoyed. Call me a fanboy if you want, I don't have the time to listen because I'm using my cross-compilers rather than sitting around waiting for them to rebuild again. Or, maybe I'm boiling water for tea. Darn those watched pots... :-)