On Sat, 2011-02-19 at 15:55 -0500, Michael Gilbert wrote:
> On Sat, 19 Feb 2011 20:30:47 +0000 Ben Hutchings wrote:
> > On Sat, 2011-02-19 at 14:59 -0500, Michael Gilbert wrote:
> > > On Sat, 19 Feb 2011 19:32:08 +0000 Ben Hutchings wrote:
> > [...]
> > > > > Again, if the user is interested in such new developments, they will
> > > > > need to be willing to learn how to run an unstable system.
> > > >
> > > > I thought that users interested in new stuff were supposed to run CUT.
> > >
> > > Most packages will in fact be new; just the kernel and its reverse
> > > dependencies will be held back. Hence CUT users will get 99% new
> > > stuff (with respect to stable), and a tiny bit held back simply for
> > > stability. Like I've said a couple of times now, it's a balancing act.
> > >
> > > All I'm asking for is a few-month-long experiment. And if the
> > > experiment shows signs of flaws/weaknesses, then the blocker can
> > > certainly be lifted.
> >
> > If an experiment is to have any validity, the hypothesis and the
> > criteria for assessing the outcome must be decided in advance. If you
> > can do that, perhaps you will persuade some people that this is worth
> > doing.
>
> Hypothesis 1: using an older kernel in testing results in fewer vulnerabilities
>
> Criteria: fewer vulnerabilities in lenny than squeeze during the squeeze testing cycle
> Evidence: lenny's kernel was vulnerable to 67% of the vulnerabilities that squeeze's was
> Conclusion: hypothesis verified
>
> Criteria: fewer vulnerabilities in squeeze than wheezy during the wheezy testing cycle
> Evidence: to be collected (number of vulnerabilities in squeeze and wheezy)
> Conclusion: to be determined

This experiment does not require that the propagation of kernel
packages into testing be changed.
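[Editorial aside: the 67% figure above is a simple set-overlap calculation over the CVEs affecting each release's kernel. A minimal sketch, using made-up CVE identifiers rather than real Debian security-tracker data, might look like:]

```python
# Sketch of the overlap arithmetic behind Hypothesis 1.
# NOTE: these CVE IDs are placeholders, not real tracker data; in practice
# the sets would be extracted from the Debian security tracker for the
# kernel packages in each release.

squeeze_kernel_cves = {"CVE-2010-0001", "CVE-2010-0002", "CVE-2010-0003"}
lenny_kernel_cves = {"CVE-2010-0001", "CVE-2010-0002", "CVE-2009-0009"}

# CVEs affecting both kernels, as a share of squeeze's CVEs.
shared = squeeze_kernel_cves & lenny_kernel_cves
overlap_pct = 100 * len(shared) / len(squeeze_kernel_cves)
print(f"lenny's kernel affected by {overlap_pct:.0f}% of squeeze's kernel CVEs")
```

With these placeholder sets the overlap is 2 of 3, i.e. 67%, which only illustrates the shape of the calculation, not the actual evidence cited.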
> Hypothesis 2: using an older kernel version makes less work to provide CUT
>
> Criteria: how often the CUT target release date is met
> Evidence: to be collected (monthly release dates while retaining 2.6.32,
> and monthly release dates with standard unstable->testing transitions)
> (note: requires a 2.6.32-only period for reference)
> Conclusion: to be determined

OK, that's a real experiment. However, I suspect there will be many
confounding factors that make it difficult to single out any one cause
for delays.

> I can't imagine anyone else being put through such an arduous process
> to try an experiment for a couple of months. Why does it have to be so
> difficult?

Because this experiment would involve many thousands of users, and you
have to convince other developers that the benefit to these users may
be worth the cost.

Ben.

--
Ben Hutchings
Once a job is fouled up, anything done to improve it makes it worse.