
Experimental queue?

The MLton package is a Standard ML compiler that is itself written in Standard ML. Bootstrapping the package build on a new architecture requires an initial by-hand cross-compilation step (and occasionally some source-level patching). Thus, the first upload for a new architecture must be a manual upload of a hand-built package. Thereafter, I need to confirm that the autobuilders can build subsequent uploads themselves.

I intend to bootstrap this package on a few more architectures and wanted to know whether this would be an appropriate use of the experimental upload queue. The intermediate packages are probably more unstable than what one expects even from unstable. I was hoping to get some information about the experimental queue, as I have never used it:

* Do the autobuilders build packages uploaded to experimental? (e.g., to confirm a successful port)
* Is making an experimental upload really as simple as setting the distribution in the changes file to experimental?
* Can a package uploaded to experimental be migrated to unstable?
 * I definitely don't want this to happen automatically.
 * At some point I will probably want to push the newest versions from experimental to unstable (to facilitate building the new architectures) and then upload a new 'final' version that gets autobuilt for all the new targets, landing in unstable.
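For what it's worth, my current understanding (unverified, corrections welcome) is that the target distribution comes from the distribution field on the first line of debian/changelog, which dpkg-genchanges then copies into the .changes file. The version and changelog entry below are placeholders, not a real upload:

```
mlton (VERSION-1) experimental; urgency=low

  * Bootstrap upload for new architectures.

 -- Maintainer Name <maintainer@example.org>  Mon, 01 Jan 2007 00:00:00 +0000
```

If that is all there is to it, then a later upload with the distribution set back to unstable would be the 'final' version mentioned above.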

Finally, how can I determine which Debian autobuilders have more than 1 GB of RAM (required for a successful build)?

Advice greatly appreciated.
