On 14/10/2010 00:26, Joerg Jaspert wrote:
> Additionally there is a directory with "code dumps", basically a set of
> python code, each file having a defined structure. The script would read
> in all of em and figure out what they do / when they expect to run / what
> they provide.

Should we imagine a "one file, one task" structure? That way it is just a
matter of calling imp.load_source() on each of them and looking for a
standard layout (i.e. importing the provides and command methods). The
downside would be ending up with *a lot* of files in the "dump" directory.

> Priorities can be used to select which task to run first when executing
> them in parallel and no dependency gets any order into it. Same priority
> -> random, or alphabetic, or whatever order of execution)

Is priority really necessary? If tasks are meant to run in parallel, it
should not matter whether A starts a couple of CPU cycles before B. If the
order does matter, a dependency could be declared instead. There are
topsort algorithms that also check in advance for potential deadlocks
(i.e. dependency cycles), so we can be sure we will not hang forever.

> An easy first step can be a tool that:
> 1. reads in the scripts
> 2. computes the optimal scheduling
> 3. outputs a list of processing steps, each step containing a list of
> tasks that can be run in parallel.

The output could be the sorted list of dinstall tasks; these could then be
run in a thread pool and guarded by a per-task lock, so that tasks
depending on others sleep until all of their dependencies have completed.

-- 
 .''`.
: :'  :   Luca Falavigna <dktrkranz@debian.org>
`. `'
  `-
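
A minimal sketch of the discovery step discussed above, assuming a "one
file, one task" layout in which each task file defines a `provides`
attribute, a `depends` list and a `command()` callable (the attribute
names are illustrative, not an existing dak convention):

    import imp
    import os

    def load_tasks(dump_dir):
        """Load every task file in dump_dir and check its layout."""
        tasks = {}
        for filename in sorted(os.listdir(dump_dir)):
            if not filename.endswith(".py"):
                continue
            name = filename[:-3]
            module = imp.load_source(name, os.path.join(dump_dir, filename))
            # Each task module is expected to declare what it provides,
            # what it depends on, and the callable doing the actual work.
            for attr in ("provides", "depends", "command"):
                if not hasattr(module, attr):
                    raise ValueError("%s is missing %r" % (filename, attr))
            tasks[name] = module
        return tasks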
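
The scheduling step (points 2 and 3 of Joerg's list) could be a Kahn-style
topological sort that groups tasks into steps and refuses cyclic
dependencies; a sketch under the same assumed task layout:

    def schedule(tasks):
        """Group tasks into steps; all tasks in one step have their
        dependencies satisfied and can run in parallel."""
        remaining = dict((name, set(mod.depends)) for name, mod in tasks.items())
        done = set()
        steps = []
        while remaining:
            # Every task whose dependencies are already done forms one step.
            ready = sorted(n for n, deps in remaining.items() if deps <= done)
            if not ready:
                # Nothing is runnable but tasks remain: dependency cycle.
                raise RuntimeError("dependency cycle among: %s"
                                   % ", ".join(sorted(remaining)))
            steps.append(ready)
            done.update(ready)
            for name in ready:
                del remaining[name]
        return steps

Printing the returned list of steps already gives the "list of processing
steps, each step containing a list of tasks that can be run in parallel".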
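
The "sleep until all dependencies are completed" behaviour could be built
on one threading.Event per task; a rough sketch that, for simplicity,
starts one thread per task rather than a bounded pool and does no error
handling:

    import threading

    def run_all(tasks):
        """Run tasks concurrently; each task blocks until the tasks it
        depends on have signalled completion."""
        finished = dict((name, threading.Event()) for name in tasks)

        def worker(name, module):
            # Sleep until every dependency has signalled completion.
            for dep in module.depends:
                finished[dep].wait()
            module.command()
            finished[name].set()

        threads = [threading.Thread(target=worker, args=(n, m))
                   for n, m in tasks.items()]
        for t in threads:
            t.start()
        for t in threads:
            t.join()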