Re: [OT] Screen (was Affecting Inst. Change)
On 05/12/07 02:51, william pursell wrote:
> Ron Johnson wrote:
>
>> On 05/11/07 19:36, s. keeling wrote:
>>> Ron Johnson <ron.l.johnson@cox.net>:
>>>> On 05/11/07 12:49, s. keeling wrote:
>>>>> Ron Johnson <ron.l.johnson@cox.net>:
>
>>>> How do you limit the number of batch jobs that can run at any one
>>>> time? (If the "job limit" of a queue is 4 and you submit 20 jobs,
>>>> the first 4 jobs grab the execute slots, and the remaining 16 wait
>>>> until execute slots open up.)
>>> Meaning, you'd prefer that all 16 jobs run concurrently? That sounds
>>> sub-optimal (for most eventual users, at least).
>>
>> No. Exactly the opposite.
>>
>> Think of a bank with 4 tellers. If 20 people walk into the bank,
>> the first 4 get to the tellers, and the other 16 wait in the queue.
>> As a customer finishes his business at a teller and walks
>> away, the next customer in line goes up to that teller.
>
> I'm probably immersed too much in *nix, but I'm thinking, "Why?"
> If you have 20 jobs and 4 cpus, you schedule them all to run
> concurrently and you let the scheduler worry about the details
> of what runs. If you really want to ensure that you don't
> take a hit from context switches, then you run them with a
> high absolute priority, via sched_setscheduler(). Most batch
> processing is I/O bound anyway, so this won't really have
> any impact.
That's the theory, anyway...
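
(For anyone following along, the call william mentions looks roughly
like this -- an untested sketch, not anything from our setup, and it
needs root or CAP_SYS_NICE before the kernel will accept it:)

/* Sketch: put the calling process under SCHED_FIFO at a fixed
 * real-time priority via sched_setscheduler().  Priority 10 is
 * an arbitrary example value. */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 10 };

    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    /* ... the batch work would then run here under SCHED_FIFO ... */
    return 0;
}
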
> But, basically, it sounds like you want the
> batch manager to take over the job of the scheduler. I
> don't see the advantage of that.
The "4 slot limit" was just an example. Right now, we've got dozens
of jobs running simultaneously.
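
To make the "execute slot" idea concrete, here's an untested C sketch
that does what the tellers do: at most 4 children run at once, and
each job that finishes frees a slot for the next one in line.  The
numbers 20 and 4 are just the example figures from earlier in the
thread, not our real configuration.

/* Fork at most SLOTS children at a time; when one exits, the next
 * queued job takes its place. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

#define JOBS  20
#define SLOTS  4

static void run_job(int n)
{
    /* stand-in for a real batch job */
    printf("job %d running in pid %d\n", n, (int)getpid());
    sleep(1);
}

int main(void)
{
    int running = 0;

    for (int n = 0; n < JOBS; n++) {
        if (running == SLOTS) {          /* all tellers busy: wait */
            wait(NULL);
            running--;
        }
        if (fork() == 0) {               /* a slot opened up */
            run_job(n);
            _exit(0);
        }
        running++;
    }
    while (wait(NULL) > 0)               /* drain the remaining jobs */
        ;
    return 0;
}
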
--
Ron Johnson, Jr.
Jefferson LA USA
Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!