
Re: growisofs will not write at 8x with NEC 3540A



Joerg Schilling wrote on 2006-01-07:

> Cdrecord runs at highest realtime priority. If you use an OS that does not
> honor this setting, you should change your OS.

> A decent OS limits single I/O chunks to a nonproblematic value (typically 126 kB).
> As cdrecord needs 2 I/O operations per media transfer, a working scheduler
> would not allow more than one additional transfer from a lower-prioritized command
> except when there is plenty of idle time.
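
For reference, "highest realtime priority" on a POSIX system amounts to
roughly the following (a minimal sketch, not cdrecord's actual code;
needs root):

/* Sketch: SCHED_FIFO at maximum priority plus locked pages,
 * so neither the CPU scheduler nor the pager can stall the burn. */
#include <stdio.h>
#include <sched.h>
#include <sys/mman.h>

int main(void)
{
    struct sched_param sp;

    sp.sched_priority = sched_get_priority_max(SCHED_FIFO);
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");   /* fails without root */
        return 1;
    }
    /* Lock current and future pages so paging cannot interrupt us. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return 1;
    }
    puts("running SCHED_FIFO at max priority with locked pages");
    return 0;
}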

You cannot fix I/O latencies (which are hardware-dependent) or I/O
queueing with CPU scheduler settings.

"Working schedulers" deal with CPU timeslices, not I/O scheduling! The
problem at hand is that once the I/O was scheduled, the CPU is yield to
another process, and if that stuffs some I/O requests on the queue
before cdrecord gets the CPU back to dispatch its 2nd requests, the
stuff is on the I/O queue and cdrecord has to wait.
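
Linux in fact grew a separate knob for exactly this in 2.6.13:
per-process I/O priorities, honored by the CFQ elevator and set via the
ioprio_set() syscall. A minimal sketch, assumptions mine (CFQ in use;
there is no glibc wrapper, so the constants and the raw syscall are
spelled out by hand):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#define IOPRIO_CLASS_RT    1    /* realtime I/O scheduling class */
#define IOPRIO_CLASS_SHIFT 13
#define IOPRIO_WHO_PROCESS 1

int main(void)
{
    /* RT class, highest level (0), for the calling process; needs root.
     * This raises our place in the I/O queue, independent of any
     * CPU scheduler priority. */
    int ioprio = (IOPRIO_CLASS_RT << IOPRIO_CLASS_SHIFT) | 0;

    if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, ioprio) != 0) {
        perror("ioprio_set");
        return 1;
    }
    puts("I/O priority raised independently of CPU priority");
    return 0;
}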

> Cdrecord requires the system to be able to perform 1.2x the DMA speed.
> In the worst case, with a correct HW setup, cdrecord would need to require
> a factor of 1.33x as reserve.

My system does way more than that; cdrecord reports figures well beyond
100x CD writing speed.
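
Back-of-the-envelope (my numbers, not from the thread): 1x DVD is about
1385 kB/s, so 8x writing needs roughly 11.1 MB/s, and the 1.33x reserve
puts the requirement near 14.7 MB/s sustained. With 1x CD at about
150 kB/s, "100x CD speed" is around 15 MB/s, so anything well beyond
that figure leaves headroom even for 8x DVD writing.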

> In case that the writer is at the same cable as the HDD where the data is read, 

Nope, SATA HDD and VIA VT8237 controller.

> > cdrecord was running as root, i. e. with RT priority, mlocked pages and
> > everything. OK, it was Linux, not FreeBSD or Solaris...
> 
> So you found a new reason to switch to Solaris ;-)

I still have half a dozen reasons not to do that. I haven't yet decided
if that's a pity or not. FreeBSD 6 is evolving well, too.

> > In this context: what's the difference between "10 predicted buffer
> > underruns" and "burnproof was 1x used"?
> 
>  A predicted BU happens when the cdrecord FIFO fill ratio goes under 5%

Ah, so we just managed to refill the buffer in time 9x and failed to do
so the remaining 1x.
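
Purely as a toy model of that bookkeeping (the 5% threshold is from your
explanation; the FIFO size, the sample values, and the counting scheme
below are invented for illustration):

#include <stdio.h>

#define FIFO_SIZE   4096   /* blocks; arbitrary for the illustration */
#define LOW_WATER   (FIFO_SIZE * 5 / 100)

static int predicted_underruns = 0;
static int burnproof_events = 0;

/* One fill-level sample per dip; the dip that reaches zero counts as
 * a predicted underrun, too, but additionally triggers burnproof. */
static void account_fill(int blocks_in_fifo)
{
    if (blocks_in_fifo < LOW_WATER)
        predicted_underruns++;   /* came dangerously close */
    if (blocks_in_fifo == 0)
        burnproof_events++;      /* actually ran dry: drive suspends */
}

int main(void)
{
    /* Simulated dips below the watermark: nine near-misses, one real
     * underrun. */
    int samples[] = { 120, 80, 150, 60, 90, 110, 70, 100, 50, 0 };
    int i;

    for (i = 0; i < 10; i++)
        account_fill(samples[i]);

    printf("%d predicted buffer underruns, burnproof was %dx used\n",
           predicted_underruns, burnproof_events);
    return 0;
}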

Thank you.

-- 
Matthias Andree


