Re: How long will this take?
On Wed 10 Jun 2020 at 14:51:32 (-0400), Michael Stone wrote:
> On Wed, Jun 10, 2020 at 12:02:13PM -0500, David Wright wrote:
[snipped the first part as it's covered elsewhere]
> > My use case for badblocks was closer to that of the OP, but still
> > different. Firstly, the disk contained personal data from unencrypted
> > use in the past. Secondly, I was intending to use it encrypted (as
> > mentioned) and prefer no high-watermark. Thirdly, because of its
> > age (2011), I was interested in seeing how well it performed. I have
> > no idea whether the disk is "modern" in the sense you used, as I don't
> > follow the technology like some people on this list evidently do.
> > Fourthly, I don't make a habit of throwing away 2TB disks.
> badblocks isn't particularly useful for achieving any of those goals
> vs just writing zeros. "modern" in this context means anything since
> probably the mid 90s but my memory is a bit fuzzy on the exact dates.
> certainly anything since the turn of the century.
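(For anyone reading along: I take "just writing zeros" to mean
something like the following, where /dev/sdX is a placeholder and the
command destroys the disk's contents:)

```shell
# Destructive: overwrites the whole device with zeros.
dd if=/dev/zero of=/dev/sdX bs=1M status=progress conv=fsync
```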
> > But, as you know about these things, a few questions:
> > . How does badblocks do its job in read-only mode, given that it
> > doesn't know what any block's content ought to be?
> you have to write the test data ahead of time
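That two-pass idea (write a known pattern first, then a read-only
verify) can be sketched like this; I'm using a scratch file so nothing
real is at risk, but on an actual disk you'd point badblocks -w and
then a read-only badblocks -t pass at /dev/sdX instead:

```shell
# Pass 1: fill the "disk" (a 4 MiB scratch file) with the pattern 0xAA.
img=scratch.img
dd if=/dev/zero bs=1M count=4 status=none | tr '\0' '\252' > "$img"
# Pass 2: read-only verification, comparing against the same pattern.
if dd if=/dev/zero bs=1M count=4 status=none | tr '\0' '\252' | cmp -s - "$img"
then result="pattern intact"
else result="mismatch: suspect sectors"
fi
echo "$result"
rm -f "$img"
```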
> > . Why might the OP run badblocks, particularly non-destructively
> > (as if to preserve something), and *then* zero the drive.
> the only person I saw mention badblocks in this thread was you, but I
> guess I might have missed it
No, you're right, I brought it up, and I *am* conflating two things:
the OP running an unspecified "read test", reading every sector
looking for errors, and a hypothetical person running badblocks.
If you were preserving the disk contents (imagine there was
proprietary encryption software on it), and performed a "read test"
or ran badblocks on it, would that be sufficient to test the disk's
performance, as it's merely reading the sectors? Or do you have to
actually write, with badblocks -n (non-destructive read-write mode),
for example?
> > . What's the easiest way of finding out about "consistent bad
> > (not remappable) sectors" on a drive, as I soon will have to
> > repeat this result (if not by this exact process) with a 3TB
> > disk of 2013 vintage. (The good news: it has a USB3 connection.)
> you'll get a bunch of errors while writing, and probably the drive
> will drop offline. you can use smartctl in the smartmontools package
> to see the status of retries & remapped sectors and get a health
> report on the drive, which you can use to decide whether to keep the
> drive in service even if it is currently working. (as a drive ages it
> will often record an increasing number of correctable errors, which
> typically will result in failure in the not-distant future.)
OK, so as far as the 2TB disk is concerned, writing anything over the
entire disk will provoke the drive into reporting and/or remapping any
bad sectors, and the results will then show up in the SMART statistics.
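Concretely (the device name is hypothetical, and these need
smartmontools installed and the real drive attached, so treat this as
a sketch rather than something I've run here):

```shell
smartctl -H /dev/sdX         # overall health self-assessment
smartctl -A /dev/sdX         # attribute table; the ones to watch are
                             #   5 Reallocated_Sector_Ct
                             # 197 Current_Pending_Sector
                             # 198 Offline_Uncorrectable
smartctl -l error /dev/sdX   # the drive's own error log
```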
The only unaddressed point in my use case is the prevention of a
high-water mark, because zeroing the drive achieves precisely the
opposite. What ought I to be running, instead of badblocks -w -t random,
to achieve that goal?
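(In case it's useful to others, the two recipes I've seen suggested
elsewhere are: fill the raw device from /dev/urandom, or open the
device in plain dm-crypt mode with a throwaway key and write zeros
through the mapping, which reaches the platters as random-looking
ciphertext. A sketch, using a scratch file in place of /dev/sdX:)

```shell
# Variant 1: fill with random data (here a 4 MiB stand-in file).
dd if=/dev/urandom of=disk.img bs=1M count=4 status=none
# Sanity check: a zeroed file would contain no nonzero bytes at all.
if [ "$(tr -d '\0' < disk.img | wc -c)" -gt 0 ]
then verdict="randomized"
else verdict="still blank"
fi
echo "$verdict"
rm -f disk.img

# Variant 2 (real device only, destroys its contents):
#   cryptsetup open --type plain -d /dev/urandom /dev/sdX wipe
#   dd if=/dev/zero of=/dev/mapper/wipe bs=1M status=progress
#   cryptsetup close wipe
```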
> a confounding factor is that you might also get write errors and
> dropped disk if there's a USB issue, separate from whether the drive
> is working properly. smartctl may help you understand whether there's
> a physical drive issue, and you can try different USB adapters, ports,
> and cables.
Actually, one of the difficulties I have with the 3TB disk is reading
its SMART information. The disk claims to collect and retain it, but
something (the USB bridge protocol? the enclosure's interface?)
prevents my reading it successfully. But I'll ask about that in a
separate post.