Re: trying to install bullseye for about 25th time.
On Thu 09 Jun 2022 at 06:04:08 (-0400), gene heskett wrote:
> On Thursday, 9 June 2022 04:18:04 EDT Andrew M.A. Cater wrote:
> > On Thu, Jun 09, 2022 at 02:59:43AM -0400, gene heskett wrote:
> > > On Thursday, 9 June 2022 00:31:55 EDT David Wright wrote:
> > > > On Tue 07 Jun 2022 at 16:07:13 (-0400), gene heskett wrote:
> > > > > On Tuesday, 7 June 2022 15:03:41 EDT gene heskett wrote:
> > > > > > On Tuesday, 7 June 2022 14:35:50 EDT David Wright wrote:
> > > > > > > On Tue 07 Jun 2022 at 14:17:08 (-0400), gene heskett wrote:
> > > > > The only way I know how to do that is take a screen shot with my
> > > > > camera. But that's not possible when running the d-i because, without gimp,
> > > > > it's at least 5 megs bigger than the server will accept. BTDT.
> > > >
> > > > I don't see why you need a screenshot to post the name(s) of the
> > > > disk(s) in the partitioner menu. It's just one line per disk, like:
> > > >   SCSI1 (0,0,0) (sda) - 500.1 GB ATA ST3500000AA
> > > >
> > > > taken from the listing posted in:
> > > > https://lists.debian.org/debian-user/2022/06/msg00055.html
> > > >
> > > > > > > > I could label it, but the partitioner doesn't do labels.
> > > > > > > > This drive is new, and has not anything written to it.
> > > > > > >
> > > > > > > Really? Which partitioner is that?
> > > > >
> > > > > The one in the D-I.
> > > >
> > > > The d-i partitioner lists PARTLABELs, as the cited listing showed:
> > > > BIOS boot pa/BullBoot/Linux swap/Viva-A/Viva-B/Viva-Home
>
> Andy, what good is a "partlabel" when it does not tell me which drive by
> the drive's own readily found name as displayed by dmesg after a normal
> boot? With this drive showing up as ata-5 in dmesg, but somehow udev
> calling it /dev/sdb, it is as confusing as can be. How the h--- does a drive
> plugged into ata-5 on the mobo get to be named sdb when there are 8
> other drives in front of it, 4 of them on a different controller in the
> discovery process?
Come on, we've known for years that the /dev/sdX names for disks are
as good as random. On some machines, you can change the lettering of
their internal drives just by inserting USB sticks at boot.
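If it helps, udev also creates persistent symlinks for every disk that are immune to probe order; a quick sketch of how to see them (the drive ID in the last command is hypothetical, your listing will show the real ones):

```shell
# Persistent names that survive controller/probe-order reshuffles:
ls -l /dev/disk/by-id/         # built from model + serial number
ls -l /dev/disk/by-path/       # built from the controller port the drive is on
ls -l /dev/disk/by-partlabel/  # the GPT PARTLABELs the d-i partitioner shows

# Resolve a persistent name back to today's /dev/sdX assignment
# (this ID is made up; copy a real one from the listings above):
readlink -f /dev/disk/by-id/ata-ST3500DM002_Z6E12345
```

Those by-id names are what fstab and mdadm configuration are usually keyed on, precisely because the sdX letters can change from boot to boot.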
> I've zero guarantees that the d-i boot will detect THIS drive I want to
> use the same as it did for the 11.1 install which generated the dmesg I
> am reading. The d-i shoots itself in the foot with excellent aim in this
> regard.
For some reason, you won't show what the d-i partitioner /does/ call
it (assuming you're going to partition it with that, which I wouldn't).
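From a running system (or the shell on the d-i's second console), one command shows every kernel name alongside the model, serial, and labels, with no screenshot needed; a sketch:

```shell
# One row per disk/partition: kernel name, size, model, serial, labels.
lsblk -o NAME,SIZE,TYPE,MODEL,SERIAL,LABEL,PARTLABEL

# blkid adds the filesystem UUIDs that stable naming is keyed on.
blkid
```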
> And the d-i will wind up doing a format on the raid10, destroying 6 months'
> work I'll have to reinvent. It did it unfailingly for many previous
> installs, because if I removed brltty, it would not boot past not finding it
> on the reboot, which meant the only way I could reboot was to re-install
> yet again.
I've already posted how to ensure that can't happen, by telling the
d-i not to use the raid10 stuff when installing Debian, and setting up
your real /home later.
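Concretely, with the raid10 marked "do not use" during the install, wiring it up as /home afterwards is one fstab line keyed on the filesystem UUID rather than a drive letter. A configuration sketch; the device name and UUID below are hypothetical, substitute the real values from blkid:

```shell
# Find the array's filesystem UUID (device name here is an example):
blkid /dev/md0

# Add one line to /etc/fstab, keyed on that UUID (value below is made up),
# then mount it without a reboot:
echo 'UUID=0f2d9c4e-9e3a-4c1b-8f77-2b61a0c55d10 /home ext4 defaults 0 2' >> /etc/fstab
mount /home
```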
> To the d-i, my history is of no concern, do NOT forget that I've already
> done 25 damned installs trying to save my work.
Could we!
> I finally did figure out
> how to silence orca without destroying the ability to reboot, but the
> uptime is 5 to 7 days. All because of Seagate's f-ing experiment with
> helium-filled shingled drives, which failed well within a year because
> they thought they could seal them well enough to keep the helium in them.
>
> In 1960, a bank of monel metal bottles with 2" thick walls went from
> 7200 psi to 4800 psi because helium leaks through 2" of monel, from midnight
> to 7:30 when the day shift clocked in. That leakage cost the laboratory I
> was working for around $10,000 a day; we were validating the ullage tank
> pressure regulators for the Atlas missiles that probably gave John Glenn
> his first ride.
>
> Now Seagate thinks they can keep it in a teeny hard drive so they can
> lower the flying height of the heads? The insanity in Oklahoma City knows
> no bounds. And I am out of the spinning-rust camp forever; SSDs are
> faster AND far more dependable.
>
> I now have around 6 months' work stored on that all-SSD raid, and I'll be
> damned if I'll take a chance of losing it. But I'm convinced that I have
> to do one more install, clean of brltty and orca, to get uptimes past 8
> days. I have repeatedly asked how to get rid of it totally, several times
> on this list, and have yet to be advised of a way to remove it that
> doesn't destroy the system; the dependencies removed cascade all the way
> back to libc6. Tying a specialty function that deep into the OS, so that it
> cannot be removed, only half-a--ed disabled, killing the uptime
> because it leaves a wild write someplace slowly destroying the system, is
> inexcusable.
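For what it's worth, apt can show exactly what hangs off brltty before anything is touched; these are standard read-only queries (`-s` simulates and changes nothing):

```shell
# List installed packages that depend on brltty (read-only query):
apt-cache rdepends --installed brltty

# Dry-run the removal; -s simulates, nothing is changed:
apt-get -s remove brltty orca

# If the simulation only lists the accessibility stack, purge for real:
# apt-get purge brltty orca    # left commented: this one does modify the system
```

If the simulation threatens to pull out libc6 or other essentials, something else is pinning brltty, and the rdepends output should name the culprit.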
>
> That's bs, and I'm fresh out of patience.
>
> There should be a procedure to fix this, but the procedure so far is to
> ignore my requests for help in this matter. Only 6 months later have 2
> or 3 begun to understand and advise, and I'm grateful, as I hope to be
> able to complete an install on a fresh drive that both saves my work and
> gets rid of the uptime limits of nominally a week. I am very carefully
> not installing stuff to the system other than from the repo, but to my
> own bin or AppImages directory on that raid10, with a suitably modified
> personal $PATH.
>
> As a detail that may be important, 60 gigs of swap is also on that
> raid10; the mobo is full at 32 gigs. No swap used ATM; current uptime 1d16h29.
With practice, I might get better at ignoring.
Cheers,
David.