Re: Relocation of Elgar to FU Berlin
On Sun, Jun 09, 2013 at 12:40:05PM +1200, schmitz wrote:
> >Thanks for picking it up and hosting it! :-)
> Thanks to Adrian for giving it a new home, and let's not forget to
> thank you for hosting it all this time!
> >>- replace the system disk with a new 250GB or 500GB drive
> >Remember that AmigaOS is still 3.1 and it should stay that way.
> >So, while it's no problem to replace the 40 GB disk by a larger
> >one (it just needs some time), IDE is still slow on A4000s and we
> >should prefer SCSI disks for performance reasons anyway. Actually,
> >there is not really a need for >40 GB disks, IMHO.
> The system disk won't need that much space - even for the buildd
> chroots and scratch space we won't quite need as much. But let's not
> forget that the old IDE disks will wear out at some point, and a new disk
> might be what keeps elgar running a bit longer.
> Christian had procured IDE to SCSI adapters years ago - not sure
> these are still available for sale anywhere. That would be about the
> only option to hook up a large disk via SCSI. (We'd also need to
> test and debug the SCSI driver, of course).
They are EXPENSIVE... I can't believe I paid 100 EUR apiece. But nowadays
you would want SATA-to-IDE adapters; they run at about the same price, and I
think they were from Acard. Sorry, I don't have my bookmarks here, I just
arrived in Thun. And suddenly things don't seem so expensive anymore ;-)
Other IDE options are CF-to-IDE (or SD-to-IDE) converters; remember the one
I used to resurrect my Falcon? That should work for booting, but of course
not for chroot storage.
> Concur - we had a break-in into kullervo or crest at some stage a
> few years back, and that was through exim. Better make it accept
> mail from a smarthost only.
Do we have a smarthost for that? At the moment kullervo can receive email
directly. Well, through a couple of hoops, but I am not running fetchmail on
kullervo. I thought that when kullervo returns to NMMN, I would return to
the old setup, since I do not have a mail server on the NMMN network.
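If we do decide to lock exim down, one way is an incoming-SMTP ACL that only
accepts mail relayed via the smarthost. A minimal sketch, assuming Debian's
split exim4 config layout, and with a placeholder smarthost address (the
192.0.2.10 below is NOT a real host, just an example):

    # sketch for the rcpt ACL, e.g. somewhere under /etc/exim4/conf.d/acl/
    acl_check_rcpt:
      # locally submitted mail (no remote host) stays allowed
      accept hosts = :
      # mail from the designated smarthost is allowed
      accept hosts = 192.0.2.10
      # everything else is refused
      deny   message = this host accepts mail only via its smarthost

Of course we'd still need an actual smarthost to point it at, and the
address would have to match whatever NMMN (or FU Berlin) provides.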