
Re: raid10 is killing me, and applications that aren't willing to wait for it to respond




On 12/13/23 13:20, gene heskett wrote:
On 12/13/23 11:51, Pocket wrote:

On 12/13/23 10:26, gene heskett wrote:
Greetings all;

I thought I was doing things right a year back when I built a raid10 for my /home partition, but I'm tired of fighting with it for access. Anything that wants to open a file on it is subjected to a freeze of at least 30 seconds BEFORE the file requester is drawn on screen. Once it has done the screen draw and the path is established, reads and writes then proceed at multi-gigabyte speeds just as they should, but some applications refuse to wait that long. digiKam cannot import from my camera, for one example; QIDISlicer is another, which gets plumb upset and declares a segfault, core dumped, but it can't write the core dump for the same reason it declared the segfault.  Here is a copy/paste of the last attempt to select the "device" tab in QIDISlicer:
-----------------------------------------------
Error creating proxy: Error calling StartServiceByName for org.gtk.vfs.GPhoto2VolumeMonitor: Timeout was reached (g-io-error-quark, 24)

** (qidi-slicer:389574): CRITICAL **: 04:55:46.975: Cannot register URI scheme wxfs more than once

** (qidi-slicer:389574): CRITICAL **: 04:55:46.975: Cannot register URI scheme memory more than once

(qidi-slicer:389574): Gtk-CRITICAL **: 04:55:47.084: gtk_box_gadget_distribute: assertion 'size >= 0' failed in GtkScrollbar [2023-12-13 05:10:27.325222] [0x00007f77e6ffd6c0] [error] Socket created. Multicast: 255.255.255.255. Interface: 192.168.71.3
Unhandled unknown exception; terminating the application.
Segmentation fault (core dumped)
-----------------------------------------------------
This is where it was attempting to open the cache buffers it needs to remember moonraker, a web server which is part of the klipper install on the printer, addressed at 192.168.71.110 with an odd, high-numbered port above 10,000.

I've been here several times with this problem without any constructive responses other than strace, which of course does NOT work for network stuff and which, if my past history with it is any indication, would generate several terabytes of output. But it fails for the same reason: no place to put its output, because, I assume, it can't write to the raid10 in a timely manner.
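One thing worth trying, assuming /tmp sits on a filesystem other than the raid10 (an assumption worth checking first), is to point strace's output file there so the trace itself isn't stalled by the slow array. A minimal sketch, using qidi-slicer from above as the target:

  # write the trace to /tmp, assumed NOT to be on the raid10;
  # -f follows child processes, -tt timestamps every call so a
  # 30 second stall stands out in the output
  strace -f -tt -o /tmp/qidi-trace.log qidi-slicer

A long gap between two consecutive timestamps in the log then shows exactly which syscall is blocking.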

So one more time: why can't I use my software raid10 on four 1T SSDs?
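For anyone wanting to poke at it, a first diagnostic pass might look like this, assuming a Linux mdadm software array (the device name /dev/md0 below is only a guess; substitute the real one):

  # is the array degraded or resyncing? a rebuild in progress
  # can stall foreground reads badly
  cat /proc/mdstat
  mdadm --detail /dev/md0

  # watch per-device latency while reproducing the 30 second freeze;
  # one member SSD with an enormous await value points at a failing drive
  iostat -x 1

A single slow member can hold up the whole raid10, which would match the symptom of a long initial stall followed by full-speed transfers.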

Cheers, Gene Heskett.


I gave up using raid many years ago and used the extra drives as backups instead.

So why did you give up? Must have been a reason.

Many reasons........

No real benefit (companies excepted), and issues like the ones you have been posting about.

If the RAID controller dies, you are usually toast unless you have another RAID controller (same manufacturer and model) as a spare.

I have had zero luck replacing one company's raid controller with another's, and ditto for raid built into the motherboard.

I really don't need any help losing my data/files as I do a good job of that all by myself ;)

I found it is better to just keep my data on several backup disks; that way, if one fails, I get another disk and copy all the data to the newly purchased one.

After removing raid, I completely redesigned my network to be more in line with the howtos and other information.

I have little to nothing on the client system I use daily; everything is on networked systems, and each of them has certain things it does.

I have a "git" server that has all my setup/custom/building scripts and all my programming and solidworks projects.

I have Delphi-built apps going back to about 1995.

It is all backed up to a backup server (master and slave) and also to a 4TB offline external hard drive.  I have not "lost" any information since.

I also found that DHCP and NetworkManager are your friends.
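For example, with NetworkManager's nmcli the whole DHCP setup comes down to a couple of commands (the profile name "Wired connection 1" is just NetworkManager's usual default; yours may differ):

  # list devices and which connection profile each one uses
  nmcli device status

  # switch the profile to plain DHCP and bring it back up
  nmcli connection modify "Wired connection 1" ipv4.method auto
  nmcli connection up "Wired connection 1"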

Maybe you should review your network setup, as you seem to have a lot of issues with it?


Wrote a script to rsync /home to the backup drives.
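Something along these lines, assuming the backup drive is mounted at /mnt/backup (the paths here are placeholders):

  #!/bin/bash
  # minimal mirror of /home to an external backup drive
  set -e
  MNT=/mnt/backup    # assumed mount point of the backup drive
  mountpoint -q "$MNT" || { echo "$MNT is not mounted" >&2; exit 1; }
  # -a preserves permissions and times, --delete keeps an exact mirror,
  # and the trailing slash on /home/ copies its contents, not the dir itself
  rsync -a --delete /home/ "$MNT/home/"

The mountpoint check matters for an offline external drive: without it, an unmounted /mnt/backup would silently fill the root filesystem instead of the backup disk.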


Cheers, Gene Heskett.

--
It's not easy to be me

