
Re: Question about distributed FS for high-performance I/O



-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Hi,

On 25/01/2016 18:01, Serge Cohen wrote:
> Dear list,
> 
> I am looking for a distributed FS for high-performance I/O (not high
> availability) that is well suited to be both served by debian systems
> and on which it is easy to have debian clients. The clients of the FS
> will be performing scientific computation which are often I/O bound
> (few large files, rather than many small files/DB like)
> 
> A single file is in the range of 100s of MB to 100s of GB. Datasets
> can be going up to a few TB (eg. 10 files of 200GB each). The
> computation is embarrassingly parallel but mostly I/O bound (one of
> the typical problem is related to transposition of arrays of 100GB
> size, each element being a few kB).
> 
> I am in a small lab, and we already have some (or all) of the hardware:
> - 1 HP RAID array (fibre-channel and SAS)
> - 3 servers for OSS + MDS roles
> - an InfiniBand fabric
> - 2 extra servers on the fabric, for computations and as a «NAS head»
>   for the rest of the network (partly 10GbE), serving clients running
>   an unsupported OS/fabric.
> 

I have a few questions about this paragraph, if you don't mind, because
I am not sure I fully understand it:
- - How is the disk array attached to servers?
- - What is the unsupported OS? This makes me think that your constraint
  on Debian below is not a hard requirement for the storage part.

If your storage is attached to both servers, then NFS may be your best
option today. Correctly configured and tuned on this kind of setup, it
can perform reasonably well. Have you considered that option?
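To give a concrete idea, here is a minimal sketch of what a throughput-oriented export and client mount could look like. The path, subnet, and hostname are placeholders, not your actual setup, and the option values are just a common starting point for large sequential I/O:

```shell
# /etc/exports on the NFS server (placeholder path and subnet).
# async + no_subtree_check trade some crash-safety for streaming throughput.
/srv/data  192.168.0.0/24(rw,async,no_subtree_check)

# On the client: NFSv4.1 with large rsize/wsize helps with big files.
mount -t nfs -o vers=4.1,rsize=1048576,wsize=1048576,hard \
    nfs-server:/srv/data /mnt/data
```

You would of course want to benchmark with your real workload before settling on the option values.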

> Initially this system was supposed to run Lustre as its FS, but since
> the support never made it into stable, and is now not even in unstable
> anymore, it is no longer an option given our limited resources for
> sys-admin and related activities.
> 
> One option I have recently seen is BeeGFS, and it seems a reasonable
> solution… but the documentation is sparse and there seem to be not
> that many users yet.
> 

There are a few more options you can try:
- - GlusterFS
- - and Ceph

Documentation for both is rather good, and both are supported on Debian
(on both the client and server side).
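For GlusterFS in particular, a plain distributed (non-replicated) volume matches your goal of throughput/capacity rather than high availability. A hypothetical sketch with placeholder server and brick names, assuming the glusterfs-server package is installed on all three servers:

```shell
# Join the three servers into a trusted pool (run on server1).
gluster peer probe server2
gluster peer probe server3

# Create a distributed volume: files are spread across the bricks,
# with no replication (no HA, but full aggregate capacity).
gluster volume create bigdata transport tcp \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1
gluster volume start bigdata

# On a client (FUSE mount via the glusterfs-client package):
mount -t glusterfs server1:/bigdata /mnt/bigdata
```

Note that with large files spread across bricks, a single file still lives on one brick in this layout; striping/sharding features exist but have their own trade-offs, so test with your 100 GB-file workload.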

Alternatively, there is also MooseFS (or its fork LizardFS). MooseFS uses
a dedicated metadata (master) server, but I'm not sure this would help
much given your current hardware setup.

> Is there a plan for Lustre to come back into the stable distribution?
> Do any of you have experience with BeeGFS (ex. FraunhoferGFS/FhGFS)?
> Or do some of you have better (or even just interesting) experience
> with other solutions?
> 

I haven't investigated the state of Lustre yet, but I suspect the server
part still needs a patched version of the kernel. This is less of an
issue on the client side. That's why it didn't make much sense to provide
the Lustre server in Debian. There was an ongoing effort to bring the
client part back into Debian [1], but AFAIK the people involved have been
stuck due to a lack of testing infrastructure.

[1] http://anonscm.debian.org/gitweb/?p=users/waldi/lustre.git

> Thanks in advance for any comments/help/pointers.
> 
> Sincerely,
> 
> Serge.
> 
> PS : We are using Debian for both servers and clients, and it is not
> an option for us to be using another system or distribution.
> 

Kind regards,

- -- 
Mehdi
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQIcBAEBCAAGBQJWvRHFAAoJEDO+GgqMLtj/+swP/1CvO4g+WKIzVHb9pnR52fur
oPkMNq4UY4ytLi9t710bc3PfnI+W8ebovBkmuXg8wIyFPNQKNBWAKRpi7byJPOQD
Y3K8XaMsoJfOoxSQw7XvSsqPPRZv727EDcvTHcVOVjoGDbPYogmTa8HAnBu4EDTu
C2J2d6hlwVS2bKkBq7fyIih0SDhstB50JOkaS5Guzca22dSfhjvwWBsCbbGqwUSC
+hIBox5GTbLhlZKvXJTkSnnd6B62SwAgrlLJPrznSH1Ee3Yke0YWqmUNhcLSLiMO
q5iQsJyapyUuuCcLBRnUdt4Hn2c9+xzx8jKwd885B1YvyHAJMkV2muGEJJwXfj8e
OCURMEaInLNs1ilUjT1Gm69KJ8uELAwRVLgUHcxAAaY4B7AesdaEQIghUH/RxaEZ
FV7kJEo1bMS6yYDMPZswTYqfAOG03pVQ1C1MGT3xgFGBfNL+7rYgROHZRP2ITMwY
zg3y9ObL/LJtyacJYL5IxSnBlob57VSGhjgF9vVPB236KwdzhIT1p+Sg33b+IWFg
RO/+QmbtKrDu+HNRsIUOvJB0ilaLSlsudkkdS1j0STtpENC97l9pP02WTftyzgJ+
A7xyInMPr8sYumvRiocH6x4tT/N3Wsm5ekhuWVlhajFRQ2Ehh/H7so79/T5GdlVF
xLOI+OGPrfuF7LbzVj6I
=hrxn
-----END PGP SIGNATURE-----

