
Re: file systems



On 4/30/2011 11:48 PM, shawn wilson wrote:

> i'm interested in not seeing unsubstantiated opinion on a technical
> mailing list.

That 'opinion' is based, in part, on the following facts, many of which are in my previous posts to this list. To avoid expressing 'opinion' in the future, I could, if you like, simply paste the following wall of text into every email dealing with XFS instead of using shorthand subjective phrases such as 'XFS is the overall best Linux FS'. The evidence below, along with more that is freely available, demonstrates this 'opinion' to be fact.

The US Department of Energy labs LANL, LLNL, Oak Ridge, and Sandia (the first, second, and last run by the National Nuclear Security Administration, NNSA), as well as NASA Ames and the US Air Force Research Laboratory in Dayton, Ohio, have all used, or still use, XFS and/or CXFS on large scale storage: dozens of petabytes of XFS disk in total.

NASA Ames has been using XFS for 16+ years, and still does, on the (originally) 10,240-processor Columbia supercomputer and on its archival servers. They currently run an 800TB CXFS filesystem on SAN storage, and local XFS filesystems on 215TB, 175TB, and 65TB of direct fiber-attached storage.

http://www.nas.nasa.gov/Resources/Systems/columbia.html
http://www.nas.nasa.gov/Resources/Systems/archive_storage.html
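For anyone who wants to kick the tires on a smaller scale, building and mounting an XFS filesystem like the ones above is a two-command affair. A minimal sketch, with a placeholder device name and mount point of my own choosing:

    mkfs.xfs -L archive /dev/sdc            # defaults are sane; -L just sets a label
    mkdir -p /srv/archive
    mount -o inode64 /dev/sdc /srv/archive  # inode64 lets inodes live anywhere on a multi-TB volume

The inode64 mount option only matters on filesystems larger than about 1TB; without it XFS keeps all inodes in the first 1TB of the device.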

Professor Stephen Hawking's research group has used four generations of SGI supercomputers over 14 years to run the cosmology simulations supporting Dr. Hawking's theories, each machine, like every SGI super, running XFS: http://www.damtp.cam.ac.uk/cosmos/hardware/

The Linux Kernel Archives (kernel.org) wrote:

"A bit more than a year ago (as of October 2008) kernel.org, in an ever increasing need to squeeze more performance out of its machines, made the leap of migrating the primary mirror machines (mirrors.kernel.org) to XFS. We site a number of reasons including fscking 5.5T of disk is long and painful, we were hitting various cache issues, and we were seeking better performance out of our file system."

"After initial tests looked positive we made the jump, and have been quite happy with the results. With an instant increase in performance and throughput, as well as the worst xfs_check we've ever seen taking 10 minutes, we were quite happy. Subsequently we've moved all primary mirroring file-systems to XFS, including www.kernel.org , and mirrors.kernel.org. With an average constant movement of about 400mbps around the world, and with peaks into the 3.1gbps range serving thousands of users simultaneously it's been a file system that has taken the brunt we can throw at it and held up spectacularly."

The kernel code running on your system, Shawn, was originally served from an XFS filesystem. The Debian kernel team gets its upstream tarball from kernel.org, as everyone does, served up by XFS. If that fact doesn't carry weight with a Linux user, I don't know what would...


A very interesting XFS research paper from a few years ago, authored by two of the principal XFS developers:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.114.1918&rep=rep1&type=pdf


Independent Linux filesystem tests performed by an IBM engineer to track btrfs performance during development; XFS trounces the others in most tests:

Large file creates (1 / 16 / 128 threads):
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Large_file_creates_num_threads=1.html
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Large_file_creates_num_threads=16.html
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Large_file_creates_num_threads=128.html

Large file random reads (1 / 16 / 128 threads):
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Large_file_random_reads._num_threads=1.html
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Large_file_random_reads._num_threads=16.html
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Large_file_random_reads._num_threads=128.html

Large file random writes (1 / 16 / 128 threads):
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Large_file_random_writes._num_threads=1.html
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Large_file_random_writes._num_threads=16.html
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Large_file_random_writes._num_threads=128.html

Large file random writes, O_DIRECT (1 / 16 / 128 threads):
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Large_file_random_writes_odirect._num_threads=1.html
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Large_file_random_writes_odirect._num_threads=16.html
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Large_file_random_writes_odirect._num_threads=128.html

Large file sequential reads (1 / 16 / 128 threads):
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Large_file_sequential_reads._num_threads=1.html
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Large_file_sequential_reads._num_threads=16.html
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Large_file_sequential_reads._num_threads=128.html

Mail server simulation (1 / 16 / 128 threads):
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Mail_server_simulation._num_threads=1.html
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Mail_server_simulation._num_threads=16.html
http://btrfs.boxacle.net/repository/raid/2.6.35-rc5/2.6.35-rc5/2.6.35-rc5_Mail_server_simulation._num_threads=128.html


--
Stan

