
small blocks or more inodes?



To follow up on an email I sent not too long ago: I have been digging
through the ext2 documentation to try to find the best setup for
storing tons of small files (tens or hundreds of thousands).

The filesystem is composed of two 9.1 GB Ultra160 drives in RAID 1.

I have two identical systems set up with different parameters:

mail-wa:~# more /root/dumpe2fs-md0 
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          236531e2-03b8-4aac-ad6a-5ac880857bae
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      filetype sparse_super
Filesystem state:         not clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              560128
Block count:              8956096
Reserved block count:     89560
Free blocks:              8883337
Free inodes:              560116
First block:              1
Block size:               1024
Fragment size:            1024
Blocks per group:         8192
Fragments per group:      8192
Inodes per group:         512
Inode blocks per group:   64

and

mail-ca:~# more /root/dumpe2fs 
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          f79dcbca-4c5d-4b13-9a11-a46ee1bca1df
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      filetype sparse_super
Filesystem state:         not clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              1121664
Block count:              2239024
Reserved block count:     111951
Free blocks:              2203810
Free inodes:              1121652
First block:              0
Block size:               4096
Fragment size:            4096
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         16256
Inode blocks per group:   508
Last mount time:          Mon Dec 11 21:55:32 2000
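
For comparison, the dumps above work out to roughly these bytes-per-inode
ratios (block count times block size, divided by inode count) -- a quick
sketch, using only the figures from the two dumps:

```shell
# Bytes-per-inode ratio for each filesystem, from the dumpe2fs output above.
# Lower means more inodes per unit of disk, i.e. more room for tiny files.
echo "mail-wa: $((8956096 * 1024 / 560128)) bytes per inode"    # about 16 KB/inode
echo "mail-ca: $((2239024 * 4096 / 1121664)) bytes per inode"   # about 8 KB/inode
```

So mail-ca has roughly twice the inode density of mail-wa, which matters
if the files are small enough that you run out of inodes before blocks.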


I tried running bonnie++ and told it to create 8 GB of data in 4 million
files, but it didn't work; all I got was "aborted" after some time.
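
For what it's worth, here is the kind of invocation I would expect for the
small-file test on its own -- the mount point is a placeholder and the flag
details are from my reading of the bonnie++ man page, so double-check them
against your version (also note bonnie++ refuses to run as root unless you
give it -u):

```shell
# Hypothetical bonnie++ run, small-file creation test only:
#   -d /mnt/test   directory on the filesystem under test (placeholder path)
#   -s 0           skip the large-file I/O phase entirely
#   -n 4096        file-creation count, in multiples of 1024 (~4.2 million files)
#   -u nobody      user to drop to when started as root
bonnie++ -d /mnt/test -s 0 -n 4096 -u nobody
```

Skipping the large-file phase with -s 0 also sidesteps any 2 GB file-size
limit, which is one possible cause of the abort on an 8 GB run.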

Any opinions as to which is more effective at storing small files? (Or
maybe another option ...)

thanks!

nate


:::
http://www.aphroland.org/
http://www.linuxpowered.net/
aphro@aphroland.org
4:15pm up 88 days, 1:33, 3 users, load average: 0.05, 0.03, 0.00


