
Re: Intel 82576 Gigabit on Debian 7 slow speed.



On 12/19/2015 07:48 AM, Mimiko wrote:
After reviewing the test results, I've modified smb.conf: I added
max protocol = SMB2 and removed SO_RCVBUF=8192 SO_SNDBUF=8192 from
socket options. The read speed from this server increased to 40MB/s, and
the write speed to this server increased to 30MB/s.
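
For reference, that combination would look something like this in smb.conf (a sketch; SMB2 negotiation needs Samba 3.6 or later, which Debian 7 ships):

    [global]
        # Negotiate up to SMB2 instead of stopping at the old NT1 dialect
        max protocol = SMB2
        # No SO_RCVBUF/SO_SNDBUF overrides; let the kernel autotune buffers
        socket options = TCP_NODELAY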

Improving Samba is good, but I would optimize the file system first. You've got a small mountain of hardware, and your disk throughput numbers should be an order of magnitude better.
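
For a baseline outside Samba, a couple of dd runs against the pool's mountpoint give rough sequential numbers (the path /tank/testfile is illustrative, and all-zero data will overstate writes if ZFS compression is on):

    # Sequential write; conv=fdatasync makes dd wait for the data to hit disk
    dd if=/dev/zero of=/tank/testfile bs=1M count=4096 conv=fdatasync
    # Drop the page cache (as root) so the read comes from the disks
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/tank/testfile of=/dev/null bs=1M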


Also I want to set up drdb to sync partitions to another file server.

Do you mean drbd?

    http://drbd.linbit.com/


BUT, this server boots from an md mirror on two separate SSD's. Reading
and writing to these disks gives the same low speed.

LUKS?  Above or below md?
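
If you're not sure, lsblk shows how the layers are stacked:

    # The TYPE column reveals the stacking: disk -> part -> raid1 -> crypt
    lsblk -o NAME,TYPE,SIZE,MOUNTPOINT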


It's a Supermicro server based on the SuperMicro X8DTi-F motherboard

    http://www.supermicro.com/products/motherboard/QPI/5500/X8DTi-F.cfm

The motherboard web page lists 6 SATA2 ports. Where do you attach the other 12 drives, and are those ports SATA2 or SATA3?


What interface(s) do the HDD's and SSD's have?
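
Two quick ways to check (smartctl is in the smartmontools package; the device name is an example):

    # Negotiated SATA link speed per port: 1.5, 3.0, or 6.0 Gbps
    dmesg | grep -i 'sata link up'
    # Interface details for one drive
    smartctl -i /dev/sda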


> with 2 x Intel Xeon E5620 CPU @ 2.4GHz


    http://ark.intel.com/products/47925/Intel-Xeon-Processor-E5620-12M-Cache-2_40-GHz-5_86-GTs-Intel-QPI?q=Intel%C2%AE%20Xeon%C2%AE%20Processor%20E5620%20%2812M%20Cache,%202.40%20GHz,%205.86%20GT/s%20Intel%C2%AE%20QPI%29

Each CPU has 4 cores, HT (so, 8 threads), and AES-NI.


> with 18 GB RAM.

The motherboard supports 192 GB of RAM.


Do you know if your amount of RAM is too much, too little, or just right?
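
With ZFS in the picture, the ARC statistics are a good place to look; ZFS on Linux exposes them via procfs:

    # Current ARC size plus hit/miss counters
    grep -E '^(size|hits|misses) ' /proc/spl/kstat/zfs/arcstats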


Same cable types and same server type on Windows give faster file
transfer. All servers are on a gigabit switch.

Then it's software and/or configuration.


iperf between two Debian 7 servers gives almost good performance. The
problem server has an Intel 82576 dual gigabit NIC.
On the other server it is a Broadcom Corporation NetXtreme BCM5705
Gigabit. The other server gives 80MB/s on read and 60MB/s on write. Still
not 100MB/s as on Windows, but it is an older single-CPU server.

I ran iperf earlier between two of my machines with Intel desktop Gigabit Ethernet chips, a few meters of Category 5e cable, and a consumer Gigabit switch in between, and got ~950 Mbit/s. You might want to find another machine with Intel Ethernet chips and test again.
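
For reference, the basic test (the IP address is a placeholder; iperf reports in Mbits/sec by default):

    # On the receiving machine
    iperf -s
    # On the sending machine; -t 30 averages over 30 seconds
    iperf -c 192.168.1.10 -t 30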


Yes, zeros are easily compressed. I've tried to use /dev/random as a
source, but this device gives me less than 1KB/s.

/dev/random is meant for small amounts of high-quality random numbers (e.g. seeds and keys). It draws directly from the kernel entropy pool, and blocks when the pool runs low.
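
You can watch the pool level directly:

    # Bits of entropy currently available (the pool tops out at 4096)
    cat /proc/sys/kernel/random/entropy_avail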


When I used /dev/urandom, I got 8.6MB/s and dd used 100% of one core.

/dev/urandom uses the entropy pool sparingly and generates the rest with a software PRNG.
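
One way around the CPU bottleneck is to generate incompressible data once, then time the disk rather than the PRNG (paths are illustrative):

    # One-time, CPU-bound step: build a random test file
    dd if=/dev/urandom of=/var/tmp/rand.bin bs=1M count=1024
    # Repeatable disk test with incompressible data
    dd if=/var/tmp/rand.bin of=/tank/testfile bs=1M conv=fdatasync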


LUKS?  On the raw drives, or on partitions?
I used raw drives: 8 x 2TB and 8 x 1TB.

So, 16 LUKS containers below ZFS? If you have fewer than 16 ZFS volumes, you could put LUKS above ZFS and below ext4.
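
A sketch of that layering, with illustrative names (tank/crypt, crypt0) and size:

    # Carve a block device (zvol) out of the pool
    zfs create -V 500G tank/crypt
    # One LUKS container on top of the zvol instead of 16 underneath the pool
    cryptsetup luksFormat /dev/zvol/tank/crypt
    cryptsetup luksOpen /dev/zvol/tank/crypt crypt0
    mkfs.ext4 /dev/mapper/crypt0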


These are the BIOS settings:
Ratio CMOS Setting: 18
C1E Support: Disabled
Hardware Prefetcher: Enabled
Adjacent Cache Line Prefetch: Enabled
DCU Prefetcher: Enabled
Data Reuse Optimisation: Enabled
MPS and ACPI MADT ordering: Modern ordering
Max CPUID Value Limit: Disabled
Intel(R) Virtualization Tech: Enabled
Execute-Disable Bit Capability: Enabled
Intel AES-NI: Disabled
                ^^^^^^^^
If you are running LUKS, that would explain low HDD sequential throughput (HDD random throughput is limited by seek latency, not the CPU) and low SSD throughput across the board. Enable AES-NI in the BIOS settings and test again.
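
Afterwards, confirm the kernel sees the instruction set and measure the difference (cryptsetup's built-in benchmark needs version 1.6 or later; openssl works anywhere):

    # 'aes' should appear in the CPU flags once the BIOS option is enabled
    grep -m1 -o aes /proc/cpuinfo
    # AES throughput via OpenSSL's EVP layer, which uses AES-NI when present
    openssl speed -evp aes-128-cbc
    # With cryptsetup 1.6+:
    cryptsetup benchmark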


Simultaneous Multi-Threading: Enabled
Active Processor Cores: All
Intel(R) EIST tech: Disabled
Intel(R) C-STATE tech: Disabled
Clock Spread Spectrum: Disabled

Are those the defaults? I typically reset to the BIOS default settings, then change as few of them as possible, each after careful consideration and followed by testing.


Different-sized drives make things more interesting. What arrangements
did you try? What did you settle on, and why?

I read a lot about ZFS before setting up this server and mostly used the
recommended ashift=12 and other settings to suit my needs.

Why raidz2?

To allow up to 2 disks to fail.

Without seeing your exact 'zpool create ...' invocation, it is difficult to comment.
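
If you no longer have it, ZFS keeps a log of pool-level commands ('tank' is a placeholder for your pool name):

    # The first entry is the original 'zpool create' invocation
    zpool history tank | head
    # And the ashift the vdevs actually got:
    zdb -C tank | grep ashift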


A ZFS volume with an ext4 file system?  Why?

ACL (access control list) support and future use of drbd.

Perhaps you should file bug reports/feature requests.
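
For what it's worth, ZFS on Linux (0.6.3 or later) can serve POSIX ACLs directly on a dataset, which might remove the need for ext4 on top ('tank/share' is illustrative):

    # Enable POSIX ACLs on the dataset itself
    zfs set acltype=posixacl tank/share
    # Store the ACL extended attributes efficiently
    zfs set xattr=sa tank/share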


So I hope to match Windows speed in file transfers.

There are many complex decisions to make; it's easy to make a mistake. Once you figure it out, either OS should give good results.


David

