Why don't I get a performance increase with the linux bonding driver in balance-alb mode in a "local network" configuration?


t35t0r
I've got the following system:

Linux server 2.6.26 #5 SMP Fri Feb 6 12:18:54 CST 2009 ppc64 PPC970FX,
altivec supported RackMac3,1 GNU/Linux

Both gigabit NICs are connected to a Cisco gigabit switch and bonded as follows:

% cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.5 (March 21, 2008)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0d:93:9e:2b:ca

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0d:93:9e:2b:cb
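
For reference, the bond was brought up roughly along these lines; the
address below is a placeholder and the exact init-script details differ
per distro:

# load the bonding driver in adaptive load balancing (balance-alb)
# mode with MII link monitoring every 100 ms, matching the output above
modprobe bonding mode=balance-alb miimon=100
ip addr add 192.168.0.10/24 dev bond0    # placeholder address
ip link set bond0 up
# enslave both gigabit NICs
ifenslave bond0 eth0 eth1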

Both eth0 and eth1 are running at 1000Mb/s full according to ethtool.
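
That was checked with something like:

# print negotiated speed and duplex for each slave
for nic in eth0 eth1; do
    ethtool $nic | egrep 'Speed|Duplex'
done
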
I start netserver (from netperf) on the server and then run netperf from
four other clients, each connected to the same switch by its own single
gigabit link, with:

netperf -l 45 -t TCP_STREAM -H server
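
The four streams are launched in parallel, roughly like this (the client
hostnames are placeholders):

# start all four TCP_STREAM tests at once and wait for them to finish
for c in client1 client2 client3 client4; do
    ssh "$c" netperf -l 45 -t TCP_STREAM -H server &
done
wait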

I get the following outputs:

client 1:

Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    45.04     405.68

client 2:

Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    45.01     216.07

client 3:

Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    45.87     143.93

client 4:

Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    45.05     164.05

Adding those up gives me:

$ echo "405.68+216.07+143.93+164.05" | bc -l
929.73

That is no more than a single gigabit link. Shouldn't I be able to get well over 1 Gbit/s with balance-alb mode?
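
For what it's worth, one thing to check is whether the receive traffic is
actually being spread across both slaves during a run, e.g. by sampling
the per-slave byte counters:

# average rx/tx rate per slave over a 2 s window
# (bytes to Mbit/s: delta * 8 / 2 / 10^6 = delta / 250000)
for nic in eth0 eth1; do
    rx1=$(cat /sys/class/net/$nic/statistics/rx_bytes)
    tx1=$(cat /sys/class/net/$nic/statistics/tx_bytes)
    sleep 2
    rx2=$(cat /sys/class/net/$nic/statistics/rx_bytes)
    tx2=$(cat /sys/class/net/$nic/statistics/tx_bytes)
    echo "$nic rx $(( (rx2-rx1)/250000 )) Mbit/s tx $(( (tx2-tx1)/250000 )) Mbit/s"
done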


Re: Why don't I get a performance increase with the linux bonding driver in balance-alb mode in a "local network" configuration?

kyron@neuralbs.com
Did you ask your question on, or search, the Beowulf mailing list? This one seems
familiar and is a recurring theme on that list.

Eric

> I've got the following system:
> [...]
> That is no more than a single gigabit link. Shouldn't I be able to
> get well over 1 Gbit/s with balance-alb mode?


