Linux based router for Gigabit traffic

on 22.08.2004 14:32:59 by Anantha Kiran

Hi,
I am doing a project in which I have to redirect traffic coming in on
one Ethernet card of a machine to one of the three remaining Ethernet
cards of the same machine, based on the source/destination IP addresses
and ports of the packet. I wrote a net_hook module to do this, and it
works fine at lower speeds, below about 400 Mbps. But the project goal
is to deal with gigabit traffic. I have used gigabit Ethernet cards and
a gigabit switch. When the traffic rate is more than 400 Mbps, packets
are dropped. I tested with an increased transmit queue length, but the
same problem remains. I have found that during the packet drops neither
memory nor CPU is one hundred percent utilised. So what resource is
running out when the packets are dropped?
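
To give a sense of what such a net_hook module involves, here is a minimal
sketch of a netfilter hook for a 2.6-era kernel. It is not the actual module
used here: the output device name "eth1", the classify() rule, and the hook
placement are only illustrative, and rebuilding the link-layer header for the
new device (which a real module must do before re-transmitting) is left out.

/*
 * Sketch of a netfilter hook that steers packets to another NIC based on
 * src/dst IP and TCP/UDP ports.  2.6-era kernel API; illustrative only.
 */
#include <linux/module.h>
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/ip.h>
#include <linux/in.h>

static struct net_device *out_dev;   /* e.g. "eth1", looked up at init time */

/* Dummy classification rule: hash src IP and ports onto an output index. */
static int classify(struct iphdr *iph, u16 sport, u16 dport)
{
        return (ntohl(iph->saddr) ^ ntohs(sport) ^ ntohs(dport)) % 3;
}

static unsigned int steer_hook(unsigned int hooknum, struct sk_buff **pskb,
                               const struct net_device *in,
                               const struct net_device *out,
                               int (*okfn)(struct sk_buff *))
{
        struct sk_buff *skb = *pskb;
        struct iphdr *iph = skb->nh.iph;
        u16 *ports;

        if (iph->protocol != IPPROTO_TCP && iph->protocol != IPPROTO_UDP)
                return NF_ACCEPT;

        ports = (u16 *)((u32 *)iph + iph->ihl);  /* ports[0]=src, ports[1]=dst */

        if (classify(iph, ports[0], ports[1]) == 0) {
                /* A real module must also rebuild the MAC header here. */
                skb->dev = out_dev;
                dev_queue_xmit(skb);
                return NF_STOLEN;        /* we consumed the skb ourselves */
        }
        return NF_ACCEPT;                /* everything else takes the normal path */
}

static struct nf_hook_ops steer_ops = {
        .hook     = steer_hook,
        .pf       = PF_INET,
        .hooknum  = NF_IP_PRE_ROUTING,
        .priority = NF_IP_PRI_FIRST,
};

static int __init steer_init(void)
{
        int err;

        out_dev = dev_get_by_name("eth1");
        if (!out_dev)
                return -ENODEV;
        err = nf_register_hook(&steer_ops);
        if (err)
                dev_put(out_dev);
        return err;
}

static void __exit steer_exit(void)
{
        nf_unregister_hook(&steer_ops);
        dev_put(out_dev);
}

module_init(steer_init);
module_exit(steer_exit);
MODULE_LICENSE("GPL");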

We have also done a test in which the module simply forwards packets
from "eth0" out through "eth1". The same packet dropping at higher
speeds happens there too.

Now we think it could be due to the PCI bus, but we are unable to find
out what PCI bus configuration our system is using. We have used
"lspci" to find that, but I am unable to interpret its output. I am
attaching the output file. Can anybody tell me how to interpret it, or
give some guidelines, like whom to mail, to find that out?

At last we tried to attack the problem in the following manner. The
idea is to install a router on my machine which can deal with gigabit
traffic. If that router is able to work with gigabit traffic, then the
packet drops are definitely not due to the PCI bus.

So can anybody please suggest an open-source router for Linux that can
deal with gigabit traffic, or at least give some suggestions to solve
this problem?
--
K.AnanthaKiran
D-108/HALL-7

[Attachment "lspcioutput": base64-encoded lspci output for the whole machine.
The entries relevant to this thread -- the 82540EM gigabit controller
(PRO/1000 MT Desktop), the 82801BD PRO/100 VE (LOM) controller, and the
82801BA/CA/DB PCI bridge they sit behind -- are quoted in Matti Aarnio's
reply below.]

Re: Linux based router for Gigabit traffic

on 22.08.2004 14:49:04 by Neil Horman

Anantha Kiran wrote:

>Hi,
>I am doing a project in which I have to redirect traffic coming in on
>one Ethernet card of a machine to one of the three remaining Ethernet
>cards of the same machine, based on the source/destination IP addresses
>and ports of the packet. [...]
>
>So can anybody please suggest an open-source router for Linux that can
>deal with gigabit traffic, or at least give some suggestions to solve
>this problem?
>
Quite simply, a general-purpose CPU system isn't normally built to
handle network traffic at gigabit rates, especially not from multiple
ports at once. If you really want a line-rate gigabit router, you need
a custom ASIC to do the hardware offload of that work. Several networking
companies make just such hardware (of course, most only sell the chips,
not full PCI cards with integrated NICs). Check out Broadcom, AMCC, or
Intel; they all have hardware like what you're looking for.
Neil

Re: Linux based router for Gigabit traffic

on 22.08.2004 17:07:04 by Matti Aarnio

On Sun, Aug 22, 2004 at 06:02:59PM +0530, Anantha Kiran wrote:
> Hi,
> I am doing a project in which I have to redirect traffic coming in on
> one Ethernet card of a machine to one of the three remaining Ethernet
> cards of the same machine, based on the source/destination IP addresses
> and ports of the packet. [...] When the traffic rate is more than
> 400 Mbps, packets are dropped. [...]
>
> We have also done a test in which the module simply forwards packets
> from "eth0" out through "eth1". The same packet dropping at higher
> speeds happens there too.

The lspci output shows that you have one GE and one FE interface.
Getting through more than 100 Mbps should not be possible with that setup.


01:01.0 Ethernet controller: Intel Corp. 82540EM Gigabit Ethernet Controller (rev 02)
Subsystem: Intel Corp. PRO/1000 MT Desktop Adapter
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR+ FastB2B-
Status: Cap+ 66Mhz+ UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR-
Latency: 32 (63750ns min), cache line size 08

That card has a PCI bus of:
PCI 2.3 32-bit 33/66 MHz

which means it can handle up to about 260 MB/s through the bus.
If about 10% of that is transaction overhead, and the remainder is
split 50/50 between receive and transmit, that gives an upper limit
of roughly 115 MB/s per direction, which is about 0.92 Gbit/s.

But if the bus runs at a mere 33 MHz ...
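
That arithmetic is easy to double-check. A quick sketch, using the same
rough 10% transaction-overhead assumption as above (the exact numbers come
out slightly different because 260 MB/s is itself rounded):

/* Back-of-the-envelope PCI throughput budget for a routing workload. */
#include <stdio.h>

static void pci_budget(const char *name, double mhz, double bits)
{
        double raw_mbs = mhz * bits / 8.0;   /* theoretical MB/s on the bus */
        double usable  = raw_mbs * 0.90;     /* minus ~10% transaction overhead */
        double per_dir = usable / 2.0;       /* split 50/50 between rx and tx */
        double gbit    = per_dir * 8.0 / 1000.0;

        printf("%-22s raw %6.0f MB/s, ~%5.1f MB/s per direction, ~%.2f Gbit/s\n",
               name, raw_mbs, per_dir, gbit);
}

int main(void)
{
        pci_budget("PCI 32-bit / 33 MHz", 33, 32);  /* ~0.48 Gbit/s before other bus traffic */
        pci_budget("PCI 32-bit / 66 MHz", 66, 32);  /* ~0.95 Gbit/s, close to the 0.92 above */
        pci_budget("PCI-X 64-bit / 133 MHz", 133, 64);
        return 0;
}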


01:08.0 Ethernet controller: Intel Corp. 82801BD PRO/100 VE (LOM) Ethernet Controller (rev 81)
Subsystem: Intel Corp.: Unknown device 3009
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR+ FastB2B-
Status: Cap+ 66Mhz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR-
Latency: 32 (2000ns min, 14000ns max), cache line size 08

"LOM" cards are something that are built into the system boards.
They communicate command and management traffic thru the PCI bus,
but payload traffic goes thru separate channel to main memory hub.

The LOM cards are usually giga-ethers, which means the pci-ids are
something most strange.. Also this is located into same PCI bus as
the PRO/1000 MT card above, and the bus can't alter speed in between
different cards -- which means that the bus is running at 33 MHz.

Possibly that LOM "card" really is some GE card, and it can run
full GE speed to main memory. However it will eat into PCI bus
bandwidth for the command traffic, although the data traffic flows
thru a LOM side-channel.


The PCI bridge between the main hub and the bus where those cards
sit reports:

00:1e.0 PCI bridge: Intel Corp. 82801BA/CA/DB PCI Bridge (rev 81) (prog-if 00 [Normal decode])
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B-
Status: Cap- 66Mhz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR+
Latency: 0
Bus: primary=00, secondary=01, subordinate=01, sec-latency=32


So that bus is running at 33 MHz, and a 32-bit GE card in it
can get around 0.4 Gbit/s through it (remembering that the bus also
carries traffic other than the Ethernet traffic itself).


> Now we think it could be due to the PCI bus, but we are unable to find
> out what PCI bus configuration our system is using. [...]
>
> At last we tried to attack the problem in the following manner. The
> idea is to install a router on my machine which can deal with gigabit
> traffic. If that router is able to work with gigabit traffic, then the
> packet drops are definitely not due to the PCI bus.

A motherboard with two or more LOM GE interfaces MIGHT do it.
Those are usually available only with 10/100/1000Base-T, not
with optical (GBIC) interfaces.

Another approach is to use a system with multiple PCI-X buses
and add PCI-X GE cards.

For example:
http://www.supermicro.com/products/motherboard/Xeon800/E7520/X6DH8-XG2.cfm


> So can anybody please suggest an open-source router for Linux that can
> deal with gigabit traffic, or at least give some suggestions to solve
> this problem?
> --
> K.AnanthaKiran
> D-108/HALL-7

/Matti Aarnio

Re: Linux based router for Gigabit traffic

on 22.08.2004 20:14:51 by Anantha Kiran

Yes, we are generating the traffic using a programme which takes the
packet size as an argument, generates traffic towards the machine we
specify, and prints statistics such as how many packets were sent and
at what speed.
That programme uses a dump file of packets (collected over a long
time), searches it for a packet of the size we specified, and then
sends that packet in an infinite loop while accounting for the time
and the number of packets.

So we have taken care that we really are generating Gbps traffic.
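
As a point of reference, a much-simplified analogue of such a generator is
sketched below: instead of replaying a packet from a capture file it just
blasts a fixed-size UDP datagram at a target and reports packets per second
and Mbit/s. The target address, port, and packet count are placeholders, and
the payload-only Mbit/s figure understates the on-wire rate.

/* Crude UDP packet blaster: send N datagrams of a given size, report the rate. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/time.h>

int main(int argc, char **argv)
{
        int size = argc > 1 ? atoi(argv[1]) : 300;   /* payload size in bytes */
        long long count = 5000000, sent = 0;
        char *buf = calloc(1, size);
        struct sockaddr_in dst;
        struct timeval t0, t1;
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9);                     /* "discard" port */
        inet_pton(AF_INET, "192.168.1.2", &dst.sin_addr);  /* placeholder target */

        gettimeofday(&t0, NULL);
        while (sent < count) {
                sendto(s, buf, size, 0, (struct sockaddr *)&dst, sizeof(dst));
                sent++;
        }
        gettimeofday(&t1, NULL);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("%lld packets in %.2f s: %.0f pkts/s, %.1f Mbit/s payload\n",
               sent, secs, sent / secs, sent * size * 8.0 / secs / 1e6);
        return 0;
}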


On Sun, 22 Aug 2004 12:32:04 -0400, Neil Horman wrote:
>
>
> alex@pilosoft.com wrote:
>
> >>I am doing a project in which I have to redirect traffic coming in on
> >>one Ethernet card of a machine to one of the three remaining Ethernet
> >>cards of the same machine, based on the source/destination IP addresses
> >>and ports of the packet. [...]
> >>
> >Are you using NAPI?
> >
> >What is the CPU utilization when packets start being dropped?
> >
> >I route full GE worth of traffic on a low-end machine without a problem.
> >
> >-alex
> >
> >
> Are you sure you're actually routing a gigabit worth of bandwidth
> through your machine, or just the small percent of frames that happen to
> be on your gigabit network?
> Neil
>


--
K.AnanthaKiran
D-108/HALL-7

Re: Linux based router for Gigabit traffic

on 22.08.2004 20:25:29 by Anantha Kiran

No, we are not using NAPI for this. We wrote a net_hook module which
catches the packets before letting them go up to the higher layers.

CPU utilization is 70%+ while packets are being dropped.

Can you tell me the configuration of your low-end machine, like the
PCI bus and other details?


On Sun, 22 Aug 2004 08:59:14 -0400 (EDT), alex@pilosoft.com
wrote:
> > I am doing a project in which I have to redirect traffic coming in on
> > one Ethernet card of a machine to one of the three remaining Ethernet
> > cards, based on the source/destination IP addresses and ports of the
> > packet. [...]
> Are you using NAPI?
>
> What is the CPU utilization when packets start being dropped?
>
> I route full GE worth of traffic on a low-end machine without a problem.
>
> -alex
>
>


--
K.AnanthaKiran
D-108/HALL-7

Re: Linux based router for Gigabit traffic

on 23.08.2004 07:56:07 by davem

On Sun, 22 Aug 2004 08:49:04 -0400
Neil Horman wrote:

> Quite simply a general purpose CPU system isn't normally built to
> handle network traffic at gigabit rates, especially not from multiple
> ports at once. If you really want a line rate gigabit router, you need
> custom ASIC to do hardware offload of that work. Several networking
> companies make just such hardware (of course most only sell the chips,
> not full pci cards with integrated NICS). Check out broadcom, AMCC or
> Intel, they all have hardware like what you're looking for.

Gigabit routing is possible with commodity hardware. It's a software
problem for the cases that go fast enough currently.

Re: Linux based router for Gigabit traffic

on 23.08.2004 09:11:24 by Mikael Abrahamsson

On Sun, 22 Aug 2004, David S. Miller wrote:

> Gigabit routing is possible with commodity hardware. It's a software
> problem for the cases that go fast enough currently.

To properly handle full duplex gigabit speeds at 300 bytes average packet
size we need approx 800k packets per second.

So I guess the question should be rephrased into "how many packets per
second can you get commodity architecture to do?" because that's of more
interest than how many megabit/s we can handle (as it's usually the
limiting factor).
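
Worked out explicitly (the 300-byte average is the assumption from the
paragraph above, not a measured value):

/* Packets per second needed for gigabit line rate at a given average size. */
#include <stdio.h>

int main(void)
{
        double line_rate = 1e9;      /* 1 Gbit/s, one direction */
        double avg_bytes = 300.0;    /* assumed average packet size */
        double pps = line_rate / (avg_bytes * 8.0);

        printf("one direction: %.0f pkts/s\n", pps);        /* ~417k */
        printf("full duplex:   %.0f pkts/s\n", 2.0 * pps);  /* ~833k, i.e. "approx 800k" */
        return 0;
}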

--
Mikael Abrahamsson email: swmike@swm.pp.se


Re: Linux based router for Gigabit traffic

on 23.08.2004 09:58:58 by urgrue

On 2004.08.23 08:56, David S. Miller wrote:
> On Sun, 22 Aug 2004 08:49:04 -0400
> Neil Horman wrote:
>
> > Quite simply, a general-purpose CPU system isn't normally built to
> > handle network traffic at gigabit rates, especially not from multiple
> > ports at once. If you really want a line-rate gigabit router, you need
> > a custom ASIC to do the hardware offload of that work. [...]
>
> Gigabit routing is possible with commodity hardware. It's a software
> problem for the cases that go fast enough currently.
> -

Possible, but only up to about 2 Gbit/s. Correct me if I'm wrong, but
one full-duplex gigabit card alone can fill up the entire theoretical
maximum bandwidth of the PCI bus, which is only 266 MB/s.
Four gigabit cards on a normal PCI bus are far more than PCI can handle.

Re: Linux based router for Gigabit traffic

on 23.08.2004 13:17:16 by Neil Horman

David S. Miller wrote:
> On Sun, 22 Aug 2004 08:49:04 -0400
> Neil Horman wrote:
>
>
>>Quite simply, a general-purpose CPU system isn't normally built to
>>handle network traffic at gigabit rates, especially not from multiple
>>ports at once. [...]
>
>
> Gigabit routing is possible with commodity hardware. It's a software
> problem for the cases that go fast enough currently.
Really? What are PCI bus transfer rates up to these days? I haven't
looked into it in quite a while.
N3il

--
/***************************************************
*Neil Horman
*Software Engineer
*Red Hat, Inc.
*nhorman@redhat.com
*gpg keyid: 1024D / 0x92A74FA1
*http://pgp.mit.edu
***************************************************/

Re: Linux based router for Gigabit traffic

on 23.08.2004 13:36:41 by urgrue

On 2004.08.23 14:17, Neil Horman wrote:
> Really? What are PCI bus transfer rates up to these days? I havent
> looked into it in quite a while.
> N3il

Your average 33 MHz PCI bus has a paltry maximum bandwidth of about
1 Gbit/s. Full-duplex gigabit routing on more than one interface pretty
much requires PCI-X or PCI Express. Here's a little table:
PCI32 33MHz  =  133MB/s
PCI32 66MHz  =  266MB/s
PCI64 33MHz  =  266MB/s
PCI64 66MHz  =  533MB/s
PCI-X 133MHz = 1066MB/s

Also, gigabit routing on your average unmodified Linux distribution
requires a VERY hefty CPU to keep up, even with just two or three
gigabit interfaces.
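
Those table entries follow directly from bus width divided by eight times the
clock; the snippet below also shows roughly how many full-duplex gigabit ports
(about 2 x 125 MB/s each, before any overhead) each bus could theoretically feed.

/* Theoretical bus bandwidth per PCI flavour vs. full-duplex GigE demand. */
#include <stdio.h>

static void bus(const char *name, double bits, double mhz)
{
        double mbs = bits / 8.0 * mhz;       /* MB/s, no protocol overhead */
        printf("%-15s %6.0f MB/s  ~%.1f full-duplex GigE ports\n",
               name, mbs, mbs / 250.0);
}

int main(void)
{
        bus("PCI32 / 33MHz",  32,  33.33);   /* ~133 MB/s,  ~0.5 ports */
        bus("PCI32 / 66MHz",  32,  66.66);   /* ~267 MB/s,  ~1.1 ports */
        bus("PCI64 / 33MHz",  64,  33.33);   /* ~267 MB/s */
        bus("PCI64 / 66MHz",  64,  66.66);   /* ~533 MB/s */
        bus("PCI-X / 133MHz", 64, 133.33);   /* ~1067 MB/s, ~4.3 ports */
        return 0;
}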

Re: Linux based router for Gigabit traffic

on 23.08.2004 15:10:31 by Adam Lang

Well, as was mentioned, the question is "what type of commodity hardware".
You can easily buy motherboards now with PCI-X or PCI Express.

Then you also have to consider USB network cards, which I assume are on a
different bus system as well. It all depends on whether you are in a
controlled environment or have to work with the lowest common denominator.
----- Original Message -----
From: "Mikael Abrahamsson"
To:
Cc:
Sent: Monday, August 23, 2004 3:11 AM
Subject: Re: Linux based router for Gigabit traffic


> On Sun, 22 Aug 2004, David S. Miller wrote:
>
> > Gigabit routing is possible with commodity hardware. It's a software
> > problem for the cases that go fast enough currently.
>
> To properly handle full duplex gigabit speeds at 300 bytes average packet
> size we need approx 800k packets per second.
>
> So I guess the question should be rephrased into "how many packets per
> second can you get commodity architecture to do?" because that's of more
> interest than how many megabit/s we can handle (as it's usually the
> limiting factor).
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
>


Re: Linux based router for Gigabit traffic

on 23.08.2004 17:31:58 by smooge

On Sun, 22 Aug 2004 08:49:04 -0400, Neil Horman wrote:
> Anantha Kiran wrote:
>
> >Hi
> >I am doing a project, in which i have to redirect traffic coming from
>
> Quite simply a general purpose CPU system isn't normally built to
> handle network traffic at gigabit rates, especially not from multiple
> ports at once. If you really want a line rate gigabit router, you need

Modern CPUs can handle gigabit traffic, and by the numbers even 10-gigabit
traffic, for what he is wanting (if you use PCI-X or PCI Express).

The problem is mainly getting the PCI bus and kernel interrupt handling
to deal with that speed. We are able to push 750 Mbit/s through e1000
cards through netfilter on HTTP loads. The main problem is that the card
generates an IRQ per packet and the softirq work takes up all the CPU.
On mixed network loads we are at about 400 Mbit/s as well, but it isn't
the bus that is loaded so much as the number of packets per second that
the card can handle.

To find out where the bottleneck is on the system, make sure your box
has the right tools (a top that shows softirq time, a newer mpstat, and
other items). Also make sure that the motherboard bus is PCI-X or better.
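
One concrete way to watch the softirq path, in the spirit of the advice above:
/proc/net/softnet_stat has one line of hex counters per CPU, and on 2.6-era
kernels the first three fields are packets processed, packets dropped because
the input queue was full, and "time squeeze" events where the softirq ran out
of budget before the queue was empty. A small sketch (check the field layout
against your own kernel):

/* Dump per-CPU softnet counters: processed / dropped / time_squeeze. */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/net/softnet_stat", "r");
        char line[256];
        int cpu = 0;

        if (!f) {
                perror("/proc/net/softnet_stat");
                return 1;
        }
        while (fgets(line, sizeof(line), f)) {
                unsigned int processed, dropped, squeezed;

                if (sscanf(line, "%x %x %x", &processed, &dropped, &squeezed) == 3)
                        printf("cpu%d: processed=%u dropped=%u time_squeeze=%u\n",
                               cpu, processed, dropped, squeezed);
                cpu++;
        }
        fclose(f);
        return 0;
}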

Re: Linux based router for Gigabit traffic

on 23.08.2004 17:39:39 by smooge

On Sun, 22 Aug 2004 23:55:29 +0530, Anantha Kiran
wrote:
> No, we are not using NAPI for this. We wrote a net_hook module which
> catches the packets before letting them go up to the higher layers.
>
> CPU utilization is 70%+ while packets are being dropped.
>
> Can you tell me the configuration of your low-end machine, like the
> PCI bus and other details?
>

Ours are Intel(R) Xeon(TM) 2.40GHz CPUs, and the motherboard has a PCI-X
chipset on it. We have one Intel fibre-optic Gig-E card on the 02:01
PCI-X bus and the other on the 03:01 PCI-X bus. I see a limit of about
750 Mbit/s on a fixed load (HTTP accesses for specific boxes) and 400
Mbit/s on a mixed load (ICMP/UDP/TCP). In each case the maximum looks to
be not so much the bus speed as the soft-irq limit (which is what mpstat
says is using all the resources). The number of packets per second that
ifconfig and other tools give me looks about the same, so I am thinking
it isn't bus width at the moment as much as IRQs/packets per second.

The kernels are NAPI-compiled, but I really haven't figured out whether
using netfilter on these mixed loads 'disables' it.

--
Stephen J Smoogen.
Professional System Administrator

DNS "named" question

on 24.08.2004 19:08:11 by Tony Gogoi

Hello,

This question is more like hitting in the dark, I guess.

We have a primary and a secondary DNS server. Twice or thrice so far,
"named" has failed on its own. On those occasions, I simply typed
"/etc/rc.d/init.d/named restart"; it showed "STOP - FAILED", "START - OK",
after which things were normal.

I was wondering if anyone could give me pointers on what could cause
"named" to go down simultaneously on both the primary and secondary DNS
servers.
Individually, each of the servers functions properly. So there must have
been some event (which went unlogged) that triggered both servers to shut
down "named"?

There was nothing abnormal in the log files. Also, "named" was down on
both the primary and secondary DNS servers on each occasion.

Thanks
Tony



Re: DNS "named" question

on 24.08.2004 19:21:35 by Scott Taylor

Tony Gogoi said:
>
> Hello,
>
> This question is more like hitting in the dark, I guess.
>
> We have a primary and a secondary DNS server. Twice or thrice so far,
> "named" has failed on its own. On those occasions, I simply typed
> "/etc/rc.d/init.d/named restart"; it showed "STOP - FAILED", "START - OK",
> after which things were normal.
>
> I was wondering if anyone could give me pointers on what could cause
> "named" to go down simultaneously on both the primary and secondary DNS
> servers.

Probably a bad record, spelling mistake, typo, etc. in your zone files.
nslint is your friend in this case.

> Individually, each of the servers functions properly. So there must have
> been some event (which went unlogged) that triggered both servers to shut
> down "named"?
>
> There was nothing abnormal in the log files. Also, "named" was down on
> both the primary and secondary DNS servers on each occasion.

Depending on the undisclosed version of your undisclosed DNS software,
there is probably a log setting that can help you debug this. Either way,
you should at least get a 'named died/shutdown/killed' message some place.


Re: DNS "named" question

on 24.08.2004 19:33:55 by Bradley Hook

Tony Gogoi wrote:
> Hello,
>
> We have a primary and a secondary DNS server. Twice or thrice so far,
> "named" has failed on its own. [...]
>
> There was nothing abnormal in the log files. Also, "named" was down on
> both the primary and secondary DNS servers on each occasion.
>

Try running something like this:

# named -d 255 -u named -g > ~/named.log 2>&1 &

(note that you need a user called 'named' for that to work)

The next time named crashes, take a look at the last few lines of
~/named.log and see if it gives you any clue as to where the problem is.
You can always turn up the debug level (the 255) to get more information
about what is going on.

~Bradley

Re: Linux based router for Gigabit traffic

on 25.08.2004 09:48:03 by Stephen Samuel

An HTTP load is pretty much unidirectional (outbound), so that
700 Mbit/s is the total system load.

Anantha wants to ROUTE 1 gigabit of traffic -- that's
1 Gbit in + 1 Gbit out = 2 Gbit total.

If you divide your 700 Mbit/s HTTP figure by two, you get
a bit less than the 400 Mbit/s that he's claiming to have
gotten so far.

A quick overview:

Matti's bus bandwidth calcs indicate that standard PCI won't
do the job in terms of theoretical bandwidth capability.
Once you switch up to PCI Express, you'll have the
bus bandwidth, but you may end up with a CPU limit
(the soft interrupts).
At that point, you might have to go to real-time Linux
and, possibly, custom drivers. If you can get away from
the kernel task-switching overhead for each packet,
you should be OK in terms of CPU capability.




Stephen J. Smoogen wrote:
> On Sun, 22 Aug 2004 23:55:29 +0530, Anantha Kiran
> wrote:
>
>>No, we are not using NAPI for this. [...]
>>
>>CPU utilization is 70%+ while packets are being dropped.
>>
>
>
> Ours are Intel(R) Xeon(TM) 2.40GHz CPUs, and the motherboard has a PCI-X
> chipset on it. We have one Intel fibre-optic Gig-E card on the 02:01
> PCI-X bus and the other on the 03:01 PCI-X bus. I see a limit of about
> 750 Mbit/s on a fixed load (HTTP accesses for specific boxes) and 400
> Mbit/s on a mixed load (ICMP/UDP/TCP). In each case the maximum looks to
> be not so much the bus speed as the soft-irq limit (which is what mpstat
> says is using all the resources). The number of packets per second that
> ifconfig and other tools give me looks about the same, so I am thinking
> it isn't bus width at the moment as much as IRQs/packets per second.
>
> The kernels are NAPI-compiled, but I really haven't figured out whether
> using netfilter on these mixed loads 'disables' it.
>
>


--
Stephen Samuel +1(604)876-0426 samuel@bcgreen.com
http://www.bcgreen.com/~samuel/
Powerful committed communication. Transformation touching
the jewel within each person and bringing it to light.

Re: Linux based router for Gigabit traffic

on 27.08.2004 20:01:52 by neolozer

Anantha Kiran writes:

>
> Hi,
> I am doing a project in which I have to redirect traffic coming in on
> one Ethernet card of a machine to one of the three remaining Ethernet
> cards of the same machine, based on the source/destination IP addresses
> and ports of the packet. I wrote a net_hook module to do this, and it
> works fine at lower speeds, below about 400 Mbps. But the project goal
> is to deal with gigabit traffic.
>
>...
>
> So can anybody please suggest an open-source router for Linux that can
> deal with gigabit traffic, or at least give some suggestions to solve
> this problem?

I have not used any gigabit hardware, but I know that on 100 Mbps cards
throughput at all speeds is noticeably improved by enabling full duplex,
and that is what I suggest to you. I believe you set it with ifconfig,
but I have not done so through ifconfig since 2.2.x. Nowadays most
distros let you make those changes while setting up the box during
installation.

neolozer
