Hardware: ZyXEL PLA-4201 Twin Pack - throughput test
It is not easy to add wires or connections once a flat or house is built. Even when people think about device placement, they can end up with a missing Ethernet connection for some device. A WiFi connection is not usable for every kind of network communication due to its nature: WiFi has limited throughput, jitter and other problems caused by the "wireless" medium. So is a powerline connection the solution to this problem? At the beginning of this document I can only say "I don't know", but I will try to find the answer.
I bought a ZyXEL PLA-4201 recently and I used this device during the tests to find answers. I know how this device works in theory, but I don't know how it behaves in the real world. In theory it should modulate all data onto the mains wiring, hoping that the second device "catches" the data without too much distortion, but there are several problems:
- switching power supplies and power regulators (light dimmers, vacuum cleaner "speed" settings, etc.) create serious noise covering a wide range of frequencies
- power cables are not optimized to transfer "data"; most filters kill anything other than 50/60 Hz
- if the sockets are not on the same power line, then communication is (as expected) not possible
- communication cannot pass through transformers, surge protectors and other similar devices
Power cables are definitely not a "polite" place for Ethernet data, but why not test it? In the office I heard opinions like "it will only work on a 10 meter cable or shorter" and "noise kills all transferred data". Will they turn out to be true?
I prepared several tests. Here is a description of the tests:
Methodology of synthetic tests
First of all I waited for everything to settle down, and then at each test point I executed the following tests. I ran the first test once more if it resulted in high response times; it looks like the ZyXEL powerline adapters need some traffic to "sync" and configure the connection.
Test #1
ping -c 10 server
A simple ping test to see the reaction of the connection with as little traffic as possible (no other communication was flowing through the extenders during this or any other test).
Test #2
ping -c 100000 -f server
A ping flood test with the default ping size. This test generates relatively small packets at the speed of the source network card. The purpose of this test is to see packet loss and response times if queuing occurs.
Test #3
ping -c 100000 -f -s 1400 server
Same as Test #2 but with a packet size close to the MTU. I selected a slightly smaller size because devices from some vendors show great throughput at exactly the MTU size but cannot maintain it when the packet size is smaller.
Test #4
netperf -l 60 -H host
"One minute" classic netperf test to see the maximum throughput.
Test #5
netperf -t omni -j -l 60 -H host -- -d rr
Omni netperf test, mainly to see the number of transactions, but the other results of this test are also interesting.
Test #6
netperf -t omni -j -l 60 -H host -- -d rr -O \
  "MIN_LATENCY,MAX_LATENCY,P50_LATENCY,P90_LATENCY,P99_LATENCY,MEAN_LATENCY,STDDEV_LATENCY"
The same omni test as in #5 but configured to show latency. I expect tests #2 and #3 to show min/avg/max, but it is also interesting to see latency as percentiles (e.g. a few delayed packets can affect the average, but they will be omitted from the percentile output).
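For repeatability the whole battery can be scripted. This is only a minimal sketch under assumptions not stated above: the target is reachable as "server", netserver is already running on it, the flood pings require root, and the ./results directory name is made up for the example.
#!/bin/sh
# Run the synthetic test battery against one test point and keep the raw output.
HOST=server
OUT=./results/$(date +%Y%m%d-%H%M%S)
mkdir -p "$OUT"

# Test 1: light ping (repeat if the first run shows unusually high response times)
ping -c 10 "$HOST" | tee "$OUT/test1.txt"

# Test 2: flood ping, default packet size
ping -c 100000 -f "$HOST" | tee "$OUT/test2.txt"

# Test 3: flood ping, packet size close to the MTU
ping -c 100000 -f -s 1400 "$HOST" | tee "$OUT/test3.txt"

# Test 4: one-minute TCP stream throughput
netperf -l 60 -H "$HOST" | tee "$OUT/test4.txt"

# Test 5: omni request/response test (transactions per second)
netperf -t omni -j -l 60 -H "$HOST" -- -d rr | tee "$OUT/test5.txt"

# Test 6: the same omni test, reporting latency percentiles
netperf -t omni -j -l 60 -H "$HOST" -- -d rr \
  -O "MIN_LATENCY,MAX_LATENCY,P50_LATENCY,P90_LATENCY,P99_LATENCY,MEAN_LATENCY,STDDEV_LATENCY" \
  | tee "$OUT/test6.txt"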
Direct connection test
First of all I needed reference data. I connected the two included 1 meter long Ethernet cables together and plugged one end directly into the server; the other end I connected to my laptop.
The computers negotiated 1000baseTx-FD with flow control, but later I discovered that the ZyXEL PLA-4201 offers only a 100baseTx-FD (flow control) connection. The packaging of this product is a little bit misleading: it claims "500Mbps" throughput and support for 10/100/1000baseTx, but the Ethernet port on the device is 10/100baseTx only. The 500Mbps technology means that you have more "power" to survive interference and that one powerline segment can handle a higher total throughput, resulting in seamless communication between several devices at the same time. The tested powerline devices provide hardware flow control on their Ethernet ports. This is a very good benefit, as the computer knows the possible transmission speed depending on the powerline status.
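The negotiated link speed and the pause (flow control) settings can be verified from the client side; a minimal sketch, assuming the adapter is plugged into the Linux interface eth0:
# Negotiated speed/duplex: expect 100Mb/s, Full duplex when connected to the PLA-4201
ethtool eth0

# Pause frame (hardware flow control) parameters of the link
ethtool -a eth0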
I tested using following devices:
Server:
- Asrock ION-330HT
- lan: NVIDIA Corporation MCP79 Ethernet
Client:
- HP ProBook 6555b
- lan: Marvell Technology Group Ltd. Yukon Optima 88E8059 [PCIe Gigabit Ethernet Controller with AVB]
- wlan: Broadcom Corporation BCM43224 802.11a/b/g/n (rev 01)
WiFi access point:
- Asus RT-N10U
- Firmware: DD-WRT v24-sp2 (02/11/13) big
- Network: N-Only (40 MHz wide channel, but due to limitations of the connected devices only about 65 Mbps raw speed was used; see the check sketched below)
- Encryption: WPA2/AES with recommended "N Only" configuration
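On the Linux client the actually negotiated raw rate can be checked like this; a minimal sketch, assuming the wireless interface is called wlan0 and uses a mac80211 driver (with the Broadcom proprietary driver the output may differ):
# Show association details, including the current tx bitrate (~65 Mbit/s in this setup)
iw dev wlan0 link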
100baseTx-FD
A direct connection using a CAT5 cable can provide excellent response times. Thanks to this we can see almost 2500 transactions per second. Throughput is also very good. The daemons on both sides run in user space, so this measurement includes the delay caused by the driver, the kernel and process scheduling.
Test 1:
10 packets transmitted, 10 received, 0% packet loss, time 8998ms rtt min/avg/max/mdev = 0.304/0.365/0.442/0.038 ms
Test 2:
10000 packets transmitted, 10000 received, 0% packet loss, time 4248ms rtt min/avg/max/mdev = 0.206/0.300/0.555/0.032 ms, ipg/ewma 0.424/0.303 ms
Test 3:
10000 packets transmitted, 10000 received, 0% packet loss, time 7185ms rtt min/avg/max/mdev = 0.480/0.579/0.849/0.042 ms, ipg/ewma 0.718/0.562 ms
Test 4:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.27      94.15
Test 5:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Local       Local       Remote      Remote      Request  Response  Elapsed  Throughput  Throughput
Send Socket Recv Socket Recv Socket Send Socket Size     Size      Time                 Units
Size        Size        Size        Size        Bytes    Bytes     (sec)
Final       Final       Final       Final
23400       87380       87380       22600       1        1         60.00    2445.96     Trans/s
Test 6:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Minimum       Maximum       50th          90th          99th          Mean          Stddev
Latency       Latency       Percentile    Percentile    Percentile    Latency       Latency
Microseconds  Microseconds  Latency       Latency       Latency       Microseconds  Microseconds
                            Microseconds  Microseconds  Microseconds
209           3627          410           429           486           405.87        41.78
WiFi
As I mentioned earlier, WiFi has limited throughput, clearly shown by Tests 4 and 5. Response time is good while there is only a limited amount of traffic; once the connection is saturated, response times jump to very high numbers. Response times around 200 ms can cause noticeable delay in applications, and for real-time applications this can be a serious problem.
Test 1:
10 packets transmitted, 10 received, 0% packet loss, time 9012ms rtt min/avg/max/mdev = 1.059/1.140/1.221/0.060 ms
Test 2:
10000 packets transmitted, 10000 received, 0% packet loss, time 11760ms rtt min/avg/max/mdev = 0.782/1.054/13.972/0.360 ms, pipe 2, ipg/ewma 1.176/1.062 ms
Test 3:
10000 packets transmitted, 9998 received, 0% packet loss, time 22377ms rtt min/avg/max/mdev = 1.575/2.506/218.434/8.988 ms, pipe 5, ipg/ewma 2.237/1.732 ms
Test 4:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.06      39.42
Test 5:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Local       Local       Remote      Remote      Request  Response  Elapsed  Throughput  Throughput
Send Socket Recv Socket Recv Socket Send Socket Size     Size      Time                 Units
Size        Size        Size        Size        Bytes    Bytes     (sec)
Final       Final       Final       Final
23400       87380       87380       22600       1        1         60.00    800.10      Trans/s
Test 6:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.54.1.7 () port 0 AF_INET
Minimum       Maximum       50th          90th          99th          Mean          Stddev
Latency       Latency       Percentile    Percentile    Percentile    Latency       Latency
Microseconds  Microseconds  Latency       Latency       Latency       Microseconds  Microseconds
                            Microseconds  Microseconds  Microseconds
805           360236        1082          1185          2896          1176.57       3287.30
Power line setup
At first I connected the devices as close to each other as possible to get reference data. Only a UPS was connected to the wiring used, as it powers the core components of my flat and it was not possible to disable it during the tests.
Then I removed one device and connected the test wiring.
At the end of a 7 meter long cable I connected a hair dryer.
At the end of a 5 meter long cable I connected an energy-saving light bulb.
At the end of a 1.5 meter long cable I connected a vacuum cleaner.
At the end of another 5 meter long cable I connected an energy-saving light bulb. This line contains only two wires; normally it should have three wires, but the powerline devices from ZyXEL use only two, so it should not be a problem.
As the last connection I used a 5 meter long cable designed for reading lamps.
Power line tests
"Side by Side"
During this test I disconnected all the selected "noise providers" and tested the devices while there was no other power consumption on the test wiring. I was a little bit surprised by the ~3.5 ms delay during this test. I repeated the first tests several times and also checked the configuration of the connected devices. I expected a better response time, but it looks like the delay is caused by the technology itself (conversion from Ethernet to powerline and back). While repeating the tests I also noticed how stable this response time is. The throughput test shows an excellent result (almost the same as with a direct Ethernet connection), but the number of transactions is lower due to the delay on the line.
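A quick sanity check of that last point: in a strictly serial request/response loop the transaction rate is bounded by the round-trip latency, roughly 1 / mean latency. With the ~1.6 ms mean latency measured in Test 6 below, that gives about 1 s / 1.6 ms ≈ 620 transactions per second, which matches the ~618 Trans/s from Test 5, while the direct Ethernet connection at ~0.4 ms allows the ~2450 Trans/s seen earlier.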
Test 1:
10 packets transmitted, 10 received, 0% packet loss, time 9015ms rtt min/avg/max/mdev = 3.523/3.735/5.223/0.501 ms
Test 2:
10000 packets transmitted, 10000 received, 0% packet loss, time 36176ms rtt min/avg/max/mdev = 3.242/3.500/7.530/0.429 ms, ipg/ewma 3.617/3.608 ms
Test 3:
10000 packets transmitted, 10000 received, 0% packet loss, time 132008ms rtt min/avg/max/mdev = 6.744/22.551/38.489/1.688 ms, pipe 4, ipg/ewma 13.202/22.842 ms
Test 4:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.27      93.99
Test 5:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Local       Local       Remote      Remote      Request  Response  Elapsed  Throughput  Throughput
Send Socket Recv Socket Recv Socket Send Socket Size     Size      Time                 Units
Size        Size        Size        Size        Bytes    Bytes     (sec)
Final       Final       Final       Final
23400       87380       87380       22600       1        1         60.00    618.73      Trans/s
Test 6:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.54.1.7 () port 0 AF_INET
Minimum       Maximum       50th          90th          99th          Mean          Stddev
Latency       Latency       Percentile    Percentile    Percentile    Latency       Latency
Microseconds  Microseconds  Latency       Latency       Latency       Microseconds  Microseconds
                            Microseconds  Microseconds  Microseconds
1309          13092         1557          1670          4234          1625.58       398.72
"Alone on line"
The values measured at the end of a ~24 meter long cable were almost identical to the values measured over a few centimeters of wire. The differences are within the expected measurement error. It looks like distance is not a big problem as long as there is no noise on the line. Throughput at the speed of a 100baseTx-FD connection with a delay of only ~3.5 ms is a good result. Unfortunately, most consumer devices nowadays use switching power supplies. Even if this is a benefit for the appliance itself (smaller size, less heat dissipated in the power supply, stable output, etc.), they are a source of serious noise. The results from this test can therefore be considered "lab" results. (Note: energy-saving light bulbs are also a source of serious noise, especially the cheap Chinese models.)
Test point #1
Test 1:
10 packets transmitted, 10 received, 0% packet loss, time 9015ms rtt min/avg/max/mdev = 3.444/3.652/5.306/0.554 ms
Test 2:
10000 packets transmitted, 10000 received, 0% packet loss, time 35869ms rtt min/avg/max/mdev = 3.222/3.481/8.522/0.416 ms, ipg/ewma 3.587/3.558 ms
Test 3:
10000 packets transmitted, 10000 received, 0% packet loss, time 137557ms rtt min/avg/max/mdev = 12.323/22.494/38.041/1.444 ms, pipe 4, ipg/ewma 13.757/22.311 ms
Test 4:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.30      93.95
Test 5:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Local       Local       Remote      Remote      Request  Response  Elapsed  Throughput  Throughput
Send Socket Recv Socket Recv Socket Send Socket Size     Size      Time                 Units
Size        Size        Size        Size        Bytes    Bytes     (sec)
Final       Final       Final       Final
23400       87380       87380       22600       1        1         60.00    618.60      Trans/s
Test 6:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.54.1.7 () port 0 AF_INET
Minimum       Maximum       50th          90th          99th          Mean          Stddev
Latency       Latency       Percentile    Percentile    Percentile    Latency       Latency
Microseconds  Microseconds  Latency       Latency       Latency       Microseconds  Microseconds
                            Microseconds  Microseconds  Microseconds
1304          8705          1559          1676          3800          1610.93       315.40
Test point #2
Test 1:
10 packets transmitted, 10 received, 0% packet loss, time 9014ms rtt min/avg/max/mdev = 3.353/3.450/3.494/0.085 ms
Test 2:
10000 packets transmitted, 10000 received, 0% packet loss, time 35962ms rtt min/avg/max/mdev = 3.265/3.493/11.572/0.445 ms, ipg/ewma 3.596/3.694 ms
Test 3:
10000 packets transmitted, 10000 received, 0% packet loss, time 136178ms rtt min/avg/max/mdev = 6.542/22.470/37.448/1.574 ms, pipe 4, ipg/ewma 13.619/22.390 ms
Test 4:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.33      94.00
Test 5:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Local       Local       Remote      Remote      Request  Response  Elapsed  Throughput  Throughput
Send Socket Recv Socket Recv Socket Send Socket Size     Size      Time                 Units
Size        Size        Size        Size        Bytes    Bytes     (sec)
Final       Final       Final       Final
23400       87380       87380       22600       1        1         60.00    618.98      Trans/s
Test 6:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Minimum       Maximum       50th          90th          99th          Mean          Stddev
Latency       Latency       Percentile    Percentile    Percentile    Latency       Latency
Microseconds  Microseconds  Latency       Latency       Latency       Microseconds  Microseconds
                            Microseconds  Microseconds  Microseconds
1301          6588          1559          1678          3725          1613.09       313.37
Test point #3
Test 1:
10 packets transmitted, 10 received, 0% packet loss, time 9015ms rtt min/avg/max/mdev = 3.449/3.477/3.532/0.084 ms
Test 2:
10000 packets transmitted, 10000 received, 0% packet loss, time 35758ms rtt min/avg/max/mdev = 3.252/3.470/9.408/0.404 ms, ipg/ewma 3.576/3.381 ms
Test 3:
10000 packets transmitted, 10000 received, 0% packet loss, time 131589ms rtt min/avg/max/mdev = 6.439/22.533/35.957/1.583 ms, pipe 4, ipg/ewma 13.160/22.297 ms
Test 4:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.33      93.99
Test 5:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.54.1.7 () port 0 AF_INET
Local       Local       Remote      Remote      Request  Response  Elapsed  Throughput  Throughput
Send Socket Recv Socket Recv Socket Send Socket Size     Size      Time                 Units
Size        Size        Size        Size        Bytes    Bytes     (sec)
Final       Final       Final       Final
23400       87380       87380       22600       1        1         60.00    618.56      Trans/s
Test 6:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.54.1.7 () port 0 AF_INET
Minimum       Maximum       50th          90th          99th          Mean          Stddev
Latency       Latency       Percentile    Percentile    Percentile    Latency       Latency
Microseconds  Microseconds  Latency       Latency       Latency       Microseconds  Microseconds
                            Microseconds  Microseconds  Microseconds
1306          7065          1560          1677          4315          1616.12       337.55
Test point #4
Test 1:
--- 10.54.1.7 ping statistics --- 10 packets transmitted, 10 received, 0% packet loss, time 9014ms rtt min/avg/max/mdev = 3.376/3.470/3.552/0.074 ms
Test 2:
10000 packets transmitted, 10000 received, 0% packet loss, time 36449ms rtt min/avg/max/mdev = 2.897/3.540/15.270/0.539 ms, pipe 2, ipg/ewma 3.645/3.478 ms
Test 3:
10000 packets transmitted, 10000 received, 0% packet loss, time 128001ms rtt min/avg/max/mdev = 12.291/22.570/38.633/1.883 ms, pipe 4, ipg/ewma 12.801/21.577 ms
Test 4:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.28      94.05
Test 5:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Local       Local       Remote      Remote      Request  Response  Elapsed  Throughput  Throughput
Send Socket Recv Socket Recv Socket Send Socket Size     Size      Time                 Units
Size        Size        Size        Size        Bytes    Bytes     (sec)
Final       Final       Final       Final
23400       87380       87380       22600       1        1         60.00    615.94      Trans/s
Test 6:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Minimum       Maximum       50th          90th          99th          Mean          Stddev
Latency       Latency       Percentile    Percentile    Percentile    Latency       Latency
Microseconds  Microseconds  Latency       Latency       Latency       Microseconds  Microseconds
                            Microseconds  Microseconds  Microseconds
1304          5839          1560          1682          3513          1618.76       314.13
Test point #5
Test 1:
10 packets transmitted, 10 received, 0% packet loss, time 9014ms rtt min/avg/max/mdev = 3.402/3.513/4.071/0.193 ms
Test 2:
10000 packets transmitted, 10000 received, 0% packet loss, time 36245ms rtt min/avg/max/mdev = 2.724/3.520/20.094/0.523 ms, ipg/ewma 3.624/3.408 ms
Test 3:
10000 packets transmitted, 10000 received, 0% packet loss, time 125892ms rtt min/avg/max/mdev = 6.197/22.571/38.242/1.885 ms, pipe 4, ipg/ewma 12.590/22.749 ms
Test 4:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.25      94.00
Test 5:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Local       Local       Remote      Remote      Request  Response  Elapsed  Throughput  Throughput
Send Socket Recv Socket Recv Socket Send Socket Size     Size      Time                 Units
Size        Size        Size        Size        Bytes    Bytes     (sec)
Final       Final       Final       Final
23400       87380       87380       22600       1        1         60.00    617.92      Trans/s
Test 6:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Minimum       Maximum       50th          90th          99th          Mean          Stddev
Latency       Latency       Percentile    Percentile    Percentile    Latency       Latency
Microseconds  Microseconds  Latency       Latency       Latency       Microseconds  Microseconds
                            Microseconds  Microseconds  Microseconds
1305          8036          1560          1679          3663          1615.62       320.26
"Noise on line"
I was not expecting this. The tested devices are robust enough to survive both serious noise and the length of the cable. I expected not only degradation of speed but also problems with response time, yet even when only half of the throughput was available the response time was almost unchanged. Also, there was no packet loss at all. Packet loss can slow down applications, so it is good to know that it does not occur even under difficult conditions.
Test point #1
Test 1:
10 packets transmitted, 10 received, 0% packet loss, time 9015ms rtt min/avg/max/mdev = 3.461/3.910/4.546/0.510 ms
Test 2:
10000 packets transmitted, 10000 received, 0% packet loss, time 37725ms rtt min/avg/max/mdev = 2.547/3.670/9.925/0.619 ms, ipg/ewma 3.772/3.534 ms
Test 3:
10000 packets transmitted, 10000 received, 0% packet loss, time 132143ms rtt min/avg/max/mdev = 5.947/22.587/43.401/2.914 ms, pipe 4, ipg/ewma 13.215/23.811 ms
Test 4:
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.34      93.50
Test 5:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Local       Local       Remote      Remote      Request  Response  Elapsed  Throughput  Throughput
Send Socket Recv Socket Recv Socket Send Socket Size     Size      Time                 Units
Size        Size        Size        Size        Bytes    Bytes     (sec)
Final       Final       Final       Final
23400       87380       87380       22600       1        1         60.00    594.37      Trans/s
Test 6:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Minimum       Maximum       50th          90th          99th          Mean          Stddev
Latency       Latency       Percentile    Percentile    Percentile    Latency       Latency
Microseconds  Microseconds  Latency       Latency       Latency       Microseconds  Microseconds
                            Microseconds  Microseconds  Microseconds
1310          9838          1570          2201          3004          1688.51       362.65
Test point #2
Test 1:
10 packets transmitted, 10 received, 0% packet loss, time 9015ms rtt min/avg/max/mdev = 3.409/3.732/4.645/0.450 ms
Test 2:
10000 packets transmitted, 10000 received, 0% packet loss, time 38524ms rtt min/avg/max/mdev = 3.269/3.749/10.248/0.667 ms, ipg/ewma 3.852/3.747 ms
Test 3:
10000 packets transmitted, 10000 received, 0% packet loss, time 137121ms rtt min/avg/max/mdev = 6.111/22.856/42.547/3.892 ms, pipe 4, ipg/ewma 13.713/22.175 ms
Test 4:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.24      91.68
Test 5:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Local       Local       Remote      Remote      Request  Response  Elapsed  Throughput  Throughput
Send Socket Recv Socket Recv Socket Send Socket Size     Size      Time                 Units
Size        Size        Size        Size        Bytes    Bytes     (sec)
Final       Final       Final       Final
23400       87380       87380       22600       1        1         60.00    581.57      Trans/s
Test 6:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Minimum       Maximum       50th          90th          99th          Mean          Stddev
Latency       Latency       Percentile    Percentile    Percentile    Latency       Latency
Microseconds  Microseconds  Latency       Latency       Latency       Microseconds  Microseconds
                            Microseconds  Microseconds  Microseconds
1326          7898          1581          2210          3714          1705.86       386.82
Test point #3
Test 1:
10 packets transmitted, 10 received, 0% packet loss, time 9015ms rtt min/avg/max/mdev = 3.453/3.748/4.536/0.415 ms
Test 2:
10000 packets transmitted, 10000 received, 0% packet loss, time 39535ms rtt min/avg/max/mdev = 2.414/3.848/9.581/0.785 ms, ipg/ewma 3.953/4.099 ms
Test 3:
10000 packets transmitted, 10000 received, 0% packet loss, time 136755ms rtt min/avg/max/mdev = 3.359/22.937/38.814/4.030 ms, pipe 4, ipg/ewma 13.676/21.981 ms
Test 4:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.36      77.04
Test 5:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Local       Local       Remote      Remote      Request  Response  Elapsed  Throughput  Throughput
Send Socket Recv Socket Recv Socket Send Socket Size     Size      Time                 Units
Size        Size        Size        Size        Bytes    Bytes     (sec)
Final       Final       Final       Final
23400       87380       87380       22600       1        1         60.00    564.57      Trans/s
Test 6:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.54.1.7 () port 0 AF_INET
Minimum       Maximum       50th          90th          99th          Mean          Stddev
Latency       Latency       Percentile    Percentile    Percentile    Latency       Latency
Microseconds  Microseconds  Latency       Latency       Latency       Microseconds  Microseconds
                            Microseconds  Microseconds  Microseconds
1381          9904          1600          2284          4480          1773.36       511.98
Test point #4
Test 1:
10 packets transmitted, 10 received, 0% packet loss, time 9015ms rtt min/avg/max/mdev = 3.445/4.142/5.676/0.868 ms
Test 2:
10000 packets transmitted, 10000 received, 0% packet loss, time 40742ms rtt min/avg/max/mdev = 2.387/3.969/14.361/0.923 ms, pipe 2, ipg/ewma 4.074/3.681 ms
Test 3:
10000 packets transmitted, 10000 received, 0% packet loss, time 137211ms rtt min/avg/max/mdev = 3.592/23.098/39.886/4.027 ms, pipe 4, ipg/ewma 13.722/23.760 ms
Test 4:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.40      73.99
Test 5:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Local       Local       Remote      Remote      Request  Response  Elapsed  Throughput  Throughput
Send Socket Recv Socket Recv Socket Send Socket Size     Size      Time                 Units
Size        Size        Size        Size        Bytes    Bytes     (sec)
Final       Final       Final       Final
23400       87380       87380       22600       1        1         60.00    535.59      Trans/s
Test 6:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.54.1.7 () port 0 AF_INET
Minimum       Maximum       50th          90th          99th          Mean          Stddev
Latency       Latency       Percentile    Percentile    Percentile    Latency       Latency
Microseconds  Microseconds  Latency       Latency       Latency       Microseconds  Microseconds
                            Microseconds  Microseconds  Microseconds
1316          15079         1597          2344          5314          1857.38       672.09
Test point #5
Test 1:
10 packets transmitted, 10 received, 0% packet loss, time 9015ms rtt min/avg/max/mdev = 3.560/3.960/4.807/0.535 ms
Test 2:
10000 packets transmitted, 10000 received, 0% packet loss, time 40349ms rtt min/avg/max/mdev = 2.405/3.929/18.161/1.007 ms, pipe 2, ipg/ewma 4.035/3.950 ms
Test 3:
10000 packets transmitted, 10000 received, 0% packet loss, time 137943ms rtt min/avg/max/mdev = 3.569/23.216/42.977/4.342 ms, pipe 4, ipg/ewma 13.795/24.913 ms
Test 4:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.37      53.00
Test 5:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Local       Local       Remote      Remote      Request  Response  Elapsed  Throughput  Throughput
Send Socket Recv Socket Recv Socket Send Socket Size     Size      Time                 Units
Size        Size        Size        Size        Bytes    Bytes     (sec)
Final       Final       Final       Final
23400       87380       87380       22600       1        1         60.00    524.31      Trans/s
Test 6:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Minimum       Maximum       50th          90th          99th          Mean          Stddev
Latency       Latency       Percentile    Percentile    Percentile    Latency       Latency
Microseconds  Microseconds  Latency       Latency       Latency       Microseconds  Microseconds
                            Microseconds  Microseconds  Microseconds
1339          16299         1594          2341          6097          1902.42       911.95
Only lamps on line
A vacuum cleaner and a hair dryer are not devices in daily use, and not for long periods when they are used. Therefore I repeated the tests at the last test point with both devices switched off but still connected to the wiring. The results were similar to those with all devices powered on. It looks like the energy-saving light bulbs already produce enough noise to drop the throughput, and adding more noise on the same line does not cause much additional trouble for the powerline devices.
Test point #5
Test 1:
10 packets transmitted, 10 received, 0% packet loss, time 9015ms rtt min/avg/max/mdev = 3.404/3.590/4.639/0.355 ms
Test 2:
10000 packets transmitted, 10000 received, 0% packet loss, time 39946ms rtt min/avg/max/mdev = 2.358/3.893/17.561/1.044 ms, pipe 2, ipg/ewma 3.995/3.745 ms
Test 3:
10000 packets transmitted, 10000 received, 0% packet loss, time 135802ms rtt min/avg/max/mdev = 3.426/23.142/42.852/4.098 ms, pipe 4, ipg/ewma 13.581/23.080 ms
Test 4:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.33      59.04
Test 5:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Local       Local       Remote      Remote      Request  Response  Elapsed  Throughput  Throughput
Send Socket Recv Socket Recv Socket Send Socket Size     Size      Time                 Units
Size        Size        Size        Size        Bytes    Bytes     (sec)
Final       Final       Final       Final
23400       87380       87380       22600       1        1         60.00    525.13      Trans/s
Test 6:
OMNI Send|Recv TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to server () port 0 AF_INET
Minimum       Maximum       50th          90th          99th          Mean          Stddev
Latency       Latency       Percentile    Percentile    Percentile    Latency       Latency
Microseconds  Microseconds  Latency       Latency       Latency       Microseconds  Microseconds
                            Microseconds  Microseconds  Microseconds
1305          14034         1592          2343          6077          1904.30       939.65
Summary
The powerline devices from ZyXEL are robust and provide high throughput even under difficult conditions. Even with serious noise on the line, the throughput is better than the maximum possible over an "N" WiFi connection. On the other hand, there is a delay of about 3.5 milliseconds caused by the technology itself, and a limitation on usable sockets (they have to be on the same line with no "filters" between them). At a price of about 46 EUR per pair it is a reasonable investment. I especially recommend it if you would like to use Wake-on-LAN or boot over the network, since neither is possible over a WiFi connection.
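For illustration, waking a machine behind the powerline link could look like this; a minimal sketch, assuming the target's MAC address is 00:11:22:33:44:55 (made up for the example) and that the wakeonlan or etherwake utility is installed on the sending machine:
# The target NIC must have Wake-on-LAN enabled, e.g. via: ethtool -s eth0 wol g
wakeonlan 00:11:22:33:44:55

# or, sending the magic packet out of a specific interface (requires root):
etherwake -i eth0 00:11:22:33:44:55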