Simulate slow connection between two ubuntu server machines - linux

I want to simulate the following scenario: I have four Ubuntu server machines A, B, C and D. I want to reduce the network bandwidth by 20% between machine A and machine C, and by 10% between A and B. How can I do this with network simulation/throttling tools?

Ubuntu ships with NetEm, a kernel network emulation facility driven through the tc command. It can control most network-layer metrics (bandwidth, delay, packet loss), and there are plenty of tutorials online.
Dummynet is another tool that can do this.
KauNet, a tool developed by Karlstad University, can introduce packet-level control.
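As a hedged sketch of how this could look with tc on machine A (the eth0 interface, the 100 Mbit baseline, and the addresses 192.168.1.30 for C and 192.168.1.20 for B are assumptions; substitute your own), an HTB tree with u32 filters can apply a different rate per destination:
# Run on machine A; shapes A's outbound traffic only
sudo tc qdisc add dev eth0 root handle 1: htb default 30
sudo tc class add dev eth0 parent 1: classid 1:1 htb rate 80mbit    # 20% below baseline, traffic to C
sudo tc class add dev eth0 parent 1: classid 1:2 htb rate 90mbit    # 10% below baseline, traffic to B
sudo tc class add dev eth0 parent 1: classid 1:30 htb rate 100mbit  # everything else, unthrottled
sudo tc filter add dev eth0 protocol ip parent 1: prio 1 u32 match ip dst 192.168.1.30/32 flowid 1:1
sudo tc filter add dev eth0 protocol ip parent 1: prio 1 u32 match ip dst 192.168.1.20/32 flowid 1:2
Note this limits only the A-to-C and A-to-B direction; if the reverse direction must be limited as well, repeat a similar setup on C and B.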

The simple program wondershaper fits here very well.
Just execute:
sudo wondershaper eth0 1024 1024
It will limit the bandwidth on the eth0 interface to 1 Mbit/s download and 1 Mbit/s upload (wondershaper takes its rates in kbit/s).

Related

How to improve KVM VPS's network performance

I am using QEMU to run KVM VPSes. I have already turned off offloads such as gso, tso, tx and rx on the host's network interface, and my VPSes use VirtIO as the NIC. When I run a speed test on the host, I usually get approximately 800 Mbps downlink and 600 Mbps uplink. However, the same test inside a VPS only reaches 300/200 Mbps, as if something were capping the speed at 300 Mbps. I have checked everything I can think of and have not found the cause of the poor network performance.
Is there any way to further improve network performance in the KVM VPS? My host has dual Xeon E5530s (8 cores, 16 threads) and 64 GiB of physical memory, and roughly 100 VPSes (mostly 256 MiB memory/1 core) run on it. The host's average load is about 3.0. Both the host and the VPSes use the same physical NIC, and the network bridge is set up correctly.
I faced similar problems during a Xen-to-KVM migration, and after studying the situation we reached the following conclusions.
1. Our best performance was obtained by adding extra NICs to the server and assigning a PCI device directly to a VPS (PCI passthrough).
You get the same performance as if the guest were not virtualized.
Problems:
Each VPS needs its own dedicated NIC controller for passthrough.
You need one network controller for each port you want to assign; read up on IOMMU support for your hardware.
Forget live migration between hosts while PCI devices are assigned.
2. Using virtio drivers and performance tuning.
We got better performance, but nothing comparable to PCI passthrough.
Some KVM developers report reaching near-native performance; I can't say that's untrue, but I could not replicate it.
http://www.linux-kvm.org/page/Using_VirtIO_NIC
Tuning:
The guide below collects tips for getting the best performance.
We noticed an important improvement with the multi-queue virtio-net approach (a sketch follows the link), but I suspect it won't help you while each VPS has only one core.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/chap-Virtualization_Tuning_Optimization_Guide-Networking.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-General_Tips
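For reference, a rough sketch of enabling multi-queue virtio-net (the domain name "vps1", the guest interface eth0, and the queue count of 4 are assumptions; it only pays off when the guest has multiple vCPUs):
# On the host: edit the guest definition so its <interface> element contains
#   <model type='virtio'/> and <driver name='vhost' queues='4'/>
virsh edit vps1
# Inside the guest, after a restart, activate the extra queues:
sudo ethtool -L eth0 combined 4
# Verify the active channel count:
ethtool -l eth0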

Need to increase the number of concurrent HTTP connections to 85000

I have a setup with two machines, one acting as the server and the other as the client, connected directly over a 1 Gbit/s link. Both machines have 4 cores, 8 GB of RAM and roughly 100 GB of disk space. I need to tune the Nginx server (that's the one I'm trying, but I could use any other) to handle 85,000 concurrent connections. I have a 1 KB file on the server and I use curl on the client to fetch that file over all the connections.
After trying various tuning settings, I have 1,500 established connections and around 30,000 TIME_WAIT connections when I invoke curl about 40,000 times. Is there a way to turn those TIME_WAITs into ESTABLISHED connections?
Any help in tuning both the server and the client would be much appreciated. I am fairly new to Linux and still getting the hang of it. Both machines run Fedora 20.
Besides tuning Nginx, you will also need to tune your Linux installation with respect to limits on TCP connections, sockets, open files and so on.
These two links should give you a great overview:
https://www.nginx.com/blog/tuning-nginx/
https://mrotaru.wordpress.com/2013/10/10/scaling-to-12-million-concurrent-connections-how-migratorydata-did-it/
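As a hedged starting point, the relevant knobs look like this (the values are placeholder assumptions to adjust for your workload, not recommendations):
# Kernel limits on the server, plus tcp_tw_reuse on the curl client,
# which is where the TIME_WAIT sockets pile up:
sudo sysctl -w fs.file-max=300000                         # system-wide open file limit
sudo sysctl -w net.core.somaxconn=65535                   # listen backlog ceiling
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65000"  # more ephemeral client ports
sudo sysctl -w net.ipv4.tcp_tw_reuse=1                    # reuse TIME_WAIT sockets for outgoing connections
ulimit -n 300000   # per-process; persist via a nofile entry in /etc/security/limits.conf
# And in nginx.conf, raise worker_connections accordingly (it is bounded by the nofile limit).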
You might want to check how much memory TCP buffers etc are using for all those connections.
See this SO thread: How much memory is consumed by the Linux kernel per TCP/IP network connection?
Also, this page is good: http://www.psc.edu/index.php/networking/641-tcp-tune
Given that your two machines are on the same physical network and delays are very low, you can use fairly small TCP window buffer sizes. Modern Linux kernels (you didn't mention which one you're using) have TCP autotuning that adjusts these buffers automatically, so you should not have to worry about this unless you're running an old kernel.
However, an application can allocate its send and receive buffers explicitly, which disables TCP autotuning, so if you're running such an application you may want to limit how much buffer space it can request per connection (the net.core.wmem_max and net.core.rmem_max variables mentioned in the SO article).
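If you do want to cap per-connection buffer requests, the calls would look like this (262144 bytes is an arbitrary example value, not a recommendation):
# Upper bound on what applications may request via setsockopt(SO_RCVBUF/SO_SNDBUF)
sudo sysctl -w net.core.rmem_max=262144
sudo sysctl -w net.core.wmem_max=262144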
To reach a million concurrent connections, I would recommend https://github.com/eunyoung14/mtcp. I did some tuning of mTCP and tested it on a used Dell PowerEdge R210 with 32 GB of RAM and 8 cores, and it achieved one million concurrent connections.

Get bandwidth statistics of network by ip from a linux terminal

I am connected to a local network through a linux system (Ubuntu 14.04).
Is it possible to get the bandwidth usage of the other systems connected to the same network? All the other systems also run Ubuntu, though the versions differ on some of them.
Thanks
This would probably help you:
http://bandwidthd.sourceforge.net/
BandwidthD tracks usage of TCP/IP network subnets and builds HTML files with graphs to display utilization.
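A minimal setup sketch, assuming the Debian/Ubuntu package, an eth0 interface, and a 192.168.1.0/24 subnet (all placeholders for your own values):
sudo apt-get install bandwidthd
# Point it at your subnet and interface, e.g. in /etc/bandwidthd/bandwidthd.conf:
#   subnet 192.168.1.0/24
#   dev "eth0"
sudo service bandwidthd restart
# Per-IP graphs are then written as static HTML pages you can open in a browser.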
What you can see on the network without having access to the machines depends on the network structure and where the monitoring system is placed.

Monitor ppp0 traffic usage with Linux

Hey, I'm building a 3G-connected device with a Raspberry Pi. My mobile provider allows me 50 MB/month.
The device will be installed somewhere nobody has physical access to.
My concern is avoiding data overuse. I need a tool that measures the accumulated traffic passing through the ppp0 interface (in and out), so I can disconnect the interface until the next month if the 50 MB limit is reached.
I tried ifconfig, but since the link drops occasionally, the counter is reset at every reconnection.
I tried ntop and iftop, but as far as I understand, these tools measure real-time traffic.
I'm really looking for a cumulative traffic counter, like the one usually found on smartphones.
Any idea?
Take a look into IPtraf :)
I'm not sure whether it goes into enough detail for you, as it is relatively lightweight, but it may not be wise to load the Raspberry Pi's processor too heavily anyway. You could also look for NetFlow- or SNMP-based solutions, though I think that might be overkill. If all you need is a running total, a small script that polls the interface counters may be enough; see the sketch below.
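A minimal sketch of such a cron-driven accumulator: it persists a running total across reconnections (the state file path, the 50 MB cap, and poff as the teardown command are assumptions for your setup):
#!/bin/sh
# Run every minute from cron; keeps a cumulative byte count for ppp0.
STATE=/var/lib/ppp0-usage           # stores "last-seen-counter running-total"
LIMIT=$((50 * 1024 * 1024))         # 50 MB monthly cap
RX=$(cat /sys/class/net/ppp0/statistics/rx_bytes 2>/dev/null) || exit 0
TX=$(cat /sys/class/net/ppp0/statistics/tx_bytes 2>/dev/null) || exit 0
CUR=$((RX + TX))
{ read LAST TOTAL; } < "$STATE" 2>/dev/null || { LAST=0; TOTAL=0; }
if [ "$CUR" -ge "$LAST" ]; then
    TOTAL=$((TOTAL + CUR - LAST))   # same PPP session: add the delta
else
    TOTAL=$((TOTAL + CUR))          # counters were reset by a reconnection
fi
echo "$CUR $TOTAL" > "$STATE"
if [ "$TOTAL" -ge "$LIMIT" ]; then
    poff -a                         # drop the link; clear $STATE at the start of each month
fi
exit 0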
Good luck!

configuring netem on same machine

I want to introduce latency when accessing certain files, so that I can measure its effect on my application when the data comes over the network (to be simulated with the netem module).
I did the following to achieve this:
I used two machines, Host1 and Host2. I placed the files to be accessed by the application on Host1's hard disk, reachable as /net/<login>/Host1/data, then launched my application on Host2 and accessed the data from Host1 via that path.
I also introduced latency on Host1 using tc qdisc add dev eth0 root netem delay 20ms, so that whenever the files are accessed by the Host2 application, reads from Host1 incur 20 ms of latency.
I have a couple of doubts:
Is there a way to run the application on the same machine where the latency is set? I do NOT want the latency to affect the application itself (sometimes the application will be accessed from another server, so if I launch it on the machine with the latency, the application would also be affected). In short, can I introduce latency only for file access?
Am I using the tc command correctly for this scenario? I just need confirmation.
Just to be clear, netem is intended for testing network traffic shaping, not hard disk traffic...
You could limit your netem rules to a specific test port on localhost by building a qdisc tree. It's fairly abstruse, but possible; a sketch follows.
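A hedged sketch of that tree on the loopback interface, assuming the test traffic goes to TCP port 8080 (pick your own port):
# prio creates three bands; only traffic steered to band 3 gets the netem delay
sudo tc qdisc add dev lo root handle 1: prio
sudo tc qdisc add dev lo parent 1:3 handle 30: netem delay 20ms
sudo tc filter add dev lo protocol ip parent 1:0 prio 3 u32 \
    match ip dport 8080 0xffff flowid 1:3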
The general scenario looks correct to me. The host that serves the resource should have tc running on it. The second host should be the monitor/measurer.
