I want to introduce latency when accessing some files on my system, so that I can measure the effect of latency on my application while it accesses data over the network (to be simulated using the netem module).
I did the following to achieve this:
I used two machines, Host1 and Host2. I placed the files to be accessed by the application on Host1's hard disk, reachable at /net/<login>/Host1/data, then launched my application on Host2 and accessed the data from Host1 using the path mentioned above.
I also introduced latency on Host1 using tc qdisc add dev eth0 root netem delay 20ms, so that whenever the files are accessed by the Host2 application, the access to the data on Host1 incurs a latency of 20 ms.
I have a couple of doubts:
Is there a way to run the application on the same machine where the latency is set? I do NOT want the latency to apply to the application I am running (sometimes the application may be accessed from another server, so if I launch the application on the machine that has the latency, the application itself would also be affected). So, is there a way to introduce latency only for the file access?
Am I using the tc command correctly for testing my scenario? I just need confirmation that my usage of tc is correct.
Just to be clear, netem is intended for testing network traffic shaping, not hard disk traffic...
You could limit your netem rules to a specific test port on your localhost by building a tree. It's fairly abstruse, but possible.
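A hedged sketch of such a tree, assuming eth0 and TCP port 2049 (the NFS port, used here as a placeholder test port):

```shell
# Sketch only: delay traffic to one TCP port, leave everything else alone.
# Requires root. Port 2049 and eth0 are assumptions for illustration.
tc qdisc add dev eth0 root handle 1: prio

# Attach netem (20 ms delay) to the prio qdisc's third band only.
# Note: prio's default priomap can also place some low-priority TOS
# traffic into band 3, so a real setup may want a dedicated band.
tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 20ms

# Steer packets destined for port 2049 into that band.
tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 \
    match ip dport 2049 0xffff flowid 1:3
```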
The general scenario looks correct to me. The host that serves the resource should have tc running on it. The second host should be the monitor/measurer.
First of all, I want to say that I don't have much experience with advanced networking on Linux.
I have a task to deploy our .deb packages in containers. The applications are mostly tuned for operating on localhost, although they were designed to run across a set of server machines (DB, application, client, etc.). Since the components of the app are now distributed between containers, I need to make them work together. The goal is to do this without any pre-setup sequence that changes the IP addresses in the components' configs, since the target IP is uncertain and an IP alias in /etc/hosts may not solve the problem.
Could I somehow intercept an outbound connection to localhost:5672 and forward it to, say, 172.18.0.4:5672, while still correctly receiving the incoming traffic from the resource I forwarded to? Can you give me an example script?
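One common approach is a DNAT rule in the nat table's OUTPUT chain, which locally generated traffic traverses. A sketch, assuming iptables is available and 172.18.0.4 is reachable from this host (both taken from the question):

```shell
# Sketch: redirect outbound connections to 127.0.0.1:5672 (RabbitMQ's
# default port) toward 172.18.0.4:5672. Requires root.
iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 5672 \
    -j DNAT --to-destination 172.18.0.4:5672

# The redirected packets still carry a 127.0.0.1 source address, which
# is not routable off-host; rewrite it so replies can come back.
iptables -t nat -A POSTROUTING -p tcp -d 172.18.0.4 --dport 5672 \
    -j MASQUERADE

# The kernel normally treats routed 127/8 traffic as "martian"; relax that.
sysctl -w net.ipv4.conf.all.route_localnet=1
```

Return traffic is then handled by connection tracking, so no extra rule is usually needed for the replies themselves.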
I am running a C++ client and server, implemented using gRPC, locally on different ports. What I want to do is run both of them under different bandwidth limits, so that I can see a difference in the time taken to finish the entire communication.
I have tried wondershaper and trickle, but neither seemed to work.
I also tried to use tc to do the traffic control, as follows:
tc qdisc add dev lo root tbf rate 10mbit burst 10mbit latency 900ms
I tried to use this command to limit the local bandwidth to 10 Mbit/s. Is this the right way to simulate bandwidth locally?
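For reference, a sketch of the same idea with the burst expressed as a size: tbf's burst is a byte count, not a rate, and it must be at least the interface MTU, which on lo is often 65536 bytes (values here are assumptions, not a definitive recipe):

```shell
# Check the loopback MTU first; tbf silently drops packets larger
# than its burst, so burst must be >= the MTU.
ip link show lo

# Shape loopback to roughly 10 Mbit/s; 64kb = 65536 bytes in tc units.
tc qdisc add dev lo root tbf rate 10mbit burst 64kb latency 50ms

# Verify the qdisc and its counters, then remove it when done.
tc -s qdisc show dev lo
tc qdisc del dev lo root
```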
I'm trying to set up port forwarding on different ports for communication, but the mappings seem to be lost on reboot.
I'm using a script to create them, with the following syntax:
upnpc -a 192.168.1.95 22 22 TCP
...
Since my system is designed to actually stress the gateway into rebooting, I need these ports to be open again after a reboot. I could handle it in software (re-running the script if the connection is lost), but I don't want to do that unless it is absolutely necessary.
Do you have any idea how to set up port forwarding with UPnP such that the forwarding persists across a reboot?
Port mappings are specifically not required to be persistent across gateway reboots; clients are supposed to keep an eye on their mappings and re-map when needed. The WANIPConnection v2 spec also does not even allow indefinite mappings, which is another reason to keep the client running for as long as you need the mapping to exist.
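A minimal sketch of such a client-side renewal loop, reusing the mapping from the question. The 3600-second lease, the 300-second renewal margin, and the "ssh" description are assumptions; the trailing lease-duration argument requires a reasonably recent miniupnpc:

```shell
#!/bin/sh
# Sketch: keep a UPnP mapping alive by requesting it with a finite
# lease and re-requesting before the lease expires. This also restores
# the mapping after a gateway reboot, on the next iteration.
LEASE=3600
while true; do
    upnpc -e "ssh" -a 192.168.1.95 22 22 TCP $LEASE
    sleep $((LEASE - 300))
done
```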
I am using an Android app that streams real-time accelerometer data to the specified IP address of a server. I have written a "server" in C, running on Linux inside VMware.
I am connected to the hotspot created by the Windows 7 host machine that runs VMware Workstation.
So my question is: how do I connect the virtual machine to the same network as the hotspot, so that the phone and the "server" program are on the same network and I can stream data to the server program?
I use VirtualBox, but I'm guessing the settings are very similar in VMware Workstation.
You probably need to do one or both of these things:
1) Port forwarding. If your app is hitting port 80 (or whatever port), you'll need to tell VMware that any hits coming in to the host machine on that port get forwarded to the VM. Of course, your VM will have to be listening on that port. I'd suggest using a high port number (over 1024) to minimize conflicts and to avoid the annoying root/admin issues that come with low port numbers.
2) Hopefully that gets you there. If not, you may need to change the virtual adapter settings on the VM. NAT mode is a good first try; if that doesn't work, there are other modes (bridged, internal, host-only) you can tinker with. (I'm not sure whether VMware uses different names.)
That's probably all you need for the topology you describe -- an Android device connected directly to the same subnet as the host machine. If not, perhaps your hotspot routes all client traffic straight to the gateway (i.e. out to the Internet) without allowing direct access to other local hosts. If so, maybe there are settings for that. If not, ngrok is your new best friend.
It is SUPER easy and allows you to tunnel traffic from anywhere on the Internet to a specific service running on your machine. This would sidestep some of the issues above.
If you want to take your Android device to another network (e.g. cell network), then ngrok is absolutely the way to go, particularly for development and prototyping. This lets you avoid issues with DNS, routing, firewalls, etc.
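A minimal sketch of the ngrok route, assuming the C server listens on TCP port 5000 (a placeholder):

```shell
# Tunnel public Internet traffic to the local server on TCP port 5000.
ngrok tcp 5000

# ngrok prints a public forwarding endpoint such as
# tcp://0.tcp.ngrok.io:XXXXX; point the Android app at that host and
# port instead of a LAN address.
```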
I want to simulate the following scenario: I have 4 Ubuntu server machines A, B, C, and D. I want to reduce the network bandwidth by 20% between machine A and machine C, and by 10% between A and B. How can I do this using network simulation/throttling tools?
Ubuntu ships with netem (part of the kernel's traffic control, configured via tc). It can control most network-layer metrics (delay, packet loss, and, combined with tc's rate-limiting qdiscs, bandwidth). There are tons of tutorials online.
Dummynet is another tool that can do this.
KauNet, a tool developed by Karlstad University, can introduce packet-level control.
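If you want tc itself to throttle per destination, here is a hedged sketch run on machine A, assuming a 100 Mbit link and placeholder addresses 10.0.0.2 for B and 10.0.0.3 for C ("reduce by 10%/20%" is read as capping those flows at 90/80 Mbit):

```shell
# Sketch only, run on machine A as root. Addresses and the 100 Mbit
# baseline are assumptions for illustration.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit  # everyone else
tc class add dev eth0 parent 1: classid 1:20 htb rate 90mbit   # traffic to B
tc class add dev eth0 parent 1: classid 1:30 htb rate 80mbit   # traffic to C

# Classify by destination address.
tc filter add dev eth0 protocol ip parent 1: prio 1 u32 \
    match ip dst 10.0.0.2/32 flowid 1:20
tc filter add dev eth0 protocol ip parent 1: prio 1 u32 \
    match ip dst 10.0.0.3/32 flowid 1:30
```

This shapes only A's outbound traffic; for the reverse direction you would add matching rules on B and C (or on A's ingress via an ifb device).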
The simple program wondershaper fits here very well.
Just execute:
sudo wondershaper eth0 1024 1024
It will limit your network bandwidth on the eth0 interface to 1024 kbit/s (roughly 1 Mbit/s) for both download and upload.
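To restore the interface afterwards, the classic wondershaper (the same variant whose syntax is used above) accepts a clear mode:

```shell
# Remove the shaping rules wondershaper installed on eth0.
sudo wondershaper clear eth0
```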