How to improve KVM VPS's network performance - linux

I am using QEMU/KVM to run VPS guests. I have already turned off offloads such as GSO, TSO and TX/RX checksumming on the host's network interface, and my VPS guests use VirtIO as the NIC. When I run a speed test on the host, I usually get approximately 800 Mbps downlink and 600 Mbps uplink. However, when I run the same test inside a VPS, I only get 300/200 Mbps, as if something were capping the speed at 300 Mbps. I have checked everything I can think of but have not found the cause of the poor network performance.
Is there any way to further improve network performance in the KVM VPS? My host has dual Xeon E5530 CPUs (8 cores, 16 threads) and 64 GiB of physical memory, and approximately 100 VPS guests (mostly 256 MiB memory / 1 core) run on it. The host's load average is about 3.0. Both the host and the VPS use the same NIC on the host, and the network bridge is set up correctly.
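For reference, the offloads were toggled with ethtool along these lines (a sketch; eth0 stands in for the actual host interface):
# show the current offload settings on the host interface
ethtool -k eth0
# turn off segmentation offload and TX/RX checksum offload
sudo ethtool -K eth0 gso off tso off tx off rx off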

I faced similar problems during a Xen to KVM migration, and after studying the situation we came to the following conclusions.
1.- Our best performance was obtained by adding new NICs to the server and assigning a PCI device directly to a VPS.
You will get the same performance as if it were not virtualized.
Problems:
You need each VPS linked to its own external NIC controller (PCI passthrough).
You need one network controller for each port you want to configure; look up IOMMU information (see the commands after this list).
Forget live migration between hosts for guests with assigned PCI devices.
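A quick sketch of those IOMMU checks (the kernel flag shown is for Intel hosts; AMD hosts use amd_iommu instead):
# confirm the kernel enabled the IOMMU (boot with intel_iommu=on on Intel CPUs)
dmesg | grep -e DMAR -e IOMMU
# list IOMMU groups; devices in the same group must be passed through together
find /sys/kernel/iommu_groups/ -type l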
2.- Using VirtIO drivers and performance tuning.
We got better performance, but nothing comparable to PCI passthrough.
There is some research from the KVM people claiming great performance; I can't say it is untrue, but I couldn't replicate it.
http://www.linux-kvm.org/page/Using_VirtIO_NIC
Tuning:
The following guide offers some tips for getting the best performance.
We noticed an important improvement with the multi-queue virtio-net approach (see the sketch after the link), but I guess it won't be useful for you if each VPS has just one core.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/chap-Virtualization_Tuning_Optimization_Guide-Networking.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-General_Tips
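As a minimal sketch of multi-queue virtio-net, assuming a bridged VirtIO interface and a guest NIC named eth0: add a queues attribute to the interface definition in the domain XML (e.g. via virsh edit),
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
  <driver name='vhost' queues='4'/>
</interface>
then activate the queues inside the guest:
# enable 4 combined queues on the guest NIC
sudo ethtool -L eth0 combined 4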

Related

Linux Kernel Config Scopes within VM or Hypervisor

In production we're going to deploy a Redis server and need to set the sysctl vm.overcommit_memory=1 and disable Transparent Huge Pages in the kernel.
The issue is that currently we only have one giant server, and it is shared by many other apps. We only want those kernel configs for the Redis server, and I wonder if we can achieve that by spinning up a dedicated VM for Redis. Doing so in Docker certainly doesn't make sense. My questions are:
Will those kernel configs take actual effect in the Redis VM even if the host OS doesn't have the same configs? I doubt it, since the hardware resources are ultimately allocated by the host machine.
Will the kernel config in the Redis VM affect other VMs that run other apps? I think it won't; I just want to confirm.
To achieve this goal, what kind of VM or hypervisor should we use?
If there's no way to do it in a VM, is a separate (hardware) server for Redis the only way to go?
If you're running a real kernel in a virtual machine, the VM should be able to correctly handle overcommitted memory.
The host grants a fixed chunk of memory to the VM, and the VM manages that memory as it sees fit, including overcommitting its own address space.
This will not affect other applications running on the host (apart from the fact that it has less memory available). If it does, there is a problem with your hypervisor.
This should work with any hypervisor; KVM is a good place to start. A sketch of the guest-side settings follows.
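A minimal sketch of what you would set inside the Redis VM only, assuming a recent Linux guest kernel (the host is untouched):
# allow memory overcommit, as Redis recommends
sudo sysctl -w vm.overcommit_memory=1
# disable Transparent Huge Pages for this guest kernel
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled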
Note that I have not actually tried this -- experiment results are welcome!

KVM and Libvirt: Bad CPU/Network performance of guest

I have an Arch Linux host which runs a virtualised router.
When using an LXC guest as the router, everything is fine: I get 100 Mbit/s up/down and almost no CPU usage at all.
However, when I use a libvirt guest (pfSense/FreeBSD) as the router, whenever heavy network traffic goes through the guest the CPU usage becomes unreasonably high (up to 100%), and worse, the network throughput is halved: I get 45-49 Mbit/s at most.
The host doesn't support PCI passthrough, so this is my config for the libvirt VM:
Nic1 (wan)
Network source: Direct ‘eth0’
Source mode: passthrough
Device model: virtio
Nic2 (lan)
Bridge name: br0
Device model: virtio
I tried e1000 instead, but it changed absolutely nothing.
Host CPU: AMD A4-5000 Kabini
Guest CPU: default or Opteron_G3
It has been like this for over a year, ever since I started using KVM. If I cannot solve this problem, I will have to dump libvirt, because such performance is unacceptable.
It is pretty hard to diagnose this sort of problem with such limited information. Definitely don't use e1000 or any other emulated NIC model; virtio-net will offer the best performance of any virtualized NIC. Make sure the host has /dev/vhost-net available, as that accelerates guest NIC traffic in host kernel space (see the check below).
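A quick host-side check for vhost-net (a sketch; the module and device names are the standard ones):
# the vhost_net module provides /dev/vhost-net
lsmod | grep vhost_net || sudo modprobe vhost_net
ls -l /dev/vhost-net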
If you want to use a guest as a high-performance network routing appliance, there are quite a few ways to tune the VM in general. Pinning the guest vCPUs to specific host physical CPUs, and keeping other guests off those CPUs, ensures the guest won't have its cache trashed by being pre-empted by other processes. Next, use huge pages for the guest RAM to massively increase the TLB hit rate for guest memory accesses. If the host has multiple NUMA nodes, make sure the guest CPUs and guest RAM (hugepages) are pinned to the same host NUMA node. Similarly, ensure IRQ handling for the host NIC used by the guest has its affinity set to match the pCPUs used by the guest.
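As a sketch, the pinning, hugepages and NUMA placement described above map to libvirt domain XML roughly like this (the CPU and node numbers are placeholders for your actual topology):
<vcpu placement='static'>2</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
</cputune>
<memoryBacking>
  <hugepages/>
</memoryBacking>
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>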

Is it possible to simulate Linux on USB devices using VMware?

I have successfully installed Red Hat Linux and run it from the hard drive under VMware. Things work quite smoothly if I put all the node VMs on my physical machine.
For management purposes, I want to use USB devices to store the ISO and plug one in if more nodes are needed. I would like to run VMware on my physical machines.
Can I just build one virtual machine on one USB device, so I can plug a node in when needed?
I mean, if I put machine A on USB 1 and machine B on USB 2, can I build a network using my physical machine as the server?
(1) If so, are there problems I should pay attention to?
(2) If not, is there an alternative solution for my management purpose? (I do not want to create VMs on partitions of my physical machine right now.) Could I use multiple portable hard drives instead?
Actually, I want to set up a master-slave Hadoop 2.x deployment using virtual machines. Is there a good reference for this purpose?
I should explain that I am not too lazy to just try my idea; however, doing so is rather expensive when I do not even know whether the approach is feasible.
Thanks for your time.
I'm not an expert on VMware, but I know that this is common to almost any virtualization system. You can install a system:
on a physical device (a hard disk or a hard disk partition)
or in a file
The physical-device approach normally gives better performance, since there is only one driver between the OS and the device, while the file approach makes it much simpler to add another VM.
Now for your questions:
Can I just build one virtual machine on one USB device? Yes, you can always do it in a file and, depending on the host OS, directly on the physical device.
... can I build a network using my physical machine as server? Yes, VMware will allow the VMs to communicate with each other and/or with the host and/or with the external world, depending on how you configure the network interfaces of your VMs.
If so, are there problems I should pay attention to?
USB devices can be plugged and unplugged. If you inadvertently unplug one while the OS is active, bad things can happen. That's why I advise you to use files on the hard disk to host your VMs.
Memory sticks (this is no concern for USB disks) support a limited number of writes and generally perform poorly on writes. Never put a temp filesystem or swap there; use a memory filesystem for that instead, as is done for live filesystems on read-only CDs or DVDs (see the example after this list).
Every VM uses memory from the host system. That is often the first limit on the number of simultaneous VMs on a personal system.
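A minimal sketch of the memory-filesystem idea, as an /etc/fstab entry inside the guest (the size is an arbitrary example):
# keep temporary files in RAM instead of on the memory stick
tmpfs  /tmp  tmpfs  defaults,size=256m  0  0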

Get bandwidth statistics of network by ip from a linux terminal

I am connected to a local network through a linux system (Ubuntu 14.04).
Is it possible to get the bandwidth usage of the other systems connected to the same network? All the other systems also run Ubuntu, although the versions differ on some of them.
Thanks
This would probably help you:
http://bandwidthd.sourceforge.net/
BandwidthD tracks the usage of TCP/IP network subnets and builds HTML files with graphs to display utilization.
What you can see on the network without having access to the machines depends on the network structure and on where the monitoring system is placed.
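A minimal sketch of getting it running on Ubuntu (the package name is the usual Debian/Ubuntu one; the config path may differ on your release, and the subnet is a placeholder for yours):
sudo apt-get install bandwidthd
# set the subnet to monitor, e.g. 192.168.1.0/24, in the config file
sudo editor /etc/bandwidthd/bandwidthd.conf
sudo service bandwidthd restart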

Simulate slow connection between two ubuntu server machines

I want to simulate the following scenario: given four Ubuntu Server machines A, B, C and D, I want to reduce the network bandwidth by 20% between machine A and machine C, and by 10% between A and B. How can I do this using network simulation/throttling tools?
Ubuntu comes with a tool called NetEm (part of the tc traffic-control suite). It can control most network-layer metrics (bandwidth, delay, packet loss), and there are tons of tutorials online; a sketch follows.
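A minimal sketch of throttling only the traffic from A to C, assuming the interface is eth0 and 192.0.2.3 is a placeholder for machine C's address (the 80mbit rate is an example; use 80% of your measured baseline):
# run on machine A: a 3-band prio qdisc with a rate limiter on band 3
sudo tc qdisc add dev eth0 root handle 1: prio
sudo tc qdisc add dev eth0 parent 1:3 handle 30: tbf rate 80mbit burst 32kbit latency 400ms
# steer only packets destined for machine C into the limited band
sudo tc filter add dev eth0 parent 1: protocol ip prio 3 u32 match ip dst 192.0.2.3/32 flowid 1:3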
Dummynet is another tool that can do it.
KauNet, a tool developed by Karlstad University, can introduce packet-level control.
The simple program wondershaper fits here very well.
Just execute:
sudo wondershaper eth0 1024 1024
It will limit the bandwidth of the eth0 interface to a 1 Mbps download and upload rate.
