Memory increase in CentOS and VirtualBox - Linux

I am using CentOS in VirtualBox.
My test web server frequently goes down, so I suspect the cause is low memory and want to increase it.
The original memory was 1024MB, and in the VM's system settings I upgraded it to 2048MB.
What commands do I need to run so that CentOS picks up the change?
I think that upgrading the memory in VirtualBox alone is useless; I assume I must run some command or change some file in CentOS, but I don't know how.

I think that should work. What makes you think it isn't?
If you run
cat /proc/meminfo
before and after the change, does it reflect the value you set?
You could also add a swap file if you haven't already.
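For example, a minimal sketch of the check plus a swap file (the 1GB size and the /swapfile path are just placeholder choices):

grep MemTotal /proc/meminfo   # should report roughly 2GB after the change
free -h                       # same figures in human-readable form

# create and enable a 1GB swap file
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile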

Related

VirtualBox not allowing me to type [Arch Linux]

So I downloaded VirtualBox and the CyberOS image for it. I am trying to run sudo pacman -S virtualbox-guest-utils. Once this runs, it asks for my password. When I start typing it, nothing appears on the screen; it is as if it just rejects my typing. I tried typing the password with an on-screen keyboard, but there is still no result.
I've encountered this problem before with an Ubuntu guest. I searched for it and discovered that it's a VirtualBox problem with capturing keyboard input while loading the VM. The only thing you can do is shut down and restart your VM and hope the bug doesn't come back. One of the best-known workarounds is to reduce the maximum amount of RAM your VM can use; for example, if you have 12GB of RAM, it's better if your VM can only access 6GB of it (see the sketch after the forum link below).
This is the problem asked on the VirtualBox forum: https://forums.virtualbox.org/viewtopic.php?f=6&t=78077
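A hedged sketch of that RAM-reduction workaround from the host command line (the VM name "cyberos" is a placeholder; the VM must be powered off first):

VBoxManage modifyvm "cyberos" --memory 6144   # cap the guest at 6GB
VBoxManage startvm "cyberos"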

Linux Kernel Config Scopes within VM or Hypervisor

In production we're going to deploy a Redis server and need to set overcommit_memory=1 and disable Transparent Huge Pages in the kernel.
The issue is that we currently only have one giant server, and it is shared by many other apps. We only want those kernel configs for the Redis server. I wonder if we can achieve this by spinning up a dedicated VM for Redis; doing it in Docker certainly doesn't make sense. My questions are:
Will those kernel configs actually take effect in the Redis VM even if the host OS doesn't have the same configs? I doubt it, since the hardware resources are ultimately allocated by the host machine.
Will the kernel configs in the Redis VM affect other VMs that run other apps? I think they won't; I just want to confirm.
To achieve the goal, what kind of VM or hypervisor should we use?
If there's no way to do it in a VM, is having a separate (hardware) server for Redis the only way to go?
If you're running a real kernel on a virtual machine, the VM should be able to correctly handle overcommitted memory.
The host server will grant a fixed chunk of memory to the VM. The VM should manage that memory as it sees fit, including overcommitting its own address space.
This will not affect other applications running on the host (apart from the fact that it has less memory available). If it does, there is a problem with your hypervisor.
This should work with any hypervisor. KVM is a good place to start.
Note that I have not actually tried this -- experiment results are welcome!
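As a minimal sketch, assuming a standard Linux guest with sysctl and sysfs available, the two settings would be applied inside the Redis VM's own kernel like this:

# inside the Redis guest, not on the host
sudo sysctl -w vm.overcommit_memory=1                                    # takes effect immediately
echo 'vm.overcommit_memory = 1' | sudo tee /etc/sysctl.d/99-redis.conf   # persist across reboots

# disable Transparent Huge Pages for the current boot
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled

The host's own vm.overcommit_memory and THP settings stay untouched, which is exactly the isolation the question is after.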

Using ZFS with Embedded Linux

I'm running embedded Linux (Debian on ARM/x86_64). Since it is very much like a full OS, apart from some hardware differences and a different platform, you can consider it a regular machine. It will be used in the robotics field, where the computer will ALWAYS be hard reset by cutting power. Using a UPS would disqualify me, so I need to make the system as close to infallible as possible.
I'm running some processor-intensive tasks, like OpenCV, OpenNI and OpenKinect. How do I use a powerful filesystem like ZFS to mirror the entire disk on the SSD for error correction? Does ZFS perform well on Linux? I'm still fairly new to Linux, so I don't understand its internal workings.
My list of possible platforms is:
--Debian on Raspberry Pi
--Kubuntu on ODROID-X2
--Ubuntu on PandaBoard
--Ubuntu on NUC i3/i5
Also, how can I make sure the filesystem doesn't get damaged during a reset? I need the computer to boot quickly, i.e. in under 3 minutes, for the competition.
I will probably be using a 32GB SSD, so I guess a 16GB partition mirrored 2x would work, or 12GB mirrored 3x. I only need to get an OpenCV install working, because the code will be downloaded automatically from a Samba/NFS share!
Thanks for your help and good luck ;)!
ZFS is not suited for low-memory systems. It does perform well on systems with 4GB of RAM and more.
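If you do try it on one of the 4GB-class boards, a hedged sketch of a two-way mirror built from two partitions of the same SSD (device names are placeholders; ZFS on Linux must already be installed):

# mirror two partitions of the 32GB SSD
sudo zpool create -o ashift=12 tank mirror /dev/sda2 /dev/sda3
sudo zpool status tank      # both halves of the mirror should show ONLINE
sudo zfs create tank/work   # dataset for the OpenCV install and data

Note that mirroring two partitions of a single disk only protects against bit rot and bad sectors (checksums plus self-healing), not against the SSD failing outright.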

KVM balloon driver results in different total memory than requested

I have an Ubuntu host and installed several qemu-kvm guests on it, also running Ubuntu.
I'm using libvirt to change the guests' memory allocation, but I always encounter a constant difference between the requested memory allocation and the actual allocation I read from the total field of the top command inside the guests.
The difference is the same across all the guests, and consistent.
On one machine I installed it is 134MB (allocated is less than requested); on another it is 348MB.
I can live with it, I just don't know the reason. Has anyone encountered this kind of problem? Maybe even solved it?
Thanks
This constant difference is likely the space reserved by the kernel. Note that this amount of space will increase (at least in Linux) as you have more physical memory available in the system. The change you're seeing is probably due to KVM giving that particular guest more or less memory to work with than before.
If you're interested, here is a quick article on memory ballooning, as implemented by VMware ESX Server.
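A hedged way to see where the gap comes from (the guest name "guest1" is a placeholder): compare what the balloon driver reports on the host with what the guest kernel exposes after subtracting its own reserved memory.

# on the host
virsh dominfo guest1        # 'Used memory' vs 'Max memory'
virsh dommemstat guest1     # balloon statistics for the guest

# inside the guest
grep MemTotal /proc/meminfo
dmesg | grep -i 'Memory:'   # shows how much the kernel reserved at boot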

Make gdb use less memory

gdb uses way too much memory on my Linux machine - I've allocated 2GB to this LXC virtual machine, but that's not enough.
Is there anything I can do apart from selectively uninstalling -debuginfo packages, which will effectively blind me if a problem turns out to involve those packages?
This was due to an issue in a CVS version of gdb. Downgrading gdb solved it.
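For anyone hitting something similar, a hedged sketch of two things to try before uninstalling debuginfo packages (libfoo is a placeholder library name):

gdb --version                 # check whether you're on a development/CVS snapshot

# inside gdb: stop it from loading symbols for every shared library up front
(gdb) set auto-solib-add off
(gdb) run
(gdb) sharedlibrary libfoo    # load symbols only for the libraries you actually need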

Resources