After installing Proxmox onto my server hardware, it seems that only a few GB of disk capacity are available to use, but I don't know why.
The Proxmox configuration was done as follows:
Hardware RAID Level 1 configured to mirror the first and second 1TB hard drives
Within the Proxmox install wizard, I configured the server to use those disks and create an LVM
This LVM is showing up in the GUI, but it has just 16 GB of capacity left; there was no option to do more specific disk partitioning during the install
What am I doing wrong or overlooking?
Proxmox LVM is thick provisioned. Log in via SSH and run "df -h" to see your disk usage. You can expand your LVM configuration (lvextend /dev/pve/data).
For more details, see the Proxmox Wiki, section "Advanced LVM Configuration Options".
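A minimal sketch of checking and growing the data LV from the shell; the "pve" volume group name and the +100G figure are assumptions based on a default install, so check the free space reported by vgs first. It also assumes pve/data carries an ext4 filesystem (as on older default installs); if it is an LVM-thin pool instead, skip the resize2fs step.

# Inspect current usage and the LVM layout (over SSH)
df -h
vgs    # the VFree column shows unallocated extents in the pve volume group
lvs    # lists the root, swap and data logical volumes

# Grow the data LV by 100 GB (example value) and resize the filesystem on it
lvextend -L +100G /dev/pve/data
resize2fs /dev/pve/data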
I always install Proxmox like this:
LV root: 50G
LV data: 50G
But I have external storage for VM storage.
I have a Linux (Red Hat 7.6) VM and I need to give it more RAM.
Current size: Standard A1_v2 (2 GB RAM)
New size: A4_v2 (8 GB RAM)
If I do the resize via the Azure portal, are there any considerations? Or any Linux configuration that I will lose?
Your VM will be rebooted to perform the resize. Nothing on the OS level changes (well, unless you have some state in memory, which would not be preserved after a reboot). Basically, if your VM (and/or the applications inside the VM) can handle the reboot, nothing will break.
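If you would rather script the resize than use the portal, here is a minimal sketch with the cross-platform az CLI (not mentioned in the thread); the resource group and VM names are placeholders:

# Check which sizes this VM can be resized to
az vm list-vm-resize-options --resource-group myResourceGroup --name myLinuxVM --output table

# Resize the VM; this reboots it
az vm resize --resource-group myResourceGroup --name myLinuxVM --size Standard_A4_v2

# Inside the guest, after it comes back up, confirm the new memory
free -h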
Does ESXi's /var/log point to the scratch partition?
If yes, will all the logs in the /var/log folder (hostd, vpxa, fdm, etc.) be deleted after an ESXi reboot if the scratch partition is on a RAM disk?
If you have not configured a scratch partition, the logs will be on the RAM disk and will be deleted after an ESXi reboot.
The article below will help you configure a persistent scratch location on ESXi:
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1033696
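As a rough sketch of the command-line variant described in that KB article, you can point scratch at a datastore from the ESXi shell; the datastore name and .locker directory below are placeholders:

# Create a directory on persistent storage for this host's scratch data
mkdir /vmfs/volumes/datastore1/.locker-esxi01

# Point the scratch location at it (takes effect after a reboot)
esxcli system settings advanced set -o /ScratchConfig/ConfiguredScratchLocation -s /vmfs/volumes/datastore1/.locker-esxi01

# Verify the configured value
esxcli system settings advanced list -o /ScratchConfig/ConfiguredScratchLocation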
I have Varnish 5.1.1 on CentOS 6.5 and want to use a fresh SSD for file storage (my 64 GB of RAM gets full quickly, as I have a lot of objects to cache).
As suggested in the Varnish docs, I have mounted a tmpfs partition for the working directory:
"The shmlog usually is located in /var/lib/varnish and you can feel free to remove any data found in this directory if needed. Varnish suggests make sure this location is mounted on tmpfs by using /etc/fstab to make sure this gets mounted on server reboot / start."
I have a dedicated 256 GB SSD drive for cache storage.
Do I have to mount it as tmpfs with noatime like the working dir?
I did not find any suggestions on how to configure an SSD for Varnish's needs.
No, you do not have to mount anything special if you're just going to use your SSD. tmpfs is specific to a RAM drive only, and if you're not going to take advantage of the superior speed of RAM over SSD, then leaving /var/lib/varnish as is on the default install is good enough.
/var/lib/varnish is used for logging, and Varnish logs a lot of data. Since it uses a circular buffer, size isn't an issue; however, the I/O will wear your disks down.
TL;DR: always mount the work directory as tmpfs.
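A minimal sketch of the two pieces, with placeholder devices, paths and sizes: tmpfs for the working directory, and the SSD mounted normally (e.g. ext4 with noatime) to hold the file storage backend:

# /etc/fstab entries (example values)
tmpfs       /var/lib/varnish   tmpfs   defaults,noatime,size=512m   0 0
/dev/sdc1   /srv/varnish       ext4    defaults,noatime             0 2

# varnishd started with a file storage backend on the SSD
# (the path and the 200G size are placeholders)
varnishd -a :80 -f /etc/varnish/default.vcl -s file,/srv/varnish/varnish_storage.bin,200G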
I have just created a Linux (Ubuntu 14.04) virtual machine in Azure (SE Asia).
Issue: I only have 29 GB, not 127 GB.
It is a Basic Tier, A0 (smallest size)
The advertised disk drive size is 127GB (+20GB tmp)
http://msdn.microsoft.com/en-us/library/azure/dn197896.aspx
I find (after running out of disk space) that I only have around 29GB.
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        29G   24G  4.1G  86% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            323M   12K  323M   1% /dev
tmpfs            68M  388K   67M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            336M     0  336M   0% /run/shm
none            100M     0  100M   0% /run/user
/dev/sdb1        20G  4.3G   15G  23% /mnt
Running cfdisk shows there is no other free space on the drive.
I can't find any documentation to explain why there is only 29 GB.
Is this a bug/issue/problem with my VM?
Or is it something to do with Linux / Ubuntu 14.04 / Basic tier A0?
The VM's operating system drive is backed by a blob in your Azure storage account. The blob is a VHD file. When you created the VM, the appropriate VHD was copied from the gallery into your storage account.
The gallery-provided VHD file has a logical capacity of 30GB by design. The documentation states that the maximum allowed size is 127GB, but that is incidental - the gallery images are 30GB.
The solution has two steps: resize the VHD itself (and the corresponding blob), then use Linux tooling to resize the partition and filesystem. This may help:
Resizing a Windows Azure virtual disk
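For the Linux side, a minimal sketch of growing the root partition and filesystem once the VHD has been enlarged; it assumes the OS disk is /dev/sda with the root filesystem on /dev/sda1 (as in the df output above) and that growpart (from the cloud-guest-utils or cloud-utils package) is available:

# Install growpart if it is not already present (Ubuntu)
sudo apt-get install cloud-guest-utils

# Grow partition 1 of /dev/sda to fill the enlarged disk
# (a reboot may be needed if the kernel does not pick up the new partition table)
sudo growpart /dev/sda 1

# Grow the ext4 filesystem online to fill the partition, then confirm
sudo resize2fs /dev/sda1
df -h /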
It's now possible to resize via the Azure UI.
That being said, after resizing, the Ubuntu VM will not see the new size by default; cfdisk will show the unallocated space, and to extend the partition I would use fdisk as explained in this answer.
You need to execute the following Azure PowerShell command:
Update-AzureDisk -DiskName "<Disk name>" -Label "ResiZedOS" -ResizedSizeInGB <Size in GB>
Example:
Update-AzureDisk -DiskName "dimitar-linux-dimitar-linux-os" -Label "ResiZedOS" -ResizedSizeInGB 524
Note: Maximum size is 1023 GB.
Note 2: The virtual machine should be powered off.
Documentation:
How to install and configure Azure PowerShell
Update-AzureVM
Update-AzureDisk
Question: How do you get the disk name?
Answer: You can use the Azure CLI or the Azure portal.
azure-cli command:
azure vm disk list <virtual machine name> # In my case: azure vm disk list dimitar-centos
As mentioned in Eron Wright's answer, the disk's size is determined from the Gallery image.
To resize the disk via the Azure Portal web UI:
Shut down (deallocate) the VM.
Open the VM blade and click Disks
Select the OS disk. For me, the size of the disk was blank.
Enter a new Size and click Save.
I tested this with a Standard_D2_V2 sized Ubuntu VM. Once I resized the disk and started the VM, I could access the full size of the disk.
The answer by #danailov worked brilliantly for me (Azure Linux VM resized correctly).
Just to add a bit that might help somebody: here is how I found my disk name via Azure PowerShell:
Get-AzureDisk | Format-List DiskName, AttachedTo, DiskSizeInGB, OS > c:\files\disks.txt
The above command will output a text file at the stated location on your local PC with the following details for each disk/VM in your Azure account: DiskName, AttachedTo, DiskSizeInGB, and the OS the VM is running.
The Azure VM table...
https://www.windowsazure.com/en-us/pricing/details/#header-2
...says that, for example, a Medium Instance comes with 490GB of local storage. So I was expecting the usual 30GB Azure BLOB OS disk, and then a 490GB /mnt/resource.
But no:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
28G 1.7G 25G 7% /
tmpfs 1.7G 0 1.7G 0% /dev/shm
/dev/sda1 485M 68M 392M 15% /boot
/dev/sdb1 133G 188M 126G 1% /mnt/resource
That's on the CentOS image, but it's the same for other images.
Am I missing something? I don't see the space in a volume group or anything, and there are no sd* devices that aren't mounted.
Neil Mackenzie's answer in the comments appears to be correct:
All Windows Azure IaaS VMs come with two disks - an OS disk backed by a VHD
persisted in Windows Azure Blob storage and a temp disk physically
attached to the hosting server. This temp disk is truly ephemeral and
any contents on it will be lost if the VM is deallocated (shutdown on
the portal or API) or moved for server healing. For Linux VMs this
temporary disk is /dev/sdb.
I found this link to corroborate:
http://blogs.msdn.com/b/wats/archive/2013/12/07/understanding-the-temporary-drive-on-windows-azure-virtual-machines.aspx