Proxmox VE: How to create a raw disk and pass it through to a VM

I am searching for an answer on how to create and pass through a raw device to a VM using Proxmox. Through that I am hoping to get full control of the disk, including S.M.A.R.T. stats and disk spindown.
Currently I am using the SATA passthrough offered by Proxmox.
Unfortunately I have no clue how to create a raw disk file from my (empty) disk. Furthermore, I am not entirely certain how to bind it to the VM.
I hope someone knows the relevant steps.
Side notes:
This question describes just one approach I want to try in order to achieve a certain goal. For the sake of simplicity I posed my question confined to the part above. However, if you have a better idea, feel free to give me a hint. So far I have tried a lot of things to achieve my ultimate goal.
Goal that I want to achieve:
I am using Proxmox VE 5.3-8 on an HP ProLiant Gen8 server. It hosts several VMs, among which OMV should serve as a NAS. Since the files will not be accessed too often, I opt for a spindown of the drives.
My goal is reduction of noise and power savings.
Current status:
I passed through two disks by adding them to
/etc/pve/nodes/pve/qemu-server/vmid.conf
sata1: /dev/disk/by-id/{disk-id}
Through that I do see SMART stats, and everything except disk spindown works fine. Using VirtIO instead of SATA does not give me SMART values.
Using hdparm -y to put a drive to sleep does not work inside the VM. Doing the same on the Proxmox console results in the drive spinning down, but it wakes up a few seconds later.
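For reference, this is roughly how the passthrough is set up and tested; the VM ID and the disk ID below are placeholders for my real values:

# attach the raw device to VM 100 as a SATA disk (equivalent to editing vmid.conf by hand)
qm set 100 -sata1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL

# on the Proxmox host: check SMART data and try to spin the disk down
smartctl -a /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL
hdparm -y /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL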
Passing through the entire HBA is currently not an option.
I read in a forum that first installing Debian and then manually installing the Proxmox packages on top led to success. However, that was still for Debian Jessie and three years ago.
Install Proxmox VE on Debian Stretch
Before I try this as a last resort, I want to make sure whether passing the disk through as a raw file will lead to the desired result.
Maybe someone has an idea on how to achieve my ultimate goal.

I do not have a clear answer to your question as far as "passing through" the disk goes, but I recently found a good enough solution for my use case.
I have an HDD that I planned to use as a backup directory for VMs, but I also wanted to put any kind of data on it and share that disk with any VM that needs it.
The solution I found is to format the disk using ZFS, then create mount points for different uses (vzdump backups, a NAS folder shared across VMs, an ISO mount point, etc.). I followed this guide: https://forum.level1techs.com/t/how-to-create-a-nas-using-zfs-and-proxmox-with-pictures/117375
I ended up installing Samba on the Proxmox host itself, with a config to share some folders/mount points of the disk via SMB. Now the device appears as a normal disk over the network, with excellent read/write speed since everything is local.
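Roughly, the setup looks like this; the pool name, dataset names and share path are just examples, not the exact ones from the guide:

# create a pool on the spare disk and a few datasets for different uses
zpool create tank /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL
zfs create tank/backup   # vzdump backup target
zfs create tank/share    # folder shared with the VMs over SMB

# minimal /etc/samba/smb.conf section for the shared dataset
[share]
   path = /tank/share
   read only = no
   guest ok = no

After restarting smbd (systemctl restart smbd), the share can be mounted from any VM on the network.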
Sorry that this post does not really "answer" your question (no SMART data or other low-level access), but at least it gives you shared storage.

Related

Using a CLI to recover a disk image saved with Clonezilla

I have set up a live CentOS 7 that is booted via PXE when the client is connected to a specified network port.
Once Linux is booted, a small script I wrote checks whether a newer image version is available on a central host than the one already deployed on the client. This is done by comparing the contents of a version file. If there is a newer version, the whole image should be deployed on the client; otherwise only parts of the image (qcow2 files) should be replaced to save time.
Since the image is up to 1 TB, I do not want to apply the whole image in every case. It would also take too long.
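The version check itself is simple; the sketch below only illustrates the logic described above (the server URL and file paths are made up, not the real ones from my setup):

#!/bin/bash
# compare the version published on the central host with the one deployed on the client
remote_ver=$(curl -s http://imageserver.example/images/version)
local_ver=$(cat /mnt/target/version 2>/dev/null || echo "none")

if [ "$remote_ver" != "$local_ver" ]; then
    echo "newer image ($remote_ver) available, full deployment needed"
    # full restore of the partition structure would go here
else
    echo "image is up to date, only refreshing the qcow2 files"
    # partial copy of the qcow2 files would go here
fi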
On the client, there is a volume group that consists of LVs of different sizes and also "normal" partitions (like /dev/sda1).
Is there a way to deploy a whole partition structure using a CLI?
I already figured out how to recover one disk out of the whole system.
But it would take a lot of effort to script around that to get the destination structure I want.
I found out that there is no way to "run" Clonezilla as a CLI (which I actually cannot understand). I tried to use parts of the Clonezilla live ISO with the command "ocs-sr", but I got stuck somewhere and it always gives me an "unknown command" error.
For my case the best would be something like:
clonezilla --restore /path/to/images/folder --dest /dev
which applies all images in the image folder generated by Clonezilla to the client.
Any help is highly appreciated.
I've found that using Clonezilla's preparation hook does the trick for me. You can use the ocs_prerun parameter, which will run a script before Clonezilla does anything.
If you are stuck with a company-hardened image, you can try this to set up an (Ubuntu) Linux with the needed programs on it.
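As a rough sketch of the idea (the image name and script path are placeholders, and the ocs-sr options are quoted from memory, so verify them with ocs-sr --help on the live image before relying on them):

# Clonezilla live kernel command line: run our own script before Clonezilla starts
ocs_prerun="/path/to/prepare.sh"

# unattended restore of a saved image onto /dev/sda, called from a script or interactively
/usr/sbin/ocs-sr -g auto -e1 auto -e2 -r -j2 -p true restoredisk my-image-name sda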

Unable to increase disk size on file system

I'm currently trying to log in to one of the instances created on Google Cloud, but found myself unable to do so. Somehow the machine escaped my attention and the hard disk got completely full. Of course I wanted to free some disk space and make sure the server could start again, but I am facing some issues.
First off, I found the guide on increasing the size of the persistent disk (https://cloud.google.com/compute/docs/disks/add-persistent-disk). I followed it and already set the disk to 50 GB, which should be fine for now.
However, at the file-system level the disk is still full, so I cannot make any SSH connection. The error is simply a timeout, caused by the fact that there is absolutely no space for the SSH daemon to write to its log. Without any form of connection I cannot free disk space and/or run the "resize2fs" command.
Furthermore, I already tried different approaches.
I seem to not be able to change the boot disk to something else.
I created a snapshot and tried to increase the disk size on the new instance I created from that snapshot, but it has the same problem (filesystem is stuck at 15 GB).
I am not allowed to mount the disk as an additional disk in another instance.
Currently I'm pretty much out of ideas. The important data on the disk was backed up, but I'd rather have the settings working as well. Does anyone have any clues as to where to start?
[EDIT]
Currently still trying out new things. I have also tried to run shutdown and startup scripts that remove /opt/* in order to free some temporary space, but the scripts either don't run or produce some error I cannot catch. It's pretty frustrating working nearly blind, I must say.
The next step for me would be to try and get the snapshot locally. It should be doable using the bucket but I will let you know.
[EDIT2]
Getting a snapshot locally is not an option either, or so it seems. Images from the Google Cloud instances can only be created or deleted, not downloaded.
I'm now out of ideas.
So I finally found the answer. These steps were taken:
In the GUI I increased the size of the disk to 50 GB.
In the GUI I detached the drive by deleting the machine, whilst ensuring that I did not throw away the original disk.
In the GUI I created a new machine with a sufficiently big hard disk.
On the command line (important!) I attached the disk to the newly created machine (the GUI option still has a bug ...).
After that I could mount the disk as a secondary disk and perform all the operations I needed.
Keep in mind: by default Google Cloud solutions do NOT use logical volume management, so pvresize/lvresize/etc. are not installed, and resize2fs might not work out of the box.
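For reference, the command-line attach and the follow-up resize looked roughly like this; the instance, disk and device names are placeholders for my own:

# attach the old (full) disk to the rescue machine from the CLI
gcloud compute instances attach-disk rescue-vm --disk old-boot-disk --zone europe-west1-b

# inside the rescue machine: mount the disk, free some space, then grow the filesystem
sudo mount /dev/sdb1 /mnt
sudo rm -rf /mnt/opt/*        # example cleanup; anything non-essential works
sudo umount /mnt
sudo growpart /dev/sdb 1      # only needed if the partition still ends at the old 15 GB
sudo resize2fs /dev/sdb1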

hung_task_timeout_secs error during copy to a mount point in Linux

I am trying to copy data files from my VM to an NFS VM with ZFS storage (both VMs can talk to each other). During the copy I sometimes encounter this error:
INFO: task cp: blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message
Both my VMs hang and I have to restart them. If I copy again it works.
I have around 233 data files to copy and it's becoming difficult to restart the VMs again and again.
I looked at the solutions given on the internet and changed vm.dirty_ratio to 5 and vm.dirty_background_ratio to 10, but it did not work.
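For reference, this is how I applied those settings (note that vm.dirty_background_ratio is normally set lower than vm.dirty_ratio, so the two values above may be worth swapping):

# apply at runtime
sysctl -w vm.dirty_ratio=5
sysctl -w vm.dirty_background_ratio=10

# or make them permanent in /etc/sysctl.conf
# vm.dirty_ratio = 5
# vm.dirty_background_ratio = 10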
I am running these VMs on VirtualBox and allocated around 17 GB RAM for one and around 6 GB RAM for the NFS VM.
Any hack which could help me copy these files to the NFS share without my VMs hanging?
I am sorry if I am answering a question with more questions, but this case has many variables that need exploring.
1. You have a Linux VM sharing your storage (assumption).
A. Which distro? 32 or 64 bit? When the problem happens, what does top report for system load?
B. Local storage, NAS, or SAN?
C. Which version of NFS? 3 or 4?
D. Can you set the options of your mount when mapping the NFS share? You might want to play with rsize and wsize, setting them to at least 64000. I would also recommend setting noatime and nodiratime on the share (see the example mount line after this list).
E. From my VMware background with Gluster, there are some timeout/refresh settings you can set on the storage side, controlling how often the storage publishes its presence to say it is alive. A good start is 20 seconds.
F. VMware can tell you how much latency you have for read or write on a physical and on a VM level. Try to figure out those to know who to blame.
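For point D, the kind of mount options I have in mind look like this; the server name and paths are placeholders:

# NFS mount with larger read/write sizes and no atime updates
mount -t nfs -o rsize=65536,wsize=65536,noatime,nodiratime nfsserver:/export/data /mnt/data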
Ah, and, of course, make sure your Linux VM has the latest patches applied.
Let's see where we get from here.

Running a webserver on a virtual machine (VirtualBox) - Pros/Cons in terms of security

I want to sharpen my skills in terms of GNU/Linux and get a better understanding of how servers work, so I thought I'd set up an Apache webserver with FTP, SSH, SVN etc. Since I use Adobe products every day in my line of work, installing a Linux distro directly on my machine isn't an option. Yes, I could probably dual-boot Linux and Vista, but since I am a novice I don't want to risk something happening to my machine.
So I thought I'd start by installing a distro with a pretty steep learning curve and a lot of manual configuration, to maximize familiarity with command-line operations and such. The goal is to make it work and have a safe setup.
So before I write a wall of text:
What pros and cons are there, in terms of security, to a setup like this?
Thank you!
None; there is no difference whether the *nix system is on a VM or on physical hardware if you give it access to resources.
In the case of the VM, if you don't want it to have access to the host's hard drive, then don't add the physical hard drive. The same goes for the network and any other resource.
I am running a bunch of virtual servers on my single server. I'm using OpenVZ but the basic pros and cons are the same.
Pros
I enjoy the fact that I get to experiment a lot. I can install stuff, screw things up royally, and then just wipe out the entire virtual server and start over. It beats re-installing the OS in real-life. I can also easily compare and contrast competing products this way. I'm also able to monitor the running of the system and understand how it works in a more intimate way.
Cons
Resource consumption, which is the reason why I chose OpenVZ - it doesn't consume as much as VirtualBox.
Security wise, you need to take the same precautions as you do for a real system. The difference is that if your machine is compromised, you can just wipe it out easily.

Good Secure Backups for Developers at Home [closed]

What is a good, secure method of doing backups for programmers who do research & development at home and cannot afford to lose any work?
Conditions:
1. The backups must ALWAYS be within reasonably easy reach.
2. Internet connection cannot be guaranteed to be always available.
3. The solution must be either FREE or priced within reason, and subject to 2 above.
Status Report
For now, this only considers free options.
The following open-source projects are suggested in the answers (here & elsewhere):
BackupPC is a high-performance, enterprise-grade system for backing up Linux, WinXX and MacOSX PCs and laptops to a server's disk.
Storebackup is a backup utility that stores files on other disks.
mybackware: These scripts were developed to create SQL dump files for basic disaster recovery of small MySQL installations.
Bacula is [...] to manage backup, recovery, and verification of computer data across a network of computers of different kinds. In technical terms, it is a network-based backup program.
AutoDL 2 and Sec-Bk: AutoDL 2 is a scalable, transport-independent automated file transfer system. It is suitable for uploading files from a staging server to every server on a production server farm [...] Sec-Bk is a set of simple utilities to securely back up files to a remote location, even a public storage location.
rsnapshot is a filesystem snapshot utility for making backups of local and remote systems.
rbme: Using rsync for backups [...] you get perpetual incremental backups that appear as full backups (for each day) and thus allow easy restore or further copying to tape etc.
Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. [...] It uses librsync [for] incremental archives.
simplebup, to do real-time backup of files under active development, as they are modified. This tool can also be used for monitoring other directories. It is intended as on-the-fly automated backup, not as version control. It is very easy to use.
Other Possibilities:
Using a Distributed Version Control System (DVCS) such as Git (/Easy Git), Bazaar, or Mercurial answers the need to have the backup available locally.
Use free online storage space as a remote backup, e.g. compress your work/backup directory and mail it to your Gmail account.
Strategies
See crazyscot's answer
I prefer http://www.jungledisk.com/ .
It's based on Amazon S3, cheap, multiplatform, multiple machines with a single license.
USB hard disk + rsync works for me
(see here for a Win32 build)
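Something along these lines, run by hand or from cron; the paths are just examples:

# mirror the working directory onto the USB disk, preserving attributes and pruning deleted files
rsync -a --delete ~/work/ /media/usb-backup/work/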
Scott Hanselman recommends Windows Home Server in his aptly titled post
The Case of the Failing Disk Drive or Windows Home Server Saved My Marriage.
First of all: keeping backups off-site is as important for individuals as it is for businesses. If your house burns down, you don't want to lose everything.
This is especially true because it is so easy to accomplish. Personally, I have an external USB hard disk I keep at my father's house. Normally it is hooked up to his internet connection and I back up over the net (using rsync), but when I need to back up really big things, I collect it and copy things over USB. Ideally, I should get another disk to spread the risk.
Other options are free online storage facilities (use encryption!).
For security, just use TrueCrypt. It has a good name in the IT world, and seems to work very well.
Depends on which platform you are running on (Windows/Linux/Mac/...?)
As a platform independent way, I use a personal subversion server. All the valuables are there, so if I lose one of the machines, a simple 'svn checkout' will take things back. This takes some initial work, though, and requires discipline. It might not be for you?
As a second backup for the non-svn stuff, I use Time Machine, which is built-in to OS X. Simply great. :)
I highly recommend www.mozy.com. Their software is easy and works great, and since it's stored on their servers you implicitly get offsite backups. No worrying about running a backup server and making sure it's working. Also, the company is backed by EMC (a leading data storage company), which gives me enough confidence to trust them.
I'm a big fan of Acronis True Image. Make sure you rotate through a few backup HDDs so you have a few generations to go back to, or in case one of the backups goes bang. If it's a major milestone I snail-mail a set of DVDs to Mum and she files them for me. She lives in a different state, so it should cover most disasters of less-than-biblical proportions.
EDIT: Acronis has encryption via a password. I also find the bandwidth of snail mail to be somewhat infinite - 10 GB overnight works out to about 115 KB/s, give or take. Never been throttled by Australia Post.
My vote goes for cloud storage of some kind. The problem with nearly all 'home' backups is that they stay in the home, which means any catastrophic damage to the system being backed up will probably damage the backups as well (fire, flood etc). My requirements would be:
1) automated - manual backups get forgotten, usually just when most needed
2) off-site - see above
3) multiple versions - that is, back up to more than one thing, in case that one thing fails.
As a developer, data sizes for backup are usually relatively small, so a couple of free cloud backup accounts might do. They also often fulfil requirement 1, as they can usually be automated. I've heard good things about www.getdropbox.com/.
The other advantage of more than 1 account is you could have one on 'daily sync' and another on 'weekly sync' to give you some history. This is nowhere near as good as true incremental backups.
Personally I prefer a scripted backup to local hard drives, which I rotate to work as 'offsites'. This is in large part due to my hobby (photography) and thus my relatively lame internet upstream bandwidth not coping with the data volume.
Take home message - don't rely on one solution and don't assume that your data is not important enough to think about the issues as deeply as the 'Enterprise' does.
Buy a fire-safe.
This is not just a good idea for storing backups, but a good idea period.
Exactly what media you put in it is the subject of other answers here.
But, from the perspective of recovering from a fire, having a washable medium is good. As long as the temperature doesn't get too high CDs and DVDs seem reasonably resilient, although I'd be concerned about smoke damage.
Ditto for hard-drives.
A flash drive does have the benefit that there are no moving parts to be damaged and you don't need to be concerned about the optical properties.
mozy.com is king. I started using it just to back up code and then ponied up the 5 bucks a month to back up my personal pictures and other stuff that I'd rather not lose if the house burns down. The initial backup can take a little while, but after that you can pretty much forget about it until you need to restore something.
Get an external hard drive with a network port so you can keep your backups in another room which provides a little security against fire in addition to being a simple solution you can do yourself at home.
The next step is to get storage space in some remote location (there are very cheap monthly prices for servers for example) or to have several external hard drives and periodically switch between the one at home and a remote location. If you use encryption, this can be anywhere such as a friend's or parents' place or work.
Bacula is good software: it's open source and gives good performance, comparable to commercial offerings. It's a bit difficult to configure the first time, but not that hard, and it has good documentation.
I second the vote for JungleDisk. I use it to push my documents and project folders to S3. My average monthly bill from Amazon is about 20 cents.
All my projects are in Subversion on an external host.
As well as this, I am on a Mac, so I use SuperDuper to take a nightly image of my drive. I am sure there are good options in the Windows/Linux world.
I have two external drives that I rotate on a weekly basis, and I store one of the drives off-site during its week off.
This means that I am only ever 24 hours away from an image in case of failure, and only 7 days from an image in case of catastrophic failure (fire, theft). The ability to plug the drive into a machine and be running instantly from the image has saved me immensely. My boot partition was corrupted during a power failure (not a hardware failure, luckily). I plugged the backup in, restored, and was working again in the time it took to transfer the files off the external drive.
Another vote for mozy.com
You get 2 GB for free, or $5/month gives you unlimited backup space. Backups can occur on a timed basis, or when your PC/Mac is not busy. Everything is encrypted during transit and storage.
You can retrieve files via the built-in software, through the web, or by paying for a DVD to be burned and posted back.
If you feel like syncing to the cloud and don't mind the initial, beta, 2GB cap, I've fallen in love with Dropbox.
It has versions for Windows, OSX, and Linux, works effortlessly, keeps files versioned, and works entirely in the background based on when the files changed (not a daily schedule or manual activations).
Ars Technica and Joel Spolsky have both fallen in love with the service (though the love seems strong with Spolsky, but let's pretend!), if the word of a random internet geek is not enough.
These are interesting times for "the personal backup question".
There are several schools of thought now:
Frequent Automated Local Backup + Periodic Local Manual Backup
Automated: Scheduled Nightly backup to external drive.
Manual: Copy to a second external drive once per week / month / year / oops-forgot and drop it off at "Mom's house".
Lots of software in this field, but here are a few: there's rsync and Time Machine on Mac, and DeltaCopy (www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp) for Windows.
Frequent Remote Backup
There are a pile of services that enable you to back up across your internet connection to a remote data centre. Amazon's S3 service + JungleDisk's client software is a strong choice these days - not the cheapest option, but you pay for what you use, and Amazon's track record suggests that as a company it will be in business as long as or longer than any other storage provider hanging their shingle today.
Did I mention it should be encrypted? Props to JungleDisk for handling the "encryption issue" and future-proofing (open source library to interoperate with Jungle Disk) pretty well.
All of the above.
Some people call it being paranoid ... others think to themselves "Ahhh, I can sleep at night now".
Also, it's more fault-tolerance than backup, but you should check out Drobo - basically it's dead simple RAID that seems to work quite well.
Here are the features I'd look out for:
As near to fully automatic as possible. If it relies on you to press a button or run a program regularly, you will get bored and eventually stop bothering. An hourly cron job takes care of this for me; I rsync to the 24x7 server I run on my home net.
Multiple removable backup media so you can keep some off site (and/or take one with you when you travel). I do this with a warm-pluggable SATA drive bay and a cron job which emails me every week to remind me to change drives.
Strongly encrypted media, in case you lose one. The linux encrypted device support (cryptsetup et al) does this for me.
Some sort of point-in-time recovery, but consider carefully what resolution you want. Daily might be enough - having multiple backup media probably gets you this - or you might want something more comprehensive like Apple's Time Machine. I've used some careful rsync options with my removable drives: every day creates a fresh snapshot directory, but files which are unchanged from the previous day are hard linked instead of copied, to save space.
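The hard-link trick from the last point looks roughly like this; the dates and paths are placeholders, not my actual layout:

# unchanged files are hard-linked against yesterday's snapshot instead of being copied again
rsync -a --delete --link-dest=/backup/2019-01-01 /home/me/ /backup/2019-01-02/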
Or simply set up a Gmail account and mail it to yourself :) Unless you're a bit paranoid about Google knowing about your stuff, since you said research. It doesn't help you much with structure and the like, but it's free, big storage and off-site, so quite safe.
If you use OS X 10.5 or above then the cost of Time Machine is the cost of an external hard drive. Not only that, but the interface is dead simple to use. Open the folder you wish to recover, click on the time machine icon, and browse the directory as if it was 1999 all over again!
I haven't tried to encrypt it, but I imagine you could use TrueCrypt.
Yes this answer was posted quite some time after the question was asked, however I believe it should help those who stumble across this posting in the future (like I did).
Set up a Linux or xBSD server:
- Set up a source control system of your choice on it.
- Mirrored RAID (RAID 1) at a minimum.
- Daily (or even hourly) backups to external drive[s].
From the server you could also set up an automatic offsite backup. If the internet is out, you'd still have your external drive and just have it auto-sync once it comes back.
Once it's set up it should be about zero work.
You don't need anything "fancy" for offsite backup. Get a web host that allows storing non-web data. Sync via SFTP or rsync over SSH. Store the data on the other end in a TrueCrypt container if you're paranoid.
If you work for an employer/contractor, also ask them. Most places already have something in place or will let you work with their IT.
My vote goes to dirvish (for linux). It uses rsync as backend but is very easy to configure.
It makes automatic, periodic, differential backups of directories. The big benefit is that it creates hard links to all files not changed since the last backup. So restore is easy: just copy the last created directory back, instead of restoring all diffs one after another like other differential backup tools need to do.
I have the following backup scenarios and use rsync scripts to store on USB and network shares.
(weekly) Windows backup for "bare metal" recovery
Content of system drive C:\ using Windows Backup for quick recovery after a physical disk failure, as I don't want to reinstall Windows and applications from scratch. This is configured to run automatically using the Windows Backup schedule.
(daily and conditional) Active content backup using rsync
Rsync takes care of all changed files from laptop, phone, and other devices. I back up the laptop every night and after significant changes in content, like importing recent photo RAWs from an SD card to the laptop.
I've created a bash script that I run from Cygwin on Windows to start rsync: https://github.com/paravz/windows-rsync-backup
If you're using deduplication, STAY AWAY from JungleDisk. Their restore client makes a mess of the reparse point and makes the file unusable. You hopefully can fix it in safe mode with:
fsutil reparsepoint delete <filename>
