Migrate Google Compute Platform snapshot/disk to SoftLayer - ISO

I created a .tar.gz of a Google Compute Platform disk (using gcimagebundle). Is there some way to import this to IBM SoftLayer? SoftLayer wants an .iso, so assuming I find a way to convert the .tar.gz to .iso, can I expect using that .iso as a boot disk on a different machine to work, or will hardware differences necessarily cause trouble?

Once you convert it to an ISO, you must make sure that it is bootable. See this documentation for more information: https://knowledgelayer.softlayer.com/procedure/import-image
https://knowledgelayer.softlayer.com/procedure/vhd-content-checking-and-conversion
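As a rough sketch of the conversion step (not from the linked docs; the file names are assumptions, and the qemu-img conversion targets a VHD, which is what the second linked procedure covers):
# Extract the raw disk image from the gcimagebundle archive (usually disk.raw).
tar -xzf my-gce-image.tar.gz
# Inspect the image, then convert it; -O vpc produces a VHD.
qemu-img info disk.raw
qemu-img convert -f raw -O vpc disk.raw my-gce-image.vhd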

Related

Using a CLI to recover a disk image saved with clonezilla

I have set up a live CentOS 7 that is booted via PXE if the client is connected to a specific network port.
Once Linux is booted, a small script I wrote compares whether there is a newer image version available on a central host than the one already deployed on the client. This is done by comparing the contents of a versions file. If there is a newer version, the image should be deployed to the client. Otherwise, only parts of the image (qcow2 files) should be replaced, to save time.
Since the image is up to 1 TB, I do not want to apply the whole image in every case. It would also take too long.
On the client, there is a volume group that consists of LVM logical volumes of different sizes, as well as "normal" partitions (like /dev/sda1).
Is there a way to deploy a whole partition structure using a CLI?
I have already figured out how to recover one disk out of the whole system.
But it would take a lot of effort to script around that to get the destination structure I want.
I found out that there is no way to "run" Clonezilla as a CLI (which I honestly cannot understand). I tried to use parts of the Clonezilla live ISO with the command "ocs-sr", but I got stuck somewhere and it always gives me an "unknown command" error.
For my case, the best would be something like:
clonezilla --restore /path/to/images/folder --dest /dev
which applies all the images in the image folder generated by Clonezilla to the client.
Any help is highly appreciated.
I've found that Clonezilla's preparation hook does the trick for me. You can use the ocs_prerun boot parameter, which runs a script before Clonezilla does anything.
If you are stuck with a company-hardened image, you can try this to set up an (Ubuntu) Linux with the needed programs on it.
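As a rough illustration of how this can fit together (not from the original answer; the image name, target disk, and NFS path are assumptions), the image repository can be mounted via ocs_prerun and the restore driven non-interactively with ocs-sr:
# Kernel boot parameter, e.g. appended to the PXE entry for Clonezilla live:
# mount the image repository before Clonezilla starts.
ocs_prerun="mount -t nfs imageserver:/images /home/partimag"
# Then restore a saved image onto disk sda without prompting; the options
# mirror what the interactive wizard would generate.
ocs-sr -g auto -e1 auto -e2 -r -j2 -p true restoredisk my-image-2016 sda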

Use Microsoft Azure as a computing cluster

My lab just got a sponsorship from Microsoft Azure and I'm exploring how to utilize it. I'm new to industrial-level cloud services and pretty confused by the tons of terminology and concepts. In short, here is my scenario:
I want to experiment with the same algorithm on multiple datasets, a.k.a. data parallelism.
The algorithm is implemented in C++ on Linux (Ubuntu 16.04). I did my best to use static linking, but it still depends on some dynamic libraries. However, these dynamic libraries can easily be installed via apt.
Each dataset is structured, meaning the data (images, other files...) are organized in folders.
The ideal system configuration would be a bunch of identical VMs and a shared file system. Then I could submit my jobs with 'qsub' from a script or something. Is there a way to do this on Azure?
I investigated the Batch service, but I'm having trouble installing dependencies after creating a compute node. I also had trouble with storage. So far I have only seen examples of using Batch with Blob storage, which is unstructured.
So are there any other services in Azure that can meet my requirements?
I somehow figured it out myself based on this article: https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-classic-hpcpack-cluster/. Here is my solution:
Create an HPC Pack cluster with a Windows head node and a set of Linux compute nodes. There are several useful templates in the Marketplace.
From the head node, we can execute commands on the Linux compute nodes, either inside HPC Cluster Manager or using "clusrun" from PowerShell. We can easily install dependencies via apt-get on the compute nodes.
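For example, a rough sketch of installing a dependency on every Linux node from the head node (the node group name and the package are assumptions, and clusrun is assumed to run with sufficient privileges on the nodes):
clusrun /nodegroup:LinuxNodes apt-get update
clusrun /nodegroup:LinuxNodes apt-get install -y libboost-all-dev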
Create a file share inside one of the storage accounts. This can be mounted by all machines inside the cluster.
One glitch here is that, for some encryption reason, you cannot mount the file share on Linux machines outside Azure. There are two solutions in my head: (1) mount the file share on the Windows head node and share it from there, either by FTP or SSH; (2) create another Linux VM (as a bridge), mount the file share on that VM, and use "scp" to communicate with it from outside. Since I'm not familiar with Windows, I adopted the latter solution.
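As an illustration (the storage account, share, and host names below are placeholders), the share is mounted over SMB on a VM inside Azure, and data is pushed in from outside through the bridge VM:
# On a Linux VM inside Azure: mount the Azure file share.
sudo mount -t cifs //mystorageacct.file.core.windows.net/datasets /mnt/datasets \
    -o vers=3.0,username=mystorageacct,password=<storage-account-key>,dir_mode=0777,file_mode=0777
# From a machine outside Azure: copy data to the bridge VM, which has the share mounted.
scp -r ./dataset01 azureuser@<bridge-vm-public-ip>:/mnt/datasets/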
For the executable, I simply uploaded the binary compiled on my local machine. Most dependencies are statically linked, but there are still a few dynamic objects. I uploaded these dynamic objects to Azure and set LD_LIBRARY_PATH when executing programs on the compute nodes.
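For instance (the paths and binary name are hypothetical):
# Point the dynamic linker at the uploaded shared objects, then run the job.
export LD_LIBRARY_PATH=/mnt/datasets/libs:$LD_LIBRARY_PATH
/mnt/datasets/bin/my_algorithm --input /mnt/datasets/dataset01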
Job submission is done on the Windows head node. To make it more flexible, I wrote a Python script which writes XML files. The Job Manager can load these XML files to create a job. Here are some instructions: https://msdn.microsoft.com/en-us/library/hh560266(v=vs.85).aspx
I believe there should be a more elegant solution with the Azure Batch service, but so far my small cluster runs pretty well with HPC Pack. Hope this post can help somebody.
Azure Files could provide you with a shared file solution for your Ubuntu boxes - details are here:
https://azure.microsoft.com/en-us/documentation/articles/storage-how-to-use-files-linux/
Again, depending on your requirements, you can create a pseudo-structure in Blob storage through the use of containers and a "/" in the naming strategy of your blobs, as sketched below.
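For example, with the Azure CLI (the container and file names are illustrative), a blob named with "/" separators is listed as if it lived in nested folders:
# "dataset01/images/img_0001.png" appears under a virtual folder
# "dataset01/images/" when the container is listed.
az storage blob upload --account-name mystorageacct --container-name datasets \
    --name dataset01/images/img_0001.png --file ./img_0001.png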
To David's point, whilst Batch is generally looked at for these kinds of workloads, it may not fit your solution. VM Scale Sets (https://azure.microsoft.com/en-us/documentation/articles/virtual-machine-scale-sets-overview/) would allow you to scale your compute capacity either by load or by schedule, depending on your workload's behaviour.

Migrate data from one server to another

I bought a new server and I want to move all the data (directories, subdirectories, users, passwords, etc.) from my old server to it.
Is there a way to do that?
Thanks,
Do you have physical access to both servers? If so, you can use the dd command to clone the disk from the old server onto the disk that is going into the new server.
In order to do this though, both hard drives have to be installed in one of the servers.
You can also use netcat and dd to clone a disk over a network.
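A rough sketch of the netcat approach (device names, port, and host are placeholders; the exact nc flags vary by netcat variant, and double-check the devices since dd will overwrite the target disk):
# On the new server: listen on a port and write the incoming stream to the target disk.
nc -l -p 9000 | dd of=/dev/sdb bs=64K
# On the old server: read the source disk and stream it across the network.
dd if=/dev/sda bs=64K | nc <new-server-ip> 9000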
For the directories and files, use an FTP client from your server if it allows you to; if not, just download all the content to your computer and upload it to the new server.
For the users and passwords, I guess they are in a database. Connect to the database using SSH, telnet, MySQL Admin, or any RDBMS client and export a dump file, then log in to the new server's SQL system and import that dump file.
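If it is MySQL, a minimal sketch of that dump/import step (credentials and host names are placeholders):
# On the old server: dump all databases, including the user/grant tables.
mysqldump -u root -p --all-databases > all-databases.sql
# Copy the dump across, then import it on the new server.
scp all-databases.sql root@<new-server>:/root/
mysql -u root -p < /root/all-databases.sql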
Anyway, you should give more details about both servers so we can help you. For example, are they shared hosting or dedicated machines, and what kind of access do you have to them? Also, knowing their operating system would help people reply to you accurately.
In principle, yes.
If the hardware is similar (= just more RAM and disk space, but the same CPU architecture and no special graphics card drivers), you might be able to copy every file and then install the boot loader once more (the boot loader config usually changes when the hard disk size changes).
Or you can create a list of all services that you use, determine which config files each one uses and then just copy those. Ideally, you shouldn't copy them but compare the old and the new versions and merge them.
The most work-intensive way is to use a tool like Puppet. In a nutshell, Puppet allows you to create install scripts for services (along with all the configuration that you need). So if you need to install a service again (new hardware, second server), you just tell Puppet to do it. On the plus side, your whole installation will be documented, too. If you ever wonder why something is the way it is, you can look into the Puppet files.
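As a tiny sketch of the idea (the nginx example is only illustrative, not from this answer), a manifest is written once and can then be applied on any new machine:
# Write a minimal manifest and apply it; Puppet installs and starts the service.
cat > /tmp/web.pp <<'EOF'
package { 'nginx': ensure => installed }
service { 'nginx': ensure => running, enable => true, require => Package['nginx'] }
EOF
sudo puppet apply /tmp/web.pp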
Of course, this approach takes a lot of time and discipline, so it might not be worth it in your case. Apply common sense.

Distributing Contents for Linux like in Android & iOS?

I'm currently designing a Linux-based system. Users of the system will be allowed to download contents, i.e. programs, from the Internet. The contents will be distributed in zip packages given special extension names, e.g. .cpk instead of .zip, and with zero compression.
I want to give users the same experience found in iOS and Android, in which contents are distributed in contained packages and run from there.
My question is: can I make my Linux system run programs from inside the packages without unzipping them? If not, is there another approach to what I'm after on Linux?
Please note that I don't want to extract the contents into a temp folder and delete them after execution, because that might take a long time, especially for large contents. It would also double the storage space required to run the contents.
Thank you in advance.
klik (at least in the klik2 CMG format) used a zISO image, which can be mounted by the kernel or by a FUSE client, rather than a zip. You could use other filesystem types that are supported by the kernel or via FUSE. Maybe fuse-zip is worth a shot?
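A minimal sketch of the fuse-zip route (package names and paths are assumptions; a .cpk here is just a renamed zip):
mkdir -p /mnt/app
fuse-zip -r myapp.cpk /mnt/app      # -r mounts the archive read-only
/mnt/app/bin/myapp                  # run the program straight from the mounted package
fusermount -u /mnt/app              # unmount when finished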
You could also modify the loader to read directly out of the bundle. For example, Android's Dalvik VM can load dex files directly from apk bundles, which are effectively zip files. (Native code on Android, however, still needs to be unpacked first, and does take more time and space. Modifying the native loader is… tricky.)

Find the files related to a piece of software

I have one doubt. I am doing a project related to the system restore concept in Linux. There I am planning to perform application-wise rollback in case of failure. Is there any way to figure out all the files used by an application on the system?
OK, I will make it a little clearer. For instance, consider the Firefox application. When it is installed, many files are written from the .deb file to folders like /etc, /usr, /opt, etc. In Windows all the files are installed in one folder under Program Files, while in Linux they are not. So is there any way to figure out the files that belong to a piece of software?
Thanks.
Well this can cover several things.
If you mean: which files are provided by the installation of your application? Then the answer is: use decent package management, provide your software as an rpm/deb/... package, and uninstallation will take care of the rest.
If you mean: which libraries are being referenced by your application? Then you can use ldd; this will tell you which dynamic libraries are used when executing the application.
If you mean: which files is my application actively using? Then take a look at the output of lsof (lsof = list open files), or alternatively ls /proc/<pid>/fd/; this will show all file descriptors opened by your application (files, sockets, pipes, ttys, ...).
Or you could use all of the above.
One thing you can't track (unless you log this yourself) is which files have been created by your application during its lifetime.
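A quick sketch of the last two points (Firefox is just an example; the binary path is Debian/Ubuntu-style):
ldd /usr/lib/firefox/firefox      # dynamic libraries the executable links against
PID=$(pgrep -o firefox)           # oldest matching process; adjust the name for your app
lsof -p "$PID"                    # files, sockets, pipes, ttys the process currently has open
ls /proc/"$PID"/fd/               # the same open file descriptors, straight from /proc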
Determining all the files installed along with the app depends on the package manager. All the ones I've dealt with (apt, pacman) have this capability.
To determine all the files currently open by an application, use lsof.
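For example, to list what a package installed (the firefox package name is illustrative):
dpkg -L firefox        # Debian/Ubuntu
pacman -Ql firefox     # Arch
rpm -ql firefox        # RPM-based distros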
Well, that depends ...
Most Linux systems have some kind of package management software, like aptitude on Debian and Ubuntu. There you have information about what belongs to a package. You might be able to use that information. That does not cover files created during the runtime of apps, though.
If you are using an RPM-based distro,
# rpm -Uvh --repackage pkg-1-1.i386.rpm
will repackage the old files and upgrade in a transaction, so you can later roll back if something goes wrong. To roll back to yesterday's state, for example:
# rpm -Uvh --rollback yesterday
See this article for other examples.
