How accurate is "everything is a file" with regard to Linux? [closed]

Suppose I were to set up an Ubuntu machine and install some services and software on it. Further suppose I were to set up another stock Ubuntu machine, this time without the additional services and software. I know there are ways of creating installation/setup scripts, or taking disk images and such, to build large numbers of identical machines. But if I were to programmatically take a file-based diff between the two installations and migrate all file additions/changes/removals from the fully configured system to the stock system, would I then have two identical, working systems (i.e. a full realization of the 'everything is a file' Linux philosophy), or would the newly configured system be left in an inconsistent state because simply transferring files isn't enough? I am excluding hostname references and the like from my definitions of identical and inconsistent.
I ask this because I need to create a virtual machine, install a bunch of software, and add a bunch of content to tools like Redmine, and in the near future I'm going to have to mirror that onto another VM. I cannot simply take a disk image, because the source I receive the second VM from does not give me that sort of access and the VM will have different specs. I also cannot go with an installation-script-based approach at this point, because that would require a lot of overhead, would not account for the added user content, and I won't know everything that is going to be needed on the first VM until our environment is stable. The approach I asked about above seems to be a roundabout but reasonable way to get things done, so long as its assumptions are theoretically accurate.
Thanks.

Assuming that the two systems are largely identical in terms of hardware (that is, same network cards, video cards, etc.), simply copying the files from system A to system B is generally entirely sufficient. In fact, at my workplace we have used exactly this process as a "poor man's P2V" mechanism in a number of successful cases.
If the two systems have different hardware configurations, you may need to make appropriate adjustments on the target system to take this into account.
UUID Mounts
If you have UUID based mounts -- e.g., your /etc/fstab looks like this...
UUID=03b4f2f3-aa5a-4e16-9f1c-57820c2d7a72 /boot ext4 defaults 1 2
...then you will probably need to adjust those identifiers. A good solution is to use label based mounts instead (and set up the appropriate labels, of course).
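For example (a sketch; the label name and device are placeholders for whatever your /boot actually lives on), you would label the filesystem and then reference that label in /etc/fstab:
# label the boot filesystem (assumes it is ext4 on /dev/sda1)
e2label /dev/sda1 boot
# then mount it by label in /etc/fstab instead of by UUID
LABEL=boot /boot ext4 defaults 1 2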
Network cards
Some distributions record the MAC address of your network card as part of the network configuration and will refuse to configure your NIC if the MAC address is different. Under RHEL-derivatives, simply removing the MAC address from the configuration will take care of this. I don't think this will be an issue under Ubuntu.
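On a RHEL-style system that typically means dropping the HWADDR line from the interface configuration; a sketch, assuming the interface is eth0:
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
# HWADDR=00:11:22:33:44:55   <- remove or comment this out on the new machine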

Related

Proxmox VE: How to create a raw disk and pass it through to a VM

I am searching for an answer on how to create and pass a raw device through to a VM using Proxmox. Through that I am hoping to get full control of the disk, including S.M.A.R.T. stats and disk spindown.
Currently I am using the SATA passthrough offered by Proxmox.
Unfortunately I have no clue how to create a raw disk file from my (empty) disk. Furthermore, I am not entirely certain how to bind it to the VM.
I hope someone knows the relevant steps.
Side notes:
This question is just one approach I want to try out in order to achieve a certain goal. For the sake of simplicity I have confined my question to the part above. However, if you have a better idea, feel free to give me a hint. So far I have tried a lot of things to achieve my ultimate goal.
Goal that I want to achieve:
I am using Proxmox VE 5.3-8 on an HP ProLiant Gen8 server. It hosts several VMs, among which OMV should serve as a NAS. Since the files will not be accessed too often, I opt for spinning down the drives.
My goal is reduction of noise and power savings.
Current status:
I passed through two disks by adding them to
/etc/pve/nodes/pve/qemu-server/vmid.conf
sata1: /dev/disk/by-id/{disk-id}
Through that I do see SMART stats and everything except disk spindown works fine. Using virtio instead of SATA does not give me SMART values.
Using hdparm -y to put a drive to sleep does not work inside the VM. Doing the same on the Proxmox console results in a sleep, but the drive wakes up a few seconds later.
Passing through the entire HBA is currently not an option.
I read in a forum that first installing Debian and then manually installing the Proxmox packages worked for someone. However, that was still for Debian Jessie and three years ago:
Install Proxmox VE on Debian Stretch
Before I try this as a last resort, I want to make sure that passing the disk through as a raw file will actually lead to the desired result.
Maybe someone has an idea on how to achieve my ultimate goal.
I do not have a clear answer to your question as far as "passing through" the disk goes, but I recently found a good enough solution for my use case.
I have an HDD that I planned to use as a backup directory for VMs, but I also wanted to put any kind of data on it and share the disk with any VM that wants it.
The solution I found is to format the disk using ZFS, then create mount points for different uses (vzdump backups, a shared NAS folder across VMs, an ISO mount point, etc.). I followed this guide: https://forum.level1techs.com/t/how-to-create-a-nas-using-zfs-and-proxmox-with-pictures/117375
I ended up installing Samba on the Proxmox host itself, with a config that shares some folders/mount points of the disk via SMB. Now the device appears as a normal disk over the network, with excellent read/write speed since everything is local.
Sorry that this post does not "answer" your question (no SMART data or other low-level things), but it does give you shared storage.
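A minimal sketch of that setup, assuming the pool name (tank), dataset name, and share path are placeholders and the disk id is the one already referenced above:
# create a ZFS pool on the raw disk and a dataset for shared data
zpool create tank /dev/disk/by-id/{disk-id}
zfs create -o mountpoint=/tank/share tank/share
# /etc/samba/smb.conf -- export the dataset over SMB
[share]
   path = /tank/share
   read only = no
   browseable = yes
After restarting smbd, the share can be mounted from any VM (or other machine) on the network.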

Easy ways to separate a DATA partition (dev environments/apps/etc.) from a minimal-sized Linux OS partition? Docker, overlayfs/overlayroot, or something else? [closed]

Back when the powers that be didn't squeeze the middle class as much and there was time to waste "fooling around", I used to compile everything from scratch from .tgz files, manually chase down dependencies, and make install to a local directory.
Sadly, there's no more time for such l(in)uxuries these days, so I need a quick, lazy way to keep my 16 GB Linux boot/OS partition as small as possible and have apps/software/development environments and other data on a separate partition.
I can deal with mounting my home dir on another partition, but my remaining issue is with /var, /usr, etc., and all the stuff that gets installed there. Every time I apt-get some packages I end up with a trillion dependencies installed, because the author of a 5 kB app decided not to include a 3 kB parser and wanted me to install another 50 MB package to get that 3 kB library :) yay!
Of course, later when I uninstall those packages, all those dependencies that got installed (and that nothing really needs anymore) get left behind.
But anyway, the point is that I don't want to have to compile manually and spend hours chasing down dependencies so I can build and install to my own paths, and then have to tinker with a bunch of configuration files. So after some research this is the best I could come up with; did I miss some easier solution?
Use OVERLAYFS and Overlayroot to do an overlay of my root / partition onto my secondary drive or partition, so that my Linux OS partition is never written to anymore and everything is transparently written to the other partition.
I like the idea of this method and I want to know who uses it and whether it's working out well. What I like is that this way I can continue to be lazy and just blindly apt-get install toolchains, and everything should work as normal without any special tinkering with each app's config files to change paths.
It's also nice that dependencies will be easily reused by the different apps.
Any problems I haven't foreseen with this method? Is anyone using this solution?
DOCKER or Other Application Containers, libvirt/lxc etc?
This might be THE WAY to go? With this method I assume I should install ALL the apps I want to install/try out inside ONE Docker container, otherwise I'll be wasting storage space by duplicating dependencies in each container? Or does Docker (or other app containers) do deduplication of files/libs across containers?
Does this work fine for graphical/X Windows apps inside Docker/containers?
If you know of something easier than overlayfs/overlayroot or Docker/LXC to accomplish what I want, and that's not any more hassle to set up, please tell me. Thanks.
After further research and testing/trying out Docker for this, I've decided that "containers" like Docker are the easy way to go for installing apps you may want to purge later. It seems this technique already uses an overlayfs/overlayroot kind of approach under the hood, reusing layers that are already present and installing the other needed dependencies inside the Docker image. So basically I gain the same advantages as the manual overlayroot technique I talked about, yet without having to do the work of setting it all up.
So yep, I'm a believer in application containers now! It even works for GUI apps.
Now I can keep a very lightweight, small main root partition and simply install anything I want to try out inside app containers, then delete them when done.
This also solves the problem of lingering, no-longer-needed dependencies, which I'd have to deal with myself if I were manually doing an overlayroot over the main /.
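As a concrete illustration of that workflow (a minimal sketch; the image, package, and app names are just examples, and it assumes an X11 session on the host), a GUI app can be run from a throwaway container by passing the X socket through, so everything it pulls in disappears when the container is removed:
# run a GUI app from a disposable container; --rm deletes the container on exit
docker run --rm -it \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    ubuntu:22.04 \
    bash -c "apt-get update && apt-get install -y x11-apps && xeyes"
# unused image layers can be cleaned up later with:
docker image prune
Depending on the host's X server configuration you may also need to allow local connections (e.g. via xhost) for the window to appear.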

How to use Google Cloud Compute Engine [closed]

I work on Windows 10.
I have made a Google Cloud Linux Compute Engine instance with a 230 GB standard persistent disk, 1 GPU (Tesla K80), 13 GB memory, and 2 vCPUs.
I have installed Jupyter Notebook and all the deep learning frameworks, and I am able to use them perfectly.
But I don't know how to access the data for deep learning that is on my computer from the Jupyter Notebook running on my Compute Engine instance.
Can anybody tell me how to use the boot disk and what exactly its use is?
How do I access data from my laptop?
I looked into the following links but couldn't understand the terminology.
https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting
https://cloud.google.com/compute/docs/disks/mount-ram-disks
To clarify the terminology:
Persistent disk: this works the same way as adding a hard disk to your machine. If you add one more, you have to mount it somewhere inside your filesystem (e.g. /media/data). You can find out how to make the directory and run the mount command in the documentation you mentioned (from step 5 onward).
RAM disk: this uses part of your memory as if it were an extra disk (e.g. for high-performance computing). It is not considered storage and is mounted as tmpfs, which doesn't keep data permanently. You may use it if your task needs a large amount of very fast scratch space.
(Disclaimer: I have never used either of these extras myself.)
If you cannot find your data in Jupyter, it depends on the directory in which you started the notebook server. For example, if you start Jupyter Notebook in your home directory, you will only see files under your home directory. If you have a mounted drive, one way to reach that mount is to make a symlink to it from your working directory.
P.S. You can also use software like WinSCP to access the whole filesystem, rather than working only through Jupyter.
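A minimal sketch of attaching and using the extra persistent disk from Jupyter (it assumes the new disk shows up as /dev/sdb and that the notebook server is started from the home directory; adjust device and paths to your setup):
# format the new disk (only the first time!), mount it, and symlink it into $HOME
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /media/data
sudo mount /dev/sdb /media/data
sudo chown $USER:$USER /media/data
ln -s /media/data ~/data   # the "data" folder now shows up in Jupyter's file browser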
Make sure to set an ingress firewall rule to allow traffic to the GCE instance.
In the console, go to Networking > VPC network > External IP addresses and reserve a static IP address.
Then go to VPC network > Firewall rules and create a rule (with a target tag) allowing protocol tcp:9999 from source IP range 0.0.0.0/0.
When you create your instance, associate it with both the IP address and the firewall rule.
Here you can find more detailed instructions on how to create firewall rules on a GCP project: https://cloud.google.com/vpc/docs/using-firewalls
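The same rule can be created from the command line; a sketch using gcloud, where the rule name and tag are just examples and 9999 is the port used above:
# allow inbound TCP 9999 (e.g. for Jupyter) to instances tagged "jupyter"
gcloud compute firewall-rules create allow-jupyter \
    --allow=tcp:9999 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=jupyter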

Embedded Linux – mechanism for deploying firmware updates? [closed]

I am considering developing on the Yocto Project for an embedded Linux project (an industrial application), and I have a few questions for those with experience with embedded Linux in general -- Yocto experience is a bonus. I just need to get an idea of what is commonly done for firmware updates.
I have a few requirements: authentication, a secure communications protocol, and some type of rollback if the update fails. Also, if there is a way to gradually release the patch across the fleet of devices, that would be interesting, as I want to avoid bricking devices in the field.
How do you deploy updates/patches to field devices today – and how long did it take to develop it? Are there any other considerations I am missing?
Although you certainly can use rpm, deb, or ipk for your upgrades, my preferred way (at least for small to reasonably sized images) is to have two images stored on flash, and to only update complete rootfs images.
Today I would probably look at meta-swupdate if I were to start working with embedded Linux using OpenEmbedded / Yocto Project.
What I've been using for myself and multiple clients is something more like this:
A container upgrade file, which is a tarball consisting of another tarball (hereafter called the upgrade file), the md5sum of the upgrade file, and often a GPG signature.
An updater script stored in the running image. This script is responsible for unpacking the outer container of the upgrade file, verifying the correctness of the upgrade file using the md5sum, and often verifying a cryptographic signature (normally GPG-based). If the upgrade file passes these tests, the updater script looks for an upgrade script inside the upgrade file and executes it.
The upgrade script inside the upgrade file performs the actual upgrade, i.e. it normally rewrites the non-running image, extracts and rewrites the kernel, and, if these steps are successful, instructs the bootloader to use the newly written kernel and image instead of the currently running system.
The benefit of having the script that performs the actual upgrade inside the upgrade file is that you can do whatever you need in the future in a single step. I've made special upgrade images that upgrade the firmware of attached modems, or that extract some extra diagnostics information instead of performing an actual upgrade. This flexibility will pay off in the future.
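A minimal sketch of such an updater script (the file names and the inner script name are illustrative, not from any specific project):
#!/bin/sh
# updater.sh -- unpack, verify and execute a container upgrade file
set -e
CONTAINER=$1                    # e.g. update-container.tar
WORK=/tmp/upgrade
mkdir -p "$WORK"
tar -xf "$CONTAINER" -C "$WORK"            # yields upgrade.tar, upgrade.tar.md5, upgrade.tar.sig
cd "$WORK"
md5sum -c upgrade.tar.md5                  # integrity check
gpg --verify upgrade.tar.sig upgrade.tar   # authenticity check (public key pre-installed)
tar -xf upgrade.tar                        # contains upgrade.sh plus the new images
./upgrade.sh                               # the payload decides what "upgrade" means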
To make the system even more reliable, the bootloader uses a feature called bootcount, which counts the number of boot attempts; if this number gets above a threshold, e.g. 3, the bootloader chooses to boot the other image (as the image configured to be booted is considered faulty). This ensures that if the image is completely corrupt, the other, stored image will automatically be booted.
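In U-Boot, for instance, this looks roughly like the following (a sketch, assuming the bootloader was built with boot-count support; the alternate boot command is a placeholder):
# U-Boot environment: fall back to the alternate image after 3 failed boots
setenv bootlimit 3
setenv altbootcmd 'run bootcmd_alt'   # command that boots the other kernel/rootfs
saveenv
# a successfully booted userspace must reset the counter, e.g.:
fw_setenv bootcount 0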
The main risk with this scheme is that you upgrade to an image whose upgrade mechanism is broken. Normally, we also implement some kind of rescue mechanism in the bootloader, such that the bootloader can reflash a completely new system; though this rescue mechanism usually means that the data partition (used to store configurations, databases, etc.) will also be erased. This is partly for security (not leaking info) and partly to ensure that after this rescue operation the system state is completely known to us (which is a great benefit when the operation is performed by an inexperienced technician far away).
If you do have enough flash storage, you can do the following. Make two identical partitions: one for the live system, the other for the update. Let the system pull the updated image over a secure method and write it directly to the other partition. It can be as simple as plugging in a flash drive, with the USB socket behind a locked plate (physical security), or using ssh/scp with appropriate host and user keys. Swap the partitions with sfdisk, or edit the settings of your bootloader, only if the image was downloaded and written correctly. If not, then nothing happens, the old firmware lives on, and you can retry later. If you need gradual releases, then let the clients select an image based on the last byte of their MAC address. All this can be implemented with a few simple shell scripts in a few hours. Or a few days if you actually test it :)
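A sketch of the core of such a script (device names, hosts, and the rollout threshold are placeholders; the final bootloader-switch step depends entirely on your platform):
#!/bin/sh
# a-b-update.sh -- pull a new image, write it to the inactive partition, verify it
set -e
SPARE=/dev/mmcblk0p3            # the partition NOT currently mounted as /
# staggered rollout: only act if the last byte of our MAC address is below a threshold
LAST=$(cat /sys/class/net/eth0/address | awk -F: '{print $6}')
[ $((0x$LAST)) -lt 64 ] || exit 0          # roughly a quarter of the fleet this round
# pull the image over a secure channel (host/user keys set up beforehand)
scp updates@server:/images/rootfs.img /tmp/rootfs.img
dd if=/tmp/rootfs.img of=$SPARE bs=1M conv=fsync
# verify what was actually written before touching the bootloader
SIZE=$(stat -c %s /tmp/rootfs.img)
cmp -n "$SIZE" /tmp/rootfs.img "$SPARE" || exit 1
# ...then flip the boot flag / bootloader environment to point at $SPARE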
@Anders' answer is complete, exhaustive, and very good. The only thing I can add is a few points to think about:
Does your application have an internet connection, a USB port, or an SD card slot with room to store a complete new rootfs? Working with embedded Linux is not like writing a 128K firmware image onto a Cortex-M3.
Does your end user have the ability to carry out the update?
Is your application installed somewhere physically accessible, or can it only be reached remotely?
As for the time needed to develop a complete, robust, and stable solution: that is not a simple question, but note that the update mechanism is a key part of the application and affects how the market perceives it, especially in the early days and months after the first deployment, when it is common to ship updates to fix small and large teething bugs.

Clone Debian/Ubuntu installation [closed]

Is there an easy way of cloning an entire installed Debian/Ubuntu system?
I want an identical installation in terms of installed packages and, as far as possible, settings.
I've looked into the options of aptitude, apt-get, and synaptic but have found nothing.
How to mirror apt-get installs.
Primary System
dpkg --get-selections > installed-software
scp installed-software $targetsystem:.
Target System
dpkg --set-selections < installed-software
dselect
done.
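If you prefer to avoid the interactive dselect step, the same selections can be applied non-interactively with apt-get's dselect-upgrade target (a sketch; it assumes the target system's package lists are up to date):
sudo apt-get update
sudo dpkg --set-selections < installed-software
sudo apt-get dselect-upgrade -y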
+1 to this post
This guide should answer your direct question.
But I would recommend rsync and simply cloning the entire root filesystem. It's only expensive the first time.
You can also create your own package repository and have all your machines run their daily updates from your repository.
Supposing you want to install Ubuntu on multiple identical systems, you could try the Automatic Install feature.
You can use rsync for that, and there is an interesting thread about it on the Ubuntu Forums.
There is rsync, which lets you synchronise files between installations. So you could just rsync your entire distro, or at least the directories that contain the programs and the configuration files.
Also, I don't know if this is what you are asking, but you could turn your existing install into an ISO image, which would allow you to install it elsewhere, thus having a duplicate.
Hope that helps
If the drives and systems are identical, you might consider using dd to copy the source machine to the target.
The only changes that would need to be made on booting the new machine would be to change the hostname.
Once the machine has been duplicated, go with what the other answers have suggested, and look at rsync. You won't want to rsync everything, though: system log files etc should be left alone.
Also, depending on how often "changes" are made to either system (from bookmarks to downloaded ISOs), you may need to run rsync in daemon mode, and have it update nearly constantly.
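For the rsync-based cloning the other answers mention, the invocation typically looks something like this (a sketch; the target host is a placeholder, pseudo-filesystems and volatile data are excluded, and it is safest run from a live environment):
rsync -aAXH --numeric-ids \
    --exclude=/proc/* --exclude=/sys/* --exclude=/dev/* \
    --exclude=/run/* --exclude=/tmp/* --exclude=/var/log/* \
    / root@target:/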
SystemImager
FAI
We have systemimager working great with RHEL and CentOS. Haven't tried it on Debian.
The trick linked by Luka works great with Debian though.
Well, it all depends on scale and how often you want to use it. SystemImager is basically rsync on steroids: it has some scripts that make creating images easy and lets you manage network settings, etc. It can easily be used where you need to create a farm of web servers or mail servers with only small differences between installations, where you can boot one blank system over the network and have it completely installed. This has the advantage that it's almost completely automated: a script learns your partitioning layout and automatically applies it.
When you only need one copy of a system, keep it simple: boot from a live CD, create your partitioning, copy over the network using rsync, install your bootloader, and everything will be fine.
