I launched an EC2 Spot Instance and unchecked the "Delete On Termination" option for the EBS root volume. I chose the Ubuntu 14.04 64-bit HVM AMI.
At some point the instance got terminated because the Spot price exceeded my maximum price, and the EBS volume stayed behind as intended. When the Spot Instance is eventually relaunched, it creates a brand-new EBS root volume, while the old EBS root volume is still sitting out there.
Actually I simulated the above events for testing purposes by manually terminating the Spot Instance and launching a new one, but I assume the result would be the same in real usage.
So now, how can I get the old EBS volume re-mounted as the current root volume?
I tried the example from http://linux.die.net/man/8/pivot_root, with a few modifications to get around obvious errors:
# manually attach old EBS to /dev/sdf in the AWS console, then do:
sudo su -
mkdir /new-root
mkdir /new-root/old-root
mount /dev/xvdf1 /new-root        # a volume attached as sdf shows up as xvdf inside the instance
cd /new-root
pivot_root . old-root             # swap the current root filesystem with /new-root
exec chroot . sh <dev/console >dev/console 2>&1
umount /old-root
The terminal hangs at the exec chroot command, and the instance won't accept new ssh connections.
I'd really like to get this working, as it provides a convenient way to save money relative to On-Demand prices for development, test, and batch-oriented EC2 instances, without having to re-architect the whole application deployment and without the commitment of a Reserved Instance.
What am I missing?
The answer is to place the pivot_root call inside /sbin/init on the initial (ephemeral) EBS root volume. Doing it from a fully booted, interactive system hangs because running processes (including the ssh session itself) still hold the old root open; done as the very first thing at boot, before anything else starts, it works.
Here are some scripts that automate the process of launching a new Spot Instance and modifying /sbin/init on the first (ephemeral) EBS volume so that it chain-loads the system from a second (persistent) EBS volume:
https://github.com/atramos/ec2-spotter
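For the curious, the core of the trick looks roughly like this. This is only a sketch of the chain-loading idea, not the actual ec2-spotter code; the device name (/dev/xvdf1) and mount point are assumptions.
#!/bin/sh
# hypothetical /sbin/init replacement on the ephemeral root volume (sketch only)
mkdir -p /new-root
mount /dev/xvdf1 /new-root        # persistent EBS root, attached at launch (device name assumed)
cd /new-root
mkdir -p old-root
pivot_root . old-root             # swap roots while nothing else is running yet
exec chroot . /sbin/init "$@" <dev/console >dev/console 2>&1    # hand off to the real init on the persistent volume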
I want to reduce the size of an EBS volume from 250 GB to 100 GB. I know it can't be done directly from the console, so I have tried a few guides such as "Decrease the size of EBS volume in your EC2 instance" and "Amazon EBS volumes: How to Shrink 'em Down to Size", but they haven't helped me. Maybe they work for plain data, but in my case I have to do it for /opt, which holds installations and configuration.
Please let me know if it is possible to do, and how.
Mount the new volume at /opt2, copy all the files from /opt with rsync (or something that preserves the links, permissions, etc.), update your /etc/fstab, and reboot.
If all is good, unmount and detach the old volume from the EC2 instance.
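A rough sketch of those steps, assuming the new 100 GB volume shows up as /dev/xvdf (the device name and filesystem type are assumptions):
sudo mkfs -t ext4 /dev/xvdf            # format the new, smaller volume
sudo mkdir /opt2
sudo mount /dev/xvdf /opt2
sudo rsync -aHAXv /opt/ /opt2/         # copy everything, preserving links, permissions, ACLs and xattrs
# then edit /etc/fstab so /opt points at the new volume, and reboot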
Hi Laurel and Jayesh, basically you have to follow these instructions:
First, shut down the instance (MyInstance) to prevent any problems.
Create a new 6 GiB EBS volume.
Mount the new volume (myVolume).
Copy data from the old volume to the new volume (myVolume).
Use rsync to copy from the old volume to the new volume (myVolume): sudo rsync -axv / /mnt/myVolume/.
Wait until it's finished.
Install GRUB.
Install GRUB on myVolume (the exact command was omitted here; see the hedged sketch after this answer).
Log out from the instance and shut it down.
Detach old volume and attach the new volume (myVolume) to /dev/xvda
Start the instance; it is now running with a 6 GiB EBS root volume.
Reference: https://www.svastikkka.com/2021/04/create-custom-ami-with-default-6gib.html
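Since the GRUB command itself was omitted above, here is a hedged sketch of what it could look like (the device name and mount point are assumptions; the linked article has the exact steps):
sudo grub-install --boot-directory=/mnt/myVolume/boot /dev/xvdf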
I have an app that has been successfully running on EC2 for a few years. The system is Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-1032-aws x86_64).
It's a small and simple app with low traffic. I had never made any changes to the server itself until today. I wanted to deal with the "X packages can be updated." message, so I ran:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
Then I ran sudo reboot. Once rebooted, the app runs perfectly. I can access it as normal via the public URL and look at things, including db (postgresql directly on the server) data with no issues or surprises.
But when I tried to ssh into the machine again, I couldn't. I run ssh -i "key.pem" -vvv ubuntu@<IP> and get:
debug1: Connecting to <IP> [<IP>] port 22.
debug1: connect to address <IP> port 22: Operation timed out
ssh: connect to host <IP> port 22: Operation timed out
No changes were made to security groups. Also, it's such a small project that I never set up EC2 Instance Connect or anything like that.
I had the thought of launching a new EC2 and just switching the EBS volumes, thinking EBS would bring the app and data, while the instance itself would have configs and permissions.
I do not understand much about this (clearly), and was surprised to learn that the EBS volume itself seems to be the problem and holds all the cards.
I can switch EBS volumes back and forth between the two EC2 instances. At any given time, whichever one has the newest (and therefore blank) EBS volume attached at /dev/sda1 allows SSH but surely does not run the app. And, vice-versa: Whichever EC2 instance has the original EBS volume runs the app perfectly but keeps me locked out of ssh.
In this scenario, the question is: How can I make one of the EC2 instances bypass this EBS issue and make its own decision about allowing me to connect with ssh?
Or: What is the obvious and/or silly thing I'm missing here?
PS: I do have elastic IP going for all of this, so it doesn't seem like DNS would be the source of the problem.
With John Rotenstein's help, I was able to resolve this.
Here are the core steps:
Phase 1 - Attach and mount additional volume
Per John's comment, it's possible to boot the instance from the "good" volume and then attach and mount the "bad" volume after. This allowed me to explore files and look for issues.
AWS panel
Attach the volume to the EC2 instance as the root volume by using /dev/sda1 for the device name
Start the EC2 instance
Attach the other volume after instance has booted
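For reference, the same attachment can also be done with the AWS CLI instead of the console (the volume and instance IDs below are placeholders):
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sda1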
Terminal
SSH into the server
See root volume information:
~$ df -hT /dev/xvda1
Check for mounted volumes:
~$ lsblk
See additional volume information:
~$ df -hT /dev/xvdf1
Switch to root user:
~$ sudo su -
Make a directory to be the mount path:
~$ mkdir /addvol
Mount the additional volume to the path:
~$ mount /dev/xvdf1 /addvol
Check additional volume contents:
~$ ls -la /addvol/home/ubuntu
Now I could see and navigate the additional volume's contents, finding config files, looking at authorized_keys, file permissions, etc.
This article from AWS helped a lot to get me here.
After finally getting to this point, I could not find any problems with the keys, permissions, etc. John pointed me to this article dealing with Ubuntu's firewall (ufw).
Phase 2 - Dealing with the firewall
I ran some commands from the article and tried to understand how they worked.
Once I grasped it a little, I decided to use an existing reboot script I have on the volume to ensure the firewall was ok with SSH connections.
I updated my existing custom reboot script, adding the following lines:
sudo ufw allow ssh
sudo ufw allow 22
sudo ufw disable
sudo ufw --force enable
Basically it allows ssh twice, once by name and once by port; I'm a newbie at this stuff and just went for overkill.
Then it disables and re-enables the firewall to ensure it runs with these new rules configured.
Because sudo ufw enable requires an interaction, I chose to use sudo ufw --force enable.
Phase 3 - Testing and using it!
After the script update, I exited the server.
AWS panel:
Stop the EC2 instance
Detach one volume from the instance
Detach the other volume from the instance
Reattach the "bad" volume, this time as root
Start the EC2 instance
Terminal:
SSH into the instance - Voila!
NOTE: Before truly working 100%, my computer complained about known_hosts. The server key must have changed with the update/upgrade and/or after all of the volume changes. I don't think having to confirm hosts is a big deal, so I usually just clear out the contents of my local .ssh/known_hosts file. If you prefer to be specific, you can find the server's entry in there and delete only the relevant lines.
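Instead of wiping the whole file, a standard OpenSSH command removes just the stale entry for one host:
ssh-keygen -R <IP>        # deletes that host's old key from ~/.ssh/known_hosts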
I have two ubuntu server VMs running on the same proxmox server. Both are running docker. I want to migrate one container from one of the VMs to the other. For that I need to attach a USB drive to the target VM which will be mounted inside the docker container. I mounted the drive exactly the same way in both VMs (the old one is shut down of course) and the mounting works, I can access the directory and see the contents of the drive. Now I want to run the container with the exact same command as I used on the old vm which looks something like this:
docker run -d --restart unless-stopped --stop-timeout 300 -p 8081:8081 --mount type=bind,source="/data",destination=/internal_data
This works in the old VM, but on the new one it says:
docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: /data.
See 'docker run --help'.
I don't understand what's wrong. /data exists and is owned by root, the same as it is on the old VM. In fact, it's the same drive with the same contents. If I shut down the new VM and boot up the old one with the drive mounted in exactly the same way, it just works.
What can cause this error, if the source path does in fact exist?
I fixed it by mounting the drive at a mount point under /mnt/.
I changed nothing else, and on the other VM it works when mounting directly under the root with the same user and permissions. No idea why that fixed it.
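For reference, the working setup looked roughly like this (the device name, mount path, and IMAGE are placeholders; the image name was omitted from the command above):
sudo mkdir -p /mnt/data
sudo mount /dev/sdb1 /mnt/data        # device name is an assumption
docker run -d --restart unless-stopped --stop-timeout 300 -p 8081:8081 --mount type=bind,source="/mnt/data",destination=/internal_data IMAGE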
I want to mount my usb drive into a running docker instance for manually backup of some files.
I know of the -v feature of docker run, but this creates a new container.
Note: it's a nextcloudpi container.
You can only change a very limited set of container options after a container starts up. Options like environment variables and container mounts can only be set during the initial docker run or docker create. If you want to change these, you need to stop and delete your existing container, and create a new one with the new mount option.
If there's data that you think you need to keep or back up, it should live in some sort of volume mount anyways. Delete and restart your container and use a -v option to mount a volume on where the data is kept. The Docker documentation has an example using named volumes with separate backup and restore containers; or you can directly use a host directory and your normal backup solution there. (Deleting and recreating a container as I suggested in the first paragraph is extremely routine, and this shouldn't involve explicit "backup" and "restore" steps.)
If you have data that's there right now that you can't afford to lose, you can docker cp it out of the container before setting up a more robust storage scheme.
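As a hedged sketch of that workflow (CONTAINER, IMAGE, and the paths are placeholders):
docker cp CONTAINER:/path/to/data ./backup          # rescue the existing data first
docker stop CONTAINER && docker rm CONTAINER        # remove the old container
docker run -d --name CONTAINER -v /mnt/usb:/path/to/data IMAGE   # recreate it with the mount you need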
As David Maze mentioned, it's almost impossible to change the volume location of an existing container by using normal docker commands.
I found an alternative way that works for me. The main idea is to convert the existing container into a new Docker image and start a new container on top of it. Hope it works for you too.
# Create a new image from the container
docker commit CONTAINERID NEWIMAGENAME
# Create a new container on the top of the new image
docker run -v HOSTLOCATION:CONTAINERLOCATION NEWIMAGENAME
I know the question is from May, but for future searchers:
Create a mounting point on the host filesystem:
sudo mkdir /mnt/usb-drive
Run the docker container using the --mount option and set the "bind propagation" to "shared":
docker run --name mynextcloudpi -it --mount type=bind,source=/mnt/usb-drive,target=/mnt/disk,bind-propagation=shared nextcloudpi
Now you can mount your USB drive to the /mnt/usb-drive directory and it will be mounted to the /mnt/disk location inside the running container.
E.g: sudo mount /dev/sda1 /mnt/usb-drive
Change the /dev/sda1, of course.
More info about bind-propagation: https://docs.docker.com/storage/bind-mounts/#configure-bind-propagation
Running docker on the Mac, with a centos image, I see mounted volumes taking on the ownership of the centos (internal) user, while on the filesystem the ownership is mine (mdf:mdf).
Using the same centos image on RHEL 7, I see the volumes mounted, but inside, in centos, the home dir and the files all show my uid (1055).
I can do a recursive chown to, e.g., insideguy:insideguy, and all looks right. But back in the host filesystem, the ownership has changed to some other person in the user registry who happens to have the same uid (1001) that useradd assigned to insideguy.
Is there some fundamental limitation in docker for Linux that makes this happen?
As another side effect, in our cluster one cannot chown on a mounted filesystem, even with sudo privileges; only on a local filesystem. So the desire to keep the docker home directories in, e.g., ~/dockerhome, fails because docker seems to be trying (and failing) to perform some chowns (not described in the Dockerfile or the start script, so assumed to be part of the --volume treatment). Placed in /var or /opt with appropriate ownerships, all goes well.
Any idea what's different between the two docker hosts?
Specifics: OSX 10.11.6; docker v1.12.1 on mac, v1.12.2 on RHEL 7; centos 7
There is a fundamental limitation to Docker on OS X that makes this happen: that is the fact that Docker only runs on Linux.
When running Docker on other platforms, this requires first setting up a Linux VM (historically through VirtualBox, although more recently other options are available) and then running Docker inside that VM.
Because Docker is running natively on Linux, it is sharing filesystems directly with the host when you use something like docker run -v /host/path:/container/path. So if inside the container you run chown userA somefile and user A has userid 1001, and on your host that user id belongs to userB, then of course when you look at the files on the host they will appear to be owned by userB. There's no magic here; this is just how Unix file permissions work. You get the same behavior if, say, you were to move a disk or NFS filesystem from one host to another that had conflicting entries in their local /etc/passwd files.
Most Docker containers are running as root (or at least, not as your local user). This means that any files created by a process in Docker will typically not be owned by you, which can of course cause problems if you are trying to access a filesystem that does not permit this sort of access. Your choices when using Docker are pretty much the same choices you have when not using Docker: either ensure that you are running containers as your own user id -- which may not be possible, since many images are built assuming they will be running as root -- or arrange to store files somewhere else.
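For example, on a Linux host one way to keep ownership consistent is to run the container as your own uid/gid (IMAGE is a placeholder, and this only works if the image doesn't require root):
docker run -v /host/path:/container/path --user "$(id -u):$(id -g)" IMAGE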
This is one of the reasons why many people discourage the use of host volume mounts, because it can lead to this sort of confusion (and also because when interacting with a remote Docker API, the remote Docker daemon doesn't have any access to your local host filesystem).
With Docker for Mac, there is some magic file sharing that goes on to expose your local filesystem to the Linux VM (for example, with VirtualBox, Docker may use the shared folders feature). This translation layer is probably the cause of the behavior you've noted on OS X with respect to file ownership.