I want to reduce the size of an EBS volume from 250GB to 100GB. I know it can't be done directly from the console. That's why I have tried a few links like Decrease the size of EBS volume in your EC2 instance and Amazon EBS volumes: How to Shrink 'em Down to Size, which haven't helped me. Maybe those approaches work for plain data, but in my case I have to do it on /opt, which has installations and configuration on it.
Please let me know if it is possible to do, and how.
Mount a new volume at /opt2, copy all the files from /opt with rsync (or something similar that preserves links, permissions, etc.), update your /etc/fstab, and reboot.
If all is good, unmount and detach the old volume from the EC2 instance.
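A minimal sketch of those steps, assuming the new 100GB volume is attached as /dev/xvdf and formatted as ext4 (the device name and filesystem are assumptions; adjust to your setup):
# format and mount the new volume at a temporary path
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir /opt2
sudo mount /dev/xvdf /opt2
# copy /opt, preserving permissions, links, and ownership
sudo rsync -aHAXv /opt/ /opt2/
# update /etc/fstab so the new volume is mounted at /opt, e.g.:
#   /dev/xvdf  /opt  ext4  defaults,nofail  0 2
# then reboot and verify /opt before detaching the old volume
sudo reboot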
Hi Laurel and Jayesh, basically you have to follow these instructions:
First, shut down the instance (MyInstance) to prevent any problems.
Create a new 6 GiB EBS volume.
Mount the new volume (myVolume).
Copy data from the old volume to the new volume (myVolume):
Use rsync to copy from the old volume to the new volume (myVolume): sudo rsync -axv / /mnt/myVolume/
Wait until it’s finished. ✋
Install GRUB
Install GRUB on myVolume using the grub-install command (see the sketch after these steps).
Log out from the instance and shut it down.
Detach the old volume and attach the new volume (myVolume) as /dev/xvda.
Start the instance; you'll see it is now running with a 6 GiB EBS volume.
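A hedged sketch of the copy and GRUB steps, assuming the new volume is the device /dev/xvdf and is mounted at /mnt/myVolume (the device name is an assumption; the reference below has the full walkthrough):
# copy the root filesystem to the new volume
sudo rsync -axv / /mnt/myVolume/
# install the GRUB boot loader for the new volume
sudo grub-install --boot-directory=/mnt/myVolume/boot /dev/xvdf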
Reference: https://www.svastikkka.com/2021/04/create-custom-ami-with-default-6gib.html
I have an app that has been successfully running on EC2 for a few years. The system is Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-1032-aws x86_64).
It's a small and simple app with low traffic. I had never made any changes to the server itself until today. I wanted to deal with the "X packages can be updated." message, so I ran:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
Then I ran sudo reboot. Once rebooted, the app runs perfectly. I can access it as normal via the public URL and look at things, including db (postgresql directly on the server) data with no issues or surprises.
But, when I tried to ssh into the machine again, I couldn't. I do ssh -i "key.pem" -vvv ubuntu@<IP> and get:
debug1: Connecting to <IP> [<IP>] port 22.
debug1: connect to address <IP> port 22: Operation timed out
ssh: connect to host <IP> port 22: Operation timed out
No changes were made to security groups. Also, it's such a small project that I never set up EC2 Instance Connect or anything like that.
I had the thought of launching a new EC2 and just switching the EBS volumes, thinking EBS would bring the app and data, while the instance itself would have configs and permissions.
I do not understand much about this (clearly), and was surprised to learn that the EBS volume itself seems to be the problem and hold all the cards.
I can switch EBS volumes back and forth between the two EC2 instances. At any given time, whichever one has the newest (and therefore blank) EBS volume attached at /dev/sda1 allows SSH but surely does not run the app. And, vice-versa: Whichever EC2 instance has the original EBS volume runs the app perfectly but keeps me locked out of ssh.
In this scenario, the question is: How can I make one of the EC2 instances bypass this EBS issue and make its own decision about allowing me to connect with ssh?
Or: What is the obvious and/or silly thing I'm missing here?
PS: I do have an Elastic IP set up for all of this, so it doesn't seem like DNS would be the source of the problem.
With John Rotenstein's help, I was able to resolve this.
Here are the core steps:
Phase 1 - Attach and mount additional volume
Per John's comment, it's possible to boot the instance from the "good" volume and then attach and mount the "bad" volume after. This allowed me to explore files and look for issues.
AWS panel
Attach the "good" volume to the EC2 instance as root by using /dev/sda1 for the device name
Start the EC2 instance
Attach the other ("bad") volume after the instance has booted
Terminal
SSH into the server
See root volume information:
~$ df -hT /dev/xvda1
Check for mounted volumes:
~$ lsblk
See additional volume information:
~$ df -hT /dev/xvdf1
Switch to root user:
~$ sudo su -
Make a directory to be the mount path:
~$ mkdir /addvol
Mount the additional volume to the path:
~$ mount /dev/xvdf1 /addvol
Check additional volume contents:
~$ ls -la /addvol/home/ubuntu
Now I could see and navigate the additional volume's contents, finding config files, looking at authorized_keys, file permissions, etc.
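For example, the kinds of checks I mean looked something like this (paths assume the default ubuntu user; adjust as needed):
~$ ls -la /addvol/home/ubuntu/.ssh
~$ cat /addvol/home/ubuntu/.ssh/authorized_keys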
This article from AWS helped a lot to get me here.
After finally getting to this point, I could not find any problems with the keys, or permissions, etc. John pointed me to this article dealing with Ubuntu's firewall (ufw).
Phase 2 - Dealing with the firewall
I ran some commands from the article and tried to understand how they worked.
Once I grasped it a little, I decided to use an existing reboot script I have on the volume to ensure the firewall was ok with SSH connections.
I updated my existing custom reboot script, adding the following lines:
sudo ufw allow ssh
sudo ufw allow 22
sudo ufw disable
sudo ufw --force enable
Basically it allows SSH twice, once by service name and then by port number. I'm a newbie on this stuff and just went for overkill.
Then it disables and re-enables the firewall to ensure it runs with these new rules configured.
Because sudo ufw enable requires an interaction, I chose to use sudo ufw --force enable.
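(Not part of the original script, but once you can get a shell again, a quick sanity check that the rules took effect is:)
sudo ufw status verbose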
Phase 3 - Testing and using it!
After the script update, I exited the server.
AWS panel:
Stop the EC2 instance
Detach one volume from the instance
Detach the other volume from the instance
Reattach the "bad" volume, this time as root
Start the EC2 instance
Terminal:
SSH into the instance - Voila!
NOTE: Before truly working 100%, my computer complained about the known_hosts entry. The server's host key must have changed on the update/upgrade and/or after all of the volume changes. I don't think having to confirm hosts is a big deal, so I usually just clear all of the contents of my local .ssh/known_hosts file. If you prefer to be specific, you can find the server's entry in there and delete only the relevant lines.
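For the more targeted approach, ssh-keygen can remove just the offending entry (replace <IP> with your server's address or hostname):
ssh-keygen -R <IP>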
I have two ubuntu server VMs running on the same proxmox server. Both are running docker. I want to migrate one container from one of the VMs to the other. For that I need to attach a USB drive to the target VM which will be mounted inside the docker container. I mounted the drive exactly the same way in both VMs (the old one is shut down of course) and the mounting works, I can access the directory and see the contents of the drive. Now I want to run the container with the exact same command as I used on the old vm which looks something like this:
docker run -d --restart unless-stopped --stop-timeout 300 -p 8081:8081 --mount type=bind,source="/data",destination=/internal_data
This works in the old VM, but on the new one it says:
docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: /data.
See 'docker run --help'.
I don't understand what's wrong. /data exists and is owned by root, the same as it is on the old VM. In fact, it's the same drive with the same contents. If I shut down the new VM and boot up the old one with the drive mounted in exactly the same way, it just works.
What can cause this error, if the source path does in fact exist?
I fixed it by mounting the drive at a mount point under /mnt/ instead.
I changed nothing else, and on the other VM it works when mounting at the root with the same user and permissions. I have no idea why that fixed it.
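A minimal sketch of the workaround, assuming the drive is /dev/sdb1 and keeping the bind destination from the question (the device name and the IMAGE placeholder are assumptions):
# mount the drive under /mnt instead of directly at /data
sudo mkdir -p /mnt/data
sudo mount /dev/sdb1 /mnt/data
# use the new path as the bind source
docker run -d --restart unless-stopped --stop-timeout 300 -p 8081:8081 \
  --mount type=bind,source=/mnt/data,destination=/internal_data IMAGE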
I want to mount my usb drive into a running docker instance for manually backup of some files.
I know of the -v feature of docker run, but this creates a new container.
Note: it's a nextcloudpi container.
You can only change a very limited set of container options after a container starts up. Options like environment variables and container mounts can only be set during the initial docker run or docker create. If you want to change these, you need to stop and delete your existing container, and create a new one with the new mount option.
If there's data that you think you need to keep or back up, it should live in some sort of volume mount anyways. Delete and restart your container and use a -v option to mount a volume on where the data is kept. The Docker documentation has an example using named volumes with separate backup and restore containers; or you can directly use a host directory and your normal backup solution there. (Deleting and recreating a container as I suggested in the first paragraph is extremely routine, and this shouldn't involve explicit "backup" and "restore" steps.)
If you have data that's there right now that you can't afford to lose, you can docker cp it out of the container before setting up a more robust storage scheme.
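As a rough illustration of that workflow (the container name, image name, and paths here are made up, not from the question):
# copy existing data out of the container
docker cp mycontainer:/data ./data-backup
# recreate the container with a host-directory mount
docker stop mycontainer
docker rm mycontainer
docker run -d --name mycontainer -v "$PWD/data-backup":/data myimage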
As David Maze mentioned, it's almost impossible to change the volume location of an existing container by using normal docker commands.
I found an alternative way that works for me. The main idea is to commit the existing container to a new Docker image and start a new container on top of it. Hope it works for you too.
# Create a new image from the container
docker commit CONTAINERID NEWIMAGENAME
# Create a new container on top of the new image
docker run -v HOSTLOCATION:CONTAINERLOCATION NEWIMAGENAME
I know the question is from May, but for future searchers:
Create a mounting point on the host filesystem:
sudo mkdir /mnt/usb-drive
Run the docker container using the --mount option and set the "bind propagation" to "shared":
docker run --name mynextcloudpi -it --mount type=bind,source=/mnt/usb-drive,target=/mnt/disk,bind-propagation=shared nextcloudpi
Now you can mount your USB drive to the /mnt/usb-drive directory and it will be mounted to the /mnt/disk location inside the running container.
E.g.: sudo mount /dev/sda1 /mnt/usb-drive
Change /dev/sda1 to match your drive, of course.
More info about bind-propagation: https://docs.docker.com/storage/bind-mounts/#configure-bind-propagation
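To confirm the propagation worked, you can list the target path from inside the running container (container name as in the example above):
docker exec mynextcloudpi ls /mnt/disk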
I have installed the official MongoDB docker image in a VM on AWS EC2, and the database has already data on it. If I stop the VM (to save expenses overnight), will I lose all the data contained in the database? How can I make it persistent in those scenarios?
There are multiple options to achieve this but the 2 most common ways are:
Create a directory on your host to mount the data
Create a docker volume to mount the data
1) Create a data directory on a suitable volume on your host system, e.g. /my/own/datadir. Start your mongo container like this:
$ docker run --name some-mongo -v /my/own/datadir:/data/db -d mongo:tag
The -v /my/own/datadir:/data/db part of the command mounts the /my/own/datadir directory from the underlying host system as /data/db inside the container, where MongoDB by default will write its data files.
Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to the new data directory so that the container will be allowed to access it:
$ chcon -Rt svirt_sandbox_file_t /my/own/datadir
The source of this is the official documentation of the image.
2) Another possibility is to use a docker volume.
$ docker volume create my-volume
This will create a docker volume in the folder /var/lib/docker/volumes/my-volume. Now you can start your container with:
docker run --name some-mongo -v my-volume:/data/db -d mongo:tag
All the data will be stored in my-volume, i.e. in the folder /var/lib/docker/volumes/my-volume. So even when you delete your container and create a new mongo container linked with this volume, your data will be loaded into the new container.
You can also use the --restart=always option when you perform your initial docker run command. This means that your container will automatically restart after a reboot of your VM. Since you've persisted your data too, there will be no difference in your DB before and after the reboot.
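Putting the two together (named volume plus restart policy) could look like this, reusing the names from above:
$ docker run --name some-mongo --restart=always -v my-volume:/data/db -d mongo:tag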
I launched an EC2 Spot Instance and unchecked the "Delete On Termination" option for the EBS root volume. I chose the Ubuntu 14.04 64-bit HVM AMI.
At some point the instance got terminated due to max price and the EBS volume stayed behind as intended. Now eventually when the Spot Instance is relaunched it creates a brand-new EBS root volume. The old EBS root volume is still sitting out there.
Actually I simulated the above events for testing purposes by manually terminating the Spot Instance and launching a new one, but I assume the result would be the same in real usage.
So now, how can I get the old EBS volume re-mounted as the current root volume?
I tried the example from http://linux.die.net/man/8/pivot_root, with a few modifications to get around obvious errors:
# manually attach old EBS to /dev/sdf in the AWS console, then do:
sudo su -
mkdir /new-root
mkdir /new-root/old-root
mount /dev/xvdf1 /new-root
cd /new-root
pivot_root . old-root
exec chroot . sh <dev/console >dev/console 2>&1
umount /old-root
The terminal hangs at the exec chroot command, and the instance won't accept new ssh connections.
I'd really like to get this working, as it provides a convenient mechanism to save money off the On Demand prices for development, test, and batch-oriented EC2 instances without having to re-architect the whole application deployment, and without the commitment of a Reserved Instance.
What am I missing?
The answer is to place the pivot_root call inside of /sbin/init on the initial (ephemeral) EBS root volume.
Here are some scripts that automate the process of launching a new Spot Instance and modifying the /sbin/init on the 1st (ephemeral) EBS volume to chain-load the system from a 2nd (persistent) EBS volume:
https://github.com/atramos/ec2-spotter
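To illustrate the idea only (this is a rough sketch of a chain-loading /sbin/init, not the actual ec2-spotter script, and the persistent volume's device name /dev/xvdf1 is an assumption):
#!/bin/sh
# Sketch: replacement /sbin/init on the ephemeral root volume.
# Mounts the persistent volume, pivots onto it, then hands off to its real init.
mkdir -p /new-root
mount /dev/xvdf1 /new-root
mkdir -p /new-root/old-root
cd /new-root
pivot_root . old-root
exec chroot . /sbin/init "$@" <dev/console >dev/console 2>&1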