Moving users from one EC2 machine to another - Linux

I have an Amazon Linux machine where users log in and then connect to other servers (it acts as a bastion server). I have now upgraded my Linux machine to a new instance.
How do I move all the users present on Server1 to Server2?
Things I have tried:
Created snapshots of Server1
Converted them to volumes and attached them to Server2
Please suggest what else I can do to get all the users from Server1.

You should not create snapshots of the boot disk, since it contains the Operating System.
Instead, you should:
Start with the raw Amazon Linux 2 image
Create the new users in the Operating System. See: Add new user accounts with SSH access to an EC2 Linux instance
Copy the /home/USERNAME directories from the old instance to the new instance
This will preserve the users' settings and SSH keys.
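Assuming the old instance is reachable over SSH as server1 (a hypothetical hostname), steps 2 and 3 above might look like this:

```shell
# Sketch only: "server1", "ec2-user" and "alice" are example names;
# adjust to your own hosts and accounts.
NEW_USER=alice

# Create the user on the new instance (run this on Server2)
sudo adduser "$NEW_USER"

# Copy the user's home directory from the old instance; this brings over
# dotfiles, settings and ~/.ssh/authorized_keys
sudo rsync -a --rsync-path="sudo rsync" \
    ec2-user@server1:/home/"$NEW_USER"/ /home/"$NEW_USER"/

# Make sure ownership matches the account just created on Server2
sudo chown -R "$NEW_USER":"$NEW_USER" /home/"$NEW_USER"
```

The rsync step only works if the new instance can reach the old one over SSH; otherwise a snapshot of the data (not the boot) volume can serve as the transfer medium.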

I copied the main files (passwd, group, and shadow) from /etc, rebooted, and it worked.
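That copy is best done selectively rather than wholesale, since system accounts should not be carried over. A minimal, self-contained sketch (run here against a sample file; on a real server you would read /etc/passwd and append the output to Server2's copy, repeating for /etc/group and /etc/shadow):

```shell
# Demo on a sample passwd file. The UID cutoff of 1000 for regular user
# accounts is a common convention; check /etc/login.defs on your distribution.
cat > sample_passwd <<'EOF'
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
alice:x:1001:1001::/home/alice:/bin/bash
bob:x:1002:1002::/home/bob:/bin/bash
EOF

# Keep only regular user accounts (UID >= 1000, excluding "nobody")
awk -F: '$3 >= 1000 && $1 != "nobody"' sample_passwd
```

On the real files, the filtered lines would be appended to Server2's /etc/passwd; vipw and vigr are the safe way to edit these files by hand.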

Related

Attach SAN storage share path to two Linux machines

I have two VMs running on CentOS 7. One is the active server and the other is the passive server.
I created a 200 GB LUN in the SAN as a common share path for both VMs, so that if I upload files on one server, the same files can be seen on the other. It also helps in the failover case of a single VM.
Can someone please share how to set up this method?
You might want to use an NFS server so that you can freely share directories; if one of your NFS clients goes down, you can easily start another and share your files again.
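A minimal NFS setup on CentOS 7, with no shared SAN LUN involved, might look like the following sketch; the export path, client subnet and hostname are assumptions:

```shell
# On the server VM (the active node):
sudo yum install -y nfs-utils
sudo mkdir -p /srv/share
echo '/srv/share 192.168.1.0/24(rw,sync,root_squash)' | sudo tee -a /etc/exports
sudo systemctl enable --now nfs-server
sudo exportfs -ra

# On the client VM (the passive node):
sudo yum install -y nfs-utils
sudo mkdir -p /mnt/share
sudo mount -t nfs active-node:/srv/share /mnt/share
```

Note that plain NFS makes the server a single point of failure; for the failover case described in the question, the export itself would also need to move to the surviving node.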

How to access a file which is on another system using Linux

How can I access a file that is stored on a different system, using Linux commands?
To share files between two systems on Linux, you can use the following methods:
Transferring Files with NFS
Sharing Files with Samba
NFS implementation
To configure the server, proceed as follows:
Prepare the system:
Open a shell, log in as root, and grant write permissions to all users:
mkdir /srv/nfs
chgrp users /srv/nfs
chmod g+w /srv/nfs
Make sure that your user name and user ID are known on the client as well as on the server. Refer to Chapter 8, Managing Users with YaST, for detailed instructions on how to create and manage user accounts.
Prepare the NFS server:
Start YaST as root.
Select Network Services+NFS Server (this module is not installed by default. If it is missing in YaST, install the package yast2-nfs-server).
Enable NFS services with Start.
Open the appropriate firewall port with Open Port in Firewall if you are using a firewall.
Export the directories:
Click Add directory and select /srv/nfs.
Set the export options to:
rw,root_squash,async
Repeat these steps, if you need to export more than one directory.
Apply your settings and leave YaST. Your NFS server is ready to use.
To manually start the NFS server, enter rcnfsserver start as root. To stop the server, enter rcnfsserver stop. By default, YaST takes care of starting this service at boot time.
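For reference, the export configured through YaST above corresponds to a line like this in /etc/exports (the "*" client wildcard is an assumption; restrict it to a host name or subnet in production):

```shell
# /etc/exports entry matching the YaST export options above.
# "*" allows any client to mount; tighten this outside a lab setup.
/srv/nfs *(rw,root_squash,async)
```

After editing the file by hand, run exportfs -ra (or restart the server with rcnfsserver restart) so the kernel picks up the change.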
To configure the client, proceed as follows:
Prepare the NFS client:
Start YaST as root.
Select Network Services+NFS Client.
Activate Open Port in Firewall if using a firewall.
Import the remote file system:
Click Add.
Enter the name or IP address of the NFS server or click Choose to automatically scan the network for NFS servers.
Enter the name of your remote file system or automatically choose it with Select.
Enter an appropriate mount point, for example /mnt. If you repeat this step with another exported file system, make sure you choose another mount point than /mnt.
Repeat these steps if you need to import more than one external directory.
Apply your settings and leave YaST. Your NFS client is ready to use.
To start the NFS client manually, enter rcnfs start.
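The client steps above can also be done by hand, without YaST; the hostname and mount point here are examples:

```shell
# Manual equivalent of the YaST NFS client setup
mkdir -p /mnt/nfs
mount -t nfs nfs-server.example.com:/srv/nfs /mnt/nfs

# To mount automatically at boot, add a line like this to /etc/fstab:
# nfs-server.example.com:/srv/nfs  /mnt/nfs  nfs  defaults  0 0
```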
For more details you can refer to the following link:
http://doc.opensuse.org/documentation/html/openSUSE_113/opensuse-reference/cha.filetrans.html#sec.filetrans.share

Backup and Decommission Instance stores in AWS

I have inherited some instance store-backed Linux AMIs that need to be archived and terminated. We run a Windows & VMware environment, so I have little experience with Linux & AWS.
I have tried using the Windows EC2 command line tools to export to a vhdk disk image, but receive an error stating that the instance must be EBS-backed to do so.
What's the easiest way to get a complete backup? Keep in mind that we have no plans to actually use the instances again, this is just for archival purposes.
Assuming you have running instance-store instances (and not just AMIs, which would mean you already have a backup), you can still create an AMI. It's not simple, and may not be worth the effort if you never plan to re-launch the instances, but the following page gives you a couple of options:
(1) create an instance-store backed AMI from a running instance
(2) subsequently create an EBS-backed AMI from the instance-store AMI
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-instance-store.html
You can also do a sync of the filesystem directly to S3 or attach an EBS volume and copy the files there.
In the end, I used the dd command in combination with ssh to copy images of each relevant drive to offline storage. Here is a summary of the process:
SSH into the remote machine and run df -aTh to figure out which drives to back up
Log out of ssh
For each desired disk, run the following ssh command to create and download the disk image (changing the if= path to the desired disk): ssh root@[ipaddress] "dd if=/dev/sda1 | gzip -1 -" | dd of=outputfile.gz
Wait for the image to fully download. You may want to examine your network usage and make sure that an appropriate amount of incoming traffic occurs.
Double-check the disk images for completeness and mountability
Terminate the Instance
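Step 5 above can be partly automated. A self-contained sketch of the integrity check, using a small locally generated file in place of a real downloaded image:

```shell
# Simulate a small "disk image" (in practice outputfile.gz is the image
# downloaded via the ssh + dd pipeline above)
dd if=/dev/zero of=demo.img bs=1K count=64 2>/dev/null
gzip -c demo.img > outputfile.gz

# A truncated or corrupted download fails this check
gzip -t outputfile.gz && echo "image OK"

# For a real image, also loop-mount it read-only to spot-check contents:
#   gunzip -k outputfile.gz
#   sudo mount -o loop,ro outputfile /mnt/archive-check
```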

How to create an FTP user with access to /var/www on an Amazon EC2 Ubuntu AMI

I have set up an EC2 Ubuntu server successfully and am able to connect to the instance via FTP as well. But that is the default "ubuntu" user, which has control over the whole system. So I want to create a new user, give them access to the /var/www folder, and make sure they cannot see anything else. Even sharing a private key is dangerous. I have been googling for the last day but have not been able to find a solution. I don't have much Linux server knowledge, but this is what I tried:
I created one user with a password via the terminal, set that new user's home directory to /var/www, and tried to connect via FileZilla. But I could not connect and get the error ECONNREFUSED - Connection refused by server.
I don't know what you did to set up FTP, but you probably want to install and configure vsftpd. There is a very good summary of the steps here.
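For reference, a common vsftpd setup for this use case looks roughly like the following; the user name is an example, and the options shown are typical rather than taken from the linked summary:

```shell
sudo apt-get update && sudo apt-get install -y vsftpd

# Create a user whose home is /var/www ("ftpuser" is an example name).
# vsftpd checks the login shell via PAM, so keep a shell that is listed
# in /etc/shells if the user should be FTP-only but still pass that check.
sudo useradd -d /var/www ftpuser
sudo passwd ftpuser

# Jail local users into their home directory so they cannot browse
# the rest of the filesystem
echo 'chroot_local_user=YES' | sudo tee -a /etc/vsftpd.conf
echo 'write_enable=YES'      | sudo tee -a /etc/vsftpd.conf
sudo systemctl restart vsftpd
```

One caveat: recent vsftpd versions refuse to chroot into a directory that is writable by the user, so you may need to make /var/www root-owned with a writable subdirectory, or set allow_writeable_chroot=YES.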

SVN Server on NFSv3 "Database is locked"

Despite the many topics about this error, I'm still having trouble setting up an SVN server. The server is running on Scientific Linux 6 and the repositories are supposed to be stored via NFSv3 on a SunOS storage server.
I read that mounting with the "nolock" option would solve the problem, but I don't want to do that: a lot of users work on the server at the same time, and I guess removing the locks would create new problems.
SVN is installed and works on local files, but when I try to create a repo on the distant location, the files are created but I get the error "database is locked" and cannot use the repo. I use the FSFS backend, which is supposed to work fine with NFS.
Would anyone have another option for me?
OK, I eventually set up a new share on the NFS server, accessible by my SVN server only, and mounted it there with "nolock". That works, but it's not really the point: I still don't know how to set this up without removing the locks.
An NFS client will normally use the NFS Lock Manager (NLM) to synchronize locking of certain files on the NFS server with other NFS client accessing/locking the same files. The nolock mount option tells the NFS client not to use the NFS Lock Manager but instead to manage the locks locally on the NFS client machine itself. This is useful if you only have 1 NFS client or several NFS clients where each client works on a different area of the exported file system so that there is no lock contention.
It looks like you have the following:
(A) SVN_Client ==> (B) SVN_Server/NFS_Client ==> (C) NFS_Server
Where: server (B) is Scientific Linux 6, providing SVN services to clients and mounting from server (C), the SunOS storage server.
Assuming you have no other machine mounting from the NFS server and providing the same SVN services, the nolock option will work correctly as server(B) will do all the lock management locally. There is no need/requirement to lock centrally on the NFS server.
This is true for NFSv3 which you mentioned in your question.
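Given that layout, the mount on server (B) might look like this; the hostname and paths are assumptions:

```shell
# NFSv3 mount with local lock management on the SVN server (B).
# "nolock" is safe here because (B) is the only NFS client using this export.
mount -t nfs -o vers=3,nolock sunos-storage:/export/svn /srv/svn

# Or persistently, via /etc/fstab:
# sunos-storage:/export/svn  /srv/svn  nfs  vers=3,nolock  0 0
```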
