Amazon AWS EC2 Dashboard and SSH - Saving Text Files to Volumes - linux

As a new user of SSH and the Amazon AWS EC2 Dashboard, I am trying to test whether I can save data onto a volume from one instance, terminate that instance, and then access the data from a second instance by attaching the volume to it.
When I create the first instance, the AMI is "Amazon Linux AMI 2014.03.2 (HVM)" and the family is "general purpose" with EBS storage only. I automatically assign a public IP address to the instance. I configure the root volume so that it does NOT delete on termination.
As soon as the instance is launched, I open PuTTY, set the host name to the instance's public IP address on port 22, and authenticate with a private key that I generated earlier and saved to disk.
Upon signing into the instance, I create a text file by typing the following code:
echo "testing">test.txt
I then confirm that the text "testing" is saved to the file "test.txt":
less test.txt
I see the text "testing", thus confirming that it is saved to the file. (I am assuming at this point that it is saved onto the volume, but I am not entirely sure.)
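One quick way to check which device actually backs that file, assuming the Amazon Linux defaults, is a sketch like this:
# show which filesystem/device backs the current directory (typically /dev/xvda1, the root EBS volume)
df -h .
# list all block devices and where they are mounted
lsblk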
I then proceed to terminate the instance. I launch another one using the same AMI and the same instance type; it receives a different public IP address. In addition to the root volume, I attach the volume that served as the root volume of the previous instance. (Oddly enough, the snapshot IDs of the previous volume and of the new instance's root volume are identical.) I also use the same instance tag, the same security group and the same key pair as before.
I open up PuTTY again, this time using the public IP address of the new instance, but still using the same private key and port as before. Upon logging in, I type:
less test.txt
but I am greeted with this message:
test.txt: No such file or directory
Is there any advice that anyone can offer me regarding this issue? Is it even possible to store a text file on a volume? If so, am I performing this operation incorrectly?

Since the secondary volume has the same filesystem UUID as the new root volume, and Amazon Linux identifies the root filesystem by UUID, there is a chance that the secondary volume was picked up as the root volume. That mix-up in choosing the root volume would explain why the initial attempt to find test.txt failed.
The reboot might have caused the devices to be picked up in a different order, which is why you were then able to find it.
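To illustrate, here is a rough way to check for the UUID clash and to mount the attached volume explicitly; the device name /dev/xvdf1 and the mount point are assumptions and may differ on your instance:
# list block devices and their filesystem UUIDs
lsblk
sudo blkid
# mount the attached (old root) volume somewhere explicit and look for the file
sudo mkdir -p /mnt/oldroot
sudo mount /dev/xvdf1 /mnt/oldroot
# for an XFS filesystem with a duplicate UUID you may need: sudo mount -o nouuid /dev/xvdf1 /mnt/oldroot
ls /mnt/oldroot/home/ec2-user/test.txt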

Related

Kubernetes Persistent Volume does not show the real capacity

I have a persistent volume in my cluster (Azure disk) that contains 8Gi.
I resized it to contain 9Gi, then changed my PV yaml to 9Gi as well (since it is not updated automatically) and everything worked fine.
Then I made a test and changed the yaml of my PV to 1000Gi (expecting to see an error), and received this error from the PVC that claims this PV: "NodeExpand failed to expand the volume : rpc error: code = Internal desc = resize requested for 10, but after resizing volume size was 9"
However, if I type kubectl get pv, it still looks like this PV's capacity is 1000Gi (and of course in Azure it is still 9Gi, since I did not resize it).
Any advice?
As a general rule: you should not have to change anything on your PersistentVolumes.
When you request more space by editing a PersistentVolumeClaim, a controller (either a CSI driver or the in-tree driver/kube-controllers) implements that change against your storage provider (Ceph, AWS, ...).
Once it has finished expanding the backend volume, that same controller updates the corresponding PV. At that point you may (or may not) have to restart the Pods attached to your volume for its filesystem to be grown.
While I'm not certain how to fix the error you saw, one way to avoid errors like it is to refrain from editing PVs directly.
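As a sketch of that workflow (the claim name my-pvc is a placeholder, and it assumes the StorageClass has allowVolumeExpansion enabled): patch the claim, not the PV, and let the controller reconcile the rest.
# request 9Gi on the claim; the controller expands the Azure disk and then updates the PV
kubectl patch pvc my-pvc -p '{"spec":{"resources":{"requests":{"storage":"9Gi"}}}}'
# watch the claim and the volume converge on the new size
kubectl get pvc my-pvc
kubectl get pv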

OpenShift: how to access shared folders

I am using Node.js to write a file to a shared drive. It works fine on my local machine, but after deploying the code below to OpenShift the file is not created, because OpenShift cannot access the folder. Below is my code:
const fs = require("fs"); // fs must be imported

function writeFile(templatePath) { // templatePath holds the data being written
  // Backslashes in a UNC path must be escaped in a JavaScript string, and
  // fs.writeFile expects a full file path (including a file name), not just a folder.
  const sharedFolderPath = "\\\\server\\folder\\output.txt"; // file name is a placeholder
  fs.writeFile(sharedFolderPath, templatePath, (err) => {
    if (err) {
      console.error(err);
    } else {
      console.info("file created successfully");
    }
  });
}
How do I configure a shared folder with credentials in OpenShift so that my code can write the file?
If this is server side, and you are using the OpenShift S2I builder for Node.js, you can only write to directories under /opt/app-root.
If you need data to survive a restart of the container, then you need to use a persistent volume. You can then mount the volume anywhere, as long as it doesn't overlap a directory that contains other files you still need to access. Note that using persistent volumes that are ReadWriteOnce means you will need to switch the deployment strategy from the default of Rolling to Recreate, as in the sketch below.
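For example, something along these lines might work; the names (dc/myapp, the claim name and the mount path) are placeholders, not taken from the question:
# create and mount a persistent volume claim under /opt/app-root
oc set volume dc/myapp --add --name=shared-data \
  --type=persistentVolumeClaim --claim-name=shared-data --claim-size=1Gi \
  --mount-path=/opt/app-root/data
# ReadWriteOnce volumes generally require switching from Rolling to Recreate
oc patch dc/myapp -p '{"spec":{"strategy":{"type":"Recreate"}}}'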
By default, OpenShift runs images with arbitrary user ids, e.g. 1000010000, so if it works locally, but not on OpenShift, it could be that the directory is not writable for that user.
The following is from the OpenShift Guidelines for creating images (emphasis is mine):
Support Arbitrary User IDs
In order to support running containers with volumes mounted in a secure fashion, images should be capable of being run as any arbitrary user ID. When OpenShift mounts volumes for a container, it configures the volume so it can only be written to by a particular user ID, and then runs the image using that same user ID. This ensures the volume is only accessible to the appropriate container, but requires the image be able to run as an arbitrary user ID.
To accomplish this, directories that must be written to by processes in the image should be world-writable. In addition, the processes running in the container must not listen on privileged ports (ports below 1024).
So you might need to add a RUN chmod 777 /server/folder to your Dockerfile to make that directory world-writable.
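If you control the image, the same guidelines suggest making the directory writable for the root group rather than fully world-writable; a rough sketch for the image build (the directory path is an example):
# run during the image build (e.g. in a Dockerfile RUN step)
mkdir -p /opt/app-root/data
chgrp -R 0 /opt/app-root/data
chmod -R g+rwX /opt/app-root/data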

Bacula - Director unable to authenticate with Storage daemon

I'm trying to stay sane while configuring Bacula Server on my virtual CentOS Linux release 7.3.1611 to do a basic local backup job.
I prepared all the configurations I found necessary in the conf-files and prepared the mysql database accordingly.
When I want to start a job (local backup for now) I enter the following commands in bconsole:
*Connecting to Director 127.0.0.1:9101
1000 OK: bacula-dir Version: 5.2.13 (19 February 2013)
Enter a period to cancel a command.
*label
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Automatically selected Storage: File
Enter new Volume name: MyVolume
Defined Pools:
1: Default
2: File
3: Scratch
Select the Pool (1-3): 2
This returns
Connecting to Storage daemon File at 127.0.0.1:9101 ...
Failed to connect to Storage daemon.
Do not forget to mount the drive!!!
You have messages.
where the message is:
12-Sep 12:05 bacula-dir JobId 0: Fatal error: authenticate.c:120 Director unable to authenticate with Storage daemon at "127.0.0.1:9101". Possible causes:
Passwords or names not the same or
Maximum Concurrent Jobs exceeded on the SD or
SD networking messed up (restart daemon).
Please see http://www.bacula.org/en/rel-manual/Bacula_Freque_Asked_Questi.html#SECTION00260000000000000000 for help.
I double and triple checked all the conf files for integrity and names and passwords. I don't know where to further look for the error.
I will gladly post any parts of the conf files but don't want to blow up this question right away if it might not be necessary. Thank you for any hints.
It might help someone who makes the same mistake I did:
After looking through manual page after manual page, I found it was my own mistake. For a reason I don't precisely recall (I guess to troubleshoot another issue earlier), I had set all ports to 9101: for the director, the file daemon and the storage daemon.
So I assume the Bacula components blocked each other's communication on port 9101. After restoring the default ports (9102 and 9103) according to the manual, it worked and I can now back up locally.
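For reference, here is a minimal sketch of where those ports live in the stock configuration; directive names follow the standard sample files, and the names and passwords are placeholders:
# /etc/bacula/bacula-sd.conf -- the storage daemon listens on 9103 by default
Storage {
  Name = bacula-sd
  SDPort = 9103
}
# /etc/bacula/bacula-dir.conf -- the Director's Storage resource must point at that same port
Storage {
  Name = File
  Address = 127.0.0.1
  SDPort = 9103
  Password = "must match the password in bacula-sd.conf"
  Device = FileStorage
  Media Type = File
}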
You also have to add the Director's name from the backup server: edit /etc/bacula/bacula-fd.conf on the remote client, see "List Directors who are permitted to contact this File daemon":
Director {
  Name = BackupServerName-dir
  Password = "use *-dir password from the same file"
}
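After editing the configuration, restarting the daemons and checking the listening ports is a quick sanity test (service names may differ by distribution):
sudo systemctl restart bacula-dir bacula-sd bacula-fd
# director, file daemon and storage daemon should listen on 9101, 9102 and 9103 respectively
sudo ss -tlnp | grep -E ':910[123]'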

`aws s3 ls <bucket-name>` works on local machine, but on EC2 NoSuchBucket error

When I use the command aws s3 ls on both my EC2 machine and my local MacBook, the output is the same: it lists all the buckets in S3. aws configure has the exact same ID, secret key, region, and output format on both.
However, when I actually go to look at the contents of a bucket using the command aws s3 ls <bucket-name>, my local machine correctly lists all the items, while my EC2 responds with:
A client error (NoSuchBucket) occurred when calling the ListObjects operation: The specified bucket does not exist.
The EC2 machine can clearly communicate with the account correctly, but why would it not be able to list bucket contents when my local machine can? I don't see any permissions that would let my machine access it when the EC2 can't.
This isn't a complete answer but a workaround. The output when using --debug, as helloV suggested, showed that the command was using the bucket name with the first 5 characters removed. When I added 5 random characters to the front of the bucket name (like .....bucket-name as opposed to bucket-name), it worked and properly listed the contents. If anyone has any clue as to why this is, I would like to know.
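For anyone reproducing that diagnostic step: the AWS CLI's global --debug flag prints the request the CLI actually builds, so you can see which bucket name ends up in the ListObjects call (bucket-name is a placeholder):
# debug output goes to stderr; grep for the bucket name the CLI really uses
aws s3 ls s3://bucket-name --debug 2>&1 | grep -i bucket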

Importing database for Amazon RDS: ERROR 2006 (HY000) at line 667: MySQL server has gone away

While importing a database into my Amazon RDS instance, I was issued the following error:
ERROR 2006 (HY000) at line 667: MySQL server has gone away
I went ahead and tried changing the interactive_timeout setting to a larger number. However, it only lets me set it for a session, and Amazon doesn't allow it to be set globally.
How do I import a larger database into my Amazon RDS instance?
The documentation gives instructions on how to import large datasets. Typically, the best method is to create flat files and import them into your RDS instance.
I recently completed a migration of a database over 120GB in size from a physical server to RDS. I dumped each table into a flat CSV file, then split the larger files into multiple 1GB parts. I then imported each table into RDS.
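As a rough illustration of that flat-file approach (database, table and path names are made up, and LOAD DATA LOCAL requires local_infile to be enabled on both client and server):
# dump one table to CSV, split it into ~1GB parts on line boundaries, then load each part into RDS
mysqldump --tab=/mnt/dumps --fields-terminated-by=',' mydb mytable
split -C 1G -d /mnt/dumps/mytable.txt /mnt/dumps/mytable.part.
mysql --local-infile=1 -h my-rds-endpoint -u admin -p mydb \
  -e "LOAD DATA LOCAL INFILE '/mnt/dumps/mytable.part.00' INTO TABLE mytable FIELDS TERMINATED BY ','"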
You can simply change your RDS DB settings by using the parameter group settings; most MySQL settings are in there. It will require a restart of the instance, however. The setting you want is max_allowed_packet, and you need to set it not only with the client, but on the server itself.
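For example, something along these lines; the parameter group name and endpoint are placeholders, and the group must already be attached to your RDS instance:
# raise max_allowed_packet on the server side via the DB parameter group
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-mysql-params \
  --parameters "ParameterName=max_allowed_packet,ParameterValue=268435456,ApplyMethod=immediate"
# raise it on the client side for the import session as well
mysql --max_allowed_packet=256M -h my-rds-endpoint -u admin -p mydb < mydump.sql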
Here's how I did it, mind you my databases weren't very large (largest one was 1.5G).
First dump your existing database(s):
mysqldump [database_name] --master-data=2 --single-transaction --order-by-primary -uroot -p | gzip > /mnt/dumps/[database_name].sql.gz
You can then transfer this file to an Amazon EC2 instance that has permission to access your RDS instance using something like scp. Once the file is located on your Amazon EC2 instance you should extract it using:
gzip -d [database_name].sql.gz
# you should now have a file named [database_name].sql in your directory
mysql -uroot -p -h [rds-instance]
# then, at the mysql prompt:
source [database_name].sql
It should then start importing. This information is located in their documentation.
