s3cmd copy removes metadata, how do you maintain it? - .htaccess

I am using the Website Redirect Location feature of S3's static website hosting. The architecture uses one bucket for the production www site (www) and another bucket (redirects) for legacy 301 redirects that have been recreated as directories and files in S3, with the redirect metadata set per the AWS documentation.
I am using s3cmd to copy the contents of redirects into www, but the metadata is being stripped.
This is the command:
s3cmd cp -r s3://redirects/ s3://www/
Of course, my bucket names have been shortened for this question.
If there is another way to migrate 301 redirects from .htaccess into S3, please enlighten me :)

The AWS Command-Line Interface (CLI) copies metadata when using the aws s3 cp command.
Update: While metadata prefixed with x-amz-meta- is copied with the object, it appears that Website Redirect Location metadata is not copied.
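Since the redirect is not carried over automatically, one workaround is to re-apply it per object at copy time. A rough sketch using the CLI's --website-redirect option (the object key and target URL here are only illustrative, and the bucket names are the shortened ones from the question):
aws s3 cp s3://redirects/old-page/index.html s3://www/old-page/index.html --website-redirect "https://www.example.com/new-page/"
This sets the x-amz-website-redirect-location header on the destination object explicitly rather than relying on the copy preserving it.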

Ended up doing this in Node. Easy peasy.
var AWS = require('aws-sdk');

var config = {
  region: 'us-west-2'
};
AWS.config.update(config);

var s3 = new AWS.S3();

// Creates (or overwrites) the object and sets its Website Redirect Location.
s3.putObject({
  Bucket: "myBucket",
  Key: "dir/index.html",
  WebsiteRedirectLocation: "http://io9.com/"
}, function (err, data) {
  if (err) {
    console.log(err);
  } else {
    console.log(data);
  }
});
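For the bucket-to-bucket migration itself, here is a rough sketch along the same lines using copyObject, so the redirect placeholders don't have to be re-created one by one. The bucket names are the shortened ones from the question, pagination of the listing is not handled, and it's worth double-checking the CopyObject documentation on how MetadataDirective interacts with WebsiteRedirectLocation, since the redirect header is not copied by default:
var AWS = require('aws-sdk');
AWS.config.update({ region: 'us-west-2' });
var s3 = new AWS.S3();

// List each redirect placeholder in the source bucket, read its redirect
// target, then copy it into www with the redirect set explicitly.
s3.listObjectsV2({ Bucket: 'redirects' }, function (err, listing) {
  if (err) return console.log(err);
  listing.Contents.forEach(function (obj) {
    s3.headObject({ Bucket: 'redirects', Key: obj.Key }, function (err, head) {
      if (err) return console.log(err);
      s3.copyObject({
        Bucket: 'www',
        Key: obj.Key,
        CopySource: 'redirects/' + obj.Key, // keys with special characters may need URL encoding
        MetadataDirective: 'REPLACE',
        WebsiteRedirectLocation: head.WebsiteRedirectLocation
      }, function (err, data) {
        console.log(err || data);
      });
    });
  });
});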

Related

Cloud Run application to create a file via API call and store the file in a GCS bucket

I have created a Node.js application which creates a JSON file when I hit the PUT API via Postman. This file needs to be stored in a GCS bucket.
All of this works fine when I run the application locally: the file gets created and uploaded to my GCS bucket. But when I deploy the application to Cloud Run it does not work, nor do I see any error in the logs.
I have also added a member, the same service account used to create the Cloud Run application, and given it the Legacy Bucket Owner role.
Below is the code snippet of my PUT block:
app.put('/api/ddr/:ticket_id', (req, res) => {
  const ticket_id = req.params.ticket_id;
  const requestBody = req.body;
  if (!requestBody || !ticket_id) {
    res.status(404).send({message: 'There is an error'});
  }
  var fs = require('fs');
  var writer = fs.createWriteStream(filename, {
    'flags': 'a',
    'encoding': null,
    'mode': 0666
  });
  writer.write(JSON.stringify(requestBody) + ',');
  console.log('Filename is ' + filename);
  res.status(201).send('all ok');
});
Also, please find below the code snippet for uploading the file:
async function uploadFile() {
  await storage.bucket(bucketName).upload(filePath);
  console.log(`${filePath} uploaded to ${bucketName}`);
}
uploadFile().catch(console.error);
EDIT: The problem seems to be with my Dockerfile, because when I run the application via nodemon it works as expected, but when I run it through Docker the application does not seem to work properly. I don't know much about setting up a Dockerfile, so help will be very much appreciated. Below is my Dockerfile:
FROM node:14.8.0-slim
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD ["npm", "start"]
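One thing worth checking (not something stated in the post): CMD ["npm", "start"] only works if package.json defines a start script pointing at the app's entry file. A minimal example, where index.js is just a placeholder for whatever the actual entry point is:
{
  "scripts": {
    "start": "node index.js"
  }
}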

Read SSH config file and log in to a server using a Node.js npm library

I am able to connect to EC2 instances through Git Bash/PuTTY when I run the command below:
ssh host-name
This works because all the parameters are present in the ~/.ssh/config file.
My requirement is to do the same using a Node.js npm library. Is there a way to do it?
Thanks
There is an npm package available: https://www.npmjs.com/package/node-ssh
When you create your EC2 instance (e.g. an Ubuntu server) you can download your private key file yourprivatekey.pem, which you need for authentication.
Create a new Node project and run npm i node-ssh. Copy yourprivatekey.pem into the project folder and change the file permissions with chmod 400 yourprivatekey.pem.
Create an index.js file in the project folder and paste the following example code with an SSH copy command to copy a file from EC2 to your local machine.
const fs = require('fs')
const path = require('path')
const {NodeSSH} = require('node-ssh')

const ssh = new NodeSSH()

ssh.connect({
  host: 'abc123456-example.compute-1.amazonaws.com', // replace with your EC2 host
  username: 'ubuntu',                                // replace with your EC2 username
  privateKey: './yourprivatekey.pem'                 // replace with your private key file
})
.then(function () {
  // getFile(localPath, remotePath) — replace with your own file paths
  ssh.getFile('/home/pi/test.txt', '/home/ubuntu/test.txt').then(function (Contents) {
    console.log("The file's contents were successfully downloaded")
  }, function (error) {
    console.log("Something's wrong")
    console.log(error)
  })
})
Other SSH operations are described in the API documentation at the link above.
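To get closer to what the question asked (reusing the entries from ~/.ssh/config rather than hard-coding them), here is a rough sketch of one possible approach. The parsing is deliberately naive (no wildcards or Include directives), the key names are the standard OpenSSH ones, and the privateKey-as-path usage mirrors the example above; newer node-ssh versions may expect privateKeyPath or the key contents instead:
const fs = require('fs')
const os = require('os')
const path = require('path')
const {NodeSSH} = require('node-ssh')

// Read the block for a given Host alias out of ~/.ssh/config.
function readHostBlock(alias) {
  const text = fs.readFileSync(path.join(os.homedir(), '.ssh', 'config'), 'utf8')
  const entry = {}
  let inBlock = false
  for (const raw of text.split('\n')) {
    const line = raw.trim()
    if (/^Host\s/i.test(line)) {
      inBlock = line.split(/\s+/).slice(1).includes(alias)
    } else if (inBlock && line) {
      const [key, ...rest] = line.split(/\s+/)
      entry[key.toLowerCase()] = rest.join(' ')
    }
  }
  return entry
}

const entry = readHostBlock('host-name') // the alias used with `ssh host-name`
const ssh = new NodeSSH()

ssh.connect({
  host: entry.hostname,
  username: entry.user,
  privateKey: entry.identityfile // note: a "~" in IdentityFile is not expanded here
})
.then(() => console.log('connected'))
.catch(console.error)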

Google Cloud Storage node client ResumableUploadError

We have an app that's running in GCP, in Kubernetes services. The backend is inside a container running a node/alpine base image. We try to use the Node.js client library for Google Cloud Storage ("@google-cloud/storage": "~2.0.3") to upload files to our bucket, as in the GitHub repo samples:
storage.bucket(bucketName)
  .upload(path.join(sourcePath, filename),
    {
      'gzip': true,
      'metadata': {
        'cacheControl': 'public, max-age=31536000',
      },
    }, (err) => {
      if (err) {
        return reject(err);
      }
      return resolve(true);
    });
});
It works fine for files smaller than 5 MB, but with larger files I get an error:
{"name":"ResumableUploadError"}
A few Google searches later, I see that the client automatically switches to resumable upload. Unfortunately, I cannot find any example of how to manage this special case with the Node client. We want to allow up to 50 MB, so it's a bit of a concern right now.
OK, just so you know: the problem was that my container runs the node/alpine image. The Alpine distributions are stripped to the minimum, so there was no ~/.config folder, which is used by the Configstore library that the google-cloud/storage node library depends on. I had to go into the repo, check the code, and see the comment in file.ts. Once I added the folder in the container (by adding RUN mkdir ~/.config to the Dockerfile), everything started to work as intended.
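Roughly where that line might sit in the Dockerfile described above (node:alpine and server.js are placeholders for the actual base image tag and entry point):
FROM node:alpine
WORKDIR /app
# Configstore (used by this @google-cloud/storage version) expects a writable
# ~/.config, which the alpine images don't ship with by default.
RUN mkdir -p ~/.config
COPY package.json .
RUN npm install
COPY . .
CMD ["node", "server.js"]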
Alternatively, you can set resumable: false in the options you pass in. The complete code would then look like this:
storage.bucket(bucketName)
  .upload(path.join(sourcePath, filename),
    {
      'resumable': false,
      'gzip': true,
      'metadata': {
        'cacheControl': 'public, max-age=31536000',
      },
    }, (err) => {
      if (err) {
        return reject(err);
      }
      return resolve(true);
    });
});
If you still want resumable uploads and you don't want to create additional bespoke directories in the Dockerfile, here is another solution.
Resumable upload requires a writable directory to be accessible. Depending on the OS and how you installed @google-cloud/storage, the default config path can change. To make sure this always works, without having to create specific directories in your Dockerfile, you can point configPath at a writable file.
Here's an example of what you can do. Be sure to point configPath to a file, not an existing directory (otherwise you'll get Error: EISDIR: illegal operation on a directory, read).
gcsBucket.upload(
  `${filePath}`,
  {
    destination: `${filePath}`,
    configPath: `${writableDirectory}/.config`,
    resumable: true
  }
)

Sync AWS S3 bucket/folder locally

I have a system with user accounts distributed between projects. Each project has a folder structure with uploaded files. The documents are stored on AWS S3. Through the portal the users are able to manage (CRUD) the folders and documents.
But I also want to implement a client application that syncs a local folder with the different project folders. Does AWS have such an API? I know about the CLI tool s3cmd; is that the way to go?
Or does AWS have an API (preferably for Node.js) that provides this kind of functionality, syncing a local folder with an S3 folder?
What would be the 'correct way' (if any) to go?
You can use the s3-sync npm module to sync an S3 folder.
Install the npm module:
npm install s3-sync
Use the module in your code:
var s3sync = require('s3-sync')

var stream = s3sync({
  key: process.env.AWS_ACCESS_KEY,
  secret: process.env.AWS_SECRET_KEY,
  bucket: 'sync-testing'
})

stream.write({
  src: __filename,
  dest: '/uploader.js'
})

stream.end({
  src: __dirname + '/README.md',
  dest: '/README.md'
})
For more details, refer to the package documentation.
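For completeness, and since the question also mentions CLI tools: the official AWS CLI has a built-in sync command that does exactly this kind of local-folder-to-prefix synchronisation (the bucket and folder names below are placeholders):
aws s3 sync ./local-project-folder s3://your-bucket/project-folder
aws s3 sync s3://your-bucket/project-folder ./local-project-folder
It only copies files that are new or have changed, in whichever direction you run it.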

How to upload a file using easy-ftp in Node?

I am trying to upload a file to my hosting server using Node and easy-ftp.
I tried with the following code:
var EasyFtp = require("easy-ftp");
var ftp = new EasyFtp();

var config = {
  host: 'homexxxxx.1and1-data.host',
  type: 'SFTP',
  port: '22',
  username: 'u90xxxx',
  password: "mypass"
};

ftp.connect(config);
ftp.upload("/test/test.txt", "/test.txt", function (err) {
  if (err) throw err;
  ftp.close();
});
No error message, but no file is uploaded.
I tried the same using promises:
const EasyFTP = require('easy-ftp-extra')
const ftp = new EasyFTP()

const config = {
  host: 'homexxxxx.1and1-data.host',
  type: 'SFTP',
  port: '22',
  username: 'u90xxxx',
  password: "mypass"
};

ftp.connect(config);
ftp.upload('/test.txt', '/test.txt')
  .then(console.log)
  .catch(console.error)
ftp.upload()
The same issue: no file is uploaded and no error in the Node console.
The config is the same one used in FileZilla to transfer files over SFTP. Everything works well with FileZilla.
What am I doing wrong?
Looks like you may have a path problem here.
"/test/test.txt"
The path specified will try to take the file from the root folder, like "C:\test\test.txt".
Assuming you want the file to be taken from your project folder, try this path:
"./test/test.txt"
Everything else in your code is precisely the same as in mine, and mine works.
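In other words, the first snippet from the question would become something like this (everything else unchanged):
ftp.upload("./test/test.txt", "/test.txt", function (err) {
  if (err) throw err;
  ftp.close();
});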
For me, it was just silently failing, and IntelliSense was not available.
npm remove easy-ftp
npm install easy-ftp
npm audit fix --force (repeat until there are no more vulnerabilities)
Afterwards, IntelliSense was available and it started working.
