Update a file in a folder inside an S3 bucket using Python - python-3.x

I have a folder inside an S3 bucket, and I need to update a file inside that existing folder
using Python. Can anyone assist?

I found an answer to my own question. The code below works perfectly, uploading from
my local folder. I list the folder's files and open each one with 'rb' (read binary) before uploading.
import os
import shutil

import boto3

# Credentials and MEDIA_ROOT / folder_name are assumed to be defined elsewhere.
s3 = boto3.client(
    's3',
    region_name='ap-south-1',
    aws_access_key_id=S3_access_key,
    aws_secret_access_key=S3_secret_key,
)

# Local folder whose files should be uploaded.
path_data = os.path.join(MEDIA_ROOT, "screenShots", folder_name)

# Upload every file in the folder to the matching key prefix in S3.
for file in os.listdir(path_data):
    with open(os.path.join(path_data, file), 'rb') as data:
        s3.upload_fileobj(data, s3_Bucket_name, folder_name + "/" + file)

# Remove the local folder once everything has been uploaded.
shutil.rmtree(path_data)
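As a side note, boto3 can also open the file for you: client.upload_file() takes a local filename instead of an open file object. A sketch of the same loop using it, assuming the variables defined above:

for file in os.listdir(path_data):
    # upload_file(Filename, Bucket, Key) opens and streams the file itself.
    s3.upload_file(os.path.join(path_data, file), s3_Bucket_name, folder_name + "/" + file)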

Related

Boto3 - Multi-file upload to specific S3 bucket "path" using CLI arguments

New coder here. For work, I received a request to put certain files in an already established S3 bucket with a requested "path."
For example: "Create a path of (bucket name)/1/2/3/ with folder3 containing (requested files)."
I'm looking to create a Python 3 script that uploads multiple files from my local machine to a specified bucket and "path", using CLI arguments for the file(s), bucket name, and "path"/key. I understand S3 doesn't technically have a folder structure, and that you have to put your "folders" in as part of the key, which is why I put "path" in quotes.
I have a working script doing what I want it to do, but the bucket/key is hard-coded at the moment, and I'm looking to get away from that with the use and understanding of CLI arguments. This is what I have so far -- it just doesn't upload the file, though it builds the path in S3 successfully :/
EDIT: Below is the working version of what I was looking for!
import argparse

import boto3

def upload_to_s3(file_name, bucket, path):
    # "path" is really the full object key, e.g. 1/2/3/test.txt
    s3 = boto3.client('s3')
    s3.upload_file(file_name, bucket, path)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--file_name')
    parser.add_argument('--bucket')
    parser.add_argument('--path')
    args = parser.parse_args()
    upload_to_s3(args.file_name, args.bucket, args.path)
My input is:
>>> python3 s3_upload_args_experiment.py --file_name test.txt --bucket mybucket2112 --path 1/2/3/test.txt
Everything executes properly!
Thank you much!
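Since the original goal was a multi-file upload, one possible extension (a sketch, not part of the accepted answer) is to accept several filenames via argparse's nargs='+' and treat --path as the key prefix:

import argparse
import os

import boto3

def upload_many_to_s3(file_names, bucket, prefix):
    s3 = boto3.client('s3')
    for file_name in file_names:
        # Append each file's base name to the requested "path" to build its key.
        key = prefix.rstrip('/') + '/' + os.path.basename(file_name)
        s3.upload_file(file_name, bucket, key)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--file_names', nargs='+')  # one or more local files
    parser.add_argument('--bucket')
    parser.add_argument('--path')                   # key prefix, e.g. 1/2/3
    args = parser.parse_args()
    upload_many_to_s3(args.file_names, args.bucket, args.path)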

How can I generate a PDF with custom fonts using AWS Lambda?

I have an AWS Lambda function that generates PDFs using the html-pdf library with custom fonts.
At first I imported my fonts externally from Google Fonts, but then the PDF's size grew about tenfold.
So I tried to import my fonts locally, src('file:///var/task/fonts/...ttf/woff2'), but still no luck.
Lastly, I tried to create a fonts folder in the main project, added all of my fonts, plus the file fonts.config:
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <dir>/var/task/fonts/</dir>
  <cachedir>/tmp/fonts-cache/</cachedir>
  <config></config>
</fontconfig>
and set the following env:
FONTCONFIG_PATH = /var/task/fonts
but still no luck (I haven't installed fontconfig, since I'm not sure how, or whether I need to).
My runtime env is Node.js 8.1.0.
You can upload your fonts into an S3 bucket and then download them to the Lambda's /tmp directory during its execution. If your library creates cache files (e.g. .pkl files), you should first change your working directory to /tmp, since Lambda is not allowed to write in the default working directory.
The following Python code downloads the files under a fonts/ prefix in an S3 bucket to a "local" /tmp/fonts directory.
import os

import boto3

# Lambda may only write under /tmp, so work from there.
os.chdir('/tmp')
os.mkdir(os.path.join('/tmp/', 'fonts'))

s3 = boto3.resource('s3')
s3_client = boto3.client('s3')
my_bucket = s3.Bucket("bucket_name")

for file in my_bucket.objects.filter(Prefix="fonts/"):
    filename = file.key
    short_filename = filename.replace('fonts/', '')
    # Skip the "fonts/" prefix entry itself, whose short name is empty.
    if len(short_filename) > 0:
        s3_client.download_file(
            my_bucket.name,  # the original referenced an undefined `bucket` here
            filename,
            "/tmp/fonts/" + short_filename,
        )
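Note that with the fonts now under /tmp/fonts, the fonts.config <dir> entry and the FONTCONFIG_PATH variable from the question would presumably need to point at /tmp/fonts instead of /var/task/fonts, so that fontconfig picks the downloaded fonts up.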

I am trying to load multiple CSV files from multiple paths on my network drive into Python for pre-processing

How do I create a configuration file listing all the folder paths to look in, and how do I read that config file from Python?
You can create a config.py file to hold all your configuration values, like:
-----config.py------------
filename = "testnames.xlsx"
sheet_name = "WRDS"
To use it in your actual .py file:
-------------actual file-----------------
import config as cfg

file_name = cfg.filename
sheet_name = cfg.sheet_name
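For the multiple-CSV case specifically, the config module could hold a list of folder paths, and the loading script could walk them with glob and pandas. A minimal sketch, assuming pandas is available; the folder paths are placeholders:

-----config.py------------
# Hypothetical network-drive folders to scan; replace with real paths.
csv_folders = [
    r"\\server\share\folder_a",
    r"\\server\share\folder_b",
]

-------------actual file-----------------
import glob
import os

import config as cfg
import pandas as pd

frames = []
for folder in cfg.csv_folders:
    # Collect every .csv file in the configured folder.
    for path in glob.glob(os.path.join(folder, "*.csv")):
        frames.append(pd.read_csv(path))

# One combined DataFrame, ready for pre-processing.
data = pd.concat(frames, ignore_index=True)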

Reading a file from an S3 bucket and writing it to the current directory using a Lambda function

I want to read a file from an S3 bucket (e.g. text.txt) using a Lambda function, and
write the file to the current directory (e.g. __dirname + '/text.txt').
I am able to read the file:
let txtfilepath = __dirname + '/text.txt'; // note the path separator
var params = {
    Bucket: bucketname,
    Key: filepathInS3
};
S3.getObject(params, function (err, data) {
    if (err) {
        console.error(err.code, "-", err.message);
        return err;
    }
    // Write the object body to the local file path.
    fs.writeFile(txtfilepath, data.Body, function (err) {
        if (err) {
            console.log(err.code, "-", err.message);
            return err;
        }
    });
});
I am getting the error: read-only file system.
The only directory you can write to in the Lambda execution environment is /tmp; all other folders are read-only.
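In Python terms (most of this thread uses boto3), a minimal sketch of the same flow with a writable target looks like this; the bucket and key names are placeholders:

import boto3

s3 = boto3.client('s3')
# /tmp is the only writable location in the Lambda environment.
s3.download_file('bucketname', 'filepathInS3', '/tmp/text.txt')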

Zip files in the cloud (Amazon S3) without writing them to a local file first (no write privileges)

I need to zip some files in Amazon S3 without writing them to a local file first. My code worked in development, but I don't have write privileges in production.
import os
import zipfile
from io import BytesIO

# fs is assumed to be an s3fs-style filesystem object created elsewhere.
folder = output_dir
files = fs.glob(folder)

f = BytesIO()
zip = zipfile.ZipFile(f, 'a', zipfile.ZIP_DEFLATED)
for file in files:
    filename = os.path.basename(file)
    # fs.get downloads the remote file to a local path -- this is the
    # write that fails without local write privileges.
    image = fs.get(file, filename)
    zip.write(filename)
zip.close()
The problem in production is at this line:
image = fs.get(file, filename)
because I don't have write privileges.
My last resort was to write to the /tmp/ directory, which I do have privileges for.
Is there a way to zip files from a URL path, or directly in the cloud?
I ended up using Python's tempfile module, which turned out to be a perfect solution.
Using NamedTemporaryFile gave me the guarantee of named, system-visible temporary files that are deleted automatically -- no manual work.
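A minimal sketch of that approach, assuming fs is an s3fs-style filesystem as in the question; the archive path is a placeholder:

import os
import zipfile
from tempfile import NamedTemporaryFile

archive = zipfile.ZipFile('/tmp/archive.zip', 'w', zipfile.ZIP_DEFLATED)
for file in fs.glob(output_dir):
    # NamedTemporaryFile yields a named, system-visible file that is
    # deleted automatically when the context closes -- no manual cleanup.
    with NamedTemporaryFile() as tmp:
        fs.get(file, tmp.name)
        archive.write(tmp.name, arcname=os.path.basename(file))
archive.close()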
