How to copy a folder containing files and sub-folders with different files to an AWS S3 bucket with the same folder structure using Python - python-3.x

Hi all, can you please help me figure out this issue?
How can I copy a folder containing .py files, along with the files in its sub-folders, from a particular local path to an AWS S3 bucket path while keeping the same folder structure? The files should appear in the S3 bucket path laid out exactly as they are locally, with the same folder-level structure.

There is a fully fledged Python library for this. Install it via
pip install s3
The documentation is well written and you should have no trouble following it. The examples section shows how to upload a file to S3:
storage.write("example", remote_name, headers=headers)
You should be able to instruct the package to upload the folders and subfolders while maintaining their folder structure. You can also use os.walk to walk through your files and directories if you need to individually pick which files and folders need to be uploaded.
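If you go the os.walk route, here is a rough sketch using boto3's upload_file instead of the s3 package (the bucket name and local folder path are placeholders):

import os
import boto3

s3 = boto3.client('s3')
bucket_name = 'my-bucket'            # placeholder bucket
local_root = 'path/to/local/folder'  # placeholder folder to copy

for root, dirs, files in os.walk(local_root):
    for name in files:
        local_path = os.path.join(root, name)
        # Build the S3 key from the path relative to the folder being copied,
        # so the sub-folder structure is preserved in the bucket
        key = os.path.relpath(local_path, local_root).replace(os.sep, '/')
        s3.upload_file(local_path, bucket_name, key)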

Related

Extracting data from depot files generated by Perforce

Many years ago I had a Perforce server hosting an Unreal Engine 4 project, but it's no longer active and I unfortunately don't have access to it. All I have left are some depot folders. There's a specific folder with a bunch of FBX files that I need access to, but each file shows up as a folder named something like file.uasset,d or file.fbx,d, and within them are zip files.
Is there any way for me to convert these folders into actual FBX files? Any tools or anything out there? Or do I need a server to upload these onto a depot for Perforce to understand what to do with them? Any help would be greatly appreciated!
I've tried opening them in Perforce without a workspace or server and there wasn't much I could do with them.
If you have the server root folder (the one with the db.* files), you may be able to start up P4D and just access the depot normally. If you have a checkpoint file, you can use that to reconstruct the db.* files.
If all you have are the depot archives, you can unzip the files inside them (using regular old gzip or similar) to retrieve the original content. E.g. if you have file.fbx,d/1.1234.gz, you can unzip that and you'll have the content of file.fbx as of change 1234. Each gzip file is a complete revision on its own; you don't need to glue them together or anything like that.
Note that without the database (the db.* files), you may not be able to put together the exact original structure of the depot; the back-end archive files don't exactly correspond to the depot layout since archive files may be "lazy copied" to multiple locations in the depot.
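As a rough sketch of that extraction in Python (the archive and output paths are placeholders):

import gzip
import shutil

archive_path = 'file.fbx,d/1.1234.gz'   # one stored revision inside the ,d archive folder
output_path = 'file.fbx'                # reconstructed content of that revision

# Each .gz file is a complete revision, so a single decompression recovers the file
with gzip.open(archive_path, 'rb') as src, open(output_path, 'wb') as dst:
    shutil.copyfileobj(src, dst)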

Error when uploading a Laravel 8 project to 000webhost

I am trying to upload my Laravel 8 project to 000webhost.com, but my public, resources, storage, routes, tests and vendor folders are not uploading. I put my whole project into a zip file, uploaded it to public_html and extracted it, but those folders do not show up, even though they are present in the zip file. What can I do?
I had this issue recently. Try the following.
1st, open your Laravel 8 project folder and select all files except "node_modules".
2nd, right-click and zip them (make sure the archive format is .zip, because 000webhost can't extract RAR archives); a scripted alternative is sketched below.
For a better understanding, open the link below:
https://www.000webhost.com/forum/t/deploy-laravel-project-on-000webhost/127323
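If you prefer to script the archive step, here is a rough Python sketch that zips the project while skipping node_modules (the project path and archive name are placeholders):

import os
import zipfile

project_dir = 'path/to/laravel-project'   # placeholder project folder
archive_name = 'project.zip'              # placeholder output archive

with zipfile.ZipFile(archive_name, 'w', zipfile.ZIP_DEFLATED) as zf:
    for root, dirs, files in os.walk(project_dir):
        dirs[:] = [d for d in dirs if d != 'node_modules']   # skip node_modules entirely
        for name in files:
            full_path = os.path.join(root, name)
            zf.write(full_path, os.path.relpath(full_path, project_dir))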

Append files to existing S3 bucket folder via Spark

I am working in Spark, where we need to write data to an S3 bucket after performing some transformations. I know that writing data to HDFS/S3 via Spark throws an exception if the folder path already exists. So in our case, if s3://bucket_name/folder already exists, writing data to that same S3 bucket path will throw an exception.
A possible solution is to use the OVERWRITE save mode when writing through Spark, but that would delete all the files already present in the folder. I want a kind of APPEND functionality for the same folder: if the folder already has some files, the write should just add more files to it.
I am not sure if the API offers any such functionality out of the box. Of course, there is the option of creating a temporary folder inside the folder, saving the file there, then moving that file to its parent folder and deleting the temporary folder, but that kind of approach is not ideal.
So please suggest how to proceed with this.
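For reference, a minimal PySpark sketch of the write being described (the bucket and folder names are placeholders; "overwrite" replaces everything under the path, while the DataFrameWriter's "append" save mode adds new files alongside the existing ones):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('s3-write-sketch').getOrCreate()

# Placeholder data standing in for the transformed DataFrame
df = spark.createDataFrame([(1, 'a'), (2, 'b')], ['id', 'value'])

# mode('overwrite') deletes whatever is already under the target path;
# mode('append') keeps the existing files and writes the new ones next to them
df.write.mode('append').parquet('s3://bucket_name/folder')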

How to download all the files from an S3 bucket irrespective of the file key using Python

I am working on an automation piece where I need to download all files from a folder inside an S3 bucket, irrespective of the file name. I understand that using boto3 in Python I can download a file like:
s3BucketObj = boto3.client('s3', region_name=awsRegion, aws_access_key_id=s3AccessKey, aws_secret_access_key=s3SecretKey)
s3BucketObj.download_file(bucketName, "abc.json", "/tmp/abc.json")
but I was then trying to download all files, irrespective of which filename has to be specified, in this way:
s3BucketObj.download_file(bucketName, "test/*.json", "/test/")
I know the syntax above could be totally wrong but is there a simple way to do that?
I did find a thread which helps here but seems a bit complex: Boto3 to download all files from a S3 Bucket
There is no API call to Amazon S3 that can download multiple files.
The easiest way is to use the AWS Command-Line Interface (CLI), which has aws s3 cp --recursive and aws s3 sync commands. It will do everything for you.
If you choose to program it yourself, then Boto3 to download all files from a S3 Bucket is a good way to do it. This is because you need to do several things:
Loop through every object (there is no S3 API to copy multiple files)
Create a local directory if it doesn't exist
Download the object to the appropriate local directory
The task can be made simpler if you do not wish to reproduce the directory structure (eg if all objects are in the same path). In that case, you can simply loop through the objects and download each of them to the same directory.
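A rough sketch of those steps with boto3 (the bucket name, prefix and local directory are placeholders):

import os
import boto3

s3 = boto3.client('s3')
bucket = 'bucketName'     # placeholder bucket
prefix = 'test/'          # placeholder folder key
local_dir = '/tmp/test'   # placeholder local destination

paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get('Contents', []):
        key = obj['Key']
        if key.endswith('/'):   # skip zero-byte "folder" placeholder objects
            continue
        target = os.path.join(local_dir, os.path.relpath(key, prefix))
        os.makedirs(os.path.dirname(target), exist_ok=True)   # create the local directory if it doesn't exist
        s3.download_file(bucket, key, target)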

How can I have Azure File Share automatically generate non-existing directories?

With AWS S3, I can upload a file test.png to any directory I like, regardless of whether or not it exists... because S3 will automatically generate the full path & directories.
For example, if when I upload to S3 I use the path this/is/a/new/home/for/test.png, S3 will create the directories this, is, a, ... and upload test.png to the correct folder.
I am migrating over to Azure, and I am looking to use their file storage. However, it seems that I must manually create EVERY directory... I could obviously do it programmatically by checking whether the folder exists and creating it if not... but wow, why should I have to work so hard?
I did try:
file_service.create_file_from_path('testshare', 'some/long/path', 'test.png', 'path/to/local/location/of/test.png')
However, that complains that the directory does not exist... and will only work if I either manually create the directories or replace some/long/path with None.
Is it possible to just hand Azure a path and have it create the directories?
Azure Files closely mimics an OS file system, and thus in order to push a file into a directory, that directory must exist. That means if you need to create a file in a nested directory structure, the directory structure must already exist; the Azure File service will not create it for you.
A better option in your scenario would be to use Azure Blob Storage. It closely mimics the Amazon S3 behavior you mentioned above. You can create a Container (similar to a Bucket in S3) and then upload a file with a name like this/is/a/new/home/for/test.png.
However, please note that the folders are virtual in Blob Storage (same as in S3), not real ones. Essentially, the name under which the blob (similar to an Object in S3) is saved is this/is/a/new/home/for/test.png.
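A minimal sketch of that Blob Storage approach, assuming the azure-storage-blob (v12) package and placeholder connection string, container and paths:

from azure.storage.blob import BlobServiceClient

# Placeholders: supply your own connection string and container name
service = BlobServiceClient.from_connection_string('<connection-string>')
blob = service.get_blob_client(container='mycontainer', blob='this/is/a/new/home/for/test.png')

# No directories need to exist up front; the "folders" in the blob name are virtual
with open('path/to/local/location/of/test.png', 'rb') as data:
    blob.upload_blob(data)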
