Trying to pass https paths to ffmpeg from Node.js running on AWS EC2 fails with 'input file does not exist' - node.js

I have a script set up to load a video file from an S3 bucket into a Node script running on an EC2 instance, but I am having issues getting ffmpeg to accept the URL. I know my aws-sdk integration is working, as I can read and write objects in the bucket, and I am generating a signed URL to the object so I can pass the path through easily, but it fails with an 'input file does not exist' error.
If I use this generated signed path in a browser, however, the video file can be found.
Has anyone else come across this issue? I could probably pipe the external file through to a new file local to the EC2 instance, but if I can get around that extra step that would be great!
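For illustration, a minimal sketch of the two routes mentioned above, assuming the aws-sdk v2 client and an ffmpeg binary on the instance; the bucket name, key and encoding flags are placeholders, not from the original question.

// Sketch only: use one option or the other.
const AWS = require('aws-sdk');
const { spawn } = require('child_process');

const s3 = new AWS.S3();
const params = { Bucket: 'my-video-bucket', Key: 'input/video.mp4', Expires: 3600 };

// Option 1: hand ffmpeg the signed https URL directly
// (requires an ffmpeg build compiled with https protocol support).
const signedUrl = s3.getSignedUrl('getObject', params);
spawn('ffmpeg', ['-i', signedUrl, '-c:v', 'libx264', 'out.mp4'], { stdio: 'inherit' });

// Option 2: skip the URL and pipe the object from S3 into ffmpeg's stdin,
// which avoids writing an intermediate file to disk.
const ffmpeg = spawn('ffmpeg', ['-i', 'pipe:0', '-c:v', 'libx264', 'out-piped.mp4'], {
  stdio: ['pipe', 'inherit', 'inherit'],
});
s3.getObject({ Bucket: params.Bucket, Key: params.Key })
  .createReadStream()
  .pipe(ffmpeg.stdin);

Note that piping works best for inputs that can be read sequentially; MP4 files whose moov atom sits at the end of the file may still need to be downloaded locally first.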

Related

How to read a file from an S3 byte stream in TensorFlow

I have built a deep learning model in TensorFlow for image recognition, and it works when reading an image file from a local directory with the tf.read_file() method. However, I now need TensorFlow to read the file from a variable holding a byte stream that pulls the image file from an Amazon S3 bucket, without storing the stream in a local directory.
You should be able to pass the fully formed S3 path to tf.read_file(), like:
s3://bucket-name/path/to/file.jpeg, where bucket-name is the name of your S3 bucket and path/to/file.jpeg is where the file is stored in your bucket. It also seems possible you are running into an access-permissions issue, depending on whether your bucket is private. You can follow https://github.com/tensorflow/examples/blob/master/community/en/docs/deploy/s3.md to set up your credentials.
Did you run into an error when trying this?

Amazonka: How to generate an s3:// URI from Network.AWS.S3.Types.Object?

I've been using turtle to call "aws s3 ls", and I'm trying to figure out how to replace that with amazonka.
Absolute S3 URLs were central to how my program worked. I now know how to get objects and filter them, but I don't know how to convert an object to an S3 URL to integrate with my existing program.
I came across the getFile function and tried downloading a file from S3.
Perhaps I had something wrong, but it didn't seem like the S3 bucket and S3 object key alone were enough to download a given file. If I'm wrong about that, I need to double-check my configuration.

Amazon S3 403 Forbidden Error for KML files but not JPG files

I am able to successfully upload (put object) JPG files to S3 with a particular code path, but receive a 403 Forbidden error when using the same code path to upload a KML file. I am not explicitly restricting file types with a bucket policy, but I feel this must somehow be tied to the bucket policy or CORS configuration.
I was using code based on the Heroku tutorial for uploading images to Amazon S3. The issue ended up being the '+' symbol in the appropriate MIME type, "application/vnd.google-earth.kml+xml": the '+' was being replaced with a space when fetching the file-type query parameter in our own endpoint for generating signed S3 requests. We were able to fix this quickly by forcing the ContentType to "application/vnd.google-earth.kml+xml" for all KML files going through that endpoint.
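A rough sketch of that workaround in a Node/Express signed-request endpoint; the route, query parameter names and bucket are placeholders and not taken from the original answer.

// Sketch only: sign S3 PUT requests, forcing the KML MIME type.
const express = require('express');
const AWS = require('aws-sdk');

const app = express();
const s3 = new AWS.S3();

app.get('/sign-s3', (req, res) => {
  const fileName = req.query['file-name'];
  // '+' arrives as a space when the MIME type is read from the query string,
  // so force the full KML type instead of trusting the parameter.
  const fileType = fileName.endsWith('.kml')
    ? 'application/vnd.google-earth.kml+xml'
    : req.query['file-type'];

  const signedUrl = s3.getSignedUrl('putObject', {
    Bucket: 'my-upload-bucket',
    Key: fileName,
    Expires: 60,
    ContentType: fileType,
  });
  res.json({ signedRequest: signedUrl });
});

app.listen(3000);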

Setting Metadata in Google Cloud Storage (Export from BigQuery)

I am trying to update the metadata (programmatically, from Python) of several CSV/JSON files that are exported from BigQuery. The application that exports the data is the same as the one modifying the files (and thus uses the same server certificate). The export goes fine, until I try to use the objects.patch() method to set the metadata I want. The problem is that I keep getting the following error:
apiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/storage/v1/b/<bucket>/<file>?alt=json returned "Forbidden">
Obviously, this has something to do with bucket or file permissions, but I can't manage to get around it. If the same certificate is used for writing files and updating file metadata, how come I'm unable to update it? The bucket was created with the same certificate.
If that's the exact URL you're using, it's a URL problem: you're missing the /o/ between the bucket name and the object name (it should be https://www.googleapis.com/storage/v1/b/<bucket>/o/<file>?alt=json).
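The fix is purely in the URL. A minimal sketch of the corrected objects.patch request, shown with plain Node.js HTTPS to match the rest of this page (the original question used the Python apiclient); the bucket, object name and token are placeholders, and the object name must be URL-encoded.

// Sketch only: note the /o/ segment between bucket and object.
const https = require('https');

const bucket = 'my-export-bucket';
const object = encodeURIComponent('exports/data.json');
const token = process.env.GCS_ACCESS_TOKEN; // OAuth2 bearer token

const body = JSON.stringify({ metadata: { source: 'bigquery-export' } });

const req = https.request(
  `https://www.googleapis.com/storage/v1/b/${bucket}/o/${object}?alt=json`,
  {
    method: 'PATCH',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(body),
    },
  },
  (res) => res.pipe(process.stdout)
);
req.end(body);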

AWS S3 multipart upload to a named directory using C# and the .NET SDK

The following fails with this error message:
"The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed."
UploadPartRequest uploadRequest = new UploadPartRequest()
.WithBucketName(IniValues.Instance.TargetBucketName)
.WithKey("junk/20070125.log") // object key; the "junk/" prefix is the target "directory"
.WithUploadId(initResponse.UploadId)
.WithPartNumber(i)
.WithPartSize(partSize)
.WithFilePosition(filePosition)
.WithFilePath("C:\\InetTemp\\Logs\\20070125.log");
The problem is with the ".WithKey("junk/20070125.log")". If I strip out the "junk/" it works perfectly.
So the question is: how do I upload a file to a specific AWS directory? All the documentation I found shows that the correct way is to prepend the directory name and a forward slash. What am I missing?
It turns out I was adding the folder name to the key string after calling InitiateMultipartUploadRequest. Once I changed the key value to be consistent across the upload calls, it began to work.
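The same rule holds with any SDK: the key passed to the initiate, upload-part and complete calls must be identical, prefix included. A minimal sketch with the Node.js aws-sdk rather than the .NET SDK, since the principle is language-agnostic; the bucket name and file path are placeholders.

// Sketch only: the key is defined once and reused for every call.
const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3();
const Bucket = 'my-target-bucket';
const Key = 'junk/20070125.log'; // same key for initiate, upload-part and complete

async function uploadLog(filePath) {
  const { UploadId } = await s3.createMultipartUpload({ Bucket, Key }).promise();
  const Body = fs.readFileSync(filePath); // single part for brevity
  const part = await s3
    .uploadPart({ Bucket, Key, UploadId, PartNumber: 1, Body })
    .promise();
  await s3
    .completeMultipartUpload({
      Bucket, Key, UploadId,
      MultipartUpload: { Parts: [{ ETag: part.ETag, PartNumber: 1 }] },
    })
    .promise();
}

uploadLog('./20070125.log');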
