I'm not sure if this is possible, but can I set up a "time limit" on Amazon CloudFront w/ RTMP? For example, if a movie is 10 mins, I want to show the first two mins only.
Can this be done through CloudFront or do I need to create a proxy script?
Thanks
No, the closest to what you want is the ability to sign URLs, which lets you limit how long a URL is valid.
But unfortunately:
RTMP distributions: CloudFront checks the expiration time in a signed URL at the start of a play event. If a client starts to play a media file before the expiration time passes, CloudFront allows the entire media file to play. However, depending on the media player, pausing and restarting might trigger another play event. Skipping to another position in the media file will trigger another play event. If the subsequent play event occurs after the expiration time passes, CloudFront won't serve the media file.
(from official manual: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls-overview.html)
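If you go that route, here's a minimal sketch of generating a short-lived signed URL with the AWS SDK for JavaScript (v2); the key pair ID, key file, and distribution URL below are placeholders:

```javascript
// Minimal sketch, assuming the AWS SDK for JavaScript (v2).
// Key pair ID, private key path, and URL are placeholders.
const AWS = require('aws-sdk');
const fs = require('fs');

const signer = new AWS.CloudFront.Signer(
    'APKA_EXAMPLE_KEY_ID',                          // CloudFront key pair ID
    fs.readFileSync('./cf-private-key.pem', 'utf8') // matching private key
);

// The URL expires two minutes from now -- but per the caveat above, a play
// event that *starts* before expiry can still stream the whole file.
const signedUrl = signer.getSignedUrl({
    url: 'rtmp://example123.cloudfront.net/videos/movie.mp4',
    expires: Math.floor(Date.now() / 1000) + 120 // epoch seconds
});

console.log(signedUrl);
```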
Related
I'm using Azure Media Player for video playback and that works great. However, the media player css/js/woff files do not have any cache-control headers set. They come from the AMP CDN (amp.azure.net). Am I doing something wrong? I cannot find any information whatsoever regarding Azure Media Player and client-side caching. What is the recommended way to set up client-side caching when using amp.azure.net?
According to https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching, "HTTP is designed to cache as much as possible, so even if no Cache-Control is given, responses will get stored and reused if certain conditions are met. This is called heuristic caching." In the responses for azuremediaplayer.min.js, azuremediaplayer.min.css, and the woff2 file, I see no Cache-Control directive, as you mentioned. Therefore there are no specific restrictions on caching, and in most cases all three files will be cached normally.
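To make the heuristic concrete: the common implementation (suggested in RFC 7234) treats such a response as fresh for roughly 10% of the time elapsed since its Last-Modified date. A rough illustration, with a made-up date:

```javascript
// Rough illustration of the ~10% heuristic (a suggestion in RFC 7234,
// not a guarantee -- browsers may differ). Last-Modified is a made-up example.
const lastModified = new Date('Mon, 01 Jan 2018 00:00:00 GMT');
const ageMs = Date.now() - lastModified.getTime();
const freshMs = ageMs * 0.10; // heuristic freshness lifetime

console.log(`Heuristically fresh for ~${(freshMs / 36e5).toFixed(1)} hours`);
```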
I tried using the "Seek" API (/playbackSessions/{sessionId}/playbackSession/seek) to seek to a certain time within the track (which is loaded from an Amazon S3 bucket, not from within the local network), and received the following error: "ERROR_DISALLOWED_BY_POLICY".
In the reference it's mentioned that this occurs because "scrubbing is not allowed".
How can I "allow scrubbing"?
I've also noticed that "Resume" (after pause) plays the track from the start. So my guess is that Pause/Resume/Seek is only allowed within the local network. Is that the case? Is there any way to pause and resume a track, or seek to a certain time, while using a track from "outside" the local network (a CDN link such as Amazon S3 bucket files)?
Thanks.
Playback policies are set by the content provider. In the case of Cloud Queue-based playback, these policies are set in the /context or /itemWindow responses, as described here.
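For illustration only, the relevant part of such a response might look like the sketch below; the field names (canSeek, canPause, canSkip) are my assumptions based on Sonos playback policies, so verify them against the reference before relying on this:

```javascript
// Hypothetical fragment of an /itemWindow response body -- the field names
// (canSeek, canPause, canSkip) are assumptions; check the Sonos docs.
const itemWindowResponse = {
  items: [
    {
      id: 'track-42', // hypothetical track ID
      policies: {
        canSeek: true,  // would permit the Seek API (scrubbing)
        canPause: true, // would permit Pause/Resume from the paused position
        canSkip: true
      }
    }
  ]
};
```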
I'm building an ELK stack (for the first time) to track end-user REST API usage for a CloudFront distribution (in front of an S3 origin). Users pass a refresh token as part of their request and I was hoping to use this token to identify which users were making which request. Unfortunately, it looks like CloudFront access logs are missing some header information (particularly Authorization/Accept in my use case). This leaves me with three questions:
1. Is there a way to tell CloudFront to log additional items? It appears the answer is no.
2. As an alternative strategy, I tried modifying the request object with Lambda@Edge (in Viewer Request) to move the header information into the query string (so that it would get logged), but any manipulation in Lambda@Edge does not seem to be reflected in the log (though it is reflected in the Origin Request function). Should this be possible?
3. If doing what I want is impossible, I think the alternative approach is to forgo CloudFront logs completely and just fire an HTTP request to logstash with every user request, but I feel like this could easily be overloaded.
Thanks
After a few days of research and reaching out to Amazon, I was finally able to answer my own questions:
1. CloudFront logs can't be customized; they are what they are.
2. See 1.
3. It turns out that customization is the wrong approach. What I really need to do is aggregate two separate logs that contain the information I need into a single logstash entry. The Viewer Response Lambda@Edge function receives a requestId property (actually event.Records[0].cf.config.requestId) which matches the CloudFront log's x-edge-request-id column. So while I haven't finished implementing it yet, these two columns can be used in the logstash config for aggregation. I just need to make sure I set up a Viewer Response event that logs a consistent format that I can then parse with logstash. I'm using the logstash-input-cloudwatch_logs plugin to retrieve the CloudWatch logs.
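For anyone attempting the same, a minimal sketch of such a Viewer Response function (the single-line JSON format and the choice of header are my own assumptions):

```javascript
// Minimal Viewer Response sketch: log the edge request ID plus the header
// of interest, so the CloudWatch log line can later be joined against the
// CloudFront access log's x-edge-request-id column. The single-line JSON
// format is an assumption -- use whatever your logstash filter expects.
'use strict';

exports.handler = (event, context, callback) => {
    const cf = event.Records[0].cf;
    const requestId = cf.config.requestId; // matches x-edge-request-id
    const headers = cf.request.headers;    // header names are lowercased

    const authorization = headers.authorization
        ? headers.authorization[0].value
        : '';

    // console.log output lands in CloudWatch Logs in the edge region
    console.log(JSON.stringify({ requestId, authorization }));

    // Pass the response through unchanged
    callback(null, cf.response);
};
```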
The user selects some options then clicks "download". At that point, my PHP script starts preparing the file, and it can take 5-10 minutes before the file is ready and starts downloading. I want to notify the user with a sound that the download has started.
How can I do that?
According to this question:
Is there a way to detect the start of a download in JavaScript?
There is no programmatic way to detect when a download begins. That question is six years old now, so perhaps it is out of date, but I could not find any more recent information to contradict it.
An alternative approach would be to break the download process into two parts so that you can control when the actual data transfer begins (a minimal client-side sketch follows the list):
Instead of initiating the download immediately, have the button send an AJAX request to the server asking it to prepare the file for download.
The server should not reply to the AJAX immediately, but should prepare the file and save it in a temporary file storage area with a unique generated name/ID.
Once the file is ready, the server should reply to the AJAX with the name/ID of the file.
On the client, the AJAX completion callback can play the sound, since it knows the download is about to begin.
It then uses window.open() to request the file from the server.
Now the server can respond with the appropriate headers as you used to do.
Finally, the server can delete the file from temporary storage (or just wait for a cron job to do it).
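Here is that client-side sketch; the endpoints and sound file are hypothetical placeholders for your real PHP scripts:

```javascript
// Client-side sketch of the two-step download. The endpoints
// ('/prepare-download', '/download') and the sound file path are
// hypothetical -- substitute your real PHP script URLs.
document.getElementById('download-btn').addEventListener('click', async () => {
    // Ask the server to prepare the file; the response only arrives
    // once the file is ready, and it carries the generated name/ID.
    const response = await fetch('/prepare-download', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ /* the user's selected options */ })
    });
    const { fileId } = await response.json();

    // The download is about to begin -- play the notification sound.
    new Audio('/sounds/download-started.mp3').play();

    // Request the prepared file; the server responds with the usual
    // Content-Disposition headers so the browser saves it.
    window.open('/download?id=' + encodeURIComponent(fileId));
});
```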
I have an image upload view on my client (Ember.js) that sends the resized image to a Node.js REST API;
it works well, but it is easy for an expert user to force the upload of a non-resized image;
I would like to keep the resize process on the client, because this allows users to select heavyweight images that are resized locally and uploaded only after that, when they are lightweight;
If someone else uses something like this, I'm interested in how it is possible to make it as safe as possible;
A rule of thumb when developing web applications: never, ever trust any data coming from the client side; always validate it on the server side!
Use authentication; this ensures that users are only allowed to upload data to their own account and can't fiddle with other users' files.
Add a special message passing between your server and client; a simple example would be:
i. send a POST API request first (containing the image information and the targeted compressed size) to your server, indicating that your client is starting to compress the picture
ii. when uploading, add metadata including the complete compressed image, and have your server check whether the uploaded image is within the accepted threshold; otherwise discard it
You could make the message passing more elaborate to enhance security!
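A minimal Express sketch of that two-step handshake, assuming hypothetical routes and an arbitrary 10% tolerance over the declared size:

```javascript
// Sketch of the handshake above. Route names, the 10% tolerance, and the
// in-memory session store are assumptions -- adapt them to your real API.
const express = require('express');
const multer = require('multer');

const app = express();
app.use(express.json());

const pending = {}; // uploadId -> declared size (use a real store in production)

// Step i: client declares what it is about to upload
app.post('/uploads/init', (req, res) => {
    const { targetSize } = req.body;
    const uploadId = Date.now().toString(36) + Math.random().toString(36).slice(2);
    pending[uploadId] = targetSize;
    res.json({ uploadId });
});

// Step ii: client uploads; server checks the size against the declared target
const upload = multer({ limits: { fileSize: 5 * 1024 * 1024 } }); // hard cap
app.post('/uploads/:id', upload.single('image'), (req, res) => {
    const expected = pending[req.params.id];
    if (expected === undefined) return res.status(400).send('Unknown upload');
    delete pending[req.params.id];

    if (!req.file) return res.status(400).send('No image');

    // Reject if the actual size is more than 10% over what was declared
    if (req.file.size > expected * 1.1) {
        return res.status(413).send('Image larger than declared; discarded');
    }
    res.status(201).send('OK');
});

app.listen(3000);
```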
This would be my simple security approach; anyone else got a better solution? :)
Approaches here also work for file uploads. You can use a combination of checks:
the content-length header (i.e. req.headers['content-length'] > x), and/or
reading the stream size as it's being received by the server (i.e. req.on('data'))
If the stream data exceeds a certain size you can respond accordingly. Check out something like Multer for file uploads, specifically the limits section. The best approach would probably be the second option.
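A bare-bones sketch of that second option using Node's http module (the 2 MB cap and port are arbitrary):

```javascript
// Sketch of the second option: count bytes as they arrive and abort the
// request once they exceed a cap. The 2 MB limit and port are arbitrary.
const http = require('http');

const MAX_BYTES = 2 * 1024 * 1024;

http.createServer((req, res) => {
    let received = 0;

    req.on('data', (chunk) => {
        received += chunk.length;
        if (received > MAX_BYTES && !res.headersSent) {
            res.writeHead(413, { 'Content-Type': 'text/plain' });
            res.end('Payload too large');
            req.destroy(); // stop reading the rest of the stream
        }
    });

    req.on('end', () => {
        if (!res.headersSent) {
            res.writeHead(200, { 'Content-Type': 'text/plain' });
            res.end('Upload accepted');
        }
    });
}).listen(3000);
```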