We have set up an upload page on a client's site to upload videos to YouTube using the browser-based upload rather than direct upload.
We have run some test uploads for videos around 500 MB in size, which uploaded fine, but we had a shot at a much larger one - 1.9 GB - and that failed.
The only error we could see was in the return URL which ended "status=400&error=TOKEN_EXPIRED".
Now, the videos do take a long time to upload. On average a 400 MB video takes just under 2 hours, so we reckoned a 1.9 GB video would take getting on for 10 hours.
Could the issue be that the token which YouTube returns is only valid for a certain period of time, and because of the length of time taken to upload the 1.9 GB file, it simply expired?
I can't find any answer to this on the Google forum, and in any case that forum is now closed to new posts.
If anyone can shed some light on this we would be grateful.
How long was the upload going for when you got this token?
YouTube did have a token timeout of around 4.5 hours; I'm not sure whether they still do.
I am trying to solve a problem where I need to record a screen in real time and keep sending the data to the backend, which will store the video as an S3 object (or in any cloud store).
I did research this, but everywhere I look people record the video and send it as a single file once recording is complete. The problem is that the file may be too big to send as a single file, hence I want it saved to S3 in real time.
I have also seen WebRTC, which helps with peer-to-peer communication.
Any suggestions on how to implement this in Go or Node.js would be helpful.
Thanks
What you can do is use an SFU: send the screen data to it over WebRTC and save it to a file server-side.
You can use mediasoup for this.
Here is a working example: https://github.com/ethand91/mediasoup3-record-demo
You should check the Multipart upload overview.
No matter how large the video is, you only need to upload each 5 MB of data as a part to S3. It doesn't work exactly like a stream, but it's almost a stream.
For the Go SDK, please check the S3 Golang SDK.
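Since the question also mentions Node.js, here is a rough sketch of that flow with the AWS SDK for JavaScript v3 multipart commands (the bucket name, key, and chunk handling are placeholders, not a drop-in implementation):

```js
// Minimal multipart-upload sketch: start an upload, push each chunk as a
// numbered part, then ask S3 to stitch the parts together.
const {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} = require("@aws-sdk/client-s3");

const s3 = new S3Client({ region: "us-east-1" });
const Bucket = "my-recordings";      // hypothetical bucket
const Key = "screen-capture.webm";   // hypothetical object key

async function startUpload() {
  const { UploadId } = await s3.send(
    new CreateMultipartUploadCommand({ Bucket, Key, ContentType: "video/webm" })
  );
  return UploadId;
}

// Call this each time a chunk arrives from the browser.
// Part numbers start at 1; every part except the last must be at least 5 MB.
async function uploadPart(uploadId, partNumber, chunk) {
  const { ETag } = await s3.send(
    new UploadPartCommand({
      Bucket,
      Key,
      UploadId: uploadId,
      PartNumber: partNumber,
      Body: chunk, // Buffer holding the recorded bytes
    })
  );
  return { ETag, PartNumber: partNumber };
}

// When the recording stops, complete the upload with the collected part list.
async function finishUpload(uploadId, parts) {
  await s3.send(
    new CompleteMultipartUploadCommand({
      Bucket,
      Key,
      UploadId: uploadId,
      MultipartUpload: { Parts: parts },
    })
  );
}
```

The object only becomes visible in the bucket once CompleteMultipartUpload is called, which is why it behaves almost, but not exactly, like a stream.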
I have a Node.js app with Express running on Heroku, linked to a GitHub repository. It serves a website which also contains a "gallery" section.
The pictures in the gallery are uploaded in very high resolution by other, non-tech-savvy admins.
To prevent huge data usage for mobile users, I would like the Express server to downscale and compress the images coming from a certain path, when requested by a normal GET request, before sending them as the reply.
Could you help me understand how I can "intercept" those requests, or at least point me in the right direction?
Sorry to ask it here and like this, but I tried looking through many wikis and some questions here on Stack Overflow, and none seem to cover what I'm searching for
(at least from my understanding).
Thank you for your time!
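As a rough illustration of what "intercepting" those requests could look like: register an Express route for the gallery path that resizes the image on the fly with an image library such as sharp. This is only a sketch under those assumptions; the directory, width, and quality values are placeholders.

```js
// Sketch: serve downscaled, recompressed copies of the gallery images.
const express = require("express");
const path = require("path");
const sharp = require("sharp");

const app = express();
const GALLERY_DIR = path.join(__dirname, "public", "gallery"); // placeholder location

// Register this route BEFORE any express.static() middleware that would
// otherwise serve the original full-resolution files from the same path.
app.get("/gallery/:file", async (req, res) => {
  try {
    // basename() guards against path traversal in the requested file name.
    const original = path.join(GALLERY_DIR, path.basename(req.params.file));
    const resized = await sharp(original)
      .resize({ width: 1200, withoutEnlargement: true }) // cap the width, never upscale
      .jpeg({ quality: 70 })                             // recompress
      .toBuffer();
    res.type("image/jpeg").send(resized);
  } catch (err) {
    res.sendStatus(404); // missing file or unreadable image
  }
});

app.listen(process.env.PORT || 3000);
```

In practice you would probably also cache the resized output (on disk or behind a CDN) so each image is not reprocessed on every request.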
I have a Firebase app with a Cloud Function that generates some thumbnails when an image is uploaded to a particular bucket.
I keep getting errors about exceeding the DNS resolution quota, pretty much nonstop.
My question is, and granted I am somewhat new to the Google Cloud Platform, how often does DNS resolution happen? Does it happen on every upload and download between Firebase and Google Cloud Storage?
All my operations are between Firebase and Google Cloud Storage (i.e. download from the bucket, resize in temp space, and upload back to the bucket), and I have a check that returns immediately if an image name begins with 'thumb_', to avoid an infinite loop.
That being said, I believe I originally got this error because I accidentally did get myself into an infinite loop and blew through my quota.
Here is some more info about DNS resolutions. I'm not entirely sure how to interpret it, but it appears 'DNS resolutions per 100 seconds' is exceeded, while 'DNS resolutions per day' is not.
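For reference, the 'thumb_' check is roughly this shape (a sketch of the kind of guard described above, not the actual function; the trigger and names follow the description in the question):

```js
const functions = require("firebase-functions");

// Sketch of the early-return guard: skip objects that are already thumbnails
// so the thumbnail function does not keep re-triggering itself.
exports.generateThumbnail = functions.storage.object().onFinalize(async (object) => {
  const fileName = object.name.split("/").pop();
  if (fileName.startsWith("thumb_")) {
    console.log("Already a thumbnail, skipping.");
    return null;
  }
  // ...download from the bucket, resize in temp space, upload back as "thumb_<name>"...
});
```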
I think the quota limit is 40,000 per 100 seconds. Either you really are making that many calls, or there is some bug in your code which might be making too many calls without you knowing.
Is there a way to upload a large movie file (around 5 GB), interrupt the upload process at some point, log out, and come back to the website to resume the upload after 2-3 hours?
Of course, the resume should be made from the same computer and reference the same file.
It's doable using socket.io and the HTML5 FileReader API. With the FileReader API you can slice and dice your video file into smaller chunks and then send those through WebSockets to Node.js. Because these chunks are well defined and enumerable, you can resume the upload whenever you want, as long as the server keeps track of which chunks it has already received. You may find this tutorial interesting; it is very similar to what you're asking for.
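Roughly, under those assumptions (the chunk size, event names, and storage paths below are placeholders), the browser side could look like this:

```js
// Browser-side sketch: slice the file, ask the server how many bytes it
// already has, then send the remaining chunks one at a time over socket.io.
const socket = io();
const CHUNK_SIZE = 1024 * 1024; // 1 MB, arbitrary

function uploadFile(file) {
  // The server answers with the byte offset it already has (0 for a fresh upload).
  socket.emit("upload-resume", { name: file.name, size: file.size }, (offset) => {
    sendChunk(file, offset);
  });
}

function sendChunk(file, offset) {
  if (offset >= file.size) {
    socket.emit("upload-done", { name: file.name });
    return;
  }
  const slice = file.slice(offset, offset + CHUNK_SIZE);
  const reader = new FileReader();
  reader.onload = () => {
    // reader.result is an ArrayBuffer; socket.io transmits it as binary.
    socket.emit("upload-chunk", { name: file.name, offset, data: reader.result }, () => {
      sendChunk(file, offset + slice.size); // server ack -> send the next chunk
    });
  };
  reader.readAsArrayBuffer(slice);
}
```

On the Node.js side the server only needs to remember how many bytes it already holds for each file, so it can answer the resume request and append each incoming chunk (file names should be sanitized in real code):

```js
const fs = require("fs");
const io = require("socket.io")(3000); // standalone socket.io server on port 3000

io.on("connection", (socket) => {
  socket.on("upload-resume", ({ name }, ack) => {
    // Report how much of this file is already on disk.
    ack(fs.existsSync(name) ? fs.statSync(name).size : 0);
  });

  socket.on("upload-chunk", ({ name, data }, ack) => {
    fs.appendFileSync(name, Buffer.from(data));
    ack(); // acknowledge so the client sends the next chunk
  });

  socket.on("upload-done", ({ name }) => {
    console.log(`Upload of ${name} complete`);
  });
});
```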
Is there a URL I can go to to check the status of a video I am uploading to YouTube via the API?
I went to this page
https://developers.google.com/youtube/2.0/developers_guide_protocol_checking_video_status
which told me to go to this URL
https://gdata.youtube.com/feeds/api/users//uploads
But all I got back was an RSS feed of videos that are already uploaded (Published).
I am looking for unpublished videos and their associated progress.
I am using resumable upload, so I think that if 10% of the video has been uploaded I should be able to see that somewhere?
In the Java sample you can request the progress status via MediaHttpUploader.getProgress() or MediaHttpUploader.getNumBytesUploaded(): https://code.google.com/p/youtube-api-samples/source/browse/samples/java/youtube-cmdline-uploadvideo-sample/src/main/java/com/google/api/services/samples/youtube/cmdline/youtube_cmdline_uploadvideo_sample/UploadVideo.java#213
C# would be very similar.