I am trying to load video thumbnails using Glide, but it is taking too long. The first few thumbnails load instantly, but you have to wait quite a while for the rest of them to load.
Glide.with(context)
        .load(new File(videoFiles.get(position).getPath()))
        .placeholder(android.R.drawable.alert_dark_frame)
        .error(android.R.drawable.stat_notify_error)
        .into(holder.thumbnail);
I have also tried Picasso, but the result is not much different.
Picasso.get()
        .load(new File(videoFiles.get(position).getPath()))
        .placeholder(android.R.drawable.alert_dark_frame)
        .error(android.R.drawable.stat_notify_error)
        .into(holder.thumbnail);
What should I change in my implementation to load them faster?
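One thing worth trying (a sketch, not a guaranteed fix): have Glide decode each frame at roughly the ImageView's size with override(), so it doesn't decode full-resolution video frames just to fill a small thumbnail. THUMB_WIDTH and THUMB_HEIGHT are hypothetical constants matching your layout.

// Sketch: decode at thumbnail size instead of full video resolution.
Glide.with(context)
        .load(new File(videoFiles.get(position).getPath()))
        .override(THUMB_WIDTH, THUMB_HEIGHT) // downsample during decode
        .placeholder(android.R.drawable.alert_dark_frame)
        .error(android.R.drawable.stat_notify_error)
        .into(holder.thumbnail);

Downsampling also lets Glide's memory cache hold many more thumbnails, which should help the later items appear faster as you scroll.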
Related
The intended way to take a screenshot via Splinter is pretty straightforward, and I understand that in the context of mimicking a web browser a screenshot basically means saving an image to a file. But I was wondering if I could avoid that IO concern by reading the screenshot directly into a Python PIL object when I invoke browser.screenshot(). The reason is that I would perform some processing on the image regardless, so saving it to disk and then reading it back seems like a step I could short-circuit.
from splinter import Browser

browser = Browser()
screenshot_path = browser.screenshot('absolute_path/your_screenshot.png')
Something like
screenshot_pil = browser.screenshot('path_to', inmemory=True)
I'm not sure if I missed this in the documentation, but there is a function screenshot_as_png() that seems to do what I want; I'm just not sure how to access it through the namespace of a Browser object.
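A sketch of one way to skip the file, assuming a Selenium-backed Splinter driver (those expose the underlying Selenium WebDriver as browser.driver, and Selenium can return the PNG as raw bytes):

import io
from PIL import Image
from splinter import Browser

browser = Browser()
browser.visit('https://example.com')

# Ask the underlying Selenium WebDriver for the PNG as bytes,
# then open it in PIL without touching the disk.
png_bytes = browser.driver.get_screenshot_as_png()
screenshot_pil = Image.open(io.BytesIO(png_bytes))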
I have a Google Colab notebook with PyTorch code running in it.
At the beginning of the train function, I create, save and download word_to_ix and tag_to_ix dictionaries without a problem, using the following code:
import torch
from google.colab import files

torch.save(tag_to_ix, pos_dict_path)
files.download(pos_dict_path)
torch.save(word_to_ix, word_dict_path)
files.download(word_dict_path)
I train the model, and then try to download it with the code:
torch.save(model.state_dict(), model_path)
files.download(model_path)
Then I get a MessageError: TypeError: Failed to fetch.
Obviously, the problem is not with third-party cookies (as suggested here), because the first files download without a problem. (I actually also tried adding the link to my Allow section, but, surprise surprise, it made no difference.)
I was originally trying to save the model as is (which, to my understanding, saves it as a Pickle), and I thought maybe Colab's files module doesn't handle downloading Pickles well. But as you can see above, I'm now trying to save a dict object (which is also what word_to_ix and tag_to_ix are), and it's still not working.
Downloading the file manually with a right-click isn't a solution, because I sometimes leave the code running while I do other things, and by the time I get back to it, the runtime has disconnected and the files are gone.
Any suggestions?
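One workaround worth sketching, assuming saving to Google Drive is acceptable (this sidesteps the download rather than fixing the Failed to fetch error itself): copy the checkpoint to a mounted Drive folder, so it survives the runtime disconnecting while you're away.

import shutil
import torch
from google.colab import drive

# Mount Google Drive once per session (prompts for authorization).
drive.mount('/content/drive')

torch.save(model.state_dict(), model_path)
# Copy the checkpoint to Drive so it persists if the runtime disconnects.
shutil.copy(model_path, '/content/drive/My Drive/' + model_path)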
I want to create a thumbnail from a video at a certain time in Node.js; I am currently using the node module video-thumb.
This is the code that uses the video-thumb module to create the thumbnail:
And this is the result. The command appears to run, but I don't know where snapshot.png ends up.
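For reference, a minimal sketch of typical video-thumb usage (treat the exact signature as an assumption; the module shells out to ffmpeg, and the output lands at whatever path is passed as the second argument):

var thumbler = require('video-thumb');

// Extract a 200x125 frame at 00:00:05 into snapshot.png.
// A relative output path is resolved against process.cwd(),
// which is usually where a "missing" snapshot.png ends up.
thumbler.extract('video.mp4', 'snapshot.png', '00:00:05', '200x125', function () {
    console.log('snapshot saved');
});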
So basically my front end is displaying images that are about 3-5 MB. I need to find a way to make them much smaller.
I have the file object in Node.js being uploaded to Amazon AWS (S3); I just need to compress it before it uploads.
Also, what's the best way to resize images in Node.js?
The best way is to use the node-canvas module. In the benchmark below, it is up to three times faster than ImageMagick.
Image manipulation performance comparison: https://github.com/ivanoff/images-manipulation-performance
author's results:
canvas.js : 4.001 img/sec;
gm-imagemagic.js : 1.206 img/sec;
gm.js : 1.536 img/sec;
lwip.js : 0.406 img/sec;
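A minimal resize sketch with node-canvas (the 300x200 target size and the JPEG quality are placeholder assumptions; the resulting buffer can be handed straight to the S3 upload call as its Body):

const { createCanvas, loadImage } = require('canvas');

async function resizeToBuffer(inputPath, width, height) {
    const image = await loadImage(inputPath);
    const canvas = createCanvas(width, height);
    const ctx = canvas.getContext('2d');

    // Draw the source image scaled down to the target dimensions.
    ctx.drawImage(image, 0, 0, width, height);

    // Encode as JPEG; lowering quality shrinks the payload further.
    return canvas.toBuffer('image/jpeg', { quality: 0.8 });
}

// Usage: compress/resize before uploading to S3.
resizeToBuffer('photo.jpg', 300, 200).then(function (buffer) {
    // pass `buffer` as the Body of your S3 putObject/upload call
});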
I have an app that needs to render frames from a video/movie into a CGBitmapContext with an arbitrary CGAffineTransform. I'd like a decent frame rate, at least 20 fps.
I've tried using AVURLAsset and [AVAssetImageGenerator copyCGImageAtTime:], and, as the documentation for this method clearly states, it's quite slow, sometimes taking me down to 5 fps.
What is a better way to do this? I'm thinking that I could set up an AVPlayer with an AVPlayerLayer, then use [CALayer renderInContext:] with my transform. Would this work? Or does an AVPlayerLayer perhaps not render when it notices that it isn't being shown on screen?
Any other suggestions?
I ended up getting lovely, quick UIImages from the frames of a video with the following steps (a code sketch of the whole pipeline appears at the end of this answer):
1) Creating an AVURLAsset with the video's URL.
2) Creating an AVAssetReader with the asset.
3) Setting the reader's timeRange property.
4) Creating an AVAssetReaderTrackOutput with the first track from the asset.
5) Adding the output to the reader.
Then for each frame:
6) Calling [output copyNextSampleBuffer].
7) Passing the sample buffer into CMSampleBufferGetImageBuffer.
8) Passing the image buffer to CVPixelBufferLockBaseAddress with the read-only lock flag.
9) Getting the base address of the image buffer with CVPixelBufferGetBaseAddress.
10) Calling CGBitmapContextCreate with dimensions from the image buffer, passing the base address in as the location of the CGBitmap's pixels.
11) Calling CGBitmapContextCreateImage to get the CGImageRef.
I was very pleased to find that this works surprisingly well for scrubbing. If the user wants to go back to an earlier part of the video, simply create a new AVAssetReader with the new time range and go. It's quite fast!
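Here is that pipeline as a sketch (error handling trimmed; the BGRA output settings and the videoURL/startTime/duration variables are assumptions for illustration):

#import <AVFoundation/AVFoundation.h>

// Steps 1-5: set up the asset, reader, time range, and track output.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:nil];
reader.timeRange = CMTimeRangeMake(startTime, duration);

AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
NSDictionary *settings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:settings];
[reader addOutput:output];
[reader startReading];

// Steps 6-11: per frame, wrap the decoded pixel buffer in a bitmap context.
CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress,
                                             CVPixelBufferGetWidth(imageBuffer),
                                             CVPixelBufferGetHeight(imageBuffer),
                                             8,
                                             CVPixelBufferGetBytesPerRow(imageBuffer),
                                             colorSpace,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef frameImage = CGBitmapContextCreateImage(context); // wrap in a UIImage as needed

CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
CFRelease(sampleBuffer);
// Release frameImage with CGImageRelease() when you're done with it.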