I have a web page with an IMG tag whose src points to my-server.com/animationID.gif.
When someone opens the page, my server generates a new GIF animation, which then appears in that IMG tag.
I'm using the gifencoder node package to generate dynamic 60-frame animations.
The animation updates every second, so I don't really see a good way to cache it...
It takes 1-3 seconds to generate the animation, which is very slow.
A few years ago I used services like countdownmail and mailtimers, which generate 60-frame countdown timers. Somehow, they manage to generate them very fast, in under 0.5-1 second.
After some debugging, it seems that the addFrame method takes most of the time (and it's called 60 times):
encoder.addFrame(ctx);
Is there a way to increase the generation speed or cache the animation?
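For reference, the generation code boils down to something like this (a trimmed-down sketch; the dimensions and the drawing are placeholders, my real frames are more complex):

const GIFEncoder = require('gifencoder');
const { createCanvas } = require('canvas');

const width = 200, height = 80;            // placeholder dimensions
const encoder = new GIFEncoder(width, height);
encoder.start();
encoder.setRepeat(0);                      // 0 = loop forever
encoder.setDelay(1000);                    // one frame per second
encoder.setQuality(10);

const canvas = createCanvas(width, height);
const ctx = canvas.getContext('2d');

for (let i = 60; i > 0; i--) {
  ctx.fillStyle = '#fff';
  ctx.fillRect(0, 0, width, height);
  ctx.fillStyle = '#000';
  ctx.font = '32px sans-serif';
  ctx.fillText(String(i), 10, 52);         // placeholder drawing
  encoder.addFrame(ctx);                   // this call dominates the runtime
}
encoder.finish();
const gif = encoder.out.getData();         // Buffer served as animationID.gif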
As I said in the title, I need to record my screen from an Electron app.
My needs are:
high quality (720p or 1080p)
minimum size
record audio + screen + mic
low impact on PC hardware while recording
no need for any wait after the recorder stops
By minimum size I mean about 400MB at 720p and 700MB at 1080p for a 3 to 4 hour recording. We have already achieved this with Bandicam and OBS, so it's possible.
I already tried:
the simple MediaStreamRecorder API using RecordRTC.js; it produces huge file sizes, like 1GB per hour for 720p video
compressing the output video with FFmpeg afterwards; it can take up to an hour for a 3-hour recording
saving every chunk with the 'ondataavailable' event and, right after, running FFmpeg to compress each chunk and then concatenating the compressed files (also with FFmpeg; see the sketch after this list); there are two problems: first, the chunks have different PTS values, though that can be fixed by tuning the compression arguments; second, and this is the main problem, the audio headers are only present in the first chunk, so this approach produces a video that has audio only for the first few seconds
recording the video with FFmpeg itself; end-users need to change some things manually (Stereo Mix), the configuration is too complex, it slows down the whole PC while recording (e.g. FPS drops, even if I set -threads to 1), and in some cases it takes a long time to finalize the file after recording ends
searching the internet for applications that can be driven from the command line; I couldn't find much. Well-known applications like Bandicam and OBS do have command-line args, but there aren't many to play with, and not being able to set most options leads to other problems
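For reference, the chunked approach from the third bullet looks roughly like this (a minimal sketch; the timeslice and mimeType are just examples):

// stream comes from Electron's desktopCapturer / getUserMedia
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=vp9,opus' });
const chunks = [];

recorder.ondataavailable = (e) => {
  // Every blob after the first is not independently playable:
  // the container/audio headers are only written into the first chunk.
  chunks.push(e.data);
};

recorder.start(10000); // emit a chunk every 10 seconds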
I don't know what else I can do. Please tell me if you know a way or a simple tool that can be used through the CLI to achieve this, and guide me through it.
I ended up using the portable mode of high-level third-party applications like obs-studio and adding it to our final package. I also created a JS file to control the application through its CLI.
This way I could pre-set my options (such as the CRF value), and now our average output size for a 3:30-hour recording at 1080p is about 700MB, which is impressive.
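The control script mostly boils down to launching the portable OBS with its standard launch parameters (a simplified sketch; the paths and profile name are from our setup):

const { spawn } = require('child_process');
const path = require('path');

// Portable OBS build shipped inside our final package (our layout).
const obsBin = path.join(__dirname, 'obs-studio', 'bin', '64bit');

const obs = spawn(path.join(obsBin, 'obs64.exe'), [
  '--portable',            // use the config bundled next to the exe
  '--profile', 'recorder', // our profile with the encoder/CRF options pre-set
  '--startrecording',      // begin recording immediately on launch
  '--minimize-to-tray',
], { cwd: obsBin });       // OBS expects to be started from its bin directory

// OBS finalizes the file on a graceful shutdown, so the recording is
// playable as soon as the process exits; no post-processing wait.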
I am using SkiaSharp to load an SVG drawing. It's a site plan, so it's reasonably complex, and it takes a long time to load. On my Samsung Galaxy phone it's about 3 seconds, and during that time the phone completely locks up, which is quite unacceptable.
I am using SkiaSharp.Extended.Svg.SKSvg, but cannot find an asynchronous version of the Load() method. Is there one? Or maybe a better way of doing this?
I am overlaying objects on top of the site plan, and it has taken me considerable time to get all the scaling and alignment sorted, so if at all possible I'd like to stick with SkiaSharp rather than start from scratch with something completely different.
Thanks for your help!
3 seconds does sound a bit long...
It may not be the SVG part, but rather the loading of the file off the file system.
Either way, you might be able to just load the whole thing in a background task, for example:
var picture = await Task.Run(() =>
{
    using (var stream = File.OpenRead(svgPath))   // svgPath: wherever the plan is stored
    {
        var svg = new SkiaSharp.Extended.Svg.SKSvg();
        svg.Load(stream);
        return svg.Picture;                       // SKPicture, ready to draw
    }
});
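Once that completes, trigger a repaint from the UI thread; with a Xamarin.Forms SKCanvasView (an assumption about your setup) that could look something like:

_picture = picture;              // store it in a field for the paint handler
canvasView.InvalidateSurface();  // schedule a redraw

void OnPaintSurface(object sender, SKPaintSurfaceEventArgs e)
{
    var canvas = e.Surface.Canvas;
    canvas.Clear();
    if (_picture != null)
        canvas.DrawPicture(_picture); // your existing scaling/overlay code goes here
}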
Below is the command that I am running to index pages.
bin/nutch crawl bin/urls -solr http://localhost:8983/solr/ -dir crawl -depth 2 -topN 15
The fetching happens pretty quickly, but the LinkDb "adding segments" and SolrIndexer steps take a lot of time, and as I run the above command repeatedly the time increases. My requirement is to index pages as fast as possible, because links disappear pretty quickly (within 2 minutes). What should I do to bring this time down to a very small figure?
If I only wanted to index the URL and title of each page, would that improve indexing speed at all?
Thanks
If you have a static seedlist, then you can delete the "crawl" folder each time you want to run Nutch! It will save you a lot of time.
Every time you run Nutch your segments grow, so LinkDb takes more and more time.
You could also move this part of the job to a separate thread, but then you have to handle the segmenting yourself!
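With a static seedlist, the whole cycle is then just (same layout as the command in your question):

rm -rf crawl
bin/nutch crawl bin/urls -solr http://localhost:8983/solr/ -dir crawl -depth 2 -topN 15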
I'm trying to debug a laggy machine vision camera by writing text timestamps to a terminal window and then observing how long it takes for the camera to 'detect' the screen change. My monitor has a 60Hz refresh rate, so the screen updates every ~17ms. Is there a way for an X11 application to determine at what point within that 17ms window the refresh cycle currently is?
EDIT: After wrestling with the problem for nearly a day, I think the real question I should have asked was how to generate a visual signal that was sufficiently fast to test the camera images. My working hypothesis was that the camera was buffering frames before transmitting them, as the video stream seemed to lag behind other synchronised digital events (in this case, output signals to a robotic controller).
'xrefresh' is a tool which can trigger a refresh event on an X server. It does this by painting a global window of a specified color and then removing it, causing all subsequent windows to repaint. Even with this, I was still getting very inconsistent results when trying to correlate the captured frames against the monitor output; no matter what I tried, the video stream seemed to lag behind what I expected the monitor state to be. This could mean that either the camera was slow to capture or the monitor was slow to update. Fortunately, I eventually hit upon the idea of using the keyboard LEDs to verify the synchronicity of the camera frames ('xset led' and 'xset -led'). This showed me immediately that my computer monitor was in fact slow to update, rather than the camera lagging behind.
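For anyone trying the same trick: a simple blink loop is enough to put a known signal in front of the camera (which physical LED the number maps to is keyboard-specific):

while true; do xset led 3; sleep 0.2; xset -led 3; sleep 0.2; done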
I'm developing an app (an XNA game) for the Xbox, which is a pretty simple app. The start page contains tiles with moving GIF images. Those GIF images are actually all PNG images, which get loaded once by every tile and put in an array. Then, using a defined delay, these images are played (a counter increases every time a delay passes).
This all works well; however, I noticed a small lag every x seconds in the movement of the GIF images. I then started to add some benchmarking stuff:
http://gyazo.com/f5fe0da3ff81bd45c0c52d963feb91d8
As you can see, the FPS is pretty low for such a simple program (this is in debug; when running the app on the Xbox itself, I get an average of 62 FPS).
2 important settings:
Graphics.SynchronizeWithVerticalRetrace = false;
IsFixedTimeStep = false;
Changing IsFixedTimeStep to true increases the lag. The settings tile has wheels which rotate, and you can see the wheels jump back a little every x seconds. The same goes for SynchronizeWithVerticalRetrace: enabling it also increases the lag.
I noticed a connection between the lag and the moment the garbage collector kicks in: every time it kicks in, there is a lag...
Don't mind the MAX HMU (heap memory usage), as it reflects the value at startup; the average is more realistic.
Here is another screenshot, from the performance monitor. I don't understand much of this tool (it's my first time using it), but I hope it helps:
http://gyazo.com/f70a3d400657ac61e6e9f2caaaf17587
After a little research I found the culprit.
I have custom components that all derive from GameComponent and that get added to the Components list of the main Game class.
This was one of two major problems: it caused everything to update, even things that didn't need an update. (The Draw method was the only one that kept the page state in mind, and only drew if needed.)
I fixed this by using different "screens" (or pages, as I called them), which are now the only components that derive from GameComponent.
Then I only update the page which is active, and the custom components on that page get updated too. Problem fixed.
The second big problem is the following:
I made a class which helps me position stuff on the screen, relatively that is, with percentages and things like that: parent containers, aligns & v-aligns, etc.
That class had properties for sizes and vectors, but instead of saving the calculated value in a backing field, I recalculated it every time a property was accessed. Calculating complex stuff like that follows references (to parent and child containers, for example), which made it very hard on the CLR, because it had a lot of work to do.
I have now rebuilt the whole positioning class into a fully optimized one, with flags so values are only recalculated when necessary, and instead of drops to 20 FPS, I now get an average of 170+ FPS!
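The core of the rewrite is the classic dirty-flag pattern (a simplified sketch; the names are illustrative):

private Vector2 _size;
private bool _sizeDirty = true;

public Vector2 Size
{
    get
    {
        if (_sizeDirty)
        {
            _size = CalculateSize(); // the expensive walk over parent/child containers
            _sizeDirty = false;
        }
        return _size;
    }
}

// Called whenever the parent, alignment or percentage values change.
public void Invalidate()
{
    _sizeDirty = true;
}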