I am developing a framework for image processing using the YOLOv7 deep learning network. A real-time machine produces a live stream of 20,000 images per second, and I want to process these 20,000 frames in minimal time on the current GPU architecture. Can I increase the processing FPS using any open-source tools or techniques, such as multithreading or Kafka?
Any suggestions or ideas would be appreciated.
Note: I am using NGINX with a Docker/Kubernetes setup.
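Since the question mentions multithreading, one common pattern is a producer/consumer pipeline: buffer incoming frames in a bounded queue and run inference on the GPU in batches. The sketch below is only a minimal illustration of that idea under assumptions of mine; `infer_batch` is a placeholder for whatever YOLOv7 forward pass is actually used, and the frame source is simulated.

```python
# Minimal producer/consumer sketch: buffer incoming frames and run
# inference in batches. `infer_batch` is a placeholder for the real
# YOLOv7 batched forward pass; the frame source is simulated here.
import queue
import threading
import time

FRAME_QUEUE = queue.Queue(maxsize=4096)   # bounded buffer between capture and GPU
BATCH_SIZE = 64                           # tune to fit GPU memory / latency budget


def capture_frames(source):
    """Producer: push frames from the live stream into the queue."""
    for frame in source:
        FRAME_QUEUE.put(frame)            # blocks if the GPU worker cannot keep up


def infer_batch(frames):
    """Placeholder for the real YOLOv7 batched forward pass."""
    time.sleep(0.005)                     # simulate GPU work
    return [None] * len(frames)


def inference_worker():
    """Consumer: drain the queue in batches and run one forward pass per batch."""
    while True:
        batch = [FRAME_QUEUE.get()]
        while len(batch) < BATCH_SIZE:
            try:
                batch.append(FRAME_QUEUE.get_nowait())
            except queue.Empty:
                break
        detections = infer_batch(batch)
        # hand detections off to downstream consumers (Kafka topic, DB, etc.)


if __name__ == "__main__":
    fake_source = (f"frame-{i}" for i in range(10_000))
    threading.Thread(target=inference_worker, daemon=True).start()
    capture_frames(fake_source)
    while not FRAME_QUEUE.empty():        # crude wait for the worker to drain the buffer
        time.sleep(0.1)
```

Batching is usually where most of the FPS gain comes from, since one forward pass over 64 frames is far cheaper than 64 single-frame passes; Kafka can sit in front of the queue if the frame producers and the GPU workers live in different pods.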
I am using JMeter to generate load against Azure Event Hub for performance testing. I want a constant load on Event Hub (at message ingestion time). I tried the following options:
Constant Throughput Timer
Number of active threads (users): 100, with a ramp-up time of 20 seconds.
I am not getting a constant load in Event Hub; there are too many spikes. Please suggest a way to achieve a constant load on Event Hub via JMeter.
Regards,
Amit
JMeter is capable of producing a constant load pattern; just make sure to follow JMeter Best Practices and the recommendations from the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article. The essential points are:
Run JMeter in non-GUI mode
Ensure that JMeter has enough headroom to operate in terms of CPU, RAM, network, and disk I/O; this can be monitored using the JMeter PerfMon Plugin
It might also be the case that your application and/or middleware configuration is not suited to a high constant load. See, for example, Concurrent, High Throughput Performance Testing with JMeter, where the author initially saw a very spiky load pattern and, after tuning the application and JMeter, ended up with an essentially constant one.
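For reference, a Constant Throughput Timer works by pacing threads so that the overall sample rate matches a target expressed in samples per minute. The Python sketch below is not JMeter itself, just a minimal illustration of that pacing idea; the target rate and the `send_event` stub are assumptions for the example.

```python
# Minimal illustration of constant-throughput pacing (not JMeter itself):
# each iteration sleeps just long enough that requests go out at a fixed
# rate, regardless of how long the request itself took. `send_event` is
# a stand-in for whatever actually posts a message to Event Hub.
import time

TARGET_RPS = 50          # desired constant rate, requests per second


def send_event(payload):
    """Stand-in for the real Event Hub send; simulated fixed latency."""
    time.sleep(0.01)


def constant_rate_loop(duration_s=10):
    interval = 1.0 / TARGET_RPS
    next_slot = time.monotonic()
    deadline = next_slot + duration_s
    sent = 0
    while time.monotonic() < deadline:
        send_event({"seq": sent})
        sent += 1
        next_slot += interval
        sleep_for = next_slot - time.monotonic()
        if sleep_for > 0:                 # only sleep if we are ahead of schedule
            time.sleep(sleep_for)
    print(f"sent {sent} events in {duration_s}s (~{sent / duration_s:.1f}/s)")


if __name__ == "__main__":
    constant_rate_loop()
```

In JMeter the equivalent is setting the Constant Throughput Timer's target (in samples per minute) and providing enough threads to sustain it; spikes often mean the threads cannot keep up with the target or the machine running JMeter is saturated.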
I am running a small Node.js server on a Heroku Hobby-tier dyno, which allocates 512 MB of RAM. The service takes in a moderate amount of JSON data (~5,000 JSON objects of ~5 KB each, so roughly 25 MB), runs some analysis on it, and outputs about 20 metrics. It's not a trivial amount of input data, but it's also definitely not gigabytes of files. I'm confused about where I'm running into the memory limits here. I run one continuous request for all 20 metrics; does garbage collection not happen until the end of the request? I'm also creating a lot of Date objects, probably around two per JSON object, so about 10,000 total; I do this upfront and reuse them for the whole process so I'm not constantly re-creating these dates. Could this be the issue I'm running into? Any suggestions for how to optimize there?
By the way, I know Node.js isn't the best tool for data processing in general, and we're already looking to move to a Python-based server for the libraries and multithreaded environment. But until that's up and running, I'd love to understand and improve the situation I'm currently running into on Node.js.
Thanks!
I have a video-on-demand platform hosted on a server with 1 GB bandwidth / 11 GB RAM / 4 CPU cores.
I'm now reaching 10,000 users/day and the streams are starting to be slow.
I'm using Node.js on the server side with the send module for streaming.
I want to provide the best streaming experience for my users, so what do I need to do to make the streams faster?
I have been using Azure Media Services to upload files, encode them to multi-bitrate MP4, and then expose them as Smooth Streaming units using locators. My problem is that the encoding process takes a long time, say 25–30 minutes for a 30 MB file. The MP4 files that I will actually use will be much bigger, and I suppose those would take hours.
Is there a way to speed up this process, using other encoders or other means?
What speed of Encoding units are you using? Are you using just the free shared pool with no reserved units?
Take a look at the different sizes available (S1, S2, S3) here: https://learn.microsoft.com/en-us/azure/media-services/media-services-scale-media-processing-overview#choosing-between-different-reserved-unit-types
S3 size units are the fastest.
Hope someone can help out here. I have been trying out IIS Smooth Streaming for many weeks, but without much success.
On-demand Smooth Streaming
No problems with streaming on-demand clips on LAN
No problems with streaming on-demand clips across the web
Live streaming using file source
No problems with live streaming a file on LAN
Cannot live stream a file across the web:
Initiated a publishing point using AWS-EC2
Connected Encoder Pro to publishing point
Publishing point never gets past "Starting"
Live streaming using live webcam
Slight problem with live streaming from my webcam on LAN:
10 seconds of lag
After about 20 seconds, the Silverlight client hangs and stops requesting chunks
HTTP 412 - Precondition Failed
Only way to rectify is to refresh the browser
Cannot live stream from the webcam across the web:
Initiated a publishing point using AWS EC2
Connected Encoder Pro to the publishing point
Publishing point never gets past "Starting"
Things I have tried to rectify network problems
Connecting my laptop directly to the gateway, rather than through a router
Shutting down the Windows firewall on the laptop
Initiating an AWS EC2 instance with no firewall
Wireshark indicates HTTP 404 and HTTP 501 errors when "connecting" to the publishing point from the encoder
My LAN specs
Running the Encoder and IIS Streaming Server on a Boot Camp MacBook Pro: i7, 2.2 GHz, 4 GB RAM
Running the Silverlight client on an i5, 2.53 GHz, 4 GB RAM
Output stream: default configuration for H.264 IIS Smooth Streaming Low Bandwidth
To test streaming you really need to use separate PCs as the streamer/encoder, transport server, and client, or at least start off that way. You are asking a bit too much of that MacBook Pro, especially when it comes to I/O.