Depth map multithreading

I am trying to make my depth map video faster, and that led me to multithreading.
I am having difficulty understanding the topic, as I haven't managed to find an example that is clear enough for me.
Imagine simple code like this:
import cv2
cap_left, cap_right = cv2.VideoCapture(0), cv2.VideoCapture(1)
_, img1 = cap_left.read()  # grab one frame from each camera
_, img2 = cap_right.read()
stereo = cv2.StereoSGBM_create(numDisparities=64, blockSize=5)
disparity = stereo.compute(img1, img2)
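For illustration, here is a minimal sketch of the kind of threaded capture I have in mind: each camera is read on its own background thread so that grabbing frames does not block the disparity computation. The CameraStream class, the SGBM parameters, and the display code below are only placeholders, not tested code from my project (a real version would probably also want a lock around the shared frame):

import threading
import cv2

class CameraStream:
    """Keeps grabbing frames from one camera on a background thread."""
    def __init__(self, index):
        self.cap = cv2.VideoCapture(index)
        self.frame = None
        self.running = True
        threading.Thread(target=self._update, daemon=True).start()

    def _update(self):
        while self.running:
            ok, frame = self.cap.read()
            if ok:
                self.frame = frame  # always keep only the latest frame

left = CameraStream(0)
right = CameraStream(1)
stereo = cv2.StereoSGBM_create(numDisparities=64, blockSize=5)

while True:
    if left.frame is None or right.frame is None:
        continue  # wait until both cameras have delivered a frame
    gray_l = cv2.cvtColor(left.frame, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right.frame, cv2.COLOR_BGR2GRAY)
    disparity = stereo.compute(gray_l, gray_r)
    shown = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
    cv2.imshow("disparity", shown)
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break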

Related

Why does coord.should_stop() always return True?

I am currently trying to make EV-FlowNet work on my computer. EV-FlowNet is an open-source neural network for event-based optical flow estimation (full code here: https://github.com/daniilidis-group/EV-FlowNet). It is based on TensorFlow, but unfortunately I have no experience with this library, so I have a hard time figuring out why things are not working. I have downloaded the trained network, the input data and the ground truth, and have placed them in the folders listed in the README file. I am trying to run 'test.py' and it runs without errors; however, it never enters the main loop in which the results are visualized.
The condition for the main loop is this:
while not coord.should_stop():
coord is defined like this:
coord = tf.train.Coordinator()
and the threads are defined like this:
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
I have tried googling it, but all I could find was that the threads stop if any of them calls coord.request_stop(). Since I can't find anything in the code that would make them stop, I don't understand why coord.should_stop() is true from the very beginning. I know this question is quite vague, but since I have no experience with TensorFlow I am not sure what other information might be required. This is why I have included the link to the entire code. Thanks in advance!
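For reference, my understanding of the usual TF 1.x coordinator / queue-runner pattern is sketched below. This is a standalone toy example, not EV-FlowNet's code; the tiny input_producer pipeline is only there so the loop has something to run:

import tensorflow as tf  # assumes TF 1.x (or tf.compat.v1 behaviour)

# toy input pipeline: a queue that yields the numbers 0..4 exactly once
queue = tf.train.input_producer(tf.range(5), num_epochs=1, shuffle=False)
value = queue.dequeue()

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # num_epochs is tracked in a local variable
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        while not coord.should_stop():
            print(sess.run(value))
    except tf.errors.OutOfRangeError:
        print("input queue exhausted")          # a queue runner hit the epoch limit
    finally:
        coord.request_stop()                    # this is what flips should_stop() to True
        coord.join(threads)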

Smooth look_at() in Godot

I am working in GDScript, and I'm trying to get the player to aim their weapon in the direction of the mouse cursor. The look_at() function is great, but it doesn't give me the smooth movement I'm looking for.
So instead I've been experimenting with the lerp() math function to make it smooth... but it produces very jittery, buggy movement, which is obviously not ideal.
func _process(delta):
    rotation_degrees = lerp(rotation_degrees, rad2deg(get_angle_to(get_global_mouse_position())), aim_speed)
I can't work out if I'm doing something really stupid as I'm fairly new to Godot. Any help would be massively appreciated!
Please use _physics_process() instead of _process().
_process() runs once per rendered frame, so how often it runs varies with the frame rate. That variation is what makes the movement jittery; it is not ideal for player movement, where you need consistent timing between frames.
_physics_process() runs at a fixed rate, independent of the rendered frame rate, which makes it better suited to player movement.
If you want more details, I recommend checking this video by Godot Tutorials.
Btw look_at() should work just fine :)

pixi.js how to do batch rendering

I'm looking for help with batch rendering. I understand it batches together objects such as sprites, graphics, etc. for performance purposes. I have a very large number of objects to display, but the only help I could find was this, which I find hard to follow. Are there any simple tutorials or guidance out there for this? Ideally, I want a very simple example that batches together 2 sprites and renders them in one draw call. Many thanks.
I spoke to the creator of pixi-viewport and he said that pixi has batching built in.
Performance improvements were instead gained by:
- using pixi-cull (a good perf boost if you are also using pixi-viewport)
- switching from Graphics to Sprites
- minimising the use of sprite.interactive = true
- using text.renderable = false when text is zoomed out (again, zooming relates to pixi-viewport)

APCS final project: Converting an audio file to a simpler MIDI file

Let's say I have the audio file for Happy Birthday. I want to convert that audio file into an audio file that sounds like this: happy birthday.
First, I'd like to know whether I have the ability to program this. Can a high schooler who's almost finished with APCS program this?
If I can:
How would I change the BPM of the song? I've searched through a bunch of websites, but they weren't very helpful.
I know that audio files can be represented as waveforms. How would I scan for each individual wave in an audio file (I need this to isolate the notes)?
This is a very ambitious project, actually. One reason is that it involves using digital signal processing tools like the FFT (fast Fourier transform) to analyze the sound and pick out the pitches. You might be able to find a library that can do this, but coding such a tool yourself would involve a steep learning curve.
If you would like to look further into this, there is a good online resource called "The Scientist and Engineer's Guide to Digital Signal Processing". I was able to work through and understand the discrete Fourier transform with only high school math (lots of trig) and a bit of calculus. It was a lift, though.
Trying to analyze rhythm is also no easy task. Even with the advanced tools provided in professional notation systems such as Finale, people have trouble playing rhythms in time well enough for the best transcription tools. Algorithms that "quantize" the beats help, but they also limit the amount of detail that can be included in the playback.
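To give a concrete toy example of what quantizing means (shown in Python for brevity; the onset times below are made up, not output from a real transcription tool), you snap each detected note-onset time to the nearest position on a rhythmic grid:

bpm = 120
sixteenth = 60.0 / bpm / 4                      # length of a sixteenth note in seconds

onsets = [0.02, 0.49, 0.77, 1.13]               # made-up onset times from some analysis step
quantized = [round(t / sixteenth) * sixteenth for t in onsets]
print(quantized)                                # [0.0, 0.5, 0.75, 1.125]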
My guess is that, as interesting and worthwhile as this project would be, bringing it to completion before the semester ends would require putting together prebuilt pieces. A lot of programming is done that way these days.
If you scale the project back to something like just getting your code to analyze a short sample of a single note and report its pitch, that would be both impressive and doable with a lot of work. It could be done with a DFT algorithm instead of requiring an FFT, reducing the amount of background you'd have to acquire first. That way, you'd only have to work your way up to understanding and implementing the material at this link, which is about calculating the DFT. Notice that there is example code in BASIC. The code examples throughout this book are a big help.
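As a rough sketch of that scaled-back project (in Python rather than Java, with a synthesized 440 Hz tone standing in for a recorded note; every parameter here is just an illustrative choice), a brute-force DFT pitch detector can look like this:

import math

sample_rate = 8000
n = 1024
# synthesize one note (A4, 440 Hz) instead of loading a real audio file
signal = [math.sin(2 * math.pi * 440 * t / sample_rate) for t in range(n)]

best_bin, best_mag = 0, 0.0
for k in range(1, n // 2):                       # test every frequency bin up to Nyquist
    re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
    im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
    mag = math.hypot(re, im)
    if mag > best_mag:
        best_bin, best_mag = k, mag

# prints roughly 437.5 Hz; the error comes from the bin spacing of sample_rate / n
print("strongest frequency:", best_bin * sample_rate / n, "Hz")

A real version would read its samples from an audio file and apply a window before the transform, but the peak-picking idea stays the same.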

Creating a drawn audio reactive visual

I'm looking to create this project in Processing; however, I'm finding the terminology a bit difficult. I'm not sure what to call the effect where the line stays on screen for the whole song, 'drawing' the music data as it plays.
I would appreciate any guidance on which tutorials I could look at, or an answer from someone.
My aim is to create something as close to this as possible:
https://www.youtube.com/watch?v=Bb5PTitqtlc&t=58s
Stack Overflow isn't really designed for general "how do I do this" type questions. It's for specific "I tried X, expected Y, but got Z instead" type questions. But I'll try to help in a general sense:
You need to break your problem down into smaller pieces and then take those pieces on one at a time. Write down exactly what you want to happen, in English, and that will be an algorithm that you can think about implementing with code.
Get something simple working. Can you write a simple sketch that plays a song? Then work your way forward in small steps. Can you write a simple sketch that prints out some numeric values based on the song that's playing? Separately from that, can you create a very simple visualization using hard-coded numbers? Get all of that working separately before you think about combining them into a sketch that shows a visualization based on a song that's playing.
Then if you get stuck, you can post a more specific question along with an MCVE. Good luck.
