How do I create an animated 3D day/night cycle in Godot?

I am trying to create a 3D game in Godot and I have been wondering how I would go about creating a day/night cycle. I would assume I would use shaders, but I have found no resources on the subject. My idea is to have a directional light as the sun, but from there I do not know where to go. For example, how would I know when the sky should be a certain color?
Thank you in advance.

This is a very broad topic and therefore, as a whole, not very suitable for a Stack Overflow question. I'll focus on the last part of the question ("how would I know when the sky should be a certain color"), because that is a well-focused question. You will have to figure out the rest on your own (a directional light is certainly an option).
Godot does not keep the time for you, but you get the length of the current frame in the delta parameter of the _process function. You can sum up the value of delta over every frame, which gives you the number of real seconds since the game started.
Usually time passes faster in game than in the real world. You need to decide on a factor that specifies how much faster. If you multiply the summed-up delta time by this factor, you get the number of in-game seconds since the game started. Take that modulo the number of in-game seconds in an in-game day and you have the time of day.
How to map the time of day to sky colours is up to you again.
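Below is a minimal sketch of that bookkeeping, assuming made-up values for the time scale and the length of an in-game day. In Godot this logic would live in a GDScript _process(delta) callback; plain Python is used here only to illustrate the arithmetic, and sky_blend_factor is just one possible way to turn the time of day into a blend weight between a night colour and a day colour.

import math

SECONDS_PER_DAY = 24 * 60 * 60   # length of an in-game day, in in-game seconds
TIME_SCALE = 60.0                # assumption: in-game time runs 60x faster than real time

elapsed_real = 0.0               # sum of delta over all frames so far

def advance(delta):
    # Called once per frame with the frame length in real seconds.
    global elapsed_real
    elapsed_real += delta
    ingame_seconds = elapsed_real * TIME_SCALE
    return ingame_seconds % SECONDS_PER_DAY   # time of day, 0 .. SECONDS_PER_DAY

def sky_blend_factor(time_of_day):
    # 0 at midnight, 1 at noon, smooth in between; use it to interpolate
    # between a night sky colour and a day sky colour.
    return 0.5 - 0.5 * math.cos(2 * math.pi * time_of_day / SECONDS_PER_DAY)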

Related

Possible to find velocity of person in video or camera using openpose

The question is: I want to calculate the speed of my arm for slap detection. I am using OpenPose with the body_25 model to get the body keypoints (25 points in total), and from these, together with the time, I want to deduce the speed of my arm. I have googled through OpenPose, Stack Overflow and GitHub, but could not succeed.
Velocity = Distance / Time = dx/dt
dx = frame3_bodypoints - frame_1_bodypoints;
dt = ?
I don't know how to find this from the openpose, is there a way I can find this? Any thoughts, would be great help!
I've never used OpenPose. But Newtonian physics would indicate that a slap corresponds to a sudden change in velocity of the hand.
I think it's a reasonable first approximation to assume that the Δt between frames is constant. Instantaneous variation in frame rate is called jitter. I would expect jitter to be small for modern recording devices. In any case, I don't know how to get instantaneous frame rate with the tools (OpenCV, PIL) that I am familiar with. I couldn't find any references to frame rate or time in the OpenPose docs.
For calculating velocity and delta-velocity, you have choices. Straight-up linear velocity of the hand might be the easiest. For position changes, use the Euclidean distance between positions (Δs = sqrt((x2-x1)^2 + (y2-y1)^2)).
You could also calculate an angular velocity between the hand and the elbow, but that would be a little more involved and prone to noise.
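A minimal sketch of that calculation, assuming a constant frame rate so that dt = 1/fps. The coordinates are plain (x, y) tuples taken from whatever structure your OpenPose bindings give you for the wrist keypoint; the function name and example numbers are made up for illustration.

import math

def keypoint_speed(p_prev, p_curr, fps):
    # Speed of one keypoint (e.g. the wrist) between two consecutive frames,
    # in pixels per second. Convert to metres if you have a scale calibration.
    dt = 1.0 / fps                                                   # constant-dt assumption
    ds = math.hypot(p_curr[0] - p_prev[0], p_curr[1] - p_prev[1])    # Euclidean distance
    return ds / dt

# Example: the wrist moved from (120, 340) to (150, 314) between two frames of a 30 fps video
print(keypoint_speed((120, 340), (150, 314), fps=30))   # ≈ 1191 px/s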

Object tracking for velocity measurement

I am looking for real-time image processing to measure the velocity and output it to another control system.
I have attached an image of a yellow stripe. It has markings on the surface that I would like to automatically detect and use for the calculation. In the first step the material moves only in one direction, here for example to the right. Only the horizontal part of the movement is of interest to me, so essentially only the velocity along the x-axis. The material moves relatively fast: at maximum speed the current mark (spike) reaches the position of the one in front of it within 28 ms.
The idea is to use a Raspberry Pi 4 with a camera at its maximum of 120 fps. So a picture is generated every 8.3 ms, which should make it possible to clearly detect the movement of the marks.
My questions are:
Is it possible to process the images and do the detection that fast to get the velocity in nearly real time? And which algorithm should I use for this configuration? It would be best if I could use two or three markers per image and average their velocities (see the sketch after these questions).
And I would like to use the velocity as an input signal for another system. What is the easiest and fastest way to send the information directly to another control system?
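To make the numbers concrete, here is a rough sketch of the velocity arithmetic described above, assuming the mark detection already gives you the x-position of each mark in every frame. The 120 fps matches the camera above; the mm-per-pixel scale factor and the example positions are placeholders you would get from a one-off calibration and from the real detector.

def x_velocity(x_prev_px, x_curr_px, fps=120.0, mm_per_px=0.2):
    # Horizontal velocity of one mark between two consecutive frames.
    # fps = 120 -> dt ≈ 8.3 ms; mm_per_px is a placeholder calibration value.
    dt = 1.0 / fps
    return (x_curr_px - x_prev_px) * mm_per_px / dt   # mm/s

# Average over two or three marks per image, as suggested in the question:
marks_prev = [102.0, 415.0, 731.0]   # x-positions (px) in the previous frame
marks_curr = [131.0, 444.0, 760.0]   # x-positions (px) in the current frame
velocities = [x_velocity(a, b) for a, b in zip(marks_prev, marks_curr)]
print(sum(velocities) / len(velocities))   # mean velocity in mm/s (696 mm/s here)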

Tuning Parameters to Optimize Score without CNN

I am trying to create an Agent in Rust that uses a scoring function to determine the best move on a 2D uniform cost grid. The specifics of the game aren't very relevant, other than knowing that each turn you can choose to make one of 4 moves (up, down, left or right) and you are competing against other AIs who are playing on the same board. Currently the AI makes "branches" of possible paths it could make into the future using several different simple algorithms such as using A* to find enemies or food. Several characteristics are saved as the future simulations run including the number of enemies we killed on that branch, amount of food we ate and how long the future branch lasted before we died.
Once we are ready to make our move, we give each future-predicting branch a score and go in the direction with the highest average score. This score is essentially a sum of each characteristic mentioned previously multiplied by a constant. For example, the score may be 30 * number of food eaten + 100 * number of enemies killed. However, the numbers 30 and 100 were chosen almost at random through experimentation; if the snake died from not eating food, for example, I increase the score multiplier for eating food. There are 10 different characteristics, each with its own weight, and figuring out the relationship between them all manually is both time consuming and doesn't easily converge onto the optimal strategy.
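In plain code, the scoring step described above is just a dot product between a weight vector and the characteristics recorded for each branch. The question's agent is written in Rust; the sketch below uses Python like the other snippets here, and the feature names besides the food/kill example are made up.

WEIGHTS = {
    "food_eaten":     30.0,    # from the "30 * food eaten" example above
    "enemies_killed": 100.0,   # from the "100 * enemies killed" example above
    "turns_survived": 1.0,     # placeholder for the other characteristics
}

def branch_score(features):
    # features: dict of characteristic -> value recorded for one simulated branch
    return sum(WEIGHTS[name] * value for name, value in features.items())

def best_move(branches_by_move):
    # branches_by_move: dict move -> list of feature dicts for the branches
    # explored in that direction; pick the move with the highest average score.
    def average(move):
        scores = [branch_score(f) for f in branches_by_move[move]]
        return sum(scores) / len(scores)
    return max(branches_by_move, key=average)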
Herein lies my issue. I would like to find a way to "train" the values for the AI through a process somewhat like Q-Learning. There is a very clear terminal condition when you win or lose, which helps. My current idea is creating a table with 100 possible values of each parameter, then playing 100 games with each combination and recording the win rate. However, this would take (1000 choose 10) * 100 games, or 2.6E25 games. It seems like there should be a smarter way to eliminate bad combinations using some form of loss minimization. If anybody has suggestions on tuning these parameters without a neural network, it would be greatly appreciated.

Visualising Audio - How to do it

I want to know how this: https://youtu.be/aFXcQvdAe08 was done.
Any ideas how this was created?
In really high level terms, I think it could happen like this:
The entire song is divided up into windows of some fraction of a second.
For each window, a Fourier transform (or some equivalent transform) converts the audio signal from the time domain to the frequency domain.
The transformed signal is mirrored radially around the origin, where the "fatness" of each circle could correspond to "how tall the respective hump is" in the frequency graph.
So, to make the animation "smooth", one would probably want at least 10 windows a second. Hope that helps.
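A rough sketch of steps 1 and 2 (windowing plus a Fourier transform per window) using NumPy; the window rate of 10 per second follows the suggestion above, the sample rate and test tone are placeholder values, and the radial drawing from step 3 is left to whatever graphics tool you use.

import numpy as np

def spectra(samples, sample_rate=44100, windows_per_second=10):
    # Split the signal into short windows and FFT each one. Each row of the
    # result is the magnitude spectrum ("frequency graph") for one window,
    # whose humps would then be mirrored radially around the origin.
    window_len = sample_rate // windows_per_second          # e.g. 4410 samples per window
    n_windows = len(samples) // window_len
    frames = samples[:n_windows * window_len].reshape(n_windows, window_len)
    return np.abs(np.fft.rfft(frames * np.hanning(window_len), axis=1))

# Example on a synthetic one-second 440 Hz tone:
t = np.linspace(0, 1, 44100, endpoint=False)
mags = spectra(np.sin(2 * np.pi * 440 * t))
print(mags.shape)   # (10, 2206): 10 windows, one spectrum per window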

Obstacle avoidance using 2 fixed cameras on a robot

I will start working on a robotics project involving a mobile robot with 2 cameras (1.3 MP) mounted 0.5 m apart. I also have a few ultrasonic sensors, but they have only a 10 metre range and my environment is rather large (as an example, take a large warehouse with many pillars, boxes, walls, etc.). My main task is to identify obstacles and also find a roughly "best" route that the robot must take in order to navigate in a "rough" environment (the ground floor is not smooth at all). All the image processing is done not on the robot but on a computer with an NVIDIA GT425 and 2 GB of RAM.
My questions are :
Should I mount the cameras on a rotating support, so that they can take pictures over a wider angle?
Is it possible to create a reasonable 3D reconstruction based on only 2 views at such a small distance apart? If so, to what degree can I use this for obstacle avoidance and best-route construction?
If a roughly accurate 3D representation of the environment can be made, how can it be used to create a map of the environment? (Consider the following example: the robot must sweep a fairly large area, and it would be energy efficient if it did not go through the same place (or course) twice; however, when a 3D reconstruction is made from one direction, how can it tell that it has already been there if it comes from the opposite direction?)
I have found this response on a similar question, but I am still concerned about the accuracy of the 3D reconstruction (for example, a couple of boxes situated at 100 m, considering the low resolution and the small distance between the cameras).
I am just starting to gather information for this project, so if you have worked on something similar please give me some guidelines (and some links :D) on how I should approach this specific task.
Thanks in advance,
Tamash
If you want to do obstacle avoidance, it is probably easiest to use the ultrasonic sensors. If the robot is moving at speeds suitable for a human environment then their range of 10m gives you ample time to stop the robot. Keep in mind that no system will guarantee that you don't accidentally hit something.
(2) Is it possible to create a reasonable 3D reconstruction based on only 2 views at such a small distance apart? If so, to what degree can I use this for obstacle avoidance and best-route construction?
Yes, this is possible. Have a look at ROS and their vSLAM. http://www.ros.org/wiki/vslam and http://www.ros.org/wiki/slam_gmapping would be two of many possible resources.
However, when a 3D reconstruction is made from one direction, how can it tell that it has already been there if it comes from the opposite direction?
Well, you are trying to find your position given a measurement and a map. That should be possible, and it wouldn't matter from which direction the map was created. However, there is the loop closure problem. Because you are creating a 3D map at the same time as you are trying to find your way around, you don't know whether you are at a new place or at a place you have seen before.
CONCLUSION
This is a difficult task!
Actually, it's more than one. First you have simple obstacle avoidance (i.e. don't drive into things). Then you want to do simultaneous localisation and mapping (SLAM, read Wikipedia on that), and finally you want to do path planning (i.e. sweeping the floor without covering the same area twice).
I hope that helps?
I'd say no if you mean each eye rotating independently. You won't get the accuracy you need for stereo correspondence, and it would make calibration a nightmare. But if you want the whole "head" of the robot to pivot, then that may be doable. But you should have some good encoders on the joints.
If you use ROS, there are some tools which help you turn the two stereo images into a 3D point cloud. http://www.ros.org/wiki/stereo_image_proc. There is a tradeoff between your baseline (the distance between the cameras) and your resolution at different ranges: a large baseline gives greater resolution at large distances, but it also has a large minimum distance. I don't think I would expect more than a few centimetres of accuracy from a static stereo rig, and this accuracy only gets worse when you compound it with the robot's location uncertainty.
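A back-of-the-envelope sketch of that tradeoff, using the standard stereo relation Z = f*B/d (depth from focal length, baseline and disparity) and its error propagation dZ ≈ Z²/(f*B)·Δd. The focal length in pixels and the disparity matching error are assumed values roughly plausible for a 1.3 MP camera, not measurements of the rig in the question.

def depth_error(distance_m, baseline_m=0.5, focal_px=1200.0, disparity_err_px=0.5):
    # Approximate depth uncertainty of a stereo rig: dZ ≈ Z^2 / (f * B) * dd.
    # focal_px and disparity_err_px are assumptions, not measured values.
    return distance_m ** 2 / (focal_px * baseline_m) * disparity_err_px

for z in (2, 10, 50, 100):
    print(f"{z:>4} m -> about ±{depth_error(z):.2f} m")
# With these assumptions the error at 100 m is on the order of 8 m, which is
# why boxes at 100 m are hard to reconstruct with a 0.5 m baseline.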
2.5. For mapping and obstacle avoidance, the first thing I would try to do is segment out the ground plane. The ground plane goes to mapping, and everything above it is an obstacle. Check out PCL for some point-cloud processing functions: http://pointclouds.org/
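PCL's plane segmentation is C++; to stay consistent with the other snippets, here is a very small NumPy sketch of the same idea, a RANSAC plane fit that labels inliers of the dominant plane as ground and everything else as obstacles. The thresholds, iteration count and toy point cloud are made-up values.

import numpy as np

def segment_ground(points, n_iters=200, dist_thresh=0.05, seed=0):
    # points: (N, 3) array of x, y, z coordinates in the robot/camera frame.
    # Returns boolean masks (ground, obstacles) from a minimal RANSAC plane fit.
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers, ~best_inliers

# Toy cloud: a flat floor plus a box standing on it
floor = np.column_stack([np.random.rand(500) * 5, np.random.rand(500) * 5, np.zeros(500)])
box = np.column_stack([1 + np.random.rand(100), 1 + np.random.rand(100), 0.3 + np.random.rand(100)])
ground, obstacles = segment_ground(np.vstack([floor, box]))
print(ground.sum(), obstacles.sum())          # roughly 500 floor points vs 100 box points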
If you can't simply put a planar laser on the robot, like a SICK or Hokuyo, then I might try to convert the 3D point cloud into a pseudo-laser-scan and then use some off-the-shelf SLAM instead of trying to do visual SLAM. I think you'll have better results.
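A rough sketch of that conversion: keep obstacle points within a height band, bin them by bearing around the robot, and keep the closest range per bin, which gives you something shaped like a planar laser scan. The bin count and height band are placeholder parameters, not values taken from any particular ROS package.

import numpy as np

def pseudo_laser_scan(points, n_bins=360, z_min=0.05, z_max=1.5):
    # points: (N, 3) obstacle points in the robot frame (x forward, y left, z up).
    # Returns one range per bearing bin (inf where nothing was seen).
    mask = (points[:, 2] > z_min) & (points[:, 2] < z_max)   # keep a height band
    xy = points[mask, :2]
    angles = np.arctan2(xy[:, 1], xy[:, 0])                  # bearing, -pi .. pi
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    dists = np.hypot(xy[:, 0], xy[:, 1])
    ranges = np.full(n_bins, np.inf)
    for b, r in zip(bins, dists):
        if r < ranges[b]:
            ranges[b] = r                                    # nearest obstacle per bearing
    return ranges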
Other thoughts:
Now that the Microsoft Kinect has been released, it is usually easier (and cheaper) to simply use that to get a 3D point cloud instead of doing actual stereo.
This project sounds a lot like the DARPA LAGR program (Learning Applied to Ground Robots). That program is over, but you may be able to track down papers published from it.
