This is an odd one that I can't work out.
I want to plot the swing lows of a higher timeframe chart, say 1h, on any lower timeframe.
My aim, to keep the program clean, is to use a custom library, but for this post, to keep things easy, I just use a function in the indicator.
I wanted this to be independent of the timeframe I am looking at, so if I am on 1 minute I still want the 1h plot.
BUT if my chart is on 1h, perfect
if my chart is higher than 1h, the plot is there as expected but it behaves inconsistently
if my chart is 45 min, perfect
if my chart is 40 min, perfect
if my chart is 35 min, perfect
from 30 minutes down to 1 minute, no plot. WHY?
Please help, and thank you so much for any insight.
[us30hi60, us30lo60] = request.security("US30", "60", [high, low]) // this is the input

type Swing // my custom object; more fields in the main indicator
    int   x_Time
    float y_Price

// this is the function and its inputs (low_input is the main value reference for the swing low)
sw_low_new(float low_input, string time_frame, bool plot_lines, int dt) =>
    var a = array.new<Swing>() // empty array of new swing-low objects
    var b = array.new<Swing>() // empty array of taken swing lows
    for swingarray in a // check whether each swing low is traded below
        if low_input[0] < swingarray.y_Price
            newswl_taken = Swing.new(array.get(a, 0).x_Time, array.get(a, 0).y_Price)
            array.remove(a, array.indexof(a, swingarray)) // remove the taken swing from the "new" array
            array.unshift(b, newswl_taken)                // add it to the "taken" array
    if low_input[2] > low_input[1] and low_input[1] < low_input[0] // find a new swing point
        newswl = Swing.new(time - 1 * dt, low_input[1])
        array.unshift(a, newswl) // put the swing-low object into array (a)
    [a, b] // return the two arrays

[us30_sw_low_arr, us30_sw_low_taken_arr] = sw_low_new(us30lo60, "60", true, 15000)
plot(array.size(us30_sw_low_arr) > 2 ? array.get(us30_sw_low_arr, 0).y_Price : na)
I have a flying car, and I want it to lose some HP when colliding with objects.
I connected the car's RigidBody2D body_entered signal to this function to do that:
func _on_Car_body_entered(body):
    var force = linear_velocity.length()
    var dmg = pow(force / 100, 2) - 0.25
    if dmg <= 0: return
    Health = Health - dmg
Now, since I don't have to be precise, I'm just using the current velocity as the force, though this is up for change.
After getting my 'force of impact', I put it into the damage formula, and if the damage is above 0, I decrease HP by that amount.
This works fine in most cases
BUT
I noticed that if the car is going fast horizontally and just barely touches the ground (which is perfectly horizontal), it takes a lot of damage, because I'm using the length of the velocity vector.
Of course, this case can be managed by using just the Y component of the velocity vector, but then that removes any horizontal collisions, and vice versa; it also leads me down the path of programming vertical and horizontal collisions separately, and of course those are not the only two directions of collision I need.
Is there a way to remove the sliding factor from this equation?
You can take the cosine of the angle between your velocity and the collision normal, and then take the absolute value of that.
# 0 When sliding along the wall. 1 when hitting the wall head on
var slide_factor = abs(cos(vel_last_frame.angle_to(collision_normal)))
This will give you a value from 0 to 1. When you are just sliding along the wall, this value will be 0, and when you hit the wall straight on, it will be 1.
I am using the velocity from the last frame here so that it gets the velocity just before the collision. I get it by setting vel_last_frame to linear_velocity inside the _physics_process function.
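For reference, keeping vel_last_frame up to date might look like this (a minimal sketch):
var vel_last_frame = Vector2()

func _physics_process(delta):
    # remember the velocity from before the collision response changes it
    vel_last_frame = linear_velocity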
You can only get the collision normal inside the _integrate_forces function, using Physics2DDirectBodyState.get_contact_local_normal(), so you need a variable that can be accessed both in that function and in the _on_Car_body_entered function. Note that you need to set contact_monitor to true and contacts_reported to at least 1 for this to work.
var collision_normal

func _integrate_forces(state):
    # Check if there is a collision
    if state.get_contact_count():
        # contact_monitor must be true and contacts_reported must be at least 1 for this to work
        collision_normal = state.get_contact_local_normal(0)
Now, in the _on_Car_body_entered function, you can multiply dmg by slide_factor to scale it down depending on how much you are sliding against the wall.
func _on_Car_body_entered(body):
    var force = linear_velocity.length()
    # 0 when sliding along the wall, 1 when hitting the wall head on
    var slide_factor = abs(cos(vel_last_frame.angle_to(collision_normal)))
    var dmg = pow(force / 100, 2) - 0.25
    # Reduce dmg depending on how much you are sliding against the wall
    dmg *= slide_factor
    if dmg <= 0: return
    Health = Health - dmg
Found a solution for my problem here
This gives me a predictable force range to work with.
I copied all the code for 2D collision and just added the damage calculation.
The range of forces my objects produce is <3000 for small collisions like scratches and bumps, ~10k for beginner-friendly damage, and 20k+ for when I really slam the gas pedal, so I just convert that force to the damage I want.
The best part is that I don't have to use body_entered from RigidBody2D, because now all my cars have this calculation in them, so when two of them collide they both take damage.
extends RigidBody2D

var collision_force : Vector2 = Vector2.ZERO
var previous_linear_velocity : Vector2 = Vector2.ZERO

func _integrate_forces(state : Physics2DDirectBodyState)->void:
    collision_force = Vector2.ZERO
    if state.get_contact_count() > 0:
        var dv : Vector2 = state.linear_velocity - previous_linear_velocity
        collision_force = dv / (state.inverse_mass * state.step)
        var dmg = collision_force.length() / 2000 - 2
        if dmg > 0:
            set_hp(Health - dmg)
            emit_signal("Damaged")
    previous_linear_velocity = state.linear_velocity
**OLD ANSWER**
RUBBER DUCK HERE
In the car's script I added a new variable: var last_linear_velocity = Vector2()
Then I stored the last velocity in _process:
func _process(delta):
    last_linear_velocity = linear_velocity
Not in _integrate_forces, because if I put it there the new and last velocities are the same.
Then I just changed how the force is calculated in the function mentioned above, so it looks like this:
func _on_Car_body_entered(body):
    var force = last_linear_velocity.length() - linear_velocity.length()
    var dmg = pow(force / 100, 2) - 0.25
    if dmg <= 0: return
    Health = Health - dmg
Now I get a nice predictable range of values and can transform that into damage.
NOTE
I noticed that sometimes when a collision occurs the difference between the last and current velocity lengths is negative, as in, the car is accelerating.
Anyway, this works for me for now.
If you find a better solution, do post it, as I couldn't find a solution to this problem elsewhere online.
There is an idle tone of a car. I want to make that sound accelerate and decelerate by changing FFTs.
How can I achieve this? I only know C and a little bit of C++.
Start with your idle sound as an array of raw audio [array1] (the payload of a WAV file in PCM format of the car idling), which is in the time domain.
Feed array1 into an FFT call, which returns a new array [array2]: the frequency-domain representation of the same underlying data. In this new array, element zero represents zero Hertz, and the frequency increment (incr_freq) separating consecutive elements is defined by the parameters of the source array1 as per
incr_freq := sample_rate / number_of_samples_in_array1
The value of each element of array2 is a complex number, from which you can calculate the magnitude and phase of the given frequency. To be clear, frequency in array2 is derived from element position, starting with element zero, which is the DC bias and can be ignored. Knowing the frequency increment (incr_freq) above, let's show the first few elements of array2:
complex_number_0 := array2[0] // element 0, the DC bias; ignore this element
// its frequency_0 = 0
complex_number_1 := array2[1] // element 1
// it's at frequency_1 = frequency_0 + incr_freq
complex_number_2 := array2[2] // element 2
// it's at frequency_2 = frequency_1 + incr_freq
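For example, assuming the idle recording is sampled at 44.1 kHz and you feed 4096 samples into the FFT (both numbers are purely illustrative):
incr_freq := 44100 / 4096 // roughly 10.8 Hz between neighbouring elements of array2
frequency_of_element_k := k * incr_freq // so element 10 sits at roughly 108 Hz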
Now identify the top X magnitudes in array2 (nugget1). These are the dominant frequencies that are most responsible for capturing the essence of the car sound. Save for later the array2 element values (and positions) of those X elements. Magnitude is calculated using the code below, inside a loop across all elements of array2:
for index_fft, curr_complex := range complex_fft {
    curr_real = real(curr_complex) // real part of the complex number
    curr_imag = imag(curr_complex) // imaginary part of the complex number
    curr_mag = 2.0 * math.Sqrt(curr_real*curr_real+curr_imag*curr_imag) / number_of_samples_array2
    // ... more goodness here
}
Now feed the array2 FFT data into an inverse FFT call, which returns an array [array3], once again in the time domain (raw audio).
If you do not alter the data of array2, your array3 will be the same (to a first approximation) as array1. So now modify array2 to impart the acceleration or deceleration before sending it into that inverse FFT call.
The secret sauce of how to alter array2 is left as an exercise. My guess: put the synthesis of array3 from array2 into a loop (loop_secret_sauce) whose output gets immediately rendered to your speakers, and inside that loop increment (acc) or decrement (dec) the top X frequencies identified above as nugget1. In other words, shift the entire set of top-X frequencies as a whole, give this shift a non-linear aspect, and possibly not only shift the frequencies of this set but also play with their relative magnitudes, as well as introduce a wider swath of frequencies in this loop.
To give yourself traction in making this secret sauce, use several different recordings as array1: the car at idle, accelerating, and decelerating. Compare their array2 representations and use the differences between idle, acc and dec inside this sauce loop.
Here we drill down on mechanics. When the source audio is idle, iterate across its array2 and identify the top X elements with the greatest magnitude; these X elements of array2 get saved into an array top_mag_idle. Do the same for the accelerating source audio and save the result into top_mag_acc. The critical step is to examine the difference between the elements stored in top_mag_idle versus top_mag_acc. This transition from top_mag_idle into top_mag_acc is your secret sauce, and it goes inside loop_secret_sauce. To get concrete: while you loop across loop_secret_sauce and update the array2 elements to reflect top_mag_idle, the synthesized audio will sound idle; when, as you continue looping across array2 to synthesize array3, you transition to updating the array2 elements to reflect top_mag_acc, the sound will be of an accelerating car.
Perhaps to gain intuition on the secret sauce, consider this. Imagine listening to a car at idle. As with any complex system that generates audio, it will have a set of dominant frequencies, say the 5 frequencies with the greatest magnitude (the loudest ones). This is similar to a pianist playing a chord: the shape of her hand and fingers stays static, yet she repeatedly taps the keyboard. Now the car starts to accelerate. The analogy is that she keeps tapping the keyboard with that same static hand and finger layout, yet now she slides her hand up to the right along the keyboard as she continues to tap. In your code, inside loop_secret_sauce, the original set of frequencies (top_mag_idle) will generate the idle car sound when you synthesize array3 from array2; then, to implement acc, you increment in unison all the frequencies in top_mag_idle and repeat the synthesis of array3 from array2, which gives you the acc sound.
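To make the bin-shifting idea concrete, here is a minimal C++ sketch of just that step. It assumes array2 is a std::vector of complex bins you already obtained from whatever FFT library you choose (FFTW, for instance); the inverse FFT and the audio playback are left to that library and your audio API, and the two helpers (top_bins, shift_dominant) are purely illustrative names.

#include <algorithm>
#include <complex>
#include <vector>

// indices of the topX loudest bins of array2; element 0 (the DC bias) is skipped
std::vector<size_t> top_bins(const std::vector<std::complex<float>>& array2, size_t topX)
{
    std::vector<size_t> idx;
    for (size_t i = 1; i < array2.size(); ++i)
        idx.push_back(i);
    topX = std::min(topX, idx.size());
    std::partial_sort(idx.begin(), idx.begin() + topX, idx.end(),
        [&](size_t a, size_t b) { return std::abs(array2[a]) > std::abs(array2[b]); });
    idx.resize(topX);
    return idx; // this is top_mag_idle (or top_mag_acc) expressed as bin positions
}

// build a modified copy of array2 with the dominant bins shifted up (acc) or down (dec)
std::vector<std::complex<float>> shift_dominant(const std::vector<std::complex<float>>& array2,
                                                const std::vector<size_t>& dominant,
                                                int shift_in_bins)
{
    std::vector<std::complex<float>> out(array2.size()); // all other bins start at zero
    for (size_t bin : dominant)
    {
        long target = static_cast<long>(bin) + shift_in_bins; // move the whole set together
        if (target > 0 && target < static_cast<long>(out.size()))
            out[target] += array2[bin]; // keep the source bin's magnitude and phase
    }
    return out;
}

Inside loop_secret_sauce you would call shift_dominant with a slowly growing (acc) or shrinking (dec) shift_in_bins, run the inverse FFT on the result to get array3, and push array3 to the speakers; each bin of shift corresponds to incr_freq Hertz.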
Until you get this working I would use only mono (one channel, not stereo).
Sounds like an interesting project ... have fun!
I am currently working on a graphing program in MATLAB that takes input and maps a point to x-y space using this input. However, the program should also output a continuous tone whose frequency varies depending on the location of the point.
I was able to get the tone generation done; however, I could not get the tone to play continuously due to the nature of the program (there is code running in between tone generations). I thought I could solve this using a parfor loop, with the code that alters the frequency in one iteration of the loop and the code that generates the tone in another, but cannot seem to get it working due to the following error:
Warning: The temporary variable frequency will be cleared at the
beginning of each iteration of the parfor loop. Any value assigned to
it before the loop will be lost. If frequency is used before it is
assigned in the parfor loop, a runtime error will occur. See Parallel
for Loops in MATLAB, "Temporary Variables".
In multiThreadingtest at 5
Error using multiThreadingtest (line 5)
Reference to a cleared variable frequency.
Caused by:
    Reference to a cleared variable frequency.
And my code:
global frequency
frequency = 100;
parfor ii=1:2
    if ii==1
        Fs = 1000;
        nSeconds = 5;
        y = 100*sin(linspace(0, nSeconds*frequency*2*pi, round(nSeconds*Fs)));
        sound(y, Fs);
    elseif ii==2
        frequency = 100
        pause(2);
        frequency = 200
        pause(2);
        frequency = 300
        pause(2);
    end
end
The solution may not come from multithreading, but from the use of another function to output a tone (audioplayer, play, stop). 'audioplayer/play' has the ability to output sounds that overlap in time. So basically, the pseudocode would be:
get the value of the input
generate/play a corresponding 5 second tone
detect if any change in the input
if no change & elapsed time close to 5 seconds
    generate/play an identical 5 second tone
if change
    generate a new 5 second tone
    % no overlapping
    stop old
    play new
    % overlapping (few milliseconds)
    play new
    stop old
The MATLAB code below shows the difference between 'sound' and 'play'.
Fs = 1000;
nSeconds = 5;
frequency = 100;
y1 = 100*sin(linspace(0, nSeconds*frequency*2*pi, round(nSeconds*Fs)));
aud1 = audioplayer(y1, Fs);
frequency = 200;
y2 = 100*sin(linspace(0, nSeconds*frequency*2*pi, round(nSeconds*Fs)));
aud2 = audioplayer(y2, Fs);
% overlapping sound impossible
sound(y1, Fs);
pause(1)
sound(y2, Fs);
% overlapping sound possible
play(aud1);
pause(1);
disp('can compute here');
play(aud2);
pause(1);
stop(aud1);
pause(1);
stop(aud2);
I have a graph with zoom features. My main observation was that the x-axis updated its scale based on my current zoom level. I wanted the y-axis to do this too, so I enabled zoom.y(y), the undesired side effect being that now the user can zoom out in all directions, even into negative values "below" the graph.
http://jsfiddle.net/ericps/xJ3Ke/5/
var zoom = d3.behavior.zoom().scaleExtent([0.2, 5])
.on("zoom", draw); doesn't seem to really take the y-axis into account. And the user can still drag the chart anywhere in any direction to infinity.
One idea I had is independent of having zoom.y(y) enabled: simply redraw the y-axis based on what is in the currently visible range, some kind of redraw driven by the position of the x-axis only. I don't want up and down scrolling at all now, only left and right.
Aside from commenting out //zoom.y(y), how would this be done? Insight appreciated.
All you need to do is update the y scale domain in your draw method.
The zoom function will modify the associated scales and set their domain to simulate a zoom. So you can get your visible x data bounds by doing x.invert(0) and x.invert(width), for example. If you converted your data to use Dates instead of strings, then this is what I would suggest you use to filter; it would probably be more efficient.
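For example (a sketch, assuming d.date is an actual Date object rather than a string):
var lo = x.invert(0), hi = x.invert(width);
var visibleData = data.filter(function(d) { return d.date >= lo && d.date <= hi; });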
As it is, though, you can still use the x scale to filter down to your visible data, find the y-axis extent of those values, and set your y scale's domain to match. And in fact you can do all this in just a few lines (in your zoom update callback):
var yExtent = d3.extent(data.filter(function(d) {
    var dt = x(d.date);
    return dt > 0 && dt < width;
}), function(d) { return d.value; });
y.domain(yExtent).nice();
You can try it out here
To better explain what is going on:
The zoom behaviour listens to mouse events and modifies the domain of the associated scales.
The scales are used by the axes which draw them as lines with ticks, and the scales are also used by the data associated with your paths and areas as you've set them up in callbacks.
So when the zoom changes it fires a callback and the basic method is what you had:
svg.select("g.x.axis").call(xAxis);
svg.select("g.y.axis").call(yAxis);
svg.select("path.area").attr("d", area);
svg.select("path.line").attr("d", line);
We redraw the x- and y-axes with the newly updated domains, and we redraw (recompute) the area and the line, also with the newly domained x and y scales.
So to get the behaviour you wanted, we take away the default zoom behaviour on the y scale and instead modify the y scale's domain ourselves whenever we get a zoom or pan: conveniently, we already have a callback for those actions because of the zoom behaviour.
The first step to compute our y scale's domain is to figure out which data values are visible. The x axis has been configured to output to a range of 0 to width, and the zoom behaviour has updated the x scale's domain so that only a subset of the original domain outputs to this range. So we use the JavaScript array's filter method to pull out only those data objects whose mapping puts them in our visible range:
data.filter(function(d) {
    var dt = x(d.date);
    return dt > 0 && dt < width;
});
Then we use the handy d3 extent method to return the min and max values in an array. But because our array is all objects, we need an accessor function so that the extent method has some numbers to actually compare (this is a common pattern in D3):
d3.extent(filteredData, function(d) { return d.value; });
So now we know the min and max values for all the data points that are drawn given our current x scale. The last bit is then just to set the y scale's domain and continue as normal!
y.domain(yExtent).nice();
I found the nice method in the API; it's the kind of thing you want a scale to do, and d3 often does things for you that you want to do.
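Putting the pieces together, the zoom callback ends up looking roughly like this sketch (using the names from the fiddle):
function draw() {
    // the zoom/pan has already updated the x scale's domain at this point
    var yExtent = d3.extent(data.filter(function(d) {
        var dt = x(d.date);
        return dt > 0 && dt < width;
    }), function(d) { return d.value; });
    y.domain(yExtent).nice();

    svg.select("g.x.axis").call(xAxis);
    svg.select("g.y.axis").call(yAxis);
    svg.select("path.area").attr("d", area);
    svg.select("path.line").attr("d", line);
}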
A great tutorial for figuring out some of this stuff is: http://alignedleft.com/tutorials/
It is worth stepping through even the parts you think you know already.
I want to plot the frequency spectrum of a music file (like they do, for example, in Audacity). So I want the frequency in Hertz on the x-axis and the amplitude (or decibels) on the y-axis.
I divide the song (about 20 million samples) into blocks of 4096 samples at a time. Each block results in 2049 (N/2 + 1) complex numbers (sine and cosine -> real and imaginary part). So now I have these thousands of individual 2049-arrays; how do I combine them?
Let's say I do the FFT 5000 times, resulting in 5000 2049-arrays of complex numbers. Do I sum all the values of the 5000 arrays and then take the magnitude of the combined 2049-array? Do I then scale the x-axis with the song's sample rate / 2 (e.g. 22050 for a 44100 Hz file)?
Any information will be appreciated.
What application are you using for this? I assume you are not doing this by hand, so here is a Matlab example:
>> fbins = fs/N * (0:(N/2 - 1)); % Where N is the number of fft samples
now you can perform
>> plot(fbins, abs(fftOfSignal(1:N/2)))
Stolen
edit: check this out http://www.codeproject.com/Articles/9388/How-to-implement-the-FFT-algorithm
Wow I've written a load about this just recently.
I even turned it into a blog post available here.
My explanation is leaning towards spectrograms, but it's just as easy to render a chart like you describe!
I might not be correct on this one, but as far as I'm aware, you have 2 ways to get the spectrum of the whole song.
1) Do a single FFT on the whole song, which will give you an extremely good frequency resolution, but is in practice not efficient, and you don't need this kind of resolution anyway.
2) Divide it into small chunks (like 4096-sample blocks, as you said), get the FFT of each of those and average the spectra. You will compromise on the frequency resolution, but make the calculation more manageable (and also decrease the variance of the spectrum). Wilhelmsen's link describes how to compute an FFT in C++, and I think some libraries already exist to do that, like FFTW (but I never managed to compile it, to be fair =) ).
To obtain the magnitude spectrum, average the energy (square of the magnitude) across all your chunks for every single bin. To get the result in dB, just 10 * log10 the results. That is of course assuming that you are not interested in the phase spectrum. I think this is known as Bartlett's method.
I would do something like this:
// At this point you have the FFT chunks
float sum[N/2 + 1] = {0};   // N is assumed to be a compile-time constant here

// For each bin
for (int binIndex = 0; binIndex < N/2 + 1; binIndex++)
{
    for (int chunkIndex = 0; chunkIndex < chunkNb; chunkIndex++)
    {
        // Energy (squared magnitude) of the complex number
        float energy = FFTChunk[chunkIndex].bins[binIndex].real * FFTChunk[chunkIndex].bins[binIndex].real
                     + FFTChunk[chunkIndex].bins[binIndex].im * FFTChunk[chunkIndex].bins[binIndex].im;

        // Add the energy
        sum[binIndex] += energy;
    }
    // Average the energy
    sum[binIndex] /= chunkNb;
}

// Then get the values in decibels
for (int binIndex = 0; binIndex < N/2 + 1; binIndex++)
{
    sum[binIndex] = 10 * log10f(sum[binIndex]);
}
Hope this answers your question.
Edit: Goz's post will give you plenty of information on the matter =)
Commonly, you would take just one of the arrays, corresponding to the point in time in the music you are interested in. Then you would calculate the log of the magnitude of each complex array element. Plot the N/2 results as Y values, and scale the X axis from 0 to Fs/2 (where Fs is the sampling rate).
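In MATLAB terms, assuming X holds one such 2049-element FFT array, that amounts to something like:
fbins = (0:(N/2 - 1)) * Fs / N;           % frequency of each bin in Hz
plot(fbins, 20*log10(abs(X(1:N/2))));     % log magnitude (dB) of the chosen time slice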