Frequency to time conversion using MATLAB - Excel

I would like to convert my data in the frequency domain into the time domain. In the attached Excel sheet (Book1.xlsx), column A is frequency, and columns B and C are the real and imaginary data (B + jC). My code is attached below, but it is not working. I would like my result to look like the green curve (part 1) of the figure, in the time domain.
[num, data, raw] = xlsread('Book1.xlsx');
ln = length(raw)-1;  %find the length of the sequence
xk = zeros(1,ln);    %initialise an array of the same size as the input sequence
ixk = zeros(1,ln);   %initialise an array of the same size as the input sequence
rx = zeros(1,ln);    %real value of fft
ix = zeros(1,ln);    %imaginary value of fft
for i = 2:length(raw)
    rx(i-1) = cell2mat(raw(i,2));
    ix(i-1) = cell2mat(raw(i,3));
    xk(i-1) = sqrt(rx(i-1)^2 + ix(i-1)^2);
end
for n = 0:ln-1
    for k = 0:ln-1
        ixk(n+1) = ixk(n+1) + (xk(k+1)*exp(i*2*pi*k*n/ln));
    end
end
ixk = 10*log(ixk./ln);
t = 0:ln-1;
plot(t, ixk)
This code should give me a result similar to the green curve (part 1) in the attached image.

Instead of doing the FFT yourself, you could use the built-in Matlab functions to do it - much easier.
A good example from Mathworks is given here. The following is some code I have based mine on. The input parameter f is your time-domain trace and fsampling is your sampling rate; the outputs freq and finv are your frequency vector and Fourier transform, respectively.
function [freq, finv] = FourierTransform(f, fsampling)
    % Fast Fourier Transform
    fsampling = round(fsampling);
    finv = fft(f, fsampling);
    finv = finv(1:length(finv)/2+1);   % Keep only the first half, due to symmetry
    finv(2:end-1) = 2*finv(2:end-1);   % Adjust amplitude to account for truncation
    finv = finv./length(f);
    freq = 0:fsampling/2;
end
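For completeness, the helper above would be called as, say, [freq, finv] = FourierTransform(x, fs); plot(freq, abs(finv)). Note, though, that the question actually asks for the opposite direction (frequency to time), for which the built-in ifft is the tool. A minimal sketch, assuming Book1.xlsx is laid out as described (frequency in column A, real part in B, imaginary part in C) and that the spectrum is conjugate-symmetric:

num = xlsread('Book1.xlsx');              % numeric block; the header row is dropped
X = num(:,2) + 1i*num(:,3);               % complex spectrum B + jC
x = ifft(X, 'symmetric');                 % time-domain signal (enforces conjugate symmetry)
df = num(2,1) - num(1,1);                 % frequency spacing from column A
t = (0:length(x)-1) / (length(x)*df);     % time axis in seconds
plot(t, x)
xlabel('Time (s)'); ylabel('Amplitude')

Whether 'symmetric' is appropriate depends on whether the sheet holds a one-sided or a two-sided spectrum.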

Related

How to fix the issue of plotting a 2D sine wave in python

I want to generate 2D travelling sine wave. To do this, I've set the parameters for the plane wave and generate wave for any time instants like as follows:
import numpy as np
import random
import matplotlib.pyplot as plt

f = 10        # frequency
fs = 100      # sample frequency
Ts = 1/fs     # sample period
t = np.arange(0, 0.5, Ts)  # time index
c = 50        # speed of wave
w = 2*np.pi*f # angular frequency
k = w/c       # wave number
resolution = 0.02
x = np.arange(-5, 5, resolution)
y = np.arange(-5, 5, resolution)
dx = np.array(x); M = len(dx)
dy = np.array(y); N = len(dy)
xx, yy = np.meshgrid(x, y)
theta = np.pi / 4  # direction of propagation
kx = k * np.cos(theta)
ky = k * np.sin(theta)
So, the plane wave would be
plane_wave = np.sin(kx * xx + ky * yy - w * t[1])
plt.figure()
plt.imshow(plane_wave, cmap='seismic', origin='lower', aspect='auto')
which gives a smooth plane wave, as shown in the attached figure. The variation of the sine wave with time, plotted with plt.figure(); plt.plot(plane_wave[2,:]), is shown in another attached figure.
However, when I append plane waves at different time instants, a discontinuity arises (figures 03 & 04), and I want to get rid of this problem.
I'm new to Python and any help will be highly appreciated. Thanks in advance.
arr = []
for count in range(len(t)):
    p = np.sin(kx * xx + ky * yy - w * t[count])  # plane wave
    arr.append(p)
arr = np.array(arr)
print(arr.shape)
pp, q, r = arr.shape
sig = np.reshape(arr, (-1, r))
print('The signal shape is :', sig.shape)
plt.figure(); plt.imshow(sig.transpose(), cmap='seismic', origin='lower', aspect='auto')
plt.xlabel('X'); plt.ylabel('Y')
plt.figure(); plt.plot(sig[2,:])
This is not that much a problem of programming. It has to do more with the fact that you are using the physical quantities in a somewhat unusual way. Your plots are absolutely fine and correct.
What you seem to have misunderstood is the fact that you are talking about a 2D problem with a third dimension added for time. This is by no means wrong, but if you append the snapshots of the 2D wave side by side you are using (again) the x spatial dimension to represent temporal variations, which leads to an inconsistent use of that coordinate axis.

To make this more intuitive, consider two time instances separately. Does it not match your intuition that all points on the 2D plane must have different amplitudes at the two instances (unless, of course, the time has progressed by a multiple of the period of the wave)? This is indeed the case. Thus, when you append the two snapshots, a discontinuity is exhibited. To avoid that, you would have to use either a time step equal to one period, which I believe is of no practical use, or a constant time step that makes the phase of the wave on the left border of the image at the current time equal to the phase on the right border of the image at the previous time step. Yet this will always be a constant time step, alternating the phase (on the edges of the image) between those two values.
The same applies to the 1D case because you use the two coordinate axes to represent the wave (x is the x spatial dimension and y is used to represent the amplitude). This is what can be seen in your last plot.
Now, what would be the solution you may ask. The solution is provided by simple inspection of the mathematical formula of the wave function. In 2D, it is a scalar function of three variables (that is, takes as input three values and outputs one) and so you need at least four dimensions to represent it. Alas, we can't perceive a fourth spatial dimension, but this is not a problem in your case as the output of the function is represented with colors. Then there are three dimensions that could be used to represent the temporal evolution of your function. All you have to do is to create a 3D array where the third dimension represents time and all 2D snapshots will be stored in the first two dimensions.
When it comes to visual representation of the results you could either use some kind of waterfall plots where the z-axis will represent time or utilize the fourth dimension we can perceive, time that is, to create an animation of the evolution of the wave.
I am not very familiar with Python, so I will only provide a generic, naive implementation. I am sure a lot of people here could provide some simplification and/or optimisation of the following snippet. I assume that everything in your first two blocks of code is available, so changes have to be made only in the last block you present:
arr = np.zeros((len(xx), len(yy), len(t)))  # Initialise the array to hold the temporal evolution of the snapshots
for i in range(len(t)):
    arr[:, :, i] = np.sin(kx * xx + ky * yy - w * t[i])
# Below you can plot the figures with any function you prefer or make an animation out of it
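If you go the animation route, a minimal sketch with matplotlib.animation (assuming the imports and the arr array from above; the 50 ms frame interval is an arbitrary choice, not something from the original answer):

import matplotlib.animation as animation

fig, ax = plt.subplots()
im = ax.imshow(arr[:, :, 0], cmap='seismic', origin='lower', aspect='auto')

def update(frame):
    im.set_data(arr[:, :, frame])  # swap in the snapshot for this time step
    return [im]

ani = animation.FuncAnimation(fig, update, frames=arr.shape[2], interval=50, blit=True)
plt.show()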

Is there a faster way to calculate the distance between elements in the same matrix with a Gaussian function?

Starting from an M matrix of shape 7000 x 2, I calculate the following quantity:

W[i,j] = exp(-||M[i,:] - M[j,:]||^2 / (2*sigma^2)) / sum_k exp(-||M[i,:] - M[k,:]||^2 / (2*sigma^2))

I do it in the following way (the parameter sigma is arbitrary):
import math
import numpy as np

W = np.zeros((M.shape[0], M.shape[0]))
elements_sum_by_i = np.zeros((M.shape[0]))
for i in range(0, M.shape[0]):
    #normalization
    for k in range(0, M.shape[0]):
        elements_sum_by_i[k] = math.exp(-(np.linalg.norm(M[i,:] - M[k,:])**2)/(2*sigma**2))
    sum_by_i = sum(elements_sum_by_i)
    #calculation
    for j in range(0, M.shape[0]):
        W[i,j] = (math.exp(-(np.linalg.norm(M[i,:] - M[j,:]))**2/(2*sigma**2)))/(sum_by_i)
The problem is that it is really very slow (takes about 30 minutes). Is there a faster way to do this calculation?
Maybe you can extract some ideas from the following comments:
1) Calculate Log(W[i,j]) using the simplifications of the formula; the exponentials disappear, so the processing should be quicker.
2) Then take the exponential of it: Exp(Log(W[i,j])) == W[i,j].
3) Use variables for values that are constant inside the iterations, like sig2 = 2*sigma**2, which you can compute once at the start, outside of the iterations.
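Beyond those three points, the biggest speed-up usually comes from vectorising the whole computation in NumPy rather than looping in Python. A hedged sketch of that approach (my addition, not part of the original comments):

import numpy as np

def gaussian_affinity(M, sigma):
    # Pairwise squared distances via broadcasting:
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2*a.b
    sq = np.sum(M**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * (M @ M.T)
    K = np.exp(-d2 / (2 * sigma**2))          # Gaussian kernel
    return K / K.sum(axis=1, keepdims=True)   # divide each row by its sum_by_i

For a 7000 x 2 matrix this builds the 7000 x 7000 kernel in one shot, which typically runs in seconds rather than minutes.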
Important: before any change, store the current result so that your new development can be tested against the final matrix result that you already know (I suppose) is correct.
Good luck.

Calculate regression between 2 columns, then move to the next columns, then the next two, etc

I am utilizing the slope from the regression to calibrate my Y's to my X's. The point of the script is to blind test the calibration to determine accuracy. I am calculating a regression line between two columns in Excel, with 64 total points/samples. I have set it up so that one of the 64 points is dropped from the regression, a slope is calculated from the remaining 63 samples, and that slope is used to calibrate the Y value. I can then compare that calibrated Y to the true "X" value. I have set up a "for i in range(len(y))" loop that drops one of the 64 points, runs the regression, and calculates accuracy; the script then drops another one of the 64 points, and this continues until each individual point has been dropped and checked for accuracy.
The Problem: I want to continue the loop that I talked about above, but I want to incorporate an additional loop that does all of the calculations I mentioned above, and then moves over to the next column, and repeats the action.
For example, let's say I am calculating the regression between columns A1 and F1 (as stated above). After that is finished, I want it to move one column to the right and calculate another regression, say for columns B1 and G1. Then for C1 and H1, etc.
import numpy as np
from itertools import chain
from sklearn.linear_model import LinearRegression

#this was my attempt
#Columnx = Excel_XRF.iloc[: , 1]
#Columny = Excel_XRF.iloc[: , 17]
#mask = ~np.isnan(Columnx) & ~np.isnan(Columny)
#Columnx_clean = Columnx[mask]
#Columny_clean = Columny[mask]
varx = Excel_XRF["Tix"]
vary = Excel_XRF["Tiy"]
Excel_Index = 1
Excel_Index2 = 1
mask = ~np.isnan(varx) & ~np.isnan(vary)
varx_clean = varx[mask].to_numpy()
vary_clean = vary[mask].to_numpy()
#initialise arrays to be filled with coefficients, r^2, and percent error
coef = []
r2 = []
PercDiff = []
for i in range(len(vary_clean)):
    yTest = vary_clean[i]  #tested okay value from data
    xTrue = varx_clean[i]  #tested "True" value from data
    #Remove a single point iteratively from the data set
    test1 = np.append(varx_clean[:i], varx_clean[i+1:])
    test2 = np.append(vary_clean[:i], vary_clean[i+1:])
    #Put the shortened data into a linear regression model (intercept fixed at 0)
    model = LinearRegression(fit_intercept=False).fit(test1.reshape(-1,1), test2)
    coef.append(model.coef_)
    r2.append(model.score(test1.reshape(-1,1), test2))
    xTest = yTest/model.coef_
    PercDiff.append(((abs(xTest - xTrue)/(xTrue))*100)[0])
#Unnest
coefficients = list(chain.from_iterable(coef))
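A hedged sketch of the outer loop over column pairs that the question asks for (the column offset of 5 and the number of pairs are assumptions matching the A1/F1, B1/G1, C1/H1 pairing described above):

offset = 5   # hypothetical distance between each x column and its paired y column
n_pairs = 5  # hypothetical number of column pairs to process
for c in range(n_pairs):
    varx = Excel_XRF.iloc[:, c]           # x column (A, B, C, ...)
    vary = Excel_XRF.iloc[:, c + offset]  # paired y column (F, G, H, ...)
    mask = ~np.isnan(varx) & ~np.isnan(vary)
    varx_clean = varx[mask].to_numpy()
    vary_clean = vary[mask].to_numpy()
    # ...run the leave-one-out regression loop above on varx_clean/vary_clean,
    # collecting this pair's coef/r2/PercDiff into per-pair containers...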

Use of fft on sound using Julia

I have some Julia code I've just written:
using FFTW
using Plots
using WAV, PlotlyJS

snd, sampFreq = wavread("input.wav")
N, _ = size(snd)
t = 0:1/(N-1):1
s = snd[:,1]
y = fft(s)

y1 = copy(y)
for i = 1:N
    if abs(y1[i]) > 800
        y1[i] = 0
    end
end
s_new = real(ifft(y1))
wavwrite(s_new, "output1.wav", Fs = sampFreq)

y2 = copy(y)
for i = 1:N
    if abs(y2[i]) < 800
        y2[i] = 0
    end
end
s_new = real(ifft(y2))
wavwrite(s_new, "output2.wav", Fs = sampFreq)

sticks((abs.(y1)))
sticks!((abs.(y2)))

s1, k1 = wavread("output1.wav")
s2, k2 = wavread("output2.wav")
for i = 1:N
    s1[i] += s2[i]
end
wavwrite(s1, "output3.wav", Fs = sampFreq)
This code reads the file input.wav, performs an FFT on the sound, and divides it into two files: output1 with only frequencies > 800 and output2 with frequencies < 800.
In the next part I merge the two files into output3. I expected something similar to the input, but what I get sounds terrible (it sounds like the input, but quieter and with more hum than expected).
My question is: in which part of the code do I lose the most information about the input, and is there a way to improve it so that output3 comes out almost like the input?
You appear to not understand what the fft (fast Fourier transform) returns. It returns a vector of amplitudes, not frequencies. The vector's components correspond to the amplitude of a sine wave at a frequency that you can find using the fftfreq() function, but be sure to provide fftfreq() with its second argument, your sampFreq variable.
To decompose the sound, then, you need to zero the vector components you do not want, based on what fftfreq() tells you about the frequencies corresponding to the bins (the positions in the vector returned by fft()).
You will still see a big drop in sound quality when reversing the process with ifft, because the fft basically averages parts of the signal by splitting it into the frequency dimension's bins.
I suggest a tutorial on fft() before you fix your code further -- you can google several of these.
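For illustration, a hedged sketch of that frequency-based split (the 800 Hz cutoff is taken from the question; fftfreq is exported by FFTW):

using FFTW, WAV

snd, sampFreq = wavread("input.wav")
s = snd[:, 1]
N = length(s)
y = fft(s)
freqs = fftfreq(N, sampFreq)                      # frequency in Hz of each fft bin

ylow = copy(y);  ylow[abs.(freqs) .>= 800] .= 0   # keep only |f| < 800 Hz
yhigh = copy(y); yhigh[abs.(freqs) .< 800] .= 0   # keep only |f| >= 800 Hz

wavwrite(real(ifft(ylow)),  "output1.wav", Fs = sampFreq)
wavwrite(real(ifft(yhigh)), "output2.wav", Fs = sampFreq)
# ylow + yhigh reproduces y exactly, so summing the two output files
# reconstructs the input (up to WAV quantisation).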

Filtering signal: how to restrict filter that last point of output must equal the last point of input

Please excuse my poor knowledge of signal processing.
I want to smooth some data. Here is my code:
import numpy as np
from scipy.signal import butter, filtfilt

def testButterworth(nyf, x, y):
    b, a = butter(4, 1.5/nyf)
    fl = filtfilt(b, a, y)
    return fl

if __name__ == '__main__':
    positions_recorded = np.loadtxt('original_positions.txt', delimiter='\n')
    number_of_points = len(positions_recorded)
    end = 10
    dt = end/float(number_of_points)
    nyf = 0.5/dt
    x = np.linspace(0, end, number_of_points)
    y = positions_recorded
    fl = testButterworth(nyf, x, y)
I am pretty satisfied with the results, except for one point:
it is absolutely crucial to me that the start and end points of the returned values equal the start and end points of the input. How can I introduce this restriction?
UPD 15-Dec-14 12:04:
My original data looks like this (figure omitted).
Applying the filter and zooming into the last part of the graph gives the following result (figure omitted).
So, at the moment I just care about the last point, which must be equal to the original point. I tried appending a copy of the data to the end of the original list (snippet omitted); the result is, as expected, even worse.
Then I tried appending the data another way (snippet omitted). The slice where one period ends and the next begins looks like this (figure omitted).
To do this, you're always going to have to cheat somehow, since the true filter applied to the true data doesn't behave the way you require.
One of the best ways to cheat with your data is to assume it's periodic. This has the advantages that: 1) it's consistent with the data you actually have, and all you're changing is appending data to the region you don't know about (so assuming it's periodic is as reasonable as anything else, although it may violate some unstated or implicit assumptions); 2) the result will be consistent with your filter.
You can usually get by with this by appending copies of your data to the beginning and end of your real data, or just small pieces, depending on your filter, as in the sketch below.
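For instance, a hedged sketch of that padding idea, reusing b, a and y from the question's code:

import numpy as np

# Pretend the data repeats: filter three copies, then keep the middle one.
y_ext = np.concatenate((y, y, y))
fl = filtfilt(b, a, y_ext)[len(y):2*len(y)]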
Since the FFT assumes that the data is periodic anyway, that's often a quick and easy approach, and is fully accurate (whereas concatenating the data is an estimation of an infinitely periodic waveform). Here's an example of the FFT approach for a step filter.
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0, 128)
y = (np.sin(.22*(x+10))>0).astype(float)
# filter
y2 = np.fft.fft(y)
f0 = np.fft.fftfreq(len(x))
y2[(f0<-.25) | (f0>.25)] = 0
y3 = abs(np.fft.ifft(y2))
plt.plot(x, y)
plt.plot(x, y3)
plt.xlim(-10, 140)
plt.ylim(-.1, 1.1)
plt.show()
Note how the end points bend towards each other at either end, even though this is not consistent with the periodicity of the waveform (since the segments at either end are very truncated). This can also be seen by adjusting the waveform so that the ends are the same (here I used x+30 instead of x+10); then the ends don't need to bend to match up, so they stay level with the end of the data.
Note, also, that to have the endpoints be exactly equal you would have to extend this plot by one point at either end, since it is periodic with exactly the wavelength of the original waveform. Doing this is not ad hoc though, and the result will be entirely consistent with your analysis, just representing one extra point of what was assumed to be infinite repeats all along.
Finally, this FFT trick works best with waveforms of length 2^n; other lengths may be zero-padded in the FFT. In that case, just doing concatenations at either end, as I mentioned at first, might be the best way to go.
The question is how to filter data and require that the left endpoint of the filtered result matches the left endpoint of the data, and same for the right endpoint. (That is, in general, the filtered result should be close to most of the data points, but not necessarily exactly match any of them, but what if you need a match at both endpoints?)
To make the filtered result exactly match the endpoints of a curve, one could add a padding of points at either end of the curve and adjust the y-position of this padding so that the endpoints of the valid part of the filter exactly matched the end points of the original data (without the padding).
In general, this can be done by either iterating towards a solution, adjusting the padding y-position until the ends line up, or by calculating a few values and then interpolating to determine the y-positions that would be required for the matched endpoints. I'll do the second approach.
Here's the code I used, where I simulated the data as a sine wave with two flat pieces on either side (note that these flat pieces are not the padding; I'm just trying to make data that looks a bit like the OP's).
import numpy as np
from scipy.signal import butter, filtfilt
import matplotlib.pyplot as plt

#### op's code
def testButterworth(nyf, x, y):
    b, a = butter(4, 1.5/nyf)
    fl = filtfilt(b, a, y)
    return fl

def do_fit(data):
    positions_recorded = data
    #positions_recorded = np.loadtxt('original_positions.txt', delimiter='\n')
    number_of_points = len(positions_recorded)
    end = 10
    dt = end/float(number_of_points)
    nyf = 0.5/dt
    x = np.linspace(0, end, number_of_points)
    y = positions_recorded
    fx = testButterworth(nyf, x, y)
    return fx

### simulate some data (op should have done this too!)
def sim_data():
    t = np.linspace(.1*np.pi, (2.-.1)*np.pi, 100)
    y = np.sin(t)
    c = np.ones(10, dtype=float)
    z = np.concatenate((c*y[0], y, c*y[-1]))
    return z

### code to find the required offset padding
def fit_with_pads(v, data, n=1):
    c = np.ones(n, dtype=float)
    z = np.concatenate((c*v[0], data, c*v[1]))
    fx = do_fit(z)
    return fx

def get_errors(data, fx):
    n = (len(fx)-len(data))//2
    return np.array((fx[n]-data[0], fx[-n]-data[-1]))

def vary_padding(data, span=.005, n=100):
    errors = np.zeros((4, n))  # Lpad, Rpad, Lerror, Rerror
    offsets = np.linspace(-span, span, n)
    for i in range(n):
        vL, vR = data[0]+offsets[i], data[-1]+offsets[i]
        fx = fit_with_pads((vL, vR), data, n=1)
        errs = get_errors(data, fx)
        errors[:,i] = np.array((vL, vR, errs[0], errs[1]))
    return errors

if __name__ == '__main__':
    data = sim_data()
    fx = do_fit(data)

    errors = vary_padding(data)
    plt.plot(errors[0], errors[2], 'x-')
    plt.plot(errors[1], errors[3], 'o-')

    oR = -0.30958
    oL = 0.30887
    fp = fit_with_pads((oL, oR), data, n=1)[1:-1]
    plt.figure()
    plt.plot(data, 'b')
    plt.plot(fx, 'g')
    plt.plot(fp, 'r')
    plt.show()
Here, for the padding I only used a single point on either side (n=1). Then I calculate the error for a range of values, shifting the padding up and down relative to the first and last data points.
For the plots:
First I plot the offset vs error (between the fit and the desired data value) (figure omitted). To find the offsets to use, I just zoomed in on the two lines to find the x-value of the zero crossing, but to do this more accurately one could calculate the zero crossing from this data.
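For instance, a hedged sketch using linear interpolation (this assumes the error curves returned by vary_padding() above are monotonically increasing over the scanned span, which np.interp requires):

# Interpolate the pad values at which each endpoint error crosses zero.
oL = np.interp(0.0, errors[2], errors[0])  # left pad value giving zero left-end error
oR = np.interp(0.0, errors[3], errors[1])  # right pad value giving zero right-end error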
Here's the plot of the original "data", the fit (green) and the adjusted fit (red) (figure omitted),
and zoomed in on the RHS (figure omitted).
The important point here is that the red (adjusted fit) and blue (original data) endpoints match, even though the pure fit doesn't.
Is this a valid approach? Of the various options, this seems the most reasonable, since one isn't making any claims about data that isn't being shown, and the shown region has an accurately applied filter. For example, FFTs usually assume the data is zero or periodic beyond the boundaries. Certainly, though, to be precise, one should explain what was done.
