Getting multiple sinusoidal waves using the Fourier Transform in Python - python-3.x

According to http://www.thefouriertransform.com/
" The Fourier Transform shows that any waveform can be re-written as the sum of sinusoidal functions. "
I have some signals (each with shape (256, 64)) that I want to break down into sub-signals, and I then want to use those sub-signals to generate the real signal back. I am doing it right now like this:
# getting data
import pickle
import numpy as np
import matplotlib.pyplot as plt
from scipy.fftpack import fft

with open('../f', 'rb') as fp:
    f = pickle.load(fp)

f = f[0]
tf = fft(f)                             # complex FFT coefficients
x = np.reshape(np.abs(tf), (256, 64))   # keep only the magnitudes
plt.plot(x)
plt.show()
print(x.shape)  # same shape as f
But I am getting output in the same shape as the real signal, just with some imaginary values, which are ultimately discarded. I have looked at other Fourier questions here, but none of them gave a satisfying result; they just transformed the input signal. What am I doing wrong? Any help will be much appreciated.

To see the sinusoidal components, you need to plot sine waves,
x = a * sin(t)
not a reshaped FFT result.
If you don't care about phase, the number of sine-wave plots will be half the length of your FFT plus one, with each sine wave's frequency calculated from the bin center of the corresponding FFT result element (index times sample rate divided by length) and its amplitude given by the abs() of that FFT bin.
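For example, here is a minimal sketch (using a synthetic 1-D signal, since the original pickled data isn't available) that extracts each bin's amplitude, frequency, and phase and sums the sinusoids back into the original signal:

import numpy as np
import matplotlib.pyplot as plt

fs = 100                       # sample rate (assumed for this example)
t = np.arange(0, 1, 1/fs)
f = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*12*t)   # example signal

F = np.fft.rfft(f)                       # one-sided FFT of a real signal
freqs = np.fft.rfftfreq(len(f), d=1/fs)  # bin-center frequencies

# Each bin k corresponds to a sinusoid a_k * cos(2*pi*f_k*t + phi_k)
recon = np.zeros_like(f)
for k in range(len(F)):
    a = np.abs(F[k]) / len(f)
    if 0 < k < len(F) - 1:     # double every bin except DC and Nyquist
        a *= 2
    recon += a * np.cos(2*np.pi*freqs[k]*t + np.angle(F[k]))

plt.plot(t, f, label='original')
plt.plot(t, recon, '--', label='sum of sinusoids')
plt.legend()
plt.show()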

Related

How to fix the issue of plotting a 2D sine wave in python

I want to generate a 2D travelling sine wave. To do this, I've set the parameters for the plane wave and generate the wave for any time instant as follows:
import numpy as np
import random
import matplotlib.pyplot as plt
f = 10 # frequency
fs = 100 # sample frequency
Ts = 1/fs # sample period
t = np.arange(0,0.5, Ts) # time index
c = 50 # speed of wave
w = 2*np.pi *f # angular frequency
k = w/c # wave number
resolution = 0.02
x = np.arange(-5, 5, resolution)
y = np.arange(-5, 5, resolution)
dx = np.array(x); M = len(dx)
dy = np.array(y); N = len(dy)
[xx, yy] = np.meshgrid(x, y);
theta = np.pi / 4 # direction of propagation
kx = k* np.cos(theta)
ky = k * np.sin(theta)
So, the plane wave would be
plane_wave = np.sin(kx * xx + ky * yy - w * t[1])
plt.figure();
plt.imshow(plane_wave,cmap='seismic',origin='lower', aspect='auto')
that gives a smooth plane wave, as shown in the figure (image omitted). Also, the sine wave's variation with time, plotted with plt.figure(); plt.plot(plane_wave[2,:]), is shown in a second figure (image omitted).
However, when I append plane waves at different time instants, a discontinuity arises (figures 3 and 4), and I want to get rid of this problem.
I'm new to Python, and any help will be highly appreciated. Thanks in advance.
arr = []
for count in range(len(t)):
    p = np.sin(kx * xx + ky * yy - w * t[count])  # plane wave at t[count]
    arr.append(p)
arr = np.array(arr)
print(arr.shape)
pp, q, r = arr.shape
sig = np.reshape(arr, (-1, r))
print('The signal shape is :', sig.shape)
plt.figure(); plt.imshow(sig.transpose(), cmap='seismic', origin='lower', aspect='auto')
plt.xlabel('X'); plt.ylabel('Y')
plt.figure(); plt.plot(sig[2, :])
This is not so much a programming problem; it has more to do with the fact that you are using the physical quantities in a somewhat unusual way. Your plots are absolutely fine and correct.
What you seem to have misunderstood is that you are dealing with a 2D problem with a third dimension added for time. That is by no means wrong, but if you append snapshots of the 2D wave side by side, you are reusing the x spatial dimension to represent temporal variation, which makes the use of that coordinate axis inconsistent. To make this more intuitive, consider two time instants separately. Doesn't it match your intuition that the points of the 2D plane must have different amplitudes at the two instants (unless, of course, time has progressed by a multiple of the wave's period)? This is indeed the case, so when you try to append the two snapshots, a discontinuity is exhibited. To avoid it, you would have to use either a time step equal to one period, which I believe is of no practical use, or a constant time step chosen so that the phase of the wave on the left border of the image at the current time equals the phase on the right border of the image at the previous time step. Either way, that is always a fixed time step, alternating the phase (on the edges of the image) between those two values.
The same applies to the 1D case, because there you use the two coordinate axes to represent the wave (x is the spatial dimension and y represents the amplitude). This is what can be seen in your last plot.
Now, what is the solution, you may ask. It is provided by simple inspection of the mathematical formula of the wave function. In 2D, it is a scalar function of three variables (that is, it takes three values as input and outputs one), so you need at least four dimensions to represent it. Alas, we can't perceive a fourth spatial dimension, but this is not a problem in your case since the output of the function is represented with colors. That leaves three dimensions that can be used to represent the temporal evolution of your function. All you have to do is create a 3D array whose third dimension represents time, with all the 2D snapshots stored in the first two dimensions.
When it comes to the visual representation of the results, you could either use some kind of waterfall plot, where the z-axis represents time, or use the fourth dimension we can perceive, time itself, to create an animation of the evolution of the wave.
I am not very familiar with Python, so I will only provide a generic, naive implementation; I am sure a lot of people here could simplify and/or optimise the following snippet. I assume that everything in your first two blocks of code is available, so changes are needed only in the last block you present:
# Initialise the array to hold the temporal evolution of the snapshots;
# note xx.shape rather than (len(xx), len(yy)): len() of a 2D meshgrid
# only gives its row count
arr = np.zeros(xx.shape + (len(t),))
for i in range(len(t)):
    arr[:, :, i] = np.sin(kx * xx + ky * yy - w * t[i])
# Below you can plot the figures with any function you prefer or make an animation out of it
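For instance, a minimal animation sketch (assuming the arr and t from the snippet above, with matplotlib.pyplot imported as plt):

import matplotlib.animation as animation

fig = plt.figure()
im = plt.imshow(arr[:, :, 0], cmap='seismic', origin='lower', aspect='auto')

def update(i):
    # Show the i-th temporal snapshot
    im.set_data(arr[:, :, i])
    return [im]

ani = animation.FuncAnimation(fig, update, frames=len(t), interval=50, blit=True)
plt.show()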

Is it possible to describe with 1 parameter when a wave is sinusoidal or square in Python?

I am using scipy, and I managed to filter the data with the fft package by cutting the high frequencies, but that is only useful to transform the data; instead of that, I want to get just one parameter out of the analysis.
Let's have a look at some simple code to explain what I mean:
from scipy import fftpack
import numpy as np
import pandas as pd
from scipy import signal
t = np.linspace(0, 2*np.pi, 100, endpoint=True)
sq1 = signal.square(np.pi*t)
sin1 = np.sin(np.pi*t)
fft_sq1 = fftpack.dct(sq1,norm="ortho")
fft_sin1 = fftpack.dct(sin1, norm="ortho")
After applying the fast Fourier transform (here, the discrete cosine transform) I get fft_sq1 and fft_sin1, which are arrays 100 elements long. By manipulating those coefficients I can later use fftpack.idct() and obtain a curve that does not contain noise.
The problem with this is that I get too many frequencies: 100 parameters that I have to filter before I get the curve back.
Instead of that I am interested in a filter that returns me just 1 value:
0 if the curve is completely square
1 if the curve is exactly like a sinusoid
Does anything come to mind?
Obviously there are infinitely many curves in between: the flatter the periodic signal, the closer the number should be to 0, and the rounder the curve, the closer it should be to 1.
Thanks!!
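(No answer was given in the thread, but as a sketch of one possible such parameter: measure the fraction of the signal's power carried by its strongest spectral line. A pure sinusoid gives 1.0, while an ideal square wave gives 8/pi^2, roughly 0.81, since that is how much of a square wave's power sits in its fundamental; the value can then be rescaled to [0, 1] if desired.)

import numpy as np
from scipy import signal

def shape_parameter(y):
    # Fraction of AC power in the strongest spectral line:
    # ~1.0 for a pure sine, ~8/pi**2 (0.81) for a square wave.
    y = y - y.mean()                   # remove any DC offset
    power = np.abs(np.fft.rfft(y))**2  # one-sided power spectrum
    return power.max() / power.sum()

t = np.linspace(0, 4, 400, endpoint=False)        # 4 full periods
print(shape_parameter(np.sin(2*np.pi*t)))         # ~1.0
print(shape_parameter(signal.square(2*np.pi*t)))  # ~0.81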

What is the upsampling method called 'area' used for?

The PyTorch function torch.nn.functional.interpolate contains several modes for upsampling, such as: nearest, linear, bilinear, bicubic, trilinear, area.
What is the area upsampling mode used for?
As jodag said, it is resizing using adaptive average pooling. While the answer at the link aims to explain what adaptive average pooling is, I find the explanation a bit vague.
TL;DR: the area mode of torch.nn.functional.interpolate is probably one of the most intuitive ways to downsample an image.
You can think of it as applying an averaging low-pass filter (LPF) to the original image and then sampling. Applying an LPF before sampling prevents potential aliasing in the downsampled image; aliasing can result in Moiré patterns in the downscaled image.
It is probably called "area" because it (roughly) preserves the area ratio between the input and output shapes when averaging the input pixels. More specifically, every pixel in the output image will be the average of a respective region in the input image, whose area is roughly the ratio between the input image's area and the output image's area.
Furthermore, the interpolate function with mode='area' calls the source function adaptive_avg_pool2d (implemented in C++), which assigns each pixel in the output tensor the average of all pixel intensities within a computed region of the input. That region is computed per output pixel and can vary in size between pixels. It is computed by multiplying the output pixel's height and width indices by the ratio between the input and output heights and widths (respectively), then taking the floor of the result for the region's starting index and the ceil for its ending index.
Here's an in-depth analysis of what happens in nn.AdaptiveAvgPool2d:
First of all, as stated there you can find the source code for adaptive average pooling (in C++) here: source
Taking a look at the function where the magic happens (or at least the magic on CPU for a single frame), static void adaptive_avg_pool2d_single_out_frame, we have 5 nested loops, running over the channel dimension, then width, then height; within the body of the 3rd loop the magic happens:
First, compute the region within the input image that is used to calculate the value of the current output pixel (recall we had width and height loops running over all pixels in the output).
How is this done?
By a simple computation of start and end indices for height and width, as follows: floor((input_height/output_height) * current_output_pixel_height) for the start and ceil((input_height/output_height) * (current_output_pixel_height + 1)) for the end, and similarly for the width.
Then, all that remains is to average the intensities of all pixels in that region (in the current channel) and place the result in the current output pixel.
I wrote a simple Python snippet that does the same thing, in the same fashion (naive loops), and produces equivalent results. It takes a tensor a and uses adaptive average pooling to resize it to shape out_shape in two ways: once using the built-in nn.AdaptiveAvgPool2d, and once with my Python translation of the C++ source function static void adaptive_avg_pool2d_single_out_frame. The built-in function's result is saved in b and my translation's in b_hat. You can see that the results are equivalent (you can further play with the spatial shapes to validate this):
import torch
from math import floor, ceil
from torch import nn

a = torch.randn(1, 3, 15, 17)
out_shape = (10, 11)
b = nn.AdaptiveAvgPool2d(out_shape)(a)

b_hat = torch.zeros(b.shape)
for d in range(a.shape[1]):
    for w in range(b_hat.shape[3]):
        for h in range(b_hat.shape[2]):
            startW = floor(w * a.shape[3] / out_shape[1])
            endW = ceil((w + 1) * a.shape[3] / out_shape[1])
            startH = floor(h * a.shape[2] / out_shape[0])
            endH = ceil((h + 1) * a.shape[2] / out_shape[0])
            b_hat[0, d, h, w] = torch.mean(a[0, d, startH:endH, startW:endW])

'''
Prints Mean Squared Error = 0 (or a very small number, due to precision error)
as both outputs are the same, proof of output equivalence:
'''
print(nn.MSELoss()(b_hat, b))
Looking at the source code, it appears that area interpolation is equivalent to resizing a tensor via adaptive average pooling. You can refer to this question for an explanation of adaptive average pooling. Therefore, area interpolation is more applicable to downsampling than to upsampling.
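A quick sketch to check that equivalence directly:

import torch
import torch.nn.functional as F
from torch import nn

a = torch.randn(1, 3, 15, 17)
out = F.interpolate(a, size=(10, 11), mode='area')   # area interpolation
pooled = nn.AdaptiveAvgPool2d((10, 11))(a)           # adaptive average pooling
print(torch.allclose(out, pooled))                   # True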

Normalization - Signal with different sampling rates

I am trying to solve a signal processing problem. I have a signal like this
My job is to use FFT to plot the frequency vs. signal. This is what I have coded so far:
import re
import numpy as np

def Extract_Data(filepath, pattern):
    data = []
    with open(filepath) as file:
        for line in file:
            m = re.match(pattern, line)
            if m:
                data.append(list(map(float, m.groups())))
    data = np.asarray(data)
    # Convert lists to arrays
    variable_array = data[:, 1]
    time_array = data[:, 0]
    return variable_array, time_array
def analysis_FFT(filepath, pattern):
    signal, time = Extract_Data(filepath, pattern)
    signal_FFT = np.fft.fft(signal)
    N = len(signal_FFT)
    T = time[-1]
    # Frequencies
    signal_freq = np.fft.fftfreq(N, d=T/N)
    # Shift the frequencies
    signal_freq_shift = np.fft.fftshift(signal_freq)
    # Real and imaginary parts of the signal
    signal_real = signal_FFT.real
    signal_imag = signal_FFT.imag
    signal_abs = pow(signal_real, 2) + pow(signal_imag, 2)  # squared magnitude
    # Shift the signal
    signal_shift = np.fft.fftshift(signal_FFT)
    # Spectrum
    signal_spectrum = np.abs(signal_shift)
What I'm really concerned about is the sampling rate. As you look at the plot, it seems the sampling rate of the first ~0.002 s is not the same as that of the rest of the signal, so I'm thinking maybe I need to normalize the signal.
However, np.fft.fftfreq(N, d=T/N) seems to assume the signal has the same sampling rate throughout the domain, so I'm not sure how I could normalize the signal with np.fft. Any suggestions?
Cheers.
This is what I got when I plotted shifted frequency [Hz] with shifted signal
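(As an aside, not from the answer below: if the time base really is non-uniform, one simple option is to resample onto a uniform grid before the FFT. A sketch, reusing the signal and time arrays from Extract_Data:)

t_uniform = np.linspace(time[0], time[-1], len(time))   # uniform time grid
signal_uniform = np.interp(t_uniform, time, signal)     # linear resampling
signal_FFT = np.fft.fft(signal_uniform)
signal_freq = np.fft.fftfreq(len(t_uniform), d=t_uniform[1] - t_uniform[0])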
I generated a synthetic signal similar to yours and plotted, like you did, the spectrum over the whole time. Your plot was good as it pertains to the whole spectrum; it just appears not to show the absolute value.
import numpy as np
import matplotlib.pyplot as p
%matplotlib inline
T=0.05 # 1/20 sec
n=5000 # 5000 Sa, so 100kSa/sec sampling frequency
sf=n/T
d=T/n
t=np.linspace(0,T,n)
fr=260 # Hz
y1= - np.cos(2*np.pi*fr*t) * np.exp(- 20* t)
y2= 3*np.sin(2*np.pi*10*fr*t+0.5) *np.exp(-2e6*(t-0.001)**2)
y=(y1+y2)/30
f=np.fft.fftshift(np.fft.fft(y))
freq=np.fft.fftshift(np.fft.fftfreq(n,d))
p.figure(figsize=(12,8))
p.subplot(311)
p.plot(t,y ,color='green', lw=1 )
p.xlabel('time (sec)')
p.ylabel('Velocity (m/s)')
p.subplot(312)
p.plot(freq,np.abs(f)/n)
p.xlabel('freq (Hz)')
p.ylabel('Velocity (m/s)');
p.subplot(313)
s=slice(n//2-500,n//2+500,1)
p.plot(freq[s],np.abs(f)[s]/n)
p.xlabel('freq (Hz)')
p.ylabel('Velocity (m/s)');
On the bottom, I zoomed in a bit to show the two main frequency components. Note that we are showing both positive and negative frequencies (only the positive ones, times two, are physical). The Gaussians at ±2600 Hz indicate the frequency spectrum of the burst (the FT of a Gaussian is a Gaussian). The narrow lines at ±260 Hz indicate the slow base frequency (the FT of a sine is a delta).
That, however, hides the timing of the two separate frequency components: the short (in my case Gaussian) burst at the start at about 2.6 kHz, and the decaying low tone at about 260 Hz. The spectrogram plots the spectra of short pieces (nperseg samples) of your signal as vertical stripes, where color indicates intensity. You can set some overlap between the time frames, which should be some fraction of the segment length. By stacking these stripes over time, you get a plot of the spectral change over time.
from scipy.signal import spectrogram
f, t, Sxx = spectrogram(y,sf,nperseg=256,noverlap=64)
p.pcolormesh(t, f[:20], Sxx[:20,:])
#p.pcolormesh(t, f, Sxx)
p.ylabel('Frequency [Hz]')
p.xlabel('Time [sec]')
p.show()
It is instructive to try and generate the spectrogram yourself with the help of just the FFT. Otherwise the settings of the spectrogram function might not be very intuitive at first.
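A bare-bones do-it-yourself version might look like this (a sketch; it assumes the y, sf, nperseg, and noverlap values from above, and applies no windowing or scaling, unlike scipy's spectrogram, which uses a Tukey window by default):

import numpy as np
import matplotlib.pyplot as p

step = nperseg - noverlap
# Slice the signal into overlapping frames, FFT each, stack as columns
frames = np.array([y[i:i + nperseg]
                   for i in range(0, len(y) - nperseg + 1, step)])
S = np.abs(np.fft.rfft(frames, axis=1)).T     # (frequencies, time frames)
freqs = np.fft.rfftfreq(nperseg, d=1/sf)
times = np.arange(S.shape[1]) * step / sf     # frame start times
p.pcolormesh(times, freqs[:20], S[:20, :])
p.ylabel('Frequency [Hz]')
p.xlabel('Time [sec]')
p.show()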

np.fft.fft not working properly

I'm trying to use Python 3.x to compute an FFT of some data, but when I plot I get my original data (?), not the data's FFT. I'm using MATLAB so I can compare the results.
I've already tried many examples from this site, but nothing seems to work. I'm not used to working with Python. How can I get a plot similar to MATLAB's? I don't care whether I get the -f/2 to f/2 or the 0 to f/2 spectrum.
My data
import scipy.io
import numpy as np
import matplotlib.pyplot as plt
mat = scipy.io.loadmat('sinal2.mat')
sinal2 = mat['sinal2']
Fs = 1000
L = 1997
T = 1.0/1000.0
fsig = np.fft.fft(sinal2)
freq = np.fft.fftfreq(len(sinal2), 1/Fs)
plt.figure()
plt.plot( freq, np.abs(fsig))
plt.figure()
plt.plot(freq, np.angle(fsig))
plt.show()
FFT from python:
FFT from matlab:
The imported signal sinal2 has size (1997, 1). In the case of 2-dimensional arrays like this, numpy.fft.fft by default computes the FFT along the last axis. Here that means computing 1997 FFTs of size 1. As you may know, a 1-point FFT is an identity mapping (the FFT of a single value gives back the same value), hence the resulting 2D array is identical to the original array.
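A quick demonstration of this behaviour:

import numpy as np

col = np.array([[1.0], [2.0], [3.0]])   # shape (3, 1), like sinal2
print(np.fft.fft(col))           # default last axis: 1-point FFTs, unchanged
print(np.fft.fft(col, axis=0))   # first axis: the actual 3-point spectrum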
To avoid this, you can either specify the other axis explicitly:
fsig = np.fft.fft(sinal2, axis=0)
Or otherwise convert the data to a single dimensional array, then compute the FFT of a 1D array:
sinal2 = sinal2[:,0]
fsig = np.fft.fft(sinal2)
On a final note, your FFT plot shows a horizontal line connecting the upper and lower halves of the frequency spectrum. See my answer to another question to address this problem. Since you mention that you really only need half the spectrum, you could also truncate the result to the first N//2+1 points:
plt.plot( freq[0:len(freq)//2+1], np.abs(fsig[0:len(fsig)//2+1]))
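Alternatively, np.fft.rfft computes only the non-negative half of the spectrum of real input directly, which avoids the slicing (a sketch, reusing the variables above):

fsig = np.fft.rfft(sinal2, axis=0)           # one-sided spectrum of real data
freq = np.fft.rfftfreq(len(sinal2), 1/Fs)    # matching non-negative frequencies
plt.figure()
plt.plot(freq, np.abs(fsig))
plt.show()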
