Bit Error Probability - telecommunication

I want to ask for help with my problem. I want to compute the BER (bit error rate) for a nine-point constellation, as illustrated below. I computed the SER and then converted it to BER, but the result was incorrect. Any site or suggestion, please?
Many thanks
Othman
My code is:
clear all; clc;
SNR = 0:40;                          % SNR range in dB
SNRL = 10.^(SNR./10);                % linear SNR
Eb = 1;
sigma = sqrt(2*Eb./SNRL);            % noise standard deviation
d2 = 0.3;                            % distance parameter used in the error formula
Pe = 14/9*erfc(d2./sqrt(2*sigma.^2)) + 2/9*erfc(0./sqrt(2*sigma.^2));
semilogy(SNR, Pe); grid on; hold on;
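Since the analytical formula is in doubt, one way to sanity-check it (not from the thread; the 3x3 unit-spacing grid and the Es/N0 convention are my assumptions) is to simulate the nine-point constellation over AWGN and measure the SER directly. A minimal sketch in Python/numpy:

import numpy as np

rng = np.random.default_rng(0)
points = np.array([complex(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)])
Es = np.mean(np.abs(points) ** 2)            # average symbol energy of the 3x3 grid

def simulate_ser(snr_db, n_symbols=200_000):
    snr = 10 ** (snr_db / 10)
    n0 = Es / snr                            # noise spectral density (Es/N0 definition)
    tx_idx = rng.integers(len(points), size=n_symbols)
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(n_symbols)
                               + 1j * rng.standard_normal(n_symbols))
    rx = points[tx_idx] + noise
    # nearest-neighbour (minimum-distance) detection
    rx_idx = np.argmin(np.abs(rx[:, None] - points[None, :]), axis=1)
    return np.mean(rx_idx != tx_idx)

for snr_db in (5, 10, 15, 20):
    print(snr_db, simulate_ser(snr_db))

Comparing this simulated curve with the analytical Pe should show where the formula breaks down; converting SER to BER additionally requires the bit-to-symbol mapping of the nine points.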

Related

How to find the variance of the sample mean, given the population mean, standard deviation and sample size

Problem statement: Given a sample of size n = 60 taken from a continuous population distribution with mean 56 and standard deviation 25, find the variance of the sample mean.
I tried the code below but, as expected, there is no fixed answer, and my answer is marked incorrect.
import math
import numpy as np
import scipy.stats

dist = scipy.stats.norm(loc=56, scale=25)
sample = dist.rvs(60)                        # one random sample of size 60
x = np.var(sample)                           # sample variance, different on every run
Err = math.sqrt(25/60)                       # scale used for the second distribution
dist = scipy.stats.norm(loc=56, scale=Err)
Variance = dist.var()                        # np.variance does not exist; use the distribution's variance
It's something around 10.52.
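For reference, the variance of the sample mean has a closed form: Var of the mean = sigma^2 / n = 25^2 / 60 ≈ 10.42, which a quick Monte Carlo check confirms (the repetition count below is arbitrary):

import numpy as np
import scipy.stats

sigma, n = 25, 60
print(sigma**2 / n)                                  # analytical value: ~10.42

dist = scipy.stats.norm(loc=56, scale=sigma)
means = dist.rvs(size=(100_000, n)).mean(axis=1)     # 100,000 sample means of size n
print(np.var(means))                                 # empirical value, close to 10.42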

Cosmic ray removal in spectra

Hello Python developers,
I am working on spectroscopy at a university. My experimental 1-D data sometimes shows "cosmic rays": ultra-high-intensity spikes about 3 pixels wide, which are not what I want to analyze, so I want to remove these weird peaks.
Does anybody know how to fix this issue in Python 3?
Thanks in advance!!
A simple solution could be to use the algorithm proposed by Whitaker and Hayes, in which they use modified z-scores on the derivative of the spectrum. This Medium post explains how it works and its implementation in Python: https://towardsdatascience.com/removing-spikes-from-raman-spectra-8a9fdda0ac22
The idea is to calculate the modified z-scores of the spectrum's derivative and apply a threshold to detect the cosmic spikes. Afterwards, a fixer is applied to remove the spike points and replace them with the mean value of the surrounding pixels.
import numpy as np

# Definition of a function to calculate the modified z-score.
def modified_z_score(intensity):
    median_int = np.median(intensity)
    mad_int = np.median(np.abs(intensity - median_int))   # median absolute deviation
    modified_z_scores = 0.6745 * (intensity - median_int) / mad_int
    return modified_z_scores

# Once the spike detection works, the spectrum can be fixed by averaging the points
# surrounding the spike. y is the intensity values of a spectrum, m is the half-window
# used to calculate the mean.
def fixer(y, m):
    threshold = 7                                          # binarization threshold
    spikes = abs(np.array(modified_z_score(np.diff(y)))) > threshold
    y_out = y.copy()                                       # so we don't overwrite y
    for i in np.arange(len(spikes)):
        if spikes[i] != 0:                                 # if we have a spike at position i
            # select up to 2m + 1 points around the spike, clamped to the array bounds
            w = np.arange(max(i - m, 0), min(i + 1 + m, len(spikes)))
            w2 = w[spikes[w] == 0]                         # keep only the non-spike points
            y_out[i] = np.mean(y[w2])                      # replace the spike with their mean
    return y_out
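For example, a quick way to try this out on a synthetic spectrum (the Gaussian peak, noise level and injected spike below are made up purely for illustration):

import numpy as np

x = np.linspace(0, 100, 1000)
spectrum = 50 * np.exp(-(x - 40) ** 2 / 10) + np.random.normal(0, 1, x.size)
spectrum[500] += 500                 # simulated cosmic-ray spike
cleaned = fixer(spectrum, m=3)       # spike replaced by the mean of its neighbours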
The answer depends on what your data looks like:
If you have access to the two-dimensional CCD readouts that the one-dimensional spectra were created from, then you can use the lacosmic module to get rid of the cosmic rays there.
If you have only one-dimensional spectra, but multiple spectra from the same source, then a quick ad-hoc fix is to make a rough normalisation of the spectra and remove those pixels that are several times brighter than the corresponding pixels in the other spectra.
If you have only one one-dimensional spectrum from each source, then a less reliable option is to remove all pixels that are much brighter than their neighbours. (Depending on the shape of your cosmics, you may even want to remove the nearest 5 pixels or so, to catch the wings of the cosmic-ray peak as well.)

Memory Leak when running PYMC3 in a FOR LOOP

I'm using PyMC3 to fit tennis players' serve ace rates using Bayesian fitting to a Beta distribution. Each time the code loops through a player, the memory use increases a little. I'm trying to do this for 400+ players over 3 different surfaces, and I run out of memory after about 200 players. I don't understand why the memory doesn't get reset after each loop iteration, as I don't think I'm using info from prior loop iterations.
I think the issue may have to do with the trace. I saw advice somewhere that I should not have trace = pm.sample(...) but rather just pm.sample(...) and then grab that data after the program has run. I'm not sure how to implement that fix, and I'm hoping there's a more straightforward solution to what I imagine is a fairly common problem (though I haven't seen many questions about it online).
The relevant bits of the code are shown below. Thanks in advance for your help.
import numpy as np
import pymc3 as pm
from scipy.stats import beta

# fit a Beta prior to the observed ace-rate distribution
prior_parameters = beta.fit(chart_data, floc=0, fscale=1)
prior_a, prior_b = prior_parameters[0:2]

# j is the surface index, set in an enclosing loop (not shown)
for i in range(server_by_surface_pct.shape[0]):
    # srv_count is the number of serves taken by player i on surface j
    srv_count = pivot_srv_count.iat[i, j]
    # go to the next iteration of the loop if there are no serves for player i on surface j
    if np.isnan(srv_count):
        continue
    # ace_pct is the percentage of serves from player i on surface j that are aces
    ace_pct = server_by_surface_pct.iat[i, j]
    # calculate ace_count (number of aces) by player i on surface j
    ace_count = round(srv_count * ace_pct, 0)
    # zero aces is possible, so replace NaNs with zero
    if np.isnan(ace_count):
        ace_count = 0.0
    # pm = PyMC3 -- this is the Bayesian fitting model
    with pm.Model() as model:
        theta_prior = pm.Beta('prior', prior_a, prior_b)
        observations = pm.Binomial('obs', n=srv_count, p=theta_prior, observed=ace_count)
        start = pm.find_MAP()
        step = pm.NUTS(scaling=start)
        trace = pm.sample(1000, step=step, start=start, progressbar=True)
    # the mean of the trace is the new fitted serve percentage for player i on surface j
    server_by_surface_pct_fitted.iat[i, j] = np.mean(trace['prior'])
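One commonly suggested mitigation is to keep only the scalar you need from each fit and explicitly release the trace and model before the next iteration. Below is a hedged sketch of the loop body (it uses the default sampler for brevity, which is my simplification; whether del plus gc.collect() fully reclaims memory depends on the PyMC3/Theano version in use):

import gc

with pm.Model() as model:
    theta_prior = pm.Beta('prior', prior_a, prior_b)
    observations = pm.Binomial('obs', n=srv_count, p=theta_prior, observed=ace_count)
    trace = pm.sample(1000, progressbar=False)

# keep only the number we need, not the whole trace
server_by_surface_pct_fitted.iat[i, j] = float(np.mean(trace['prior']))

# drop the references from this iteration and force a garbage collection
del trace, model
gc.collect()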

frequency to time conversion using MATLAB

I would like to convert my data from the frequency domain into the time domain. In the attached Excel sheet (Book1.xlsx), column A is frequency and columns B and C are the real and imaginary data (B + jC). My code is attached below, but it's not working. I would like my result to look like the green curve (part 1) of the time-domain figure.
[num, data, raw] = xlsread('Book1.xlsx');
ln  = length(raw) - 1;   % length of the sequence (excluding the header row)
xk  = zeros(1, ln);      % magnitude of the input spectrum
ixk = zeros(1, ln);      % result of the inverse DFT
rx  = zeros(1, ln);      % real part of the spectrum
ix  = zeros(1, ln);      % imaginary part of the spectrum
for i = 2:length(raw)
    rx(i-1) = cell2mat(raw(i, 2));
    ix(i-1) = cell2mat(raw(i, 3));
    xk(i-1) = sqrt(rx(i-1)^2 + ix(i-1)^2);
end
% inverse DFT computed by hand; use 1i for the imaginary unit, since the
% loop above has overwritten the variable i with a number
for n = 0:ln-1
    for k = 0:ln-1
        ixk(n+1) = ixk(n+1) + xk(k+1)*exp(1i*2*pi*k*n/ln);
    end
end
ixk = 10*log(ixk./ln);
t = 0:ln-1;
plot(t, ixk)
This code should give me a result similar to the green curve (part 1) in the image.
Instead of doing the FFT yourself, you could use the built-in MATLAB functions to do it; that is much easier.
A good example from MathWorks is given here. The following is some code I have based on it. The passed-in parameter f is your time-domain trace, and fsampling is your sampling rate. The passed-out parameters freq and finv are your frequency vector and Fourier transform, respectively.
function [freq, finv] = FourierTransform(f, fsampling)
% Fast Fourier Transform
fsampling = round(fsampling);
finv = fft(f, fsampling);
finv = finv(1:length(finv)/2 + 1);    % keep only the first half (up to Nyquist), due to symmetry
finv(2:end-1) = 2*finv(2:end-1);      % adjust amplitude to account for the discarded half
finv = finv./length(f);
freq = 0:fsampling/2;
end
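For the direction the question actually asks about (frequency to time), the same advice applies: use a built-in inverse transform rather than a hand-written DFT (in MATLAB the corresponding built-in is ifft). As a rough sketch of the idea, here it is in Python/numpy; the uniform frequency spacing and the one-sided-spectrum interpretation are my assumptions:

import numpy as np
import pandas as pd

# column A: frequency, column B: real part, column C: imaginary part (per the question)
df = pd.read_excel('Book1.xlsx')
freq = df.iloc[:, 0].to_numpy()
spectrum = df.iloc[:, 1].to_numpy() + 1j * df.iloc[:, 2].to_numpy()

signal = np.fft.irfft(spectrum)       # built-in inverse FFT of a one-sided spectrum
dt = 1.0 / (2 * freq[-1])             # sampling interval implied by the highest frequency
t = np.arange(signal.size) * dt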

glossy reflection in ray tracing

I am doing a project about ray tracing; right now I can do some basic rendering.
The image below has:
mirror reflection,
refraction,
texture mapping
and shadow.
I am trying to do glossy reflection; so far, this is what I am getting.
Could anyone tell me if there is any problem in this glossy reflection image?
In comparison, the image below is from the mirror reflection.
This is my code for glossy reflection. Basically, once a primary ray intersects with an object, it randomly shoots another 80 rays from that intersection and takes the average of those 80 rays' colours. The problem I am having with this code is the magnitude of x and y: I have to divide them by some value, in this case 16, so that the glossy reflected ray isn't too random. Is there anything wrong with this logic?
Colour c(0, 0, 0);
for (int i = 0; i < 80; i++) {
    Ray3D testRay;
    // two uniform random numbers in [0, 1]
    double a = rand() / (double) RAND_MAX;
    double b = rand() / (double) RAND_MAX;
    // sample a direction around the mirror reflection direction
    double theta = acos(pow((1 - a), ray.intersection.mat->reflectivity));
    double phi = 2 * M_PI * b;
    double x = sin(phi) * cos(theta) / 16;
    double y = sin(phi) * sin(theta) / 16;
    double z = cos(phi);    // note: z is computed but never used below
    // build a basis around the reflection direction
    Vector3D u = reflect.dir.cross(ray.intersection.normal);
    Vector3D v = reflect.dir.cross(u);
    testRay.dir = x * u + y * v + reflect.dir;
    testRay.dir.normalize();
    testRay.origin = reflect.origin;
    testRay.nbounces = reflect.nbounces;
    c = c + (ray.intersection.mat->reflectivity) * shadeRay(testRay);
}
col = col + c / 80;
Apart from the hard-coded constants, which are never great when coding, there is a more subtle issue, although your images overall look good.
Monte Carlo integration consists of summing the integrand divided by the probability density function (pdf) that generated the samples. There are thus two problems in your code:
you haven't divided by the pdf, although you seem to have used a pdf for Phong models (if I recognized it correctly; at least it is not a uniform pdf);
you have further scaled your x and y components by 1./16 for apparently no reason, which further changes your pdf.
The idea is that if you are able to sample your rays exactly according to Phong's model times the cosine law, then you don't even have to multiply your integrand by the BRDF. In practice, there is no exact formula that allows sampling an arbitrary BRDF exactly (apart from Lambertian ones), so you need to compute:
pixel value = sum( BRDF * cosine * incoming_light / pdf )
which mostly cancels out if BRDF * cosine = pdf.
Of course, your images overall look good, so if you're not interested in physical plausibility, that may well be good enough.
A good source about the various pdfs used in computer graphics and Monte-Carlo integration (with the appropriate formulas) is the Global Illumination Compendium by Philip Dutré.
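To make the pdf bookkeeping concrete, here is a minimal sketch (in Python/numpy rather than the question's C++, and with a hypothetical glossiness exponent n in place of the material reflectivity that the question's code uses inside acos) of sampling a direction in a Phong lobe around the mirror direction r, together with the pdf each sample must be divided by:

import numpy as np

def sample_phong_lobe(r, n, rng):
    """Sample a direction around the unit mirror direction r and return it with its pdf."""
    u1, u2 = rng.random(), rng.random()
    cos_theta = u1 ** (1.0 / (n + 1.0))             # Phong-lobe sampling: cos(theta) = u1^(1/(n+1))
    sin_theta = np.sqrt(max(0.0, 1.0 - cos_theta ** 2))
    phi = 2.0 * np.pi * u2
    # orthonormal basis (u, v, r) around the mirror direction
    a = np.array([1.0, 0.0, 0.0]) if abs(r[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(r, a)
    u /= np.linalg.norm(u)
    v = np.cross(r, u)
    d = sin_theta * np.cos(phi) * u + sin_theta * np.sin(phi) * v + cos_theta * r
    pdf = (n + 1.0) / (2.0 * np.pi) * cos_theta ** n
    return d, pdf

rng = np.random.default_rng(0)
d, pdf = sample_phong_lobe(np.array([0.0, 0.0, 1.0]), n=50, rng=rng)

Each sample would then contribute BRDF * cosine * incoming_light / pdf, as in the formula above; with a Phong-lobe BRDF, the cos^n factors largely cancel against this pdf.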
