I am applying a noise reduction algorithm to a specific signal using Octave, and this is the code:
clc;
clear;
pkg load signal
x = csvread("Ascan.csv");        % load the A-scan signal
tres = 50/length(x);             % time resolution (50 us total record)
t = 0:tres:50-tres;              % time axis in us
MHz = 10;
fres = 1/tres;
f1 = 0.5*MHz;                    % lower edge of the swept band
f2 = 6*MHz;                      % upper edge of the swept band
numberofOverlaps = 50;           % number of overlapping pass bands
freqChange = (f2 - f1)/numberofOverlaps;
fs = 50*MHz;                     % sampling frequency
Rpass = 1;                       % pass-band ripple in dB
Rstop = 26;                      % stop-band attenuation in dB
fp1 = f1:freqChange:f2 - freqChange;                 % lower pass-band edges
fp2 = f1 + 2*freqChange:freqChange:f2 + freqChange;  % upper pass-band edges
for i = 1:numberofOverlaps
  fs1 = fp1(i) - 1*MHz;          % stop-band edges 1 MHz outside the pass band
  fs2 = fp2(i) + 1*MHz;
  if fs1 < 0
    fs1 = 0;
  endif
  fpass{i} = [fp1(i) fp2(i)];
  fstop{i} = [fs1 fs2];
  Wpass = 2/fs * fpass{i};       % normalise edges to the Nyquist frequency
  Wstop = 2/fs * fstop{i};
  [n, Wp] = buttord(Wpass, Wstop, Rpass, Rstop);
  [b, a] = butter(n, Wp);        % band-pass Butterworth: numerator b, denominator a
  filtered{i} = filter(b, a, x);     % full IIR filtering with b and a
  fftfiltered{i} = fftfilt(b, x);    % FFT-based filtering with b only
end
for L = 1:length(x)
  for k = 1:numberofOverlaps
    m1(k) = filtered{k}(L);
    m2(k) = fftfiltered{k}(L);
  endfor
  minimalistic(L) = min(m1);     % minimum across all bands at this sample
  minimal(L) = min(m2);
endfor
figure(1);
subplot(1,3,1);
plot(t, x);
title("Unfiltered");
xlabel('Time in us');
ylabel('Amplitude');
subplot(1,3,2);
plot(t, minimalistic);
title("Filtered using filter function");
xlabel('Time in us');
ylabel('Amplitude');
subplot(1,3,3);
plot(t, minimal);
title("Filtered using fftfilt function");
xlabel('Time in us');
ylabel('Amplitude');
Running this code, I get the output shown. The signal in column 1 is the input signal, and the signal in column 3 is the desired output, but that one comes from fftfilt(b,x). Why does it not work the same way with filter(b,a,x), whose output is shown in column 2?
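One detail worth keeping in mind when comparing the two columns: fftfilt(b, x) performs FIR filtering with the numerator b alone (it is equivalent to filter(b, 1, x), just computed via overlap-add FFT), whereas filter(b, a, x) applies the full IIR transfer function including the denominator a. A minimal sketch of that behavioural difference, written in Python/SciPy rather than Octave, with a toy random signal and an arbitrary band-pass design standing in for the real A-scan and band edges (both are assumptions, not the original data):

```python
import numpy as np
from scipy import signal

# Toy signal standing in for the A-scan (assumption: any test signal works here)
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)

# One arbitrary band-pass Butterworth design (assumed edges, not the real ones)
b, a = signal.butter(4, [0.1, 0.3], btype="bandpass")

iir = signal.lfilter(b, a, x)           # full IIR filter: uses b AND a
fir_only = signal.lfilter(b, [1.0], x)  # numerator only, i.e. what fftfilt(b, x) computes

# fftfilt(b, x) is FIR convolution with b; emulate it with FFT-based convolution
fft_fir = signal.fftconvolve(x, b)[:len(x)]

print(np.allclose(fir_only, fft_fir))   # True: the two FIR paths agree
print(np.allclose(iir, fir_only))       # False: dropping a changes the output
```

So the two columns are not computing the same filter: one applies the Butterworth band-pass as designed, the other applies only its numerator polynomial.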