A signal may be defined as any observable change in a quantity over space or time, even if it does not carry information. Signals can broadly be classified into two types:
An analog signal is a continuous stream of values: it can take any value within a range.
A digital signal is a discrete stream of values: it can take only certain fixed values.
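To make the distinction concrete, here is a minimal sketch (illustrative only, not part of the project code) that samples a sine wave and then quantizes it to a few fixed levels, turning a continuous-valued signal into a discrete-valued one:

import numpy as np
t = np.linspace(0, 1, 50)                  # sample instants
analog = np.sin(2 * np.pi * 5 * t)         # continuous-valued 5 Hz sine
levels = 4                                 # number of discrete amplitude levels
digital = np.round(analog * (levels / 2)) / (levels / 2)  # quantized copy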
This project aims to generate a sinusoidal signal, add Additive White Gaussian Noise (AWGN) to it, and denoise it using an autoencoder model.
import numpy as np
import matplotlib.pyplot as plt
To generate a sample sinusoidal signal, we can use the code below.
t = np.linspace(1, 100, 1000)     # 1000 time samples
v = 10 * np.sin(t / (2 * np.pi))  # sinusoid with amplitude 10
This would generate a signal that looks like this:
Now we calculate the power of the generated signal:
w = v ** 2  # instantaneous power (assuming a 1-ohm load)
The generated power signal would look like this:
Now, to convert the power from watts to dB, we use the following code:
w_db = 10 * np.log10(w)  # watts -> dB
The power plotted in dB would look like this:
We choose a target SNR (Signal-to-Noise Ratio) and calculate the average signal power, converting it to dB. Subtracting the target SNR from the signal power gives the average noise power in dB, which we convert back to watts. Finally, we sample noise from a zero-mean normal distribution whose variance equals that noise power, and add it to the generated signal to obtain the noisy signal.
target_snr_db = 20
# Calculate signal power and convert to dB
sig_avg_watts = np.mean(w)
sig_avg_db = 10 * np.log10(sig_avg_watts)
# Calculate the noise power in dB, then convert to watts
noise_avg_db = sig_avg_db - target_snr_db
noise_avg_watts = 10 ** (noise_avg_db / 10)
# Generate a sample of white Gaussian noise
mean_noise = 0
noise_volts = np.random.normal(mean_noise, np.sqrt(noise_avg_watts), len(w))
# Noise up the original signal
y_volts = v + noise_volts
After this, the signal with noise would look like this:
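As a quick sanity check (a minimal sketch, not part of the original walkthrough), we can verify that the empirical SNR of the noisy signal is close to the 20 dB target:

# Ratio of average signal power to average noise power, in dB
measured_snr_db = 10 * np.log10(np.mean(v ** 2) / np.mean(noise_volts ** 2))
print(measured_snr_db)  # should land near target_snr_db (20 dB)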
For training the deep learning model we need data samples, so we randomly generate them by defining functions that reuse the signal- and noise-generation logic above.
def signal_gen():
    l = np.random.randint(1, 100)             # random upper time bound
    t = np.linspace(1, l, 1000)               # 1000 time samples
    v = 10 * np.sin(t / (2 * np.pi)) / 1000   # sinusoid scaled by 1/1000
    return v
def noise_gen(v):
    w = v ** 2                                # instantaneous power
    target_snr_db = 20
    sig_avg_watts = np.mean(w)
    sig_avg_db = 10 * np.log10(sig_avg_watts)
    noise_avg_db = sig_avg_db - target_snr_db
    noise_avg_watts = 10 ** (noise_avg_db / 10)
    mean_noise = 0
    noise_volts = np.random.normal(mean_noise, np.sqrt(noise_avg_watts), len(w))
    y_volts = v + noise_volts
    return y_volts
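Both generators draw from NumPy's global random number generator, so seeding it (an optional step not in the original code; the seed value is arbitrary) makes the dataset reproducible:

np.random.seed(42)  # any fixed seed gives a repeatable dataset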
To view a sample from the generated dataset we can use the following code snippet.
v = signal_gen()
plt.subplot(2, 1, 1)
plt.title("Random Signal")
plt.plot(v)
plt.subplot(2, 1, 2)
plt.title("Random Signal with noise")
plt.plot(noise_gen(v))
plt.tight_layout()
plt.show()
The generated result would look something like this:
To generate the dataset we use the following code snippet.
signal = []
noisy_signal = []
for i in range(1000):
    v = signal_gen()
    signal.append(v)
    noisy_signal.append(noise_gen(v))
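The training loop below converts one sample at a time; as an alternative (a sketch under our own naming, not the original approach), the whole dataset can be stacked into tensors up front, which also makes batched training possible:

import torch
signal_t = torch.tensor(np.array(signal), dtype=torch.float32)              # shape (1000, 1000)
noisy_signal_t = torch.tensor(np.array(noisy_signal), dtype=torch.float32)  # shape (1000, 1000)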
To perform the denoising we use a simple linear autoencoder with one encoder layer and one decoder layer. Each input sample has 1000 points, and the dataset contains 1000 samples.
import torch
import torch.nn as nn

class DeNoise(nn.Module):
    def __init__(self):
        super(DeNoise, self).__init__()
        self.lin1 = nn.Linear(1000, 800)    # encoder: 1000 -> 800
        self.lin_t1 = nn.Linear(800, 1000)  # decoder: 800 -> 1000
    def forward(self, x):
        x = torch.tanh(self.lin1(x))        # tanh keeps activations in (-1, 1)
        x = self.lin_t1(x)
        return x

model = DeNoise().cuda()  # requires a CUDA-capable GPU
print(model)
Here we use a tanh activation function because the maximum and minimum of a sinusoid are 1 and -1; since tanh shares these bounds, we found it best suited for this application.
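A quick illustration of that intuition (a throwaway snippet, not project code): tanh squashes any real input into the open interval (-1, 1), saturating at the sinusoid's extremes:

import torch
print(torch.tanh(torch.tensor([-100.0, 0.0, 100.0])))  # tensor([-1., 0., 1.])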
import torch
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
def train(n_epochs, model):
    training_loss = []
    for epoch in range(n_epochs):
        trainloss = 0.0
        for sig, noisig in zip(signal, noisy_signal):
            sig = torch.Tensor(sig).cuda()
            noisig = torch.Tensor(noisig).cuda()
            optimizer.zero_grad()
            output = model(noisig)          # reconstruct the clean signal from the noisy input
            loss = criterion(output, sig)
            loss.backward()
            optimizer.step()
            trainloss += loss.item()
        print("Epoch: {} , Training Loss: {}".format(epoch + 1, trainloss / len(signal)))
        training_loss.append(trainloss / len(signal))
    plt.plot(training_loss)
    print("Training Completed !!!")

train(10, model)
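Beyond the loss curve, a held-out check (a minimal sketch; the original notebook evaluates visually) can quantify how well the model denoises a sample it has never seen:

model.eval()  # switch to inference mode
with torch.no_grad():
    v_test = signal_gen()                                # fresh clean signal
    noisy_test = torch.Tensor(noise_gen(v_test)).cuda()  # its noisy version
    denoised = model(noisy_test).cpu().numpy()
    mse = np.mean((denoised - v_test) ** 2)              # error vs. the clean signal
print(mse)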
The result from the training would look like this:
def plot(i):
    # Denoise the i-th noisy sample
    pred = model(torch.Tensor(noisy_signal[i]).cuda()).cpu()
    plt.subplot(3, 1, 1)
    plt.title("Original Signal")
    plt.xlabel("Time")
    plt.ylabel("Voltage")
    plt.plot(signal[i])
    plt.subplot(3, 1, 2)
    plt.title("Noisy Signal")
    plt.xlabel("Time")
    plt.ylabel("Voltage")
    plt.plot(noisy_signal[i])
    plt.subplot(3, 1, 3)
    plt.title("Predicted Signal")
    plt.xlabel("Time")
    plt.ylabel("Voltage")
    plt.plot(pred.detach().numpy())
    plt.tight_layout()
    plt.show()
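For example, to inspect the first training sample we would call:

plot(0)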
The above block would generate results that look like this:
So, in this project, we have successfully implemented a signal denoiser that uses a PyTorch-based deep learning model.
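If you want to reuse the trained denoiser later (an optional step not covered in the original notebook; the file name here is hypothetical), the weights can be saved and reloaded with PyTorch's standard utilities:

torch.save(model.state_dict(), "denoiser.pth")     # hypothetical file name
model.load_state_dict(torch.load("denoiser.pth"))  # restore for later use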
Code
GitHub: https://github.com/srimanthtenneti/Autoencoders/blob/main/Signal_Denoiser.ipynb
Please feel free to connect.
Contact
LinkedIn : https://www.linkedin.com/in/srimanth-tenneti-662b7117b/