I realized one day that software alone had become mechanical. I developed a taste for simplicity, and problems started becoming less like learning experiences and more like mechanical exercises that resembled busywork more than true exploration. I was always working in the abstract. I took detours, sure. I got multiple cybersecurity/pentesting certs, and sure, it was interesting. But it didn't scratch that itch in the end. I realized I needed to start touching the physical world. Interacting with physical phenomena. So, I set out to start building ELI (electromagnetic lookup interface). Start with RF, DSP, equations, noise, Raspberry Pis with SDRs.

It's a steep hill at the top and I'm at the bottom of a deep hole. These articles act as a way for me to metabolize and attempt to explain concepts I learn on my journey. Because if I can explain what I learn, the better the chance it will stick.

The book I'm currently working out of is "Understanding Digital Signal Processing" by Richard Lyons. It is a good book, and I enjoy it. I'll attempt to go chapter by chapter to explain the concepts and what I learn. This exists more as a medium to force me to cross my t's and dot my i's on my learning journey. With this series, I will code what is needed with inline code blocks or, when it gets advanced, provide a repo. I code in Rust. With that, let's begin.

## Discrete Sequences and Notation

A **discrete time signal** is a signal whose independent time variable is quantized, so that the value of the signal is only known at discrete instants.

So, what is **discrete**? Well, plainly, we can say it is something separated into individual pieces instead of being continuous. A discrete time signal, then, is just a continuous signal defined at specific instants.

What about **quantized**?
Pretty much rounding a measurement to the nearest value your system can represent. So, put together, a **discrete time signal** is just fitting pieces of a continuous signal into the places where your system can measure it. Like much of academic terminology, it's a really fancy-sounding term for a pretty simple idea.

Let's see it visualized: the continuous line is analog; it is flowy, with no stop or end. The dots and lines represent discrete parts of that line that a system can measure and understand.

Why do we need to do this? Well, computers need to sample at regular intervals of ts seconds (the subscript s stands for sampling, and t represents the period between samples).

What does this kind of equation look like on paper? For a sampled sine wave:

x[n] = sin(2π · f0 · n / fs)

Let's break this down:

- x[n] - the value of the continuous signal measured at sample n (n being the sample number)
- f0 - the frequency of the sine wave
- fs - the sampling frequency

It's very straightforward. So, let's convert this to code (Rust) to drive home the point:

```rust
use std::f64::consts::PI;

fn main() {
    let f0 = 2.0; // signal frequency
    let fs = 12.0; // sampling frequency
    let n_samples = 100;
    let mut signal = Vec::with_capacity(n_samples);
    for n in 0..n_samples {
        let n = n as f64;
        // x[n] = sin(2π · f0 · n / fs)
        let sample = (2.0 * PI * f0 * n / fs).sin();
        signal.push(sample);
    }
    println!("{:?}", signal);
}
```

Notation really is the barrier.

## Frequency Domain

Above, the continuous analog line and the discrete points lived in the time domain. Now, we should explore the **frequency domain**.

Let's look at a representation, and separate these domains distinctly:

- **time domain**: what does a signal do over time?
- **frequency domain**: what frequencies exist within a signal, and how strong are they?

At this point, we should differentiate amplitude and magnitude:

- **amplitude**: how far, and in what direction, a variable is from zero
- **magnitude**: how far a variable is from zero

The distinction comes down to amplitude being able to be negative or positive, while magnitude is always positive. Frequency domain graphs are extremely useful because they plainly tell us what frequencies exist and how *powerful* each one is.

## Block Diagrams

A block diagram is a simple visual way to describe how a signal moves through a system. Each block represents an operation performed on the signal, and the arrows show how the signal flows from one step to the next. A DSP block diagram typically includes three things:

- signals (data moving through)
- blocks (operations applied)
- arrows (direction the signal is moving)

### Representation

I want to note that you will see the Σ symbol (sigma) in the typical summation block. It means 'sum' in math notation.
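Written out for three inputs (the same three the code uses), the summation block just computes:

$$
y(n) = \sum_{k=1}^{3} x_k(n) = x_1(n) + x_2(n) + x_3(n)
$$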
It looks more intimidating than it is, so let us represent it in code:

```rust
fn main() {
    let x1 = vec![1.0, 2.0, 3.0, 4.0];
    let x2 = vec![0.5, 1.0, 1.5, 2.0];
    let x3 = vec![2.0, 2.0, 2.0, 2.0];
    let mut y = Vec::with_capacity(x1.len());
    // the summation block adds the inputs sample by sample
    for n in 0..x1.len() {
        let y_n = x1[n] + x2[n] + x3[n];
        y.push(y_n);
    }
    println!("y = {:?}", y);
}
```

Very straightforward. Again, the barrier is the symbols. Once in code, it really is straightforward.

## Discrete Linear Time-Invariant Systems

We defined discrete above, so let's dive into **linear**:

- Scaling works the way you would expect: scaling the input scales the output by the same amount.
- Adding inputs adds outputs: combined inputs produce the combined outputs, in the same way.

Let us visualize a discrete linear signal.

### Time-Invariant Systems

A time-invariant system is one where a delay/shift on the input causes an equivalent delay on the output. To display the concept, below is a visual representation of a discrete linear time-invariant system.

Discrete linear time-invariant systems are important because they behave predictably. Being linear, signals can be scaled/combined without changing how the system processes them. Time invariance means a system responds the same way regardless of when the signal appears. These properties allow complex signals to be understood as a combination of simpler signals.
This makes analysis and manipulation with tools like filtering and frequency analysis easier.

With that, signal processing systems often exhibit the **commutative property**. This means the order of certain operations does not change the result. Example: adding two signals together produces the same output regardless of which signal is added first. While this seems very simple, it allows engineers to rearrange parts of a signal processing system without changing the outcome. As a system grows in complexity, the ability to reorder operations becomes useful for simplifying analysis and making implementations efficient.

## Impulse Response

A very powerful idea in DSP is that an LTI (linear time-invariant) system can be completely described by its impulse response. To define it better, an **impulse** is a very short spike used to probe how a system behaves. If we feed an impulse into a system and observe its output, we get the **impulse response**. Once we have it, we can determine how the system will react to any input signal: the system's responses to individual impulses can be added together to produce the final output. The mathematical operation that performs this process is **convolution**.

### Real-world convolution examples

**1. Room acoustics**

When an audio engineer wants to understand how a particular room affects sound, they generate a sharp impulse (think balloon pop, hand clap, or starter pistol if they have the audacity). That impulse contains energy across a wide range of frequencies. Microphones across the room record it, and the sharp impulse reveals reverberation, echo patterns, and the room's frequency response. Modern acoustics software often uses this impulse response to simulate the room's sound.

**2. Mechanical engineers**

Mechanical engineers have a process called **modal testing**. If they want information on how a machine or structure vibrates, they strike it with an instrumented hammer to deliver a sharp impulse.
Sensors attached to the machine or structure measure the resulting vibrations. Those vibrations reveal the system's natural frequencies and damping characteristics, which tell an engineer how the system will act under numerous operating conditions.

**3. Electrical engineers**

Electrical engineers perform impulse tests as well. With analog circuits, very short voltage spikes are applied to a filter or amplifier, and the output is measured over time. From the resulting waveform, EEs can determine how the system processes signals and which frequencies it amplifies.

This wraps up the first chapter of learning and explaining. It is a lot, but it is slowly becoming easier. Doing this, I realized I really need to value the small wins. The small clicks of understanding. There are no fast wins in learning this, and it really needs patience. While it takes a lot of effort, making these articles is a terrific way for me to metabolize each chapter's knowledge, and maybe share it with people who are curious.

## Summing Up

This wasn't all of the first chapter, just the concepts I found important. I'm a huge noob at this, so if you have corrections, you are 100% free to comment and I'll make corrections to the article. A lot of this is to try and cement my own understanding by writing publicly, so I have the motivation to not look too foolish. Just a little foolish.

I will continue to write through the chapters of the book because I find it beneficial for myself. It'd be cool if someone finds this helpful too. I will say that in this process, small wins are key. The process is slow, but those small clicks of understanding are the small payoff while I trek uphill. All of this is just intrinsic motivation for applied knowledge: to gain more skill, to understand the world better. At the end of the day, sometimes that is the most durable motivation.