Analog and digital signals


Deleted member 2258824

Guest
So throughout the years I have heard many different explanations of what analog and digital signals are. Some of them were simple, but others went a little more in depth.

Now, for someone who has never heard of these two words before (aka my son), how would you explain what analog and digital signals are, and how do you know whether a device/product is using an analog or a digital signal? If I tried to explain it to him myself, I would just confuse him even more.

Thank you so much for the help
 
Do you want the easy explanation or the correct explanation?

The easy explanation is that digital signals have discrete values or steps, instead of continuous variability like analog signals. It's like drawing on graph paper by filling in the blocks, instead of drawing a line (that's as fat as the blocks) anywhere you want.
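If he likes to tinker, you can even show the graph-paper idea in a few lines of Python. This is just a toy sketch; the step size of 0.25 is an arbitrary choice for how coarse the grid is:

```python
step = 0.25  # how coarse our "graph paper" is (arbitrary for the demo)

def snap_to_grid(x: float, step: float) -> float:
    """Round a continuous (analog-like) value to the nearest allowed step."""
    return round(x / step) * step

analog_value = 0.6180339887            # could be anything on the number line
digital_value = snap_to_grid(analog_value, step)

print(analog_value)    # 0.6180339887
print(digital_value)   # 0.5 -- the nearest "block" on the graph paper
```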

The correct explanation gets rather math-heavy. When you convert an analog signal to digital, you're creating a mathematical representation of that continuous analog signal by using discrete values. When you do this, it turns out you can perfectly represent any continuous analog signal whose highest frequency is less than half your rate of digital samples. That cutoff - half the sampling rate - is called the Nyquist frequency.

So say you convert a 22 kHz audio recording (that is, an analog signal which only contains frequencies from 0 to 22,000 Hz - the limit of human hearing is about 20 kHz) to digital using a sampling rate of 44 kHz (CDs actually use 44.1 kHz, to leave a little margin). Even though you've converted that continuous signal into bumpy graph paper, you can recreate that original continuous signal perfectly. There are lots of ways to wriggle the line through every block of the graph paper the digital signal recorded, but those extra wriggles only create frequencies higher than 22 kHz. The reconstructed signal below 22 kHz is identical to the original. It's not obvious why this is true if you think of it in regular time-space. But if you take a Fourier transform (convert it to frequency space) it becomes clear: reconstruction amounts to a low-pass filter in frequency space that discards everything above the Nyquist frequency.
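If you want to see the theorem at work, here is a rough Python/NumPy sketch (the 5 kHz tone and the 256-sample window are arbitrary demo choices). It samples a tone at 44 kHz, then rebuilds the continuous signal between the samples with sinc interpolation, which is the time-domain version of that ideal low-pass filter:

```python
import numpy as np

fs = 44_000.0                  # sampling rate, Hz
f = 5_000.0                    # tone frequency, comfortably below fs / 2
n = np.arange(256)             # sample indices for a short window
samples = np.sin(2 * np.pi * f * n / fs)

def reconstruct(t: np.ndarray) -> np.ndarray:
    """Ideal reconstruction: a sum of shifted sinc pulses, one per sample."""
    # np.sinc(x) is sin(pi*x)/(pi*x), exactly what the sampling theorem uses
    return np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])

# Evaluate between the stored samples, away from the edges of our finite
# window, and compare against the true continuous tone.
t = np.linspace(100 / fs, 150 / fs, 500)
error = np.max(np.abs(reconstruct(t) - np.sin(2 * np.pi * f * t)))
print(f"max reconstruction error: {error:.2e}")  # small; limited only by
                                                 # the finite sample window
```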

That handles the time axis of your digital conversion (sampling frequency). It turns out the graph paper doesn't have to be a grid of squares; it can be a grid of rectangles. You can sample the vertical axis (amplitude, or volume) in fewer or more steps than the horizontal (time, or frequency). The amplitude axis (bit depth) ends up setting the signal-to-noise ratio of your digital sample. Since analog recordings aren't perfect, they always have some noise in them - this is the static hiss you hear when you turn your speaker volume way up while no sound is playing. If your digital signal has sufficient bit depth, its SNR will be high enough to equal or exceed that of the original analog signal, and the resulting digital recording will again be a perfect representation of the original analog signal (to the extent that your equipment can accurately capture that analog signal).
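The rule of thumb is roughly 6 dB of SNR per bit of depth, and you can check that empirically with a sketch like this (the test tone and the bit depths are arbitrary demo choices):

```python
import numpy as np

# Quantize a full-scale sine wave at several bit depths and measure the SNR.
signal = np.sin(2 * np.pi * 0.1234567 * np.arange(100_000))

for bits in (8, 12, 16):
    levels = 2 ** (bits - 1)                        # steps on each side of zero
    quantized = np.round(signal * levels) / levels  # snap amplitude to the grid
    noise = signal - quantized                      # quantization error
    snr_db = 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))
    print(f"{bits:2d} bits -> SNR ~ {snr_db:.1f} dB")
# Expect roughly 6 dB per bit: about 50, 74 and 98 dB. CD audio is 16-bit,
# which comfortably beats the noise floor of most analog recording gear.
```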

So yeah, if your son is young, just go with the filling in the blocks on graph paper explanation.
 

Paperdoc

Analog is the real world. Things in our world move smoothly from one place to another along a continuous path, whether it is straight or wiggly. Take as an example a toy car tied to a string that you pull around the room. If you used a video camera to record its movements from overhead, you'd see the car move around according to how you had pulled it. But the car never just suddenly stops being in one spot and magically appears at another.

Digital signals are ways to record information as numbers representing small samples of the real world. All such systems have to perform several steps to achieve this. First, they need information input from the real world to be measured and converted into numbers. In the case of our video of the toy car, this might be taking one frozen frame of the video and using that to convert the position of the car on the floor into a numeric representation. This is like expressing its position on a sheet of graph paper that is a map of the floor. This is the first stage of "Analog-to-Digital" or "A to D" data conversion. Then the digital system must store that number in some kind of code - most commonly, a binary method of recording those numbers - on some recording medium. If you were simply to do your own measurements with a ruler and write down the car's position in the frozen frame image as "x = 297, y = 23", that would be a digital record. If you had this process automated somehow so that the data were recorded in a computer file, that's also a digital data record.

Now, that one digital sample of a frozen frame of the video does not really tell you any story. There was a whole video of the car's movement. To convert that whole video into a digital version of what it shows, you need a system that examines every single frozen frame of the original video and does the same task. To make the job much easier, and to make it possible to reconstruct the original data later, we normally arrange that the times between frozen-frame samples are all the same. If we do this, whether by hand (taking a long time until you have sheets of paper covered with hand-written car positions) or with some automated system that stores all the information in a computer file, we end up with a series of data records that each represents a brief "snapshot" of reality, converted into numeric data. The timing of the snapshots is uniformly spaced.
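In a programming language, that whole hand-measurement process is just a loop. Here is a toy Python version; the circular path is only a stand-in for however the car was actually pulled, and the 30 frames per second matches common video:

```python
import math

def car_position(t: float) -> tuple[float, float]:
    """Pretend analog reality: where the car is at time t, in centimetres."""
    return (100 + 50 * math.cos(t), 100 + 50 * math.sin(t))

frame_interval = 1 / 30          # uniform spacing between samples (30 fps)
records = []
for frame in range(300):         # ten seconds of "video"
    t = frame * frame_interval
    x, y = car_position(t)
    # one digital record per frozen frame: frame number plus rounded position
    records.append((frame, round(x), round(y)))

print(records[0])   # (0, 150, 100)
print(records[1])   # (1, 150, 102) -- the next snapshot, 1/30 s later
```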

Now, what about "playback"? If you replay the original video on a screen, you'll see the car moving smoothly around the floor. To replay the digital version, though, we must read the data from each frozen frame, draw what that data says, then arrange all the resulting images in a sequence and view them one after another. What we actually see is the car at one position, then at a new position in the next drawing, then at another position in the drawing after that, etc. But if we do this relatively quickly, our minds interpret things according to what we have already learned about reality. Our minds assume that the car was not merely jumping from spot to spot: it was moving smoothly through all the positions of the sequence of drawings, and we "see" what looks like the original reality. Of course, how "real" it looks depends hugely on the amount of detail we chose to record in our original process of sampling and digitizing the individual frames of the video. I wrote above that all we were doing was converting the position of the toy car on the floor to x and y co-ordinates. But a complete video digitization system such as is really used for making such recordings captures vastly more data from every frame, so that each of the final re-created drawings looks exactly like the original frozen frame in every detail.

The same process in concept applies to all digital records of reality. For another example, take CDs of music. The original music signal is recorded in a studio by microphones converting the sound waves into electrical signals. The signal is really an analog (continuously varying) value of voltage against time. When it comes time to convert that information to digital form, the system breaks that long analog record into tiny short time slices. For each time slice it takes the voltage at that microsecond and converts it to a single digital representation, then stores it. "Microsecond"? Well, yes. Common ways to digitize audio signals use sampling rates of 44 kHz or higher. That means the analog signal is broken into 44,000 time slices for every second of time, so one time slice is about 22.7 microseconds long - often shorter for high-quality recordings. So the result of the process is that the analog signal of voltage versus time becomes a long sequence of numbers, each representing the voltage in digital form at a tiny time slice along the way. That's the "A to D" phase of the process.
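The arithmetic and the sampling loop look like this in a toy Python sketch (the 440 Hz sine is only a stand-in for a real microphone signal):

```python
import math

sample_rate = 44_000                  # samples per second
slice_length = 1 / sample_rate        # length of one time slice, in seconds
print(f"one time slice = {slice_length * 1e6:.1f} microseconds")  # 22.7

def microphone_voltage(t: float) -> float:
    """Pretend analog input: a 440 Hz tone (concert A)."""
    return math.sin(2 * math.pi * 440 * t)

# "A to D": measure the voltage once per time slice and keep the numbers.
digital_record = [microphone_voltage(i * slice_length)
                  for i in range(sample_rate)]    # one second of audio
print(len(digital_record))   # 44000 numbers for one second of sound
```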

When it's time to play back the recording, we need a system that will go through the entire file - a sequence of numbers taken at fixed time spacings. For each entry it uses a "Digital-to-Analog" converter which creates an output voltage exactly matching what the digital record says for that time slice and feeds it out to an amplifier, then proceeds to the next time sample record. It must do this at exactly the same rate as the original sampling was done, so that the playback timing exactly matches the original analog record. This is the "D to A" phase of the process, and it reconstructs an analog signal from the digital records of all those time slices. Because of the limits of our own ears and of the analog amplifier equipment, we do not notice at all that the signal is really a string of tiny fixed steps of sound. We hear continuous music just like the original. A CD is simply the medium on which we can store and retrieve the digital data. The CD player reads off that data, performs the D to A conversion, and feeds the resulting analog signals to the audio amplifier / speaker system so we can listen.
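The playback side is the same loop run in reverse. Here is a rough Python sketch, reusing `digital_record` and `sample_rate` from the recording sketch above (the amplifier call is a stand-in stub):

```python
import time

def send_to_amplifier(voltage: float) -> None:
    """Stand-in for the analog output stage of a real player."""
    pass

def play_back(digital_record: list[float], sample_rate: int) -> None:
    slice_length = 1 / sample_rate       # must match the original sampling rate
    for sample in digital_record:
        # "D to A": the stored number becomes an output voltage, held for
        # exactly one time slice before we move on to the next sample.
        send_to_amplifier(sample)
        time.sleep(slice_length)         # keeps playback timing equal to the
                                         # recording timing (roughly -- real
                                         # hardware uses a precise clock)
```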
 
Solution

Deleted member 2258824

Guest
Well, I understood about half of what you guys said, but after that you lost me. Thanks for the really detailed explanations :D