Tuesday, April 22, 2014

Predictive coding techniques

Independent frames, bi-directional frames, and predictive frames. Which would you rather deal with? How would you explain what's going on?

Last year, I shared this graphic with you. The graphic is an oversimplification, of course. As with everything in life, the devil's in the details. In terms of predictive frames, there are different ways of arriving at those pesky P-frames. Here's some information on a couple of the most popular methods seen in DCCTV.

Delta Modulation (DM): "The signal is first quantised into discrete levels, but the size of the step between adjacent samples is kept constant. The signal may therefore only make a transition from one level to an adjacent one. Once the quantization operation is performed, transmission of the signal can be achieved by sending a zero for a negative transition, and a one for a positive transition. Note that this means that the quantised signal must change at each sampling point.

The demodulator for a delta-modulated signal is simply a staircase generator. If a one is received, the staircase increments positively, and if a zero is received, negatively. This is usually followed by a lowpass filter."

"The key to using delta modulation is to make the right choice of step size and sampling period—an incorrect selection will mean that the signal changes too fast for the steps to follow, a situation called overloading. Important parameters are therefore the step size and the sampling period."

The disadvantages of delta modulation are:

  • Usually relatively poor results.
  • Edges and rapid changes are difficult to code.
  • Error propagation during reconstruction.
  • Granular noise, due to switching between two levels.

If you encounter DM-encoded video in which heavy motion or small, fine details are completely gone, there's really no fixing it: that information was never recorded.
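
To make the staircase description above concrete, here's a minimal Python sketch of a delta modulator and demodulator. The function names, step size, and test signals are my own illustrative choices, not any DVR's actual implementation; the second test signal demonstrates the overloading described in the quote.

```python
import numpy as np

# Minimal delta-modulation sketch (illustrative only). The encoder
# sends a 1 for a positive step and a 0 for a negative step; the
# demodulator is the staircase generator described above.

def dm_encode(signal, step):
    bits = []
    approx = 0.0                      # encoder's copy of the staircase
    for sample in signal:
        if sample >= approx:          # signal above staircase: step up
            bits.append(1)
            approx += step
        else:                         # signal below staircase: step down
            bits.append(0)
            approx -= step
    return bits

def dm_decode(bits, step):
    approx, out = 0.0, []
    for b in bits:
        approx += step if b else -step   # staircase must move every sample
        out.append(approx)
    return np.array(out)                 # normally lowpass-filtered next

t = np.linspace(0.0, 1.0, 1000)

# A slowly changing signal tracks well: the error stays small
# (granular noise, on the order of the step size)...
slow = 0.5 * np.sin(2 * np.pi * 2 * t)
err_slow = np.abs(slow - dm_decode(dm_encode(slow, 0.02), 0.02)).max()

# ...but a fast signal changes more per sample than one step can
# follow: this is the overloading described in the quote.
fast = 0.5 * np.sin(2 * np.pi * 50 * t)
err_fast = np.abs(fast - dm_decode(dm_encode(fast, 0.02), 0.02)).max()

print(f"max error, slow signal: {err_slow:.3f}")   # small
print(f"max error, fast signal: {err_fast:.3f}")   # gross overload
```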

Differential Pulse Code Modulation (DPCM): "According to the Nyquist sampling criterion, a signal must be sampled at a rate that is at least twice the highest frequency in the signal in order to reconstruct it without aliasing. The samples of a signal sampled at or near that rate generally have little correlation with each other (knowing a sample does not give much information about the next sample). However, when a signal is highly oversampled (sampled at several times the Nyquist rate), the signal does not change much from one sample to the next. Consider, for example, a sine function that is sampled at the Nyquist rate. Consecutive samples of this signal may alternate over the whole range of amplitudes from –1 to 1. However, when this signal is sampled at a rate that is 100 times the Nyquist rate (a sampling period 1/100 of the sampling period in the previous case), consecutive samples will differ only a little from each other.

This fact can be used to improve the performance of quantizers significantly by quantizing the difference between consecutive samples instead of quantizing the original signal. The result is either a quantizer with a much smaller number of bits (less information to transmit) or a quantizer with the same number of bits but much smaller quantization intervals (less quantization noise and a much higher SNR)."
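
Here's a minimal Python sketch of the idea in that quote: heavily oversample a sine, show that consecutive-sample differences span a far smaller range than the samples themselves, then quantize those differences in a closed loop so the quantization error doesn't accumulate. The quantizer, bit depth, and test signal are illustrative assumptions on my part, not any codec's actual design.

```python
import numpy as np

def uniform_quantize(value, n_bits, vmax):
    """Uniform quantizer over [-vmax, vmax] with 2**n_bits levels."""
    levels = 2 ** n_bits
    step = 2 * vmax / levels
    idx = np.clip(np.round(value / step), -levels // 2, levels // 2 - 1)
    return idx * step

def dpcm(signal, n_bits, vmax):
    """Closed-loop DPCM: quantize each sample's difference from the
    reconstruction of the previous sample, so error does not build up."""
    recon, prev = np.empty_like(signal), 0.0
    for i, s in enumerate(signal):
        prev += uniform_quantize(s - prev, n_bits, vmax)
        recon[i] = prev
    return recon

# The quote's example: a 1 Hz sine sampled at 100x its 2 Hz Nyquist
# rate. Consecutive samples barely differ, so the differences occupy
# a range roughly 30x smaller than the signal itself.
fs = 100 * 2.0
t = np.arange(0, 5, 1 / fs)
x = np.sin(2 * np.pi * t)
diffs = np.diff(x)
print("sample range:     +/-", np.abs(x).max())      # ~1.0
print("difference range: +/-", np.abs(diffs).max())  # ~0.031

# 4 bits spread over the small difference range gives far finer
# quantization intervals than 4 bits over [-1, 1] would.
x_hat = dpcm(x, n_bits=4, vmax=0.05)
print("max DPCM error:", np.abs(x - x_hat).max())
```

In video terms, the P-frames this post started with apply the same principle between frames: the encoder transmits a quantized difference from a prediction based on a previously decoded frame, rather than coding each frame from scratch.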
