Since its release last week, I’ve been playing quite a bit of Fallout 4. There’s an interesting mini-game (which was in previous iterations as well) for “hacking” computer terminals, where you must guess the passcode from a list of possibilities within a limited number of guesses. Each failed guess reveals the number of correct letters (in both value and position) in that particular word, but not which letters were correct, allowing you to deduce the correct passcode in a manner similar to the game “Mastermind.” A natural question is, “what is the best strategy for identifying the correct passcode?” For this analysis, we’ll ignore dud removal and guess resets (which exist in-game to simplify things a bit).

Reformulating this as a probability question offers a framework to design the best strategy. First, some definitions: $N$ denotes the number of words, $z$ denotes the correct word, and $x_i$ denotes a word on the list (in some consistent order). A simple approach suggests that we want to use the maximum likelihood (ML) estimate of $z$ to choose the next word based on all the words guessed so far and their results:

$\hat{z} = \underset{x_i}{\mathrm{argmax}}~~\mathrm{Pr}(z=x_i)$

However, for the first word, the prior probability is uniform—each word is equally likely. This might seem like the end of the line, suggesting we should just pick the first word randomly (or always pick the first one on the list, for instance). However, future guesses depend on what this first guess tells us, so we’d be better off with an estimate that maximizes the mutual information between the guess and the unknown password. Using the concept of entropy (which I’ve discussed briefly before), we can formalize the notion of “mutual information” into a mathematical definition: $I(z, x) = H(z) - H(z|x)$. In this sense, “information” is what you gain by making an observation, and it is measured by how it affects the possible states for a latent variable to take. For more compact notation, let’s define $F_i=f(x_i)$ as the “result” random variable for a particular word, telling us how many letters matched, taking values $\{0,1,...,M\}$, where $M$ is the length of words in the current puzzle. Then, because maximizing $I(z, F_i)$ is equivalent to minimizing the conditional entropy $H(z|F_i)$ (the $H(z)$ term is the same for every choice of guess), we can change our selection criterion to:

$\hat{z} = \underset{x_i}{\mathrm{argmin}}~~H(z|F_i)$

But, we haven’t talked about what “conditional entropy” might mean, so it’s not yet clear how to calculate $H(z | F_i)$, beyond the intuition that it is the entropy remaining after observing $F_i$’s value. Conditional entropy is distinct from conditional probability in a subtle way: conditional probability is based on a specific observation, such as $F_i=1$, but conditional entropy is based on all possible observations, and it reflects how many possible system configurations remain after making an observation, regardless of that observation’s value. It is a sum of the resulting entropy after each possible observation, weighted by the probability of that observation happening:

$H(Z | X) = \sum_{x\in X} p(x)H(Z | X = x)$

As an example, let’s consider a puzzle with $M=5$ and $N=10$. We know that $\forall x_i,\mathrm{Pr}(F_i=5)=p_{F_i}(5)=0.1$. If we define the similarity function $L(x_i, x_j)$ to be the number of letters that match in place and value for two words, and we define the group of sets $S^{k}_{i}=\{x_j:L(x_i,x_j)=k\}$ as the candidate sets, then we can find the probability distribution for $F_i$ by counting,

$p_{F_i}(k)=\frac{\vert{S^k_i}\vert}{N}$

As a sanity check, we know that $\vert{S^5_i}\vert=1$ because there are no duplicates, and therefore this equation matches our intuition for the probability of each word being an exact match. With the definition of $p_{F_i}(k)$ in hand, all that remains is finding $H(z | F_i=k)$, but luckily our definition for $S^k_i$ has already solved this problem! If $F_i=k$, then we know that the true solution is uniformly distributed in $S^k_i$, so

$H(z | F_i=k) = \log_2\vert{S^k_i}\vert$.

Finding the best guess is as simple as enumerating $S^k_i$ and then finding the $x_i$ which produces the minimum conditional entropy. For subsequent guesses, we simply augment the definition for the candidate sets by further stipulating that set members $x_j$ must also be in the observed set for all previous iterations. This is equivalent to taking the set intersection, but the notation gets even messier than we have so far, so I won’t list all the details here.
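To make the procedure concrete, here is a minimal C sketch of the first-guess selection (the word list is made up purely for illustration; any list of equal-length words works). It tallies the candidate sets $S^k_i$ by brute force and picks the guess with the smallest conditional entropy:

#include <stdio.h>
#include <math.h>

#define N 10  /* number of words on the terminal */
#define M 5   /* word length for this puzzle */

/* A hypothetical word list, for illustration only. */
static const char *words[N] = {
    "BRAIN", "BRINE", "TRAIN", "GRAIN", "CRANE",
    "PLAIN", "PLANE", "SPINE", "SHINE", "STAIN"
};

/* L(x_i, x_j): count of letters matching in both place and value. */
static int likeness(const char *a, const char *b) {
    int k = 0, m;
    for (m = 0; m < M; ++m)
        if (a[m] == b[m])
            ++k;
    return k;
}

/* Return the index minimizing H(z|F_i) = sum_k (|S_i^k|/N) log2 |S_i^k|. */
static int best_guess(void) {
    int i, k, best = 0;
    double best_h = 1e30;
    for (i = 0; i < N; ++i) {
        int size[M + 1] = {0};  /* |S_i^k| for k = 0..M */
        double h = 0.0;
        for (k = 0; k < N; ++k)
            ++size[likeness(words[i], words[k])];
        for (k = 0; k <= M; ++k)
            if (size[k] > 0)
                h += ((double) size[k] / N) * log2((double) size[k]);
        if (h < best_h) {
            best_h = h;
            best = i;
        }
    }
    return best;
}

int main(void) {
    printf("best first guess: %s\n", words[best_guess()]);
    return 0;
}

Each subsequent guess is the same computation, run over only the words that remain consistent with all results observed so far.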

All that said, this is more of an interesting theoretical observation than a practical one. Counting all of the sets by hand generally takes longer than a simpler strategy, so it is not well suited for human use (I believe it is $O(n^2)$ operations for each guess), although a computer can do it effectively. Personally, I just go through and find all the emoticons to remove duds and then find a word that has one or two overlaps with others for my first guess, and the field narrows down very quickly.

Beyond its appearance in a Fallout 4 mini-game, the concept of “maximum mutual information” estimation has broad scientific applications. The most notable in my mind is in machine learning, where MMI is used for training classifiers, in particular Hidden Markov Models (HMMs) such as those used in speech recognition. Given reasonable probability distributions, MMI estimates can handle situations where ML estimates appear ambiguous, and as such they can be used for “discriminative training.” Typically, an HMM training algorithm would receive labeled examples of each case and learn only their statistics. However, a discriminative trainer can also consider the labeled examples of other cases in order to improve classification when categories are very similar but semantically distinct.

Everything that happens in the world can be described in some way. Our descriptions range from informal and casual to precise and scientific, yet ultimately they all share one underlying characteristic: they carry an abstract idea known as “information” about what is being described. In building complex systems, whether out of people or machines, information sharing is central to building cooperative solutions. However, in any system, the rate at which information can be shared is limited. For example, on Twitter, you’re limited to 140 characters per message. With 802.11g you’re limited to 54 Mbps in ideal conditions. In mobile devices, the constraints go even further: transmitting data on the network consumes some of our limited bandwidth and some of our limited energy from the battery.

Obviously this means that we want to transmit our information as efficiently as possible, or, in other words, we want to transmit a representation of the information that consumes the smallest amount of resources, such that the recipient can convert this representation back into a useful or meaningful form without losing any of the information. Luckily, the problem has been studied pretty extensively over the past 60-70 years and the solution is well known.

First, it’s important to realize that compression only matters if we don’t know exactly what we’re sending or receiving beforehand. If I knew exactly what was going to be broadcast on the news, I wouldn’t need to watch it to find out what happened around the world today, so nothing would need to be transmitted in the first place. This means that in some sense, information is a property of things we don’t know or can’t predict fully, and it represents the portion that is unknown. In order to quantify it, we’re going to need some math.

Let’s say I want to tell you what color my car is, in a world where there are only four colors: red, blue, yellow, and green. I could send you the color as an English word with one byte per letter, which would require 3, 4, 5, or 6 bytes, or we could be cleverer about it. Using a pre-arranged scheme for all situations where colors need to be shared, we agree on the convention that the binary values 00, 01, 10, and 11 map to our four possible colors. Suddenly, I can use only two bits (0.25 bytes) to tell you what color my car is, a huge improvement. Generalizing, this suggests that for any set $\chi$ of abstract symbols (colors, names, numbers, whatever), by assigning each a unique binary value, we can transmit a description of some value from the set using about $\log_2(|\chi|)$ bits, if we have a pre-shared mapping. As long as we use the mapping multiple times, it amortizes the initial cost of sharing the mapping, so we’re going to ignore that cost from here on out. It’s also worthwhile to keep this limit in mind as an upper threshold for “reasonable”: we could easily create an encoding that is worse than this, but that would mean we’ve failed quite spectacularly at our job.

But, if there are additional constraints on which symbols appear, we should be able to do better. Consider the extreme situation where 95% of cars produced are red, 3% blue, and only 1% each yellow and green. If I needed to transmit color descriptions for my factory’s production of 10,000 vehicles, using the earlier scheme I’d need exactly 20,000 bits to do so by stringing together all of the colors in a single sequence. But, by the law of large numbers, I can expect roughly 9,500 of those cars to be red, so what if I use a different code, where red is assigned the bit string 0, blue is assigned 10, yellow is assigned 110, and green 111? Even though the representation for two of the colors is a bit longer in this scheme, the total average encoding length for a lot of 10,000 cars decreases to 10,700 bits (1*9500 + 2*300 + 3*100 + 3*100), an improvement of almost 50%! This suggests that the probabilities for each symbol should impact the compression mapping: if some symbols are more common than others, we can make them shorter in exchange for making less common symbols longer, and expect the average length of a message made from many symbols to decrease.
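Since this is just bookkeeping, a quick C sketch (using the expected counts from above) makes the comparison explicit:

#include <stdio.h>

int main(void) {
    /* Expected counts for 10,000 cars: red, blue, yellow, green. */
    int counts[4] = {9500, 300, 100, 100};
    /* Code lengths: the fixed 2-bit code vs. the prefix code 0, 10, 110, 111. */
    int fixed_len[4] = {2, 2, 2, 2};
    int prefix_len[4] = {1, 2, 3, 3};
    int i, fixed_total = 0, prefix_total = 0;

    for (i = 0; i < 4; ++i) {
        fixed_total += counts[i] * fixed_len[i];
        prefix_total += counts[i] * prefix_len[i];
    }

    /* Prints "fixed: 20000 bits, prefix: 10700 bits" */
    printf("fixed: %d bits, prefix: %d bits\n", fixed_total, prefix_total);
    return 0;
}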

So, with that in mind, the next logical question is, how well can we do by adapting our compression scheme to the probability distribution for our set of symbols? And how do we find the mapping that achieves this best amount of compression? Consider a sequence of $n$ independent, identically distributed symbols taken from some source with known probability mass function $p(X=x)$, with $S$ total symbols for which the PMF is nonzero. If $n_i$ is the number of times that the $i$th symbol in the alphabet appears in the sequence, then by the law of large numbers the empirical frequency converges almost surely to the true probability: $\frac{n_i}{n}\xrightarrow{n\to \infty}p_i$. For large $n$, then, we can treat the sequence as containing $n_i \approx np_i$ occurrences of the $i$th symbol.

In order to obtain an estimate of the best possible compression rate, we will use the threshold for reasonable compression identified earlier: it should take no more than approximately $\log_2(|\chi|)$ bits to represent a value from a set $\chi$, so by finding the number of possible sequences, we can bound how many bits it would take to describe them. A further consequence of the law of large numbers is that sequences whose symbol counts deviate appreciably from $np_i$ occur with probability approaching zero. This means that we can expect the set of possible (typical) sequences to contain only the permutations of a sequence containing $n_i = np_i$ realizations of each symbol. The probability of a specific sequence $X^n=x_1 x_2 \ldots x_{n-1} x_n$ can be expanded using the independence of each position and simplified by grouping like symbols in the resulting product:

$P(x^n)=\prod_{k=1}^{n}p(x_k)=\prod_{i=1}^{S} p_i^{n_i}=\prod_{i=1}^{S} p_i^{np_i}$

We still need to find the size of this set of typical sequences (call it $\chi$ again) in order to find out how many bits we need. However, the probability we found above doesn’t depend on the specific permutation, so it is the same for every element of the set, and thus the distribution of sequences within the set is uniform. For a uniform distribution over a set of size $|\chi|$, the probability of a specific element is $\frac{1}{|\chi|}$, so we can substitute the above probability for any element and expand in order to find out how many bits we need for a string of length $n$:

$B(n)=-\log_2(\prod_{i=1}^Sp_i^{np_i})=-n\sum_{i=1}^Sp_i\log_2(p_i)$

Frequently, we’re concerned with the number of bits required per symbol in the source sequence, so we divide $B(n)$ by $n$ to find $H(X)$, a quantity known as the entropy of the source $X$, which has PMF $P(X=x_i)=p_i$:

$H(X) = -\sum_{i=1}^Sp_i\log_2(p_i)$

The entropy, $H(X)$, is important because it establishes the lower bound on the number of bits required, on average, to accurately represent a symbol taken from the corresponding source $X$ when encoding a large number of symbols. $H(X)$ is non-negative, but it is not restricted to integers; however, achieving less than one bit per symbol requires multiple neighboring symbols to be combined and encoded in groups, similarly to the method used above to obtain the expected bit rate. Unfortunately, that process cannot be used in practice for compression, because it requires enumerating an exponential number of strings (as a function of a variable tending towards infinity) in order to assign each sequence a bit representation. Luckily, two very common, practical methods exist that approach this bound: Huffman coding, which is optimal among codes that assign a whole number of bits to each symbol, and arithmetic coding, which can get arbitrarily close to the entropy limit.

For the car example mentioned earlier, the entropy works out to about 0.35 bits, which means there is significant room for improvement over the symbol-wise mapping I suggested, which only achieved a rate of 1.07 bits per symbol; closing that gap, however, would require grouping multiple car colors into a compound symbol, which quickly becomes tedious when working by hand. It is kind of amazing that using only ~3,500 bits, we could communicate the car colors that naively required 246,400 bits (30,800 bytes) when each letter of the English color word is encoded with a single byte.
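If you want to double-check those numbers, here is a minimal C sketch that evaluates $H(X)$ for the car color distribution:

#include <stdio.h>
#include <math.h>

/* Entropy, in bits, of a PMF with n entries. */
static double entropy(const double *p, int n) {
    double h = 0.0;
    int i;
    for (i = 0; i < n; ++i)
        if (p[i] > 0.0)
            h -= p[i] * log2(p[i]);
    return h;
}

int main(void) {
    /* Red, blue, yellow, green. */
    double colors[4] = {0.95, 0.03, 0.01, 0.01};
    /* Prints "H(X) = 0.355 bits per car" */
    printf("H(X) = %.3f bits per car\n", entropy(colors, 4));
    return 0;
}

Multiplying by 10,000 cars gives the ~3,500-bit figure above.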

$H(X)$ also has other applications, including gambling, investing, lossy compression, communications systems, and signal processing, where it is generally used to establish the bounds for best- or worst-case performance. If you’re interested in a more rigorous definition of entropy and a more formal derivation of the bounds on lossless compression, plus some applications, I’d recommend reading Claude Shannon’s original paper on the subject, which effectively created the field of information theory.

The Wiener filter is well known as the optimal solution to the problem of estimating a random process when it is corrupted by another additive process, using only a linear combination of values of the measured process. Mathematically, this means that the Wiener filter constructs an estimator $\hat{x}(t)$ of some original signal $x(t)$ given $z(t)=x(t)+n(t)$ with the property that the mean squared error $E[(\hat{x}(t)-x(t))^2]$ is minimized among all such linear estimators, assuming only that both $x(t)$ and $n(t)$ are stationary and have known statistics (mean, variance, power spectral density, etc.). When more information about the structure of $x(t)$ is known, different estimators may be easier to implement (such as a Kalman filter for signals with a recursive structure).

Such a filter is very powerful—it is optimal, after all—when the necessary statistics are available and the input signals meet the requirements, but in practice, signals of interest are never truly stationary (rarely even wide-sense stationary, although that is a useful approximation), and their statistics change frequently. Rather than going through the derivation of the filter, which is relatively straightforward and available on Wikipedia, I’d like to talk about how to adapt it to situations that do not meet the filter’s criteria while still obtaining high quality results, and then provide a demonstration on one such signal.

The first problem to deal with is the assumption that a signal is stationary. True to form for engineering, the solution is to look at only a brief portion of the signal and approximate it as stationary. This has the unfortunate consequence of preventing us from defining the filter once and reusing it; instead, as the measured signal is sliced into approximately stationary segments, we must estimate the relevant statistics and construct an appropriate filter for each segment. If we do the filtering in the frequency domain, then for segments of length $N$ we are able to do the entire operation with two length-$N$ FFTs (one forward and one reverse) and $O(N)$ additional arithmetic operations (mostly multiplication and division). This is comparable to other frequency domain filters and much faster than the $O(N^2)$ operations required for a time domain filter.

This approach creates a tradeoff. Because the signal is not stationary, we want to use short time slices to minimize changes. However, the filter operates by adjusting the amplitude of each bin after a transformation to the frequency domain, so we want as many bins as possible to afford the filter high resolution. Adjusting the sampling rate does not change the frequency resolution for a given amount of time, because a buffer of $N$ samples spans $T = N/f_s$ seconds and the FFT bin spacing is $f_s/N = 1/T$. So, for a fixed time duration, the length of the buffer scales proportionally with the sampling rate, and the bin spacing remains constant. The tradeoff, then, is between how long each time slice will be and how much change in signal parameters we wish to tolerate: a longer time slice weakens the stationary approximation, but it also produces better frequency resolution. Both of these affect the quality of the resulting filtered signal.

The second problem is the assumption that the statistics are known beforehand. If we’re trying to do general signal identification, or simply “de-noising” of arbitrary incoming data (say, for example, cleaning up voice recorded from a cell phone’s microphone in a windy area, or reducing the effects of thermal noise in a data acquisition circuit), then we don’t know what the signal will look like beforehand. The solution here is a little bit more subtle. The normal formulation of the Wiener filter, in the Laplace domain, is

$G(s)= \frac{S_{z,x}(s)}{S_{z}(s)}$
$\hat{X}(s)=G(s) Z(s)$

In this case we assume that the cross-power spectral density, $S_{z,x}(s)$, between the measured process $z(t)$ and the true process $x(t)$ is known, and we assume that the power spectral density, $S_{z}(s)$, of the measured process $z(t)$ is known. In practice, we will estimate $S_{z}(s)$ from measured data, but as the statistics of $x(t)$ are unknown, we don’t know what $S_{z,x}(s)$ is (and can’t measure it directly). But, we do know the statistics of the noise. And, by (reasonable) assumption, the noise and the signal of interest are independent. Therefore, we can calculate several related spectra and make some substitutions into the definition of the original filter.

$S_z(s)=S_x(s)+S_n(s)$
$S_{z,x}(s)=S_x(s)$

If we substitute these into the filter definition to eliminate $S_x(s)$, then we are able to construct an approximation of the filter based on the (known) noise PSD and an estimate of the measured signal’s PSD (if the PSDs were known exactly, the filter would be exact, but as our PSD estimate contains errors, the filter definition will also contain errors).

$G(s)=\frac{S_z(s)-S_n(s)}{S_z(s)}$

You may ask: if we don’t know the signal PSD, how can we know the noise PSD? Realistically, we can’t. But, because the noise is stationary, we can construct an experiment to measure it once and then use it later. Simply identify a time when it is known that there is no signal present (e.g., ask the user to be silent for a few seconds), measure the noise, and store it as the noise PSD for future use. Adaptive methods can be used to further refine this approach (but are a topic for another day). It is also worth noting that the noise does not need to be Gaussian, nor does it have any other restrictions on its PSD. It only needs to be stationary, additive, and independent of the signal being estimated. You can exploit this to remove other types of interference as well.

One last thing before the demo. Using the PSD to construct the filter like this is subject to a number of caveats. The first is that the variance of each bin in a single PSD estimate is not zero; the raw periodogram is not a consistent estimator, so this variance does not shrink as the segment grows longer. A remedy for this is to use a more sophisticated method for estimating the PSD that averages multiple more-or-less independent estimates, generally with a windowing function (as in Welch’s method). This reduces the variance and therefore improves the quality of the resulting filter. It does, however, interact with the trade-off between time slice length and the stationary approximation: because you must average PSDs computed from (some) different samples in order to reduce the variance, you are effectively using a longer time slice.

Based on the assigned final project in ECE 4110 at Cornell University, which was to use a Wiener filter to de-noise a recording of Einstein explaining the mass-energy equivalence with added Gaussian white noise of unknown power, I’ve put together a short clip comparing three versions: the measured (corrupted) signal, the result after filtering with a single un-windowed PSD estimate used to construct the filter, and the result after filtering with two PSD estimates at 50% overlap (an effective length of 1.5x the no-overlap condition) combined with a Hann window. There is a clear improvement in noise rejection using the overlapping PSD estimates, but some of the short vocal transitions are also much more subdued, illustrating the tradeoff very well.

Be warned, the first segment (unfiltered) is quite loud as the noise adds a lot of output power.

Here is the complete MATLAB code used to implement the non-overlapping filter:


% Assumes einsteindistort.wav has been loaded with [d, r] = wavread('einsteindistort.wav');

% Anything that can divide the total number of samples evenly
sampleSize = 512;

% Delete old variables
% clf;
clear input;
clear inputSpectrum;
clear inputPSD;
clear noisePSD;
clear sampleNoise;
clear output;
clear outputSpectrum;
clear weinerCoefficients;

% These regions indicate where I have decided there is a large amount of
% silence, so we can extract the noise parameters here.
noiseRegions = [1       10000;
                81000   94000;
                149000  160000;
                240000  257500;
                347500  360000;
                485000  499000;
                632000  645000;
                835000  855000;
                917500  937500;
                1010000 1025000;
                1150000 1165000];

% Now iterate over the noise regions and create noise start offsets for
% each one to extract all the possible noise PSDs
noiseStarts = [];  % grown dynamically in the loop below
z = 1;
for k = 1:length(noiseRegions(:,1))
    for t = noiseRegions(k,1):sampleSize:noiseRegions(k,2)-sampleSize
        noiseStarts(z) = t;
        z = z + 1;
    end
end

% In an effort to improve the PSD estimate of the noise, average the FFT of
% silent noisy sections in multiple parts of the recording.
noisePSD = zeros(sampleSize, 1);
for n = 1:length(noiseStarts)
    sampleNoise = d(noiseStarts(n):noiseStarts(n)+sampleSize-1);
    noisePSD = noisePSD + (2/length(noiseStarts)) * abs(fft(sampleNoise)).^2 / sampleSize;
end

% Force the PSD to be flat like white noise, for comparison
% noisePSD = ones(size(noisePSD))*mean(noisePSD);

% Now, break the signal into segments and try to denoise it with a
% noncausal Wiener filter.
output = zeros(1, length(d));
for k = 1:length(d)/sampleSize
    input = d(1+sampleSize*(k-1):sampleSize*k);
    inputSpectrum = fft(input);
    inputPSD = abs(inputSpectrum).^2/length(input);
    weinerCoefficients = (inputPSD - noisePSD) ./ inputPSD;
    weinerCoefficients(weinerCoefficients < 0) = 0;
    outputSpectrum = inputSpectrum .* weinerCoefficients;

    % Sometimes for small outputs ifft includes an imaginary value
    output(1+sampleSize*(k-1):sampleSize*k) = real(ifft(outputSpectrum, 'symmetric'));
end

% Renormalize and write to a file
output = output/max(abs(output));
wavwrite(output, r, 'clean.wav');


To convert this implementation to use 50% overlapping filters, replace the filtering loop (below "Now, break the signal into segments…") with this snippet:

output = zeros(1, length(d));
windowFunc = hann(sampleSize);
k = 1;
while sampleSize*(k-1)/2 + sampleSize < length(d)
    input = d(1+sampleSize*(k-1)/2:sampleSize*(k-1)/2 + sampleSize);
    inputSpectrum = fft(input .* windowFunc);
    inputPSD = abs(inputSpectrum).^2/length(input);
    weinerCoefficients = (inputPSD - noisePSD) ./ inputPSD;
    weinerCoefficients(weinerCoefficients < 0) = 0;
    outputSpectrum = inputSpectrum .* weinerCoefficients;

    % Sometimes for small outputs ifft includes an imaginary value
    output(1+sampleSize*(k-1)/2:sampleSize*(k-1)/2 + sampleSize) = output(1+sampleSize*(k-1)/2:sampleSize*(k-1)/2 + sampleSize) + ifft(outputSpectrum, 'symmetric')';
    k = k + 1;
end


The corrupted source file used for the project can be downloaded here for educational use.

This can be adapted to work with pretty much any signal simply by modifying the noiseRegions matrix, which is used to denote the limits of "no signal" areas to use for constructing a noise estimate.

One of the most useful things that didn’t come up enough in college was a very basic concept, central to almost any digital communications system: the numerically controlled oscillator (NCO), the digital counterpart to an analog oscillator. They are used in software defined radio in order to implement modulators/demodulators and they have a number of other applications in signal processing, such as arbitrary waveform synthesis and precise control for phased array radar or sonar systems. Noise performance in digital systems can be carefully controlled by adjusting the data type’s precision, whereas in analog systems, even if the circuit is designed to produce a minimum of intrinsic noise, external sources can contribute an uncontrolled amount of noise that is much more challenging to manage properly. As digital systems increase in speed, analog circuits will be reduced to minimal front ends for a set of high speed ADCs and DACs.

Luckily, NCOs are easy to understand intuitively (although surprisingly difficult to explain precisely), which is probably why they weren’t covered in-depth in school, even though they are usually not immediately obvious to someone who hasn’t already seen one. A basic NCO consists of a lookup table containing waveform data (usually a sinusoid) for exactly one period and a counter for indexing into the table. The rate of change of the counter determines the frequency of the output wave, in normalized units, because the output wave still exists in the discrete time domain. The counter is generally referred to as a ‘phase accumulator,’ or simply an accumulator, because it stores the current value of the sine’s phase, and the amount by which it changes every cycle is what I normally refer to as the ‘phase.’ In this sense, one of the simplest explanations of how an NCO works is that it tracks the argument to $\sin(2\pi \hat{f}n)$ in a counter and uses a lookup table to calculate the corresponding value of $\sin(2\pi \hat{f}n)$. The challenge, however, lies in the implementation.

Block Diagram for a Numerically Controlled Oscillator

Floating point hardware is expensive in terms of area and power. Software emulation of floating point is expensive in terms of time. And, of course, floating point numbers cannot be used to index into an array without an intermediate conversion process, which can consume a large number of cycles without dedicated hardware. As a result, most NCOs are implemented using fixed point arithmetic for the phase accumulator, even if the table stores floating point values for high end DSPs. Fixed point introduces the notion of “normalization” because the number of bits dedicated to integer and fractional values is fixed, limiting the numbers that can be represented. Ideally, the full range of the fixed point type is mapped to the full range of values to be represented by a multiplicative constant. In the case of NCOs, this is usually done by using a new set of units (as opposed to radians or degrees) to represent phase angle, based on the lookup table size.

Normally, the period of $\sin(x)$ is $2\pi$. However, for a lookup table, the period is the length of the table, because the input is an integer index, and the table’s range spans a single cycle of $\sin(x)$. Because the index must wrap to zero after passing the end of the table, it is convenient to choose a table size that is a power of 2, so that wrap-around can be implemented with a single bitwise AND, or for free by exploiting the overflow behavior of the underlying hardware, rather than with an if-statement, which generally requires more cycles or hardware to evaluate. Thus, for a B-bit index, the table contains $2^B$ entries, and the possible frequencies that can be generated are integer multiples of $\frac{1}{2^B}$ (the minimum change in the accumulator’s value is naturally one table entry).
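As a tiny, self-contained illustration of the power-of-2 wrap-around (a made-up fragment with B = 8, so the mask is 0xff):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t index = 250;  /* current table index; B = 8, so 256 entries */
    uint32_t step = 10;    /* phase increment, in whole table entries */

    /* (250 + 10) & 0xff == 4: the index wraps past the end of the
     * table with a single AND instead of a comparison and a branch. */
    index = (index + step) & 0xff;
    printf("%u\n", (unsigned) index);
    return 0;
}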

There is, of course, a clear problem with this implementation when precise frequency control is necessary, such as in all of the applications I mentioned at the start. If I wanted to build a digital AM radio tuner, then my sampling frequency would theoretically need to be at least 3.3 MHz to cover the entire medium wave band, where most commercial AM stations exist (although in practice it would need to be much higher in order to improve performance). If I use a table with B=8, then my frequency resolution is 0.00390625 * 3.3 MHz = 12.89 kHz, which is insufficient for a software demodulator because the inter-station channel spacing is only 10 kHz. However, because the table size grows exponentially with B, it is undesirable or impossible to increase B past a certain point, depending on the amount of memory available for the lookup table. Depending on the size of the data stored in the table, there are also noise floor issues that affect the utility of increasing B, but I will discuss the effects of word size and quantization error on NCO performance another time.

A better solution is to produce non-integer multiples of the fundamental frequency by changing the index step size dynamically. For instance, by advancing the phase accumulator by an alternating pattern of 1 and then 2 samples, the effective frequency of the output sinusoid is halfway between the frequency for 1 sample and the frequency for 2 samples, plus some noise. This makes use of a second, fractional counter that periodically increments the primary index counter. The easiest way to implement this is to simply concatenate the B-bit index with an F-bit fractional index to form a fixed point word, so that an overflow from the fractional index will automatically increment the real table index. Then, when performing table lookup, the combined fixed point word is quantized to an integer value by removing the fractional bits. More advanced NCOs can use these fractional bits to improve performance by rounding or interpolating between samples in the table. Generally, because the value of F does not affect the size of the table, but it does increase the possible frequency resolution, I find the minimum value for F to give the desired resolution and then round up to the next multiple of 8 for B+F (8, 16, 24, …). It is possible to implement odd size arithmetic (such as 27 bits), but almost always the code will require at least the use of primitives for working with the smallest supported word size, which means that there is no performance penalty for increasing F so that B+F changes from 27 to 32.

By adding the F-bit fractional index, the frequency resolution improves to integer multiples of $\frac{1}{2^{B+F}}$, with no change in storage requirements. The only challenge, then, is converting between a floating point normalized frequency and the corresponding fixed point representation in table units. This is normally only done once, during initialization, so the penalty of emulating floating point can be ignored; the code that runs inside a tight processing loop will only use fast fixed point hardware. Because normalized frequency is already constrained to the interval [0, 1), the conversion from normalized frequency (f) to table units (p) is a simple ratio:

$\frac{f}{p} = \frac{1}{2^B}$

$2^{B}f=p$

If any fractional index bits are used, then they must be included before p is cast from a floating point value to an integer, by multiplying the value by the normalization constant $2^F$ (the constant itself can be computed efficiently with a left shift by F bits). The resulting value is then stored as an integer type; normally I use unsigned integers because I only need to work with positive frequencies. All subsequent fixed point operations are done using the standard integer primitives, with the understanding that the "true" value of the integer being manipulated is actually the stored value divided by $2^F$. This becomes important if two fixed point values are multiplied together, because the result will implicitly be multiplied by the normalization constant twice and must be renormalized before it can be used with other fixed point values. In order to use the fixed point phase accumulator to index into the table, the integer portion must be extracted first, which is done by dividing by $2^F$. Luckily, this can be computed efficiently with a right shift by F bits.

In conclusion, since a basic NCO requires only 10 to 20 lines of C to implement, I've created a sample NCO that uses an 8.24 fixed point format with a lookup table that has 256 entries, to better illustrate the concept. The output is an integer waveform with values from 1 to 255, representing a sine wave with amplitude 127 that is offset by 128, which would normally be used with an 8-bit unipolar voltage output DAC to produce an analog signal biased around a half-scale virtual ground. This code was tested with Visual Studio 2005, but it should be portable to any microcontroller that has correct support for 32-bit types and 32-bit floating point numbers.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <stdint.h>

#define PI 3.14159265358979

uint8_t sintable32[256];

struct nco32 {
    uint32_t accumulator;
    uint32_t phase;
    uint8_t value;
};

void sintable32_init(void);
void nco_init32(struct nco32 *n, float freq);
void nco_set_freq32(struct nco32 *n, float freq);
void nco_step32(struct nco32 *n);

/**
* Initialize the sine table using slow library functions. The sine table is
* scaled to the full range of -127,127 to minimize the effects of
* quantization. It is also offset by 128 in order to only contain positive
* values and allow use with unsigned data types.
*/
void sintable32_init(void) {
    int i;
    for (i = 0; i < 256; ++i) {
        sintable32[i] = (uint8_t) ((127.*(sin(2*PI*i/256.))+128.) + 0.5);
    }
}

/**
* Initialize the oscillator data structure and set the target frequency
* Frequency must be positive (although I don't check this).
*/
void nco_init32(struct nco32 *n, float freq) {
    n->accumulator = 0;
    n->value = sintable32[0];
    nco_set_freq32(n, freq);
}

/**
* Set the phase step parameter of the given NCO struct based on the
* desired value, given as a float. This changes its frequency in a phase
* continuous manner, but this function should not be used inside a
* critical loop for performance reasons. Instead, a chirp should be
* implemented by precomputing the correct change to the phase rate
* in fixed point and adding it after every sample.
*/
void nco_set_freq32(struct nco32 *n, float freq) {
    // 256 table entries, 24 bits of fractional index; 2^24 = 16777216
    n->phase = (uint32_t) (freq * 256. * 16777216. + 0.5);
}

/**
* Compute the next output value from the table and save it so that it
* can be referenced multiple times. Also, advance the accumulator by
* the phase step amount.
*/
void nco_step32(struct nco32 *n) {
    uint8_t index;

    // Convert from 8.24 fixed point to 8 bits of integer index
    // via a truncation (cheaper to implement but noisier than rounding)
    index = (n->accumulator >> 24) & 0xff;
    n->value = sintable32[index];
    n->accumulator += n->phase;
}

/**
* Example program, for a console, not a microcontroller, produces
* 200 samples and writes them to output.txt in comma-separated-value
* format. They can then be read into matlab to compare with ideal
* performance using floats for phase and an actual sine function.
* First parameter is the desired normalized frequency, in Hz.
*/
int main(int argc, char **argv) {
    struct nco32 osc;
    float freq;
    int i;
    FILE *nco_output;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <normalized frequency>\n", argv[0]);
        return 1;
    }

    freq = (float) atof(argv[1]);

    sintable32_init();
    nco_init32(&osc, freq);
    nco_output = fopen("output.txt", "w");

    for (i = 0; i < 200; ++i) {
        nco_step32(&osc);
        fprintf(nco_output, "%d,", osc.value);
    }

    fclose(nco_output);
    return 0;
}


There are obvious improvements, such as linear interpolation when computing the NCO's output, but I will save those for my discussion of NCOs, resolution, quantization, and noise performance, because they are not necessary for a basic oscillator. In particular, for 8-bit samples like this, the 8.24 format is overkill: the output has an SNR of approximately 48 dB, depending on the frequency chosen, and it is limited predominantly by the fact that only 8 bits are used for the output, which fundamentally caps it at no more than about 55 dB (with some approximations).