Basic R-2R (composite) video DAC driving source/input impedances. Voltage sources are used to simulate pins (pin output impedance doesn't appear to be listed in the datasheets and thus isn't modeled).
We want to use this to output a composite video signal, so we want our max output voltage (shown in blue) to be 1V with a source impedance of 75ohms when driving the matching 75ohm input impedance of the receiver.
With an 8-bit configuration where each pin outputs 0 or 3.3V, the max output voltage of the DAC network (in the absence of any further impedances) should be Vout=Vref*(2^N-1)/(2^N), with Vref=3.3V and N=8. This yields Vout=3.3*255/256=~3.2871V, with an output impedance of R. This is true for basically any R, but only in isolation; when driving another circuit, the choice of R affects the output voltage/impedance. Since both of the target values for these quantities are known, we must choose a value that satisfies those constraints.
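As a quick sanity check, the full-scale value can be computed directly from the formula above (a minimal sketch; the constants are just the ones stated in the text):

```python
# Ideal full-scale output of an N-bit R-2R DAC (unloaded): Vout = Vref*(2^N - 1)/2^N.
VREF = 3.3  # volts, per-pin high level
N = 8       # bits

vout_max = VREF * (2**N - 1) / 2**N
print(f"{vout_max:.4f} V")  # → 3.2871 V
```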
A simple way to handle this is to first reduce the problem scope a bit. Recognize that the two 75ohm impedances (each individually modeled as single 75ohm resistors) together form a 2:1 voltage divider. This allows us to model them as a single 150ohm resistor (2x75ohms in series), and changes our target DAC output voltage from 1V to 2V (to compensate for the 2:1 division; this is shown in green).
Thus, we have Vin=3.2871V, R2=150ohm, Vout=2V, and Vout=Vin*R2/(R1+R2). Solving for R1, we have R1=R2(1/(Vout/Vin)-1)=96.5332ohms=R, and 2R=193.0664ohms. Using these values, we see 0-1V with the correct impedance in sim.
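The algebra above is easy to verify numerically (same symbols as in the text: Vin is the ideal DAC full-scale, R2 the combined load, R1 the ladder output impedance we're solving for):

```python
# Solve the divider Vout = Vin*R2/(R1+R2) for R1, given the targets above.
VIN = 3.3 * 255 / 256  # ideal DAC full-scale, ~3.2871 V
R2 = 150.0             # the two 75-ohm impedances modeled in series
VOUT = 2.0             # target at the DAC node (the 2:1 divider then gives 1 V)

R1 = R2 * (VIN / VOUT - 1)
print(f"R = {R1:.4f} ohm, 2R = {2*R1:.4f} ohm")  # → R = 96.5332, 2R = 193.0664
```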
Unfortunately, there's another constraint when we actually want to build the network: these resistor values aren't particularly common in real parts, but 100ohms and 200ohms are within 5% of R and 2R respectively, and those _are_ common (as is 75ohms). So, those values have been used instead. With those, we see 0-0.986V instead, within 2% of our voltage target and more than acceptable for this use case. For simplicity, I'm ignoring the effects of resistor value inaccuracies, and assuming the 5% tolerance on each part has a negligible effect.
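Plugging the standard values back into the same divider model reproduces the simulated swing (a sketch under the same assumptions as above: ladder output impedance equals R, load is the two 75-ohm resistors in series):

```python
# Output swing with standard 5% values (R = 100, 2R = 200) instead of the ideal R.
VIN = 3.3 * 255 / 256  # ideal DAC full-scale, ~3.2871 V
R = 100.0              # ladder output impedance with standard parts
RLOAD = 150.0          # 2 x 75 ohm in series

v_node = VIN * RLOAD / (R + RLOAD)  # voltage at the DAC output node
v_out = v_node / 2                  # after the 2:1 division at the receiver
print(f"{v_out:.4f} V")  # → 0.9861 V
```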
As for current draw, admittedly I wasn't entirely sure how best to check/sim this; essentially we want to make sure the maximum current drawn from each pin is below the max in the spec (12mA). I did it the dumb way and hand-checked some simple DAC values by fixing individual PWM widths to 0%/100%. I tried all 0, all 1, and a few one-hot configurations. It appears that the one-hot configurations produce the highest current draw for single pins, with the MSB one-hot configuration producing the highest draw at 11.6mA, which is within our target 12mA limit.
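The same check can be done without a circuit simulator via nodal analysis. This is a rough sketch that assumes the standard R-2R topology (each bit drives its node through 2R, adjacent nodes are linked by R, a 2R terminator grounds the LSB end, and the combined 150-ohm load sits on the MSB/output node) with the pins modeled as ideal voltage sources, as in the sim:

```python
import numpy as np

# Per-pin current check for the 8-bit R-2R ladder with the values used above.
N, VREF = 8, 3.3
R, RLOAD = 100.0, 150.0

def pin_currents(bits):
    """Solve the ladder by nodal analysis; return per-pin source currents (amps).

    bits[0] is the LSB, bits[N-1] the MSB; each is 0 or 1.
    """
    G = np.zeros((N, N))  # conductance matrix
    I = np.zeros(N)       # current injections from the bit sources
    for k in range(N):
        G[k, k] += 1 / (2 * R)            # 2R branch from bit source to node k
        I[k] += bits[k] * VREF / (2 * R)
        if k > 0:                          # R link to the previous node
            G[k, k] += 1 / R
            G[k, k - 1] -= 1 / R
        if k < N - 1:                      # R link to the next node
            G[k, k] += 1 / R
            G[k, k + 1] -= 1 / R
    G[0, 0] += 1 / (2 * R)        # 2R terminator at the LSB end
    G[N - 1, N - 1] += 1 / RLOAD  # combined 150-ohm load on the output node
    V = np.linalg.solve(G, I)
    return (np.array(bits) * VREF - V) / (2 * R)

# MSB one-hot: the worst case found by the hand checks.
msb_one_hot = [0] * 7 + [1]
print(f"max pin current: {pin_currents(msb_one_hot).max() * 1e3:.2f} mA")
```

This agrees with the ~11.6mA seen in sim for the MSB one-hot case, and makes it cheap to sweep all 256 codes rather than spot-checking.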
Other things to consider include the frequency response of the DAC and glitch/switching conditions. For the former, this DAC configuration behaves closest to an ideal zero-order hold, which rolls off like an LPF toward our digital Nyquist frequency and partially suppresses aliases. Ideally, we would want to completely remove (lowpass) the aliases and compensate for the lowpass behavior within the intended frequency range (e.g. with a pre-emphasis filter), but for simplicity's sake I've completely ignored these effects. Similarly, glitch/switching conditions are also ignored for simplicity.
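For a feel of how much droop the zero-order hold introduces, its magnitude response is the well-known sinc shape, |H(f)| = |sin(pi*f/fs)/(pi*f/fs)|. A minimal sketch (the 13.5MHz sample rate here is just an illustrative assumption, not a value from the text):

```python
import math

# Zero-order-hold magnitude response in dB: |H(f)| = |sin(pi f/fs) / (pi f/fs)|.
def zoh_gain_db(f, fs):
    x = math.pi * f / fs
    return 20 * math.log10(abs(math.sin(x) / x)) if x else 0.0

fs = 13.5e6  # assumed sample rate for illustration only
for f in (fs / 8, fs / 4, fs / 2):
    print(f"{f / 1e6:6.3f} MHz: {zoh_gain_db(f, fs):6.2f} dB")
```

Regardless of the sample rate, the droop reaches about -3.9 dB (a factor of 2/pi) at Nyquist, which is what a pre-emphasis filter would compensate for.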