# What is a heuristic implementation?

2017-11-17 17:50:17 zhengxiaowei

Heuristic implementations can broadly be considered as those that are not produced by rigorous application of Boolean logic theory. (Remember that any truth table can, in theory, be implemented in fundamental or minimised two-level form.) Heuristic implementations are found by looking at the overall problem, often considering the way in which the task would be tackled manually.

### Ripple carry adder

The ripple carry, or parallel, adder arises out of considering how we perform addition, and is therefore a heuristic solution. Two numbers can be added by beginning with the two least significant digits to produce their sum, plus a carry-out bit (if necessary). Then the next two digits are added (together with any carry-in bit from the addition of the first two digits) to produce the next sum digit and any carry-out bit produced at this stage. This process is then repeated until the most significant digits are reached (see Section 2.5.1).

To implement this procedure, for binary arithmetic, what is required is a logic block which can take two input bits, and add them together with a carry-in bit to produce their sum and a carry-out bit. This is exactly what the full adder, described earlier in Section 4.1.5, does. Consequently by joining four full adders together, with the carry-out from one adder connected to the carry-in of the next, a four-bit adder can be produced. This is shown in Fig. 4.15.
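This chaining of full adders can be sketched in software. The following is a minimal illustrative model, not taken from the original text: `full_adder` implements the two-level block from Section 4.1.5, and `ripple_carry_add` joins four (or more) of them with each carry-out feeding the next carry-in.

```python
def full_adder(a, b, cin):
    """Add two bits plus a carry-in; return (sum_bit, carry_out)."""
    s = a ^ b ^ cin                      # sum is the three-input XOR
    cout = (a & b) | ((a | b) & cin)     # carry-out from the K-map of Section 4.1.5
    return s, cout

def ripple_carry_add(a_bits, b_bits, cin=0):
    """Add two equal-length bit lists (least significant bit first).

    The carry 'ripples': each stage must wait for the previous stage's carry-out.
    """
    sums = []
    carry = cin
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        sums.append(s)
    return sums, carry

# 7 + 5 = 12: inputs are LSB first, so 0111 is [1, 1, 1, 0]
sums, cout = ripple_carry_add([1, 1, 1, 0], [1, 0, 1, 0])
```

Note how the `carry` variable is threaded through the loop; this data dependency is exactly the rippling that limits the circuit's speed.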

This implementation is called a parallel adder because all of the inputs are entered into the circuit at the same time (i.e. in parallel, as opposed to serially, which means one bit entered after another). The name ripple carry adder arises because of the way the carry signal is passed along, or ripples, from one full adder to the next. This is in fact the main disadvantage of the circuit, because the output S3 may depend upon a carry-out which has rippled along, through the second and third adders, after being generated from the addition of the first two bits. Consequently the outputs from the ripple carry adder cannot be guaranteed stable until enough time has elapsed to ensure that a carry, if generated, has propagated right through the circuit.

This rippling limits the operational speed of the circuit, which is dependent upon the number of gates the carry signal has to pass through. Since each full adder is a two-level circuit, the full four-bit ripple carry adder is an eight-level implementation. So after applying the inputs to the adder, the correct output cannot be guaranteed to appear until a time equal to eight propagation delays of the gates being used has elapsed.

The advantage of the circuit is that, as each full adder is composed of five gates, only 20 gates are needed. The ripple carry, or parallel, adder is therefore a practical solution to the production of a four-bit adder. This circuit is an example of an iterative or systolic array, which is the name given to a combinational circuit that uses relatively simple blocks (the full adders) connected together to perform a more complex function.

The fourth possible implementation of a four-bit binary adder bears some resemblance to the ripple carry adder, but overcomes the problem of the ‘rippling’ carry by using extra circuitry to predict this rippling in advance. This gives a speed advantage at the expense of a more complex circuit, which is a demonstration of a general rule that any gain in performance in some aspect of a circuit is usually matched by a loss in performance of another.

Reconsidering the ripple carry adder and denoting the carry-out from each stage by Cn, where n is the stage number, and the initial carry-in bit as Ci, we begin with the first stage and derive the Boolean expression for C0. (We know what it is from the Karnaugh map in Section 4.1.5.) So:

C0 = A0B0 + (A0 + B0)Ci

Similarly for the second stage:

C1 = A1B1 + (A1 + B1)C0

This expression demonstrates the problem of the ripple carry adder, because C1 depends upon C0, which must be produced first. However, we already have an expression for C0 in terms of the actual inputs to the adder, so we can substitute this into C1, so removing the rippling problem. This gives:

C1 = A1B1 + (A1 + B1)A0B0 + (A1 + B1)(A0 + B0)Ci

This is a rather unwieldy expression, but we can simplify it by letting, for a general stage, j:

Gj = AjBj and Pj = Aj + Bj

which gives:

C0 = G0 + P0Ci

C1 = G1 + P1G0 + P1P0Ci

C2 = G2 + P2G1 + P2P1G0 + P2P1P0Ci

C3 = G3 + P3G2 + P3P2G1 + P3P2P1G0 + P3P2P1P0Ci

This gives all four carry-outs in terms of the inputs, which means they can be produced as soon as the inputs are applied to the circuit. Hence, there is no ‘rippling’ delay, although there will still be a delay given by the number of levels required to implement these expressions. (Two-level circuits could be used but, as shown in the following circuit diagram, other implementations are usually employed.) From the above it is clear that there is a distinct pattern for the carry-outs, which can be used to continue this process further if required.
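The four carry-out expressions can be written out directly as flattened sum-of-products forms, so that no carry depends on another carry. The sketch below (function names are illustrative, not from the text) does exactly that, and shows the pattern extending stage by stage:

```python
def lookahead_carries(a, b, ci=0):
    """Compute [C0, C1, C2, C3] directly from the four-bit inputs.

    a, b are bit lists, least significant bit first.
    """
    g = [x & y for x, y in zip(a, b)]   # Gj = AjBj  (carry generate)
    p = [x | y for x, y in zip(a, b)]   # Pj = Aj + Bj (carry propagate)
    # Each carry is a function of the inputs only -- nothing ripples.
    c0 = g[0] | (p[0] & ci)
    c1 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & ci)
    c2 = (g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0])
          | (p[2] & p[1] & p[0] & ci))
    c3 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
          | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & ci))
    return [c0, c1, c2, c3]
```

Each extra stage simply prepends one more product term, which is the "distinct pattern" referred to above.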

The use of P and G to simplify the above expressions was not an arbitrary choice, and by looking again at the truth table for the carry-out, shown in Table 4.7, we can see their origin. By looping the minterms to produce the AB product we note that a carry-out is generated (hence the use of G) if:

G = AB

that is, if both inputs are 1. The remaining two minterms for C0 (AB′Ci and A′BCi) show that a carry-in to the full adder (i.e. Ci = 1) is propagated (hence the use of P)

if either A or B is 1. So, these two minterms are covered by the Boolean expression:

PCi

where P = (A + B). Note that this expression for P means that the ABCi minterm is covered by both G and P.
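As a quick sanity check (an illustrative exercise, not part of the original text), we can exhaustively confirm that the generate/propagate form G + PCi covers exactly the same minterms as the minimised carry-out expression from the Karnaugh map:

```python
from itertools import product

# Compare the K-map carry-out AB + ACi + BCi with G + P*Ci over all
# eight input combinations of the full adder.
for a, b, ci in product([0, 1], repeat=3):
    kmap_carry = (a & b) | (a & ci) | (b & ci)
    g, p = a & b, a | b
    assert kmap_carry == (g | (p & ci))
```

The ABCi minterm is the overlap: it satisfies both G = 1 and PCi = 1, as noted above.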

The implementation of the four-bit look-ahead carry adder using the forms of the carry-outs derived above is shown in Fig. 4.16. As shown, the circuit requires 19 gates with a maximum delay of four levels. Note that this does not include production of the final carry-out (C3); that some gates have four inputs; and that this implementation requires four three-input XOR gates, and XOR, as we know, is not a basic Boolean operator.

### A more practical implementation

The four-bit look-ahead carry adder is available (as are many of the circuits we have already discussed) as a single integrated circuit (IC). It is instructive to consider this circuit since it employs a different implementation that eliminates the need for three-input XOR gates. Looking at how this is achieved serves as a useful exercise in Boolean algebra.

The sum of the two bits A and B can be written as:

A ⊕ B = (A + B)(AB)′ = PG′

This means that to produce the sum term, S, rather than use a three-input XOR gate, a two-input one can be used, fed with the above result (generated from the P and G terms, which are needed anyway for the look-ahead carry) and the carry-in.

From this it can be seen that a pattern is emerging, as before, but in this case for Cj. To implement the circuit in this form requires 25 gates (excluding inverters) and has a longest delay, for S3, of four levels. Although this is more gates than the previous implementation, only two-input XOR gates are needed. (Remember that three-input XOR gates actually implement quite a complex Boolean function.)

### Summary

In this section we have considered four ways of implementing a combinational digital circuit, namely a four-bit binary adder. This serves to illustrate that there are always several ways any circuit can be designed. We first considered a two-level implementation in both fundamental and minimised sum of products form, both of which proved impractical. (A product of sums approach would have had the same outcome.)

A heuristic approach was then tried via consideration of the mechanics of the addition process. This led directly to the ripple carry, or parallel, adder which produces a practical implementation with a reasonable number of gates, but suffers from the problem of a rippling carry which reduces the speed of the circuit.

Finally, in order to overcome the problem of the rippling carry, we developed the look-ahead carry adder, which calculates the carry-out at each stage using the initial inputs, by ‘looking ahead’. This produces a faster design but does require more gates.