Edward W. Veitch is an American computer scientist. He graduated from Harvard University in 1946 with a degree in physics, followed by graduate degrees from Harvard in physics and applied physics in 1948 and 1949 respectively. In his 1952 paper "A Chart Method for Simplifying Truth Functions", Veitch described a graphical procedure for the optimization of logic circuits; a year later (1953) Maurice Karnaugh refined it into what is now known as the Karnaugh map method.
Recent Comments on Design
In May 1952 at an ACM conference, Veitch presented the Veitch diagram as a useful tool for simplifying the Boolean expression of a truth table. Afterwards he learned from a member of the audience that M. Karnaugh at Bell Labs was working on a somewhat similar chart. Karnaugh's documentation of his version came out in 1953. The primary difference was that Karnaugh used the Gray code to sequence the squares in the diagram, whereas the Veitch diagram used the binary code sequence of the truth table. This change meant that more, but not all, of the simplification patterns would appear as contiguous blocks in the Karnaugh version. Karnaugh's approach was generally accepted by the computer industry, and within a few years textbooks began to teach the Karnaugh map method.
Veitch has written recently about the coding decision, providing a greater explanation of his choice.
“I was not surprised by the Gray code approach as I had actually recognized and seriously considered using that alternative back before I gave my 1952 paper, and of course well before I had heard of Karnaugh. I already knew the arguments for using the Gray code as well as my reasons for rejecting it. My first reaction when I learned of Karnaugh’s approach was against it. Eventually I learned of its acceptance by almost everyone.
My choice of the binary sequence came from my recognition that a three-variable map (which I will write as a 3v-map) can be drawn as two 2v-maps (each a two-by-two set of four squares) next to each other, one representing the top of a three-dimensional cube and one representing the bottom of the cube, and that all adjacencies between the two 2v-maps are based on a one-to-one matching of corresponding squares in the two maps. A similar relationship applies to all maps with a larger number of variables.
Thus every (n+1)v-map consists of two nv-maps whose cells are each adjacent to the corresponding cells of the other nv-map. The 4v-map consists of four 2v-maps in a square with each 2v-map to be matched against its horizontal and vertical neighbors. I now name this approach the ‘overlay’ method in that the matching of two n variable maps is done by figuratively overlaying one map over the other. I will discuss my reasoning for choosing this approach over the Gray code alternative later in this paper after providing a little more background.
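As an illustration of the overlay construction just described (a minimal sketch, not taken from the 1952 paper), one can check that the corresponding cells of the two nv-maps making up an (n+1)v-map always hold minterms that differ in exactly one bit, and are therefore logically adjacent:

```python
# Minimal sketch (not from the 1952 paper): in the overlay construction an
# (n+1)v-map is two nv-maps, one for the new variable = 0 and one for it = 1.
# Corresponding cells hold minterms i and i + 2**n, which differ in exactly
# one bit and are therefore logically adjacent.

def adjacent(m1, m2):
    """True if the two minterm numbers differ in exactly one bit."""
    diff = m1 ^ m2
    return diff != 0 and (diff & (diff - 1)) == 0

def check_overlay(n):
    """Check every pair of corresponding cells in the two nv-maps."""
    return all(adjacent(i, i + 2 ** n) for i in range(2 ** n))

print(check_overlay(2), check_overlay(3))  # True True (3v-map and 4v-map cases)
```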
I realize now that I said little about my method in the 1952 paper. I think now that I had erroneously assumed that the minimization patterns were obvious. Also, in my initial design of the three- and four-variable diagrams I had placed a small gap between the component 2v-maps to help the user understand the relationships among the cells. I removed it because I found that it slowed down my drawing of the diagram by hand. I wasn't thinking of using preprinted maps. I think now that the removal of the gap made it more difficult for readers to recognize my patterns. I have reintroduced the gap in my explanation here.
The fact that everyone chose the Gray code approach made me feel that perhaps I was wrong in rejecting it. I regretted not having mentioned the Gray code approach in my paper. About that time I became involved in projects that did not involve minimization of Boolean expressions so I paid less attention to details of logical design.
What I did not realize until recently was that no one understood my method for identifying simplification patterns; therefore, their choice of the Gray code version was really made without a full understanding of both versions. This note is therefore written to provide a better description of my approach and how I developed it.
I have read the various versions of the Wikipedia article and re-read my 1952 paper. I realized that my paper was too hastily written; it needed better proofreading and more explanation in some parts. I believe now that people generally assumed that my simplification patterns were based entirely upon looking at the variable labels on the diagrams, especially when contrasted with the definitions given in Karnaugh's paper. This current note, although over fifty years too late, is an attempt to explain my method and why I chose the binary sequence.
In 1949 I took a college course on digital computer design that explained that digital circuits could be represented in Boolean algebra. A desired function could be expressed as the logical sum of its minterms. This expression could be simplified using the rules of Boolean algebra, such as 1 + 1 = 1 and X + X’ = 1. Thus, for example, assume that the truth table for a three-variable function states that the function output F is to be high (or 1) when the inputs ABC are 011, 100, 110 or 111. The Boolean function then is:
F = A’BC + AB’C’ + ABC’ + ABC = AC’(B’ + B) + BC(A’ + A) + AB(C’ + C) = AC’ + BC + AB
None of the three terms in the final expression can be shortened further; such terms are sometimes called the prime implicants of the function. In this case only AC’ and BC are needed to cover all of the minterms, and AB can be dropped as redundant. Sometimes there are alternative sets of prime implicants to choose from. The overall process of finding the prime implicants and selecting the minimum set of them can be lengthy and not obvious. In 1949 a standard way to find the prime implicants was to test each minterm against every other minterm for possible simplification and then to test the results for further reduction. For the function specified in the Wikipedia K-map article, there are eight minterms specified as 1. Testing each of these against the others finds ten terms of three variables each. Testing each of these against the others gives further reductions. The resulting set of four terms is the four prime implicants AB’, AC’, BCD’, and AD’. The first three of these cover the minterms; the fourth term is redundant. (I will discuss race conditions later.) The above process can be tedious and can easily lead to careless errors. In 1949 I was shown a diagram that could be used to help in simplifying a four-variable function. It was a table with 16 columns (one for each of the possible minterms) and 80 rows (one for each of the 1-, 2-, 3- or 4-variable terms that could be a prime implicant). A similar five-variable diagram would have 32 columns and 242 rows and was considered impractical.
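The pairwise-combination procedure described above can be sketched in a few lines of Python (a minimal illustration; the term representation and function names are chosen for this note, not taken from the paper). Terms are written as strings over '0', '1' and '-', and two terms combine when they differ in exactly one fixed position:

```python
from itertools import combinations

def combine(t1, t2):
    """Combine two terms (strings over '0', '1', '-') that differ in exactly
    one specified position; return None if they cannot be combined."""
    diffs = [i for i, (a, b) in enumerate(zip(t1, t2)) if a != b]
    if len(diffs) == 1 and '-' not in (t1[diffs[0]], t2[diffs[0]]):
        i = diffs[0]
        return t1[:i] + '-' + t1[i + 1:]
    return None

def prime_implicants(minterms, n):
    """Repeatedly combine terms until nothing more combines; terms that were
    never used in a combination are the prime implicants."""
    terms = {format(m, '0{}b'.format(n)) for m in minterms}
    primes = set()
    while terms:
        used, next_terms = set(), set()
        for t1, t2 in combinations(terms, 2):
            c = combine(t1, t2)
            if c is not None:
                next_terms.add(c)
                used.update((t1, t2))
        primes |= terms - used
        terms = next_terms
    return primes

# The three-variable example above: F = 1 for ABC = 011, 100, 110, 111.
print(prime_implicants({0b011, 0b100, 0b110, 0b111}, 3))
# {'1-0', '-11', '11-'} in some order, i.e. AC', BC and AB
```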
In my initial work in logical design at Burroughs Corporation I designed structures that would carry out a lot of simple operations involving many inputs and many outputs. I found no need for using the minimization methods discussed above. I then read a paper that reminded me that even a three variable Boolean expression could sometimes be difficult to simplify with multiple sets of prime implicants that would cover the specified function.
I was aware that an n-variable expression could be represented as corners of an n-dimensional cube, with each dimension of the cube representing one of the variables. For three variables, I drew a cube and marked the corners specified for the particular truth table. The patterns were immediately obvious. Minterms that could be combined were at opposite ends of one of the edges of the cube. Four points around one of the six sides of the cube, forming a square, would allow a double combination. I could see the prime implicants and the alternative minimizations. The problem then became: how could I show this for three and more variables on a two-dimensional chart?
I drew a two-by-two set of four squares (a 2v-map) to represent the top of the cube and a second 2v-map under it for the bottom of the cube, with a space between the two 2v-maps. Within each 2v-map's set of squares, adjacencies were obvious. The adjacencies between the top and bottom of the cube, the four side edges of the cube, were represented by the corresponding squares if one superimposed the top 2v-map over the bottom 2v-map. For four variables I remembered that the tesseract was often drawn as a cube inside a cube with all corresponding corners attached, so I drew the four-variable chart as four 2v-maps separated by small gaps. For five- and six-variable charts I added one or three copies with a larger gap between the copies.
I believed that the gaps between the portions of a map were helpful to the eye. After a few weeks I became accustomed to the fact that the second and third columns, and the second and third rows, of the map were not actually adjacent even though they were next to each other in the diagram. I therefore removed the small gaps in order to simplify the hand drawing of the map, which was slowed down, especially for the five- or six-variable map, by its combinations of small and large gaps. I believe now that this removal was an important reason that the readers of my paper misunderstood my approach. Therefore I am now reintroducing the gap via the double-line border feature of Excel.
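As a rough sketch of this layout (the orientation is an assumption made for illustration, not necessarily the original one: A and B index the rows and C and D the columns, both in plain binary order), a few lines of Python can print a binary-ordered 4v-map with the gaps restored:

```python
# Minimal sketch of the layout: a binary-ordered 4v-map printed as four
# 2v-maps with the gaps restored.  Assumed orientation (illustration only):
# A,B index the rows and C,D the columns, both in binary order.

def print_4v_map(minterms):
    order = ['00', '01', '10', '11']                     # plain binary sequence
    for i, ab in enumerate(order):
        if i == 2:
            print()                                      # gap between upper and lower 2v-maps
        cells = []
        for j, cd in enumerate(order):
            if j == 2:
                cells.append(' ')                        # gap between left and right 2v-maps
            cells.append('1' if int(ab + cd, 2) in minterms else '0')
        print(' '.join(cells))

# Example: the eight-minterm function discussed above (assumed here to be
# minterms 6 and 8-14 of the K-map article's example).
print_4v_map({6, 8, 9, 10, 11, 12, 13, 14})
```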
I did not intend to remove the gap between the 4v-maps in the 5v-map and the 6v-map. In my 1952 paper I included one 5v-map which appears without a gap. This was due to a misunderstanding: the artist at Burroughs who created the slides and document of my presentation misread my sketch, and I decided that I could explain the mistake during my talk. I had no idea that the error might cause confusion sixty years later. I don't know whether the current K-map for five or six variables separates its parts by a gap, a heavy line or some other way. If it doesn't, I recommend that one be added to help the eye.
What about the Gray code? When I first thought about creating a two-dimensional representation of an n-dimensional cube, I tried different arrangements. I discovered that for three variables I could put the eight minterms in two concentric rings, but this had disadvantages and would not extend to four or more variables. Early in my investigation I recognized that if I turned around the bottom portion of the three-variable cube, reversing the sequence of rows in the bottom 2v-map of the 3v-map, then half of the corner adjacencies of the cube that were not contiguous on the map would become contiguous. A similar advantage could be achieved on the 4v-map by doing the same reversal on the right side of the 4v-map. I considered the two alternatives for several moments one afternoon before I wrote my paper. If this reversal had covered all adjacencies in the 3v-map and 4v-map I would probably have chosen that approach, but it did not include those now covered by what the K-map article calls the 'wrap-around' method. There are 32 adjacent pairs of cells in the four-variable map. The K-map makes 24 of these contiguous; the binary map makes 16 contiguous. There are 24 different pattern groups of four cells possible; the K-map shows 17 of these as contiguous, the binary map 12.
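The pair counts quoted above can be checked mechanically. A minimal sketch, assuming the usual binary row/column sequence 00, 01, 10, 11 and the Gray sequence 00, 01, 11, 10:

```python
# Minimal sketch to check the pair counts above for a four-variable map.
# Row/column orders: binary = 00, 01, 10, 11;  Gray = 00, 01, 11, 10.

def count_contiguous(order):
    """Count logically adjacent minterm pairs (differing in one bit) that are
    also horizontally or vertically next to each other in a 4x4 grid laid out
    with the given row/column order (no wrap-around)."""
    cell = [[int(r + c, 2) for c in order] for r in order]
    def adjacent(a, b):
        d = a ^ b
        return d != 0 and (d & (d - 1)) == 0
    count = 0
    for i in range(4):
        for j in range(4):
            if j < 3 and adjacent(cell[i][j], cell[i][j + 1]):
                count += 1                               # horizontal neighbours
            if i < 3 and adjacent(cell[i][j], cell[i + 1][j]):
                count += 1                               # vertical neighbours
    return count

print(count_contiguous(['00', '01', '10', '11']))        # binary order: 16 of the 32 pairs
print(count_contiguous(['00', '01', '11', '10']))        # Gray order: 24 of the 32 pairs
```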
I had three reasons for choosing the binary, or what I now call the 'overlay matching', method.
1. The binary method allows a direct, straightforward copying from the truth table to the map. The Gray code method requires the copying to reverse the third and fourth rows and columns. This makes the Gray code method more subject to errors.
2. I felt that patterns are easier to see in the overlay method than in the 'wrap-around' method. Note that with the binary map, all matchings between adjacent 2v-maps are two spaces apart, while in the Gray code map some connections are three squares apart.
3. The Gray code 'wrap-around' method does not extend to five- and six-variable problems. The overlay method used with the binary code does extend to five- and six-variable problems. Actually the 'overlay' method can also be applied to the K-map, but it requires a different way to determine adjacencies involving the extra variables.
In early 1952 I felt that higher variable problems were very important. I was impressed by how easy it was to see relationships on the new diagram; I saw that ‘don’t care’ squares could be easily handled and I had hopes that it would be easy to extend this capability to the real circuits of digital computer design that involved multiple outputs and intermediate combinations of logic. This is why my paper included what in retrospect may have been too much about ‘don’t care’ cases and about how it is sometimes possible to see relationships in the diagram other than the finding of the prime implicants.
About race conditions. In digital circuit design with real physical hardware it is always important to recognize the fact that nothing happens instantaneously. However, in thinking about how to simplify Boolean expressions I did not consider issues of timing.
One of my first lessons when I began work was that the manual opening of a gate to allow a series of pulses to pass might mean that the first pulse arrived at the gate just as it was opening, and that the result could be a weakened pulse which, if given two tasks to perform, might do one of them and not the other. This could lead to a misperformance of the logic. In a different example, if two numbers are to be compared, the comparison should not be done until after both numbers have been transferred to the comparison circuitry. In the example from the K-map article it is assumed that only one variable changes at any particular time. Assume that the function inputs specify that the output is to be 'one' and an input is to change state so that the function output is still 'one'. It may be desirable that there be no temporary variation in the output during this change. Note that any two 'one' cells that differ in a single input variable must be part of some prime implicant that remains constant over the transition. Thus, if there is a temporary glitch in the output, it must mean that the associated prime implicant was left out of the function, presumably for being redundant. In the example, that prime implicant is AD', and therefore the glitch must occur when there is a change in B or C.
In my 1952 paper I did not consider the issue of race conditions. The example above can be detected and analyzed easily using either the Gray code or the binary map. I understand that in the design of asynchronous circuits it can be helpful to assign the states of variables in a Gray-code-like sequence with only a single variable changing state between successive positions. It may be that current design rules make the Gray code map version more valuable. I am not familiar with the rules of today's chip designs, where many logical components are crowded together and computers include features such as look-ahead, multiple layers of memory, parallel processing, microprogramming, firmware, etc.
Rules for finding simplification patterns in the Veitch 4v-map:
1. Look at each of the four 2v-maps. Each map may have a pattern group of 4 cells (I will call such a group a '4-group') or it may have one or more 2-groups and/or 1-groups.
2. Look at adjacent 2v-maps, both horizontally and vertically, by mentally overlaying the maps. Where a pattern or part of a pattern (even a single cell) matches in both maps, the groups can be combined. If all four 2v-maps match, all four can be combined.
I use as an example the same truth table used in the Wikipedia K-map article.
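A rough mechanisation of rule 2 (a minimal sketch; as in the layout sketch above, A and C are assumed to select the quadrant and B and D the cell within it, an orientation chosen for illustration only): split the function into its four 2v-maps and intersect the sets of 'one' cells of horizontally and vertically adjacent quadrants. Each non-empty intersection marks a pattern that can be combined across the two maps.

```python
# Minimal sketch of rule 2.  Assumed orientation as in the layout sketch
# above: the quadrant (2v-map) is selected by A and C, and the cell within a
# quadrant by B and D.

def quadrants(minterms):
    """Map each (A, C) value to the set of (B, D) cells that are 1 in that 2v-map."""
    q = {(a, c): set() for a in (0, 1) for c in (0, 1)}
    for m in minterms:
        a, b, c, d = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
        q[(a, c)].add((b, d))
    return q

def overlay_matches(minterms):
    """Cell sets shared by horizontally or vertically adjacent 2v-maps."""
    q = quadrants(minterms)
    pairs = [((0, 0), (0, 1)), ((1, 0), (1, 1)),         # horizontally adjacent quadrants
             ((0, 0), (1, 0)), ((0, 1), (1, 1))]         # vertically adjacent quadrants
    return {p: q[p[0]] & q[p[1]] for p in pairs}

# The same eight-minterm example (assumed: minterms 6 and 8-14).
for pair, cells in overlay_matches({6, 8, 9, 10, 11, 12, 13, 14}).items():
    print(pair, sorted(cells))
```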
There are four prime implicants. Note that two result from overlays between the two right-hand 2v-maps and one from an overlay between the two bottom maps. I believe that the best way to mark these on the full map is to mark the cells with differently colored crayons. The asterisks (*) show the cell in each prime implicant that forces the use of that prime implicant, because it is covered by only that one prime implicant.
About the race condition discussed in the K-map article: a change between the two right-hand cells in the AD’ term, or between its two bottom cells, can affect the function output if AD’ is not included in the implementation. By including AD’ in the implementation, the function output will remain high, unaffected by the transition.
Rules for five or six variables: the 5v-map consists of two 4v-maps, and the additional overlay matching is between the two sixteen-cell 4v-maps. For example, the cells in the third column of one 4v-map are adjacent to the cells in the third column of the other 4v-map. If the same pattern group appears in both 4v-maps the two groups can be combined. If there is a partial match, the matching portions can be combined. For example, assuming X is the fifth variable that distinguishes between the two 4v-maps, and the two partial matches are XAB and X’BCD, the combined new prime implicant will be ABCD.
Note that there will be at least one prime implicant for every match between the two 4v-maps. It may help to draw a 4v-map containing only the matching 1s. The solution to this map will be the combined prime implicants that do not include the X variable.
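The combining step in this example can be written out directly (a minimal sketch; the term representation and function name are chosen for this note, and the variable names follow the example above). If one 4v-map contributes X·P and the other contributes X'·Q, every cell of the overlap P·Q is covered in one map or the other, so P·Q without X is an implicant:

```python
# Minimal sketch of combining partial matches between the two 4v-maps of a
# 5v-map.  A term is a dict mapping a variable name to 0 or 1; X is the fifth
# variable that distinguishes the two 4v-maps.

def combine_across_x(term_with_x, term_with_not_x):
    """Given X*P from one 4v-map and X'*Q from the other, the overlap P*Q
    (without X) is covered in both maps, so it is an implicant.  Returns None
    if P and Q conflict (no cells in common)."""
    p = {v: s for v, s in term_with_x.items() if v != 'X'}
    q = {v: s for v, s in term_with_not_x.items() if v != 'X'}
    for v in p.keys() & q.keys():
        if p[v] != q[v]:
            return None
    return {**p, **q}

# The example from the text: X*A*B in one 4v-map, X'*B*C*D in the other.
xab = {'X': 1, 'A': 1, 'B': 1}
xbcd = {'X': 0, 'B': 1, 'C': 1, 'D': 1}
print(combine_across_x(xab, xbcd))    # {'A': 1, 'B': 1, 'C': 1, 'D': 1}, i.e. ABCD
```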
The six-variable map consists of four 4v-maps. Assuming that the fifth and sixth variables are X and Y, these maps can be designated as X’Y’, X’Y, XY’, and XY. A systematic approach would be to first find the patterns for each of the maps individually, then find the combinations within the two Y maps, then the two Y’ maps, and finally the combinations for the full six-variable map. A worst-case example could include many different prime implicants, and the selection of the best set of prime implicants could be complicated. In many cases, however, finding the simplification terms may be easy. In either case I believe that the major advantage of using a map is that it gives the user a full overall picture of the significance of each of the six variables and thus will help in the design or redesign of the hardware.”
References
- Edward W. Veitch, 1952, "A Chart Method for Simplifying Truth Functions", Transactions of the 1952 ACM Annual Meeting, ACM Annual Conference/Annual Meeting "Pittsburgh", ACM, NY, pp. 127–133.
- Maurice Karnaugh, November 1953, "The Map Method for Synthesis of Combinational Logic Circuits", AIEE Committee on Technical Operations, presented at the AIEE Summer General Meeting, Atlantic City, N.J., June 15–19, 1953, pp. 593–599.