Course Handout - Short-Term Memory Subtypes in
Computing and Artificial Intelligence
Part 1 - A Brief History of Computing Technology, to 1924
Copyright Notice: This material was written and published in Wales by Derek J. Smith (Chartered Engineer). It forms part of a multifile e-learning resource, and subject only to acknowledging Derek J. Smith's rights under international copyright law to be identified as author may be freely downloaded and printed off in single complete copies solely for the purposes of private study and/or review. Commercial exploitation rights are reserved. The remote hyperlinks have been selected for the academic appropriacy of their contents; they were free of offensive and litigious content when selected, and will be periodically checked to have remained so. Copyright © 2010, High Tower Consultants Limited.
First published online 15:14 BST 23rd September 2002, Copyright Derek J. Smith (Chartered Engineer). This version [HT.1 - transfer of copyright] dated 12:00 13th January 2010
This is the first part of a seven-part review of how successfully the psychological study of biological short-term memory (STM) has incorporated the full range of concepts and metaphors available to it from the computing industry. The seven parts are as follows:
Part 1: An optional introductory and reference resource on the history of computing technology to 1924. This material follows below and starts to introduce the vocabulary necessary for Parts 6 and 7. The main sections are:
1 - Computing Before the Nineteenth Century
2 - The Method of Differences
3 - Computing 1805-1866 - The Difference Engine
4 - Computing 1805-1866 - The Electric Telegraph
5 - Computing 1867-1883 - Things Start to Speed Up
6 - Computing 1884-1924 - Data, Data, Everywhere
Part 2: An optional introductory and reference resource on the history of computing technology from 1925 to 1942. This will further introduce the vocabulary necessary for Parts 6 and 7. To go directly to Part 2, click here.
Part 3: An optional introductory and reference resource on the history of computing technology from 1943 to 1950. This will further introduce the vocabulary necessary for Parts 6 and 7. In so doing, it will also refer out to three large subfiles reviewing the history of codes and ciphers, and another subfile giving the detailed layout of a typical computer of 1950 vintage. To go directly to Part 3, click here.
Part 4: An optional introductory and reference resource on the history of computing technology from 1951 to 1958. This will further introduce the vocabulary necessary for Parts 6 and 7. To go directly to Part 4, click here.
Part 5: An optional introductory and reference resource on the history of computing technology from 1959 to date. This will further introduce the vocabulary necessary for Parts 6 and 7. To go directly to Part 5, click here.
Part 6: A review of the memory subtypes used in computing. To go directly to Part 6, click here.
Part 7: A comparative review of the penetration (or lack thereof) of those memory subtypes into psychological theory. To go directly to Part 7, click here.
To avoid needless duplication, the references for all seven parts are combined in the menu file. To return to the menu file, click here, and to see the author's homepage, click here. We also strongly recommend the computer history sites maintained corporately by the Charles Babbage Institute at the University of Minnesota, the Bletchley Park Museum, the Computer Conservation Society, and the National Archive for the History of Computing at Manchester University, as well as those maintained individually by the University of Essex's Simon Lavington, Clemson University's Mark Smotherman, and the freelance historians George Gray (editor of the Unisys Newsletter) and Ed Thelen.
1 - Computing Before the Nineteenth Century
The precise motivation for the human race's development of numeracy can only be guessed at on the basis of archaeological data, for it predates recorded history. It could have been driven, for example, by the needs of such neolithic crafts as boat-building, construction, or navigation, or by the needs of commerce (Smith, 1923) or taxation (Adams, 1993). As far as commerce is concerned, Smith (1923) notes that the Sumerians had a well-developed system of commerce by 3000BC, including bills, receipts, statements of account, weights and measures, etc. However, the supporting technology was simple in the extreme. You both planned and executed your sums in your head, assisted by temporary storage on fingers or scratch-pads, and by longer-term storage on tally sticks, clay tablets, or tokens (Schmandt-Besserat, 1978, 1992). The tally stick, in fact, could be much older still, for deliberately notched bones have been found in Upper Palaeolithic excavations dating from the Magdalenian. Marshack (1991), for example, interprets the Grotte du Taï bone plaque (ca. 15,000 years old) as some form of calendrical record. Counting frames, on which small stones could be laid out, seem to have been particularly commonplace in ancient commercial centres (indeed, the modern word "calculate" comes from the Latin word for the pebbles used, namely calculi), and early bankers developed these into grid-marked table tops (which to this day retain the connotation "counter"). The abacus is a clever improvement upon stone counting, being portable and having no loose pieces. It was developed in India about 3000BC, spread outwards from there, and survives to the present day in the Far East. A number of variants have been identified [for details, click here], but all require that any arithmetical "carry" operation be executed manually.
Now the problem with doing things manually is that it invites doing them wrongly - forgetting what to do next (errors of omission) or not doing something the right way (errors of commission). Accurate calculation is and always has been a constant struggle against error, and calls for just about every type of cognitive skill available to the human brain, not least the conceptual and procedural forms of long-term memory, and the working memory aspects of short-term memory. It is when those psychological systems become overloaded (as they rapidly do) that they need to be further supported by external systems, and the problem with external systems is that they are themselves procedurally complex and can potentially consume more mental resources than they release. Numeracy, in short, pushes our powers of cognition to the very limit, and trades off complexity in long-term memory for less frequent overloads in short-term memory. Nevertheless, it is probably safe to suggest that the numerically literate subset of these ancient societies possessed a sound grasp of the cardinal number concept, planned their numerical problem solving using some form of the number line, understood at least a rudimentary system of place values, and had at their disposal a wide range of linguistic furniture to put it all into effect .....
Key Concepts - Cardinal Numbers and the Number Line: What does it mean to think "one", and how does this differ from "two", or "ten", or "a million"? Do we think of that number of beads on a table top, or of that number of points on an infinitely long mental straight line (the "number-line"), or of that number of depressions on a keypad, or what? And what does it actually mean when we ask "how many"? If we have ten things in sight do we reply "ten" in absolute terms because we somehow know what ten is (conceptual, or "semantic", knowledge), or do we know that if we start counting upwards from 1 we will quite quickly get to 10 (procedural knowledge)? The problem is that it cannot entirely be the latter because there are plenty of numbers we cannot actually count up to, such as 10,000,000, or minus 5, or 1.33452, nor can it entirely be the former, because there is insufficient room in our semantic memories to have an entry for every possible number, most of which will never ever get used. Following a decade of experimentation (e.g. Gelman and Gallistel, 1978), Gelman (1980) concluded that successful counting involves the coordination of no fewer than five separate conceptual principles, namely (1) the One-One Principle - the principle that each item in an array can be given only one tag, (2) the Stable-Order Principle - the principle that the tags used must come from a stably ordered sequence, (3) the Cardinal Principle - the principle that the last tag used during the count represents the "cardinal number" of the array as a whole, that is to say, how many items are in it, (4) the Abstraction Principle - the principle that any set of items can be collected together for counting, and (5) the Order-Irrelevance Principle - the principle that the order in which the items are tagged makes no difference to the end result. The age of attainment of the cardinal principle in modern society has been further investigated by Wynn (1990).
She observed that children between 2:7 and 3:0 prefer to answer the "How many?" question by repeating the entire count rather than stating the last tag used, and that only at mean age 3:6 do children reliably use the last tag on its own.
Key Concept - Collective Nouns: Ancient languages and ancient minds must also have been able to deal with collections of items by giving them a collective name. They would have needed collective nouns like "herd", "flock", or "shoal", to indicate an indeterminately large number of cattle, sheep, or fish. This allows the mind to substitute one symbol (spoken or written) for a collection of other symbols, and once that habit has been acquired it is only a small additional step to introduce place value into your tally stick or counting frame system [for examples of aboriginal counting systems (the study of which is a branch of cultural anthropology known as "ethnomathematics"), click here].
Key Concept - Quantitative Nouns: A quantitative noun is an expression which defines an exact number, or a vague number as precisely as operationally necessary. In English this includes words like "few", "lots", "many", "pair", "dozen", "score", "gross", which we use to describe different sized groupings of entities. We may also use short phrases as nouns, as in the phrase "He wants what precious little additional money we have collected", where the words "what precious little additional money" act as a quantitative noun (Despain, 2002). Language is actually remarkably good at devising ad hoc constructions such as this to convey number concepts, and although quantitative nouns are not in themselves numeracy, you need a degree of numeric literacy to use them properly. You also need to know how numerically literate your audience is, because they are the ones who are going to have to decode your chosen expression, and it is vital that all parties end up with the same common understanding.
Key Concept - Place Value and "Carrying": A place value is the use of spatial position to change the value indicated by a given digit in a written system of numerical notation, and it is easy to demonstrate its usefulness by considering a counting frame, where a growing pile of identical unit counters would eventually become so large as to be unmanageable. At that juncture, every so many of the units could either (a) be replaced with one of the same type of counter placed somewhere else, or (b) be replaced in situ by a different type of counter - different in size, colour, shape, or material of construction - just as five green Monopoly® houses equals one red hotel. For some as-yet-unknown reason, option (a) is what happens in the abacus and place value systems of arithmetic notation, and option (b) has been reserved for physical token systems such as coinage. Our own "HTU" system of written numbers allows the digit "1" to indicate one in the units column, ten in the tens column, one hundred in the hundreds column, one thousand in the thousands column, and so on. You then have to "carry one" from one column to its left hand neighbour whenever you exceed nine in a column. We mention this (a) because the problem of mechanising the carry process was not solved until the 17th century, and (b) because the impact of the resulting system on a nation's collective numeracy can be remarkably subtle. Bierhoff (1996) studied the numerical concept systems of different European nationals, and found that continentals do not "dissect" two-digit numbers, preferring not to overemphasise the place value of each digit. This perhaps assists their being dealt with as single concepts. Thus 52 is explained to a Continental child as being "a fifty and a two", but to a British child it is explained as being "five tens and two units" (Bierhoff, 1996).
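The column-by-column carry procedure just described can be sketched in a few lines of Python. This is purely a modern illustration of the manual process; the function name and digit layout are our own:

```python
def add_with_carry(a, b, base=10):
    """Add two numbers digit by digit, right to left, carrying one
    into the next column whenever a column total reaches the base -
    the manual process the 17th century machines later mechanised."""
    digits_a = [int(d) for d in str(a)][::-1]   # least significant first
    digits_b = [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        da = digits_a[i] if i < len(digits_a) else 0
        db = digits_b[i] if i < len(digits_b) else 0
        total = da + db + carry
        result.append(total % base)             # digit left in this column
        carry = total // base                   # "carry one" leftwards
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

print(add_with_carry(199, 1))   # -> 200 (the carry ripples across two columns)
```

Note how a single added unit can ripple arbitrarily far leftwards - exactly the case (numbers "containing lots of nines") that made the mechanical carry so hard to engineer.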
Of course, once a society has acquired basic numeracy "skills", some of its members - we call them "mathematicians" - ascend far beyond the marketplace and do some remarkably clever things with them. By 300BC, Greek mathematical knowledge was sufficiently advanced for Euclid's "Elements" to become the standard geometry textbook for the next two thousand years, and, by 200BC, for Eratosthenes to calculate the circumference of the Earth. Greek mathematics was kept alive during the European Dark and Middle Ages by the Islamic world, and then claimed back during the Renaissance by pure mathematicians such as Stifel, Fibonacci, and Fermat, generalists such as Descartes and Galileo, and applied mathematicians (usually astronomers) such as Copernicus, Brahe, and Kepler. Nevertheless, the abacus, the counting table, and the tally stick remained the principal aids to calculation for some four and a half thousand years, with little substantive change, while the non-mathematicians - the ones whose efforts actually drove the economy - were still struggling with the basics. Practical help finally came in the shape of "Napier's bones". These were devised by the Scottish mathematician John Napier (1550-1617), and consisted of bars of wood or ivory, six or eight inches long, on which were engraved the multiplication tables of the numbers 1 to 9. Each bone was divided into ten vertical divisions. The top space on any one facet was used to identify the digit in question, and spaces #2 to #10 were index positions for the multipliers 1 to 9. Each index space contained the two-digit product of the digit in question and the corresponding multiplier. The lowest was therefore <01> (for bone #1, index position #1), and the highest was <81> (for bone #9, index position #9). However, the two digits of each product were always separated by a diagonal line. 
Thus the bone for <3> was marked "3" at the top to identify it, and then "0/3", "0/6", "0/9", "1/2" all the way down to "2/7", giving you your three times table at a glance. If you wished to multiply any two numbers together, you laid the first (and the longest, if there was one) number out as a sequence of bones, placed a straight edge across the bones to underline the "units" multiplier line, read off the products for each of the digit-by-digit multiplications, and then added up all the subtotals. You then repeated the process for the "tens" multiplier, then the "hundreds", and so on, and summed the individual results together.
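The whole procedure - lay out the bones, read off a row of two-digit products, add them along the diagonals, then shift and sum the per-digit subtotals - can be simulated in Python. The function names and data layout below are our own sketch, not anything Napier specified:

```python
def multiply_by_digit(number, multiplier):
    """Read one index row of a layout of Napier's bones: each bone shows
    the tens/units product of its digit and the multiplier (the two
    figures either side of the diagonal line), and shifting-and-adding
    those pairs left to right is exactly the diagonal addition."""
    bones_row = [(d * multiplier // 10, d * multiplier % 10)
                 for d in (int(c) for c in str(number))]
    total = 0
    for tens, units in bones_row:          # left to right along the bones
        total = total * 10 + tens * 10 + units
    return total

def napier_multiply(a, b):
    """One subtotal per digit of the multiplier (the 'units' row, then
    the 'tens' row, and so on), each shifted one place and summed."""
    return sum(multiply_by_digit(a, md) * 10 ** place
               for place, md in enumerate(int(c) for c in reversed(str(b))))

print(napier_multiply(1234, 56))   # -> 69104
```

The only arithmetic the user ever performs is single-digit lookup and addition; all the multiplication is pre-engraved on the bones.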
Napier's bones allowed large long multiplications to be done reasonably accurately by otherwise uneducated users, and any criticism that they were rather simplistic is offset by the fact that Napier was also responsible for one of the major theoretical advances of that era, namely logarithms, an even better method of multiplying by adding. Logarithms are power, or "exponential", relationships. They reflect how many times a number - known as the "base" of the logarithm - has to be multiplied by itself to give another number. Thus the act of computing base 3 to the power 2 means taking one 3 and multiplying it by a second 3, and it gives you the answer 9 (commonly known as "three squared"). Similarly, base 3 to the power 3 means taking the 9 you already have, and multiplying it by a third 3, giving 27 (commonly known as "three cubed"). As a specific formula, this latter would be written .....
3^3 = 27
..... and as a general formula, the process would be summarised as .....
b^power = x
In Napier's system, the power term is called the "logarithm" of the number x, and is properly written log_b(x) (although this may be abbreviated to log x if the base of the logarithm is known beyond doubt), and x is called the "antilogarithm" of the logarithm. Calculating the logarithms of numbers in between the whole number powers takes time, but, once done, can be entered into books of "log tables", so that subsequent users can look them up rather than having to calculate them for themselves.
The ability to multiply by adding derives from the mathematical principle that when two powers of the same base are multiplied together, the answer is the base to the sum of those two powers. Thus if you multiply 100 (which is 10 squared, or 10 to the power 2) by 1000 (which is 10 cubed, or 10 to the power 3) then you get 100,000 (which is 10 to the power (2+3)). As a specific formula, this would be written .....
10^2 × 10^3 = 10^(2+3) = 10^5
..... and as a general formula, it would be written .....
b^x × b^y = b^(x+y)
Or, to put it another way .....
log_b(x) + log_b(y) = log_b(x·y)
..... which is the same thing as the following look-up-able format .....
x·y = antilog[log_b(x) + log_b(y)]
The reason logarithms immediately caught on is that once the log and antilog look-up tables had been calculated and printed, the process of using them involved fewer steps than did Napier's bones, and was less prone to error because it totally avoided any long multiplications. No matter how big your numbers were, or how many of them you had to multiply together, their multiplication involved looking up the antilogarithm of the sum of their logarithms. And the more complicated the multiplication, the bigger the saving. Napier published his method in 1614, and the system soon received critical acclaim from the likes of Henry Briggs (1561-1631) and the astronomer Kepler. Napier died in 1617 and Briggs carried on the calculations and published log tables to 14 significant figures in 1624, much to the delight of the astronomers and navigators of the day.
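The look-up workflow can be illustrated with a toy Python sketch. The three-figure table and the nearest-entry antilog search below are our own simplifications of what the printed tables provided:

```python
import math

# A miniature "book of log tables": base-10 logarithms of 1.00 to 9.99,
# precomputed once and thereafter only looked up, never recalculated.
LOG_TABLE = {round(x / 100, 2): math.log10(x / 100) for x in range(100, 1000)}

def table_multiply(a, b):
    """Multiply two numbers the way a 17th century navigator would:
    look up the two logs, add them, then look up the antilog of the sum."""
    log_sum = LOG_TABLE[a] + LOG_TABLE[b]
    # Split off the whole-number part (the "characteristic"), as the
    # printed tables required, leaving a mantissa inside the table range
    characteristic = int(log_sum)
    mantissa = log_sum - characteristic
    # 'Antilog' lookup: the table entry whose log is nearest the mantissa
    antilog = min(LOG_TABLE, key=lambda k: abs(LOG_TABLE[k] - mantissa))
    return antilog * 10 ** characteristic

print(table_multiply(2.0, 3.0))   # -> 6.0, to the table's precision
```

The only step performed by hand is one addition; everything difficult was done once, by the table-makers, which is precisely why building and printing the tables themselves became the industry's bottleneck.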
Logarithms also produced a valuable piece of technological spin-off, when the "additive" nature of logarithmic multiplication was noted by the astronomer Edmund Gunter and the clergyman William Oughtred and used to create the slide rule. In its simplest form, this is simply two straight laths of wood placed together long side to long side, with a logarithmic power scale (Gunter's contribution) etched into the two facing edges (Oughtred's contribution). If the index of the second scale (the "1" graduation, whose logarithm is zero) is placed against the multiplicand on the first scale, then the multiplier on the second scale is level with the product of the multiplication on the first, that is to say, the logarithm of the multiplicand has been added to the logarithm of the multiplier, which, given Naperian theory, is equivalent to multiplying the original natural numbers together. The system was only accurate to a couple of significant figures, but that was sufficient for most practical applications. It was also portable, cheap, and you did not need to keep books of look-up tables at hand to make it work. It therefore caught on in a big way with everyday sum-doers such as craftsmen, bankers, and merchants. The system also had the power to fascinate, as in 1913, when a nine-year-old Florida schoolboy named John Vincent Atanasoff chanced upon his father's new slide rule, and it awoke in him such a love of numbers that - as we shall see in Part 2 - he went on to change the world.
But still - indeed more than ever - the individual arithmetician remained both thinker and doer. None of the available aids had any internal workings, and, no matter whether you used an abacus or Napier's bones or Briggs' log tables or Oughtred's slide rule, the necessary operating skill resided totally in the mind of the end user. If you omitted, or mis-sequenced, or mis-executed any one step of the calculation, then your final answer would immediately be compromised. The idea of removing error and increasing speed by mechanically assisting the process seems (along with the helicopter and the submarine) to have been yet another of those uncannily successful visions of the future by Leonardo da Vinci (1452-1519). Some time around 1500, he sketched a design for an adding machine built of interlinked, rotating, numbered wheels. It is, however, not known whether he actually built a machine to go with it, although 20th century facsimiles have been built to his design. Similarly, in 1623 Wilhelm Schickard (1592-1635) is reported to have built a "calculating clock" capable of adding and subtracting six-digit numbers. Again no specimen of the machine exists, but again there has been a 20th century facsimile.
More complete details have survived from 1642, when the 19-year-old French mathematician Blaise Pascal (1623-1662) developed the Pascaline. In modern technoparlance, this was a "da Vinci clone", a set of geared counter wheels with a "tens carry" system, which could record an eight-digit running total. A number was inserted by rotating the appropriate "column" wheel (units, tens, hundreds, etc.) with a stylus, and further numbers were then added by onward rotation, cognate column by cognate column, from right to left. Unlike the abacus, however, the built-in carry system (see below) would take care of turning as many as necessary of the dials to the left of the one being moved manually. Half a century later, Gottfried Leibnitz (1646-1716) demonstrated a larger handle-driven version of Pascal's machine. This was capable of multiplication by repeated addition, and went under the name machina arithmetica. This was to remain the principle of the desk-top calculator for more than 200 years, until the likes of NCR and Burroughs (see below) added electric drives to their machines in the early 1900s.
Key Invention - The "Tens Carry" Cogwheel: This is the device which provided a solution to the place value problem. All the machines described above allowed a 10-position right hand cog to engage with the cog immediately to its left on every tenth advance. Thus as 9 advances towards 0 in the units column, say, a single digit is being added to the tens column, which will itself carry into the hundreds column if necessary, and so on until you fall off the left hand end of the available register. The technology survives on the dashboard instrumentation of most 20th century automobiles, patiently adding up the kilometers/miles driven, and "going round the clock" back to zero when its digit capacity is finally exceeded.
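A minimal Python model of such a register (the class name and default wheel count are our own) shows the carry rippling leftwards and the register "going round the clock":

```python
class Odometer:
    """A register of geared ten-position wheels: advancing a wheel past 9
    resets it and nudges its left-hand neighbour by one; a carry off the
    leftmost wheel simply falls away, wrapping the register to zeros."""
    def __init__(self, wheels=6):
        self.digits = [0] * wheels        # leftmost is most significant

    def advance(self, column=-1):
        """Advance one wheel one position, rippling the carry leftwards."""
        self.digits[column] += 1
        if self.digits[column] == 10:     # tenth advance: engage neighbour
            self.digits[column] = 0
            if column > -len(self.digits):
                self.advance(column - 1)
            # else: carry falls off the left hand end of the register

    def reading(self):
        return "".join(map(str, self.digits))

o = Odometer()
for _ in range(1000):
    o.advance()
print(o.reading())   # -> 001000
```

The recursion in advance() is the mechanical ripple: each wheel only ever knows about its immediate left-hand neighbour, yet whole-register carries emerge for free.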
We mention all this, because the first fundamental issue in the history of computing has to do with the difference between the "analog" and the "digital" representation of a variable .....
Fundamental Issue - Analog vs Digital Representation: The point about any calculation aid is that sooner or later it has to represent the real world internally in some way. The simplest representation might be one finger (or stone, etc.) to represent one thing: one sheep, perhaps, or one soldier. This is known as a digital representation, because the number represents a finger-countable cardinal number (see above). Digital representations are ideal for counting indivisible things like sheep and soldiers, where there are only whole numbers on the number line. However, they can also be used to deal with continuous variables such as length, weight, time, or angle of rotation. This is what is happening when we talk of horses being "15 hands" tall, or of Noah's ark being "300 arm's lengths" long. We are taking a particular dimension - height or length - and selecting a cardinal number of arbitrary units of it. When necessary, we can also cater for fractions of a unit, and if we convert these fractions to decimals it gives us a digital approximation to a point on a truly continuous variable (e.g. 7.693 kilometers).
But in fact it is possible to select one continuous variable and treat it AS IF it were another. A good example of this is the clock candle, where the amount of burn down (i.e. a distance) is taken as a measure of the passage of time. This is known as an analog representation, because the first measure is used analogously - in the place of - the second. Other examples of the digital-analog distinction are commonplace in everyday life ....
The Watch: A digital watch shows you discrete numbers on a display panel, whilst an analog watch shows you nothing but a constant angular sweep. Neither shows you the time as such, of course, but with experience we instantly grasp what they are trying to tell us.
The Slide Rule: This analogs distance against the logarithm of a number to be multiplied or divided.
The Analog Speedometer: This analogs angular sweep against speed.
The Analog Mercury Thermometer: This analogs distance (of expanding liquid) against temperature .....
The Analog Dial Thermometer: .....but the same job can also be done by angular sweep .....
The Digital Thermometer: ..... and digital displays are nowadays cheaper and more reliable, making them the display of choice in modern central heating, healthcare, and dashboard applications.
NB: The study of what makes for the most effective combination of instruments in dashboards and on flightdecks is known as "display engineering". It is part of the larger sciences of human factors, human-computer interaction, human-machine interaction, and cognitive ergonomics, and is a critical aspect of modern aerospace, automobile, and computer design.
2 - The Method of Differences
Before proceeding with the main argument we need to say a few quick words about yet another mathematical procedure, "the method of differences", a technique which helps analyse the rules behind numerical series. This procedure was developed over a number of years by a succession of workers, including Isaac Newton, the aforementioned Henry Briggs, and James Stirling (1692-1770). The method involves writing down the first half dozen values in the given series, and then recording the differences between successive values, thus .....
Original Series: 1, 3, 5, 7, 9, 11 .....
Differences: 2, 2, 2, 2, 2 .....
If the differences are the same (and in this case they are) then you have discovered the underlying secret of the series. You know how it works, and can accordingly work out for yourself the value of any nth term in it. If, on the other hand, the differences are NOT the same, then you have to take the differences of the differences. This is known as taking the "second order" differences of the "first order" differences. Thus .....
Original Series: 1, 4, 9, 16, 25, 36 .....
1st Order Differences: 3, 5, 7, 9, 11 .....
2nd Order Differences: 2, 2, 2, 2 .....
Again, if the differences are the same (and again they are) then you have discovered the underlying secret of the series. If, on the other hand, the differences are NOT the same, then you have to take the differences of the differences again. This is known as taking the "third order" differences of the "second order" differences. Thus .....
Original Series: 1, 8, 27, 64, 125, 216 .....
1st Order Differences: 7, 19, 37, 61, 91 .....
2nd Order Differences: 12, 18, 24, 30 .....
3rd Order Differences: 6, 6, 6 .....
And so on.
Now this is the pay-off. Equations of the form y = ax + b will have generated any series where the first order differences are the same, equations of the form y = ax^2 + bx + c will have generated series where the second order differences are the same, equations of the form y = ax^3 + bx^2 + cx + d will have generated series where the third order differences are the same, equations of the form y = ax^4 + bx^3 + cx^2 + dx + e will have generated series where the fourth order differences are the same, and so on. In other words, the "degree" of the polynomial - that is to say, the highest power term - is the same as the number of passes required to make the differences equal. The method of differences is thus a quick way for a pure mathematician to turn a few initial observations into a general purpose polynomial equation, from which any later nth value can be predicted with total reliability. Unfortunately, the technique was beyond most navigators and engineers, for they were applied mathematicians (and were, in any event, already converts to using look-up tables), so the method of differences became the shortcut of preference to the mathematicians who built the look-up tables in the first place, and it did the job very well.
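The whole procedure reduces to a few lines of Python. This is our own sketch (the series tested are the ones worked through above, and the function assumes the series really was generated by a polynomial):

```python
def difference_orders(series):
    """Repeatedly take the differences of successive values until a row
    becomes constant; the number of passes equals the degree of the
    polynomial that generated the series, and the constant is returned
    alongside it."""
    row, order = list(series), 0
    while len(set(row)) > 1:                       # row not yet constant
        row = [b - a for a, b in zip(row, row[1:])]  # next order of differences
        order += 1
    return order, row[0]

print(difference_orders([1, 4, 9, 16, 25, 36]))     # squares  -> (2, 2)
print(difference_orders([1, 8, 27, 64, 125, 216]))  # cubes    -> (3, 6)
```

Running it on the odd numbers, the squares, and the cubes reproduces the first, second, and third order results tabulated above.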
However, now that it was possible to do sums that much more quickly, another problem emerged, for the printers could not keep up with the mathematicians. The typesetting process was simply too slow and error-prone. The solution was to automate not just the calculation, but the print production process as well, and this idea seems to have come from a German engineer named Johann Helfrich Müller, who wrote .....
"I would in the future make a machine, which would simultaneously print in printer's ink on paper any arbitrary arithmetical progression ... and which would halt of its own accord, when the side of the paper was full up. After setting the first figure, all one has to do is to turn a handle" (M�ller, 1784, cited in Swade, 2000, p30).
Yet Müller never got round to making that machine, and when the French civil engineer Gaspard de Prony (1755-1839) worked with the method of differences from 1792 to 1801 as part of a team of around 80 mathematicians and assistants (that is to say, some 800 man years' effort) to produce "Les Grandes Tables Logarithmiques, etc.", a 19-volume book of logarithmic and trigonometric tables, the cost of typesetting and proof-reading prevented the manuscript from ever being published in full. And it was these difficulties which prompted one Charles Babbage (see below) to try simultaneously producing the numbers AND the print out with the largest da Vinci clone ever produced .....
3 - Computing 1805-1866 - The Difference Engine
We move on now to 1805, the year in which a French weaver named Joseph Jacquard (1752-1834) perfected a method of controlling his looms using precisely sequenced packs of precoded slats to codify the pattern of the weave in advance. Prior to this, it had been standard practice for around five thousand years to introduce the pattern into woven cloth using what would today be termed "pixel technology" - by manually varying which warp (i.e. longitudinal) threads were raised, and which lowered, at the moment the weft (i.e. transverse) thread was passed. Using warps and wefts of different colours gave you hundreds of coloured pixels per minute, from which you built up your desired pattern, but it was slow and monotonous work, and subject, again, to operator error. The key to Jacquard's advance lay in mechanising the process .....
Key Invention - Jacquard's Needle System: A number of wooden or lacquered card slats were strung together into a continuous loop, and mounted above the loom. At each activation by the weaver, the following sequence of operations took place: (1) a new slat was rolled forward to be "read", and the "used" slat was returned to the back of the pack, (2) sensor needles were advanced up against the slat, and were able to move further where there was a hole in the slat than where there was none (that is to say, some needles were restrained by non-holes), (3) the warp threads coinciding with the restrained needles were lifted (this being possible because the forward movement of the unrestrained needles disengaged the link to the lifting mechanism), and (4) the shuttle was passed, drawing the weft thread between the lifted and the unlifted warp threads. Looms with 400-600 needle systems rapidly became common, and 1000 needle systems were not unknown. The profit came when the slats could be recycled in an endless loop, because the loom simply repeated the pattern every time it got back to the start. The mass production of any repeating woven pattern thus became simple once the appropriate slats had been produced, and loops of up to 24,000 cards were not unknown. In truth, Jacquard incorporated several earlier inventions into his machine: the idea of the punched card program seems to have come from Jean Philippe Falcon in 1728, and the use of lifting hooks to effect the automation from Jacques Vaucanson in 1745 (Wilkes, 1956). Falcon's machine is on display in the Museum of Arts and Crafts in Paris.
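The four-step reading cycle can be caricatured in Python. The card encoding (True for a hole) and the function names are our own assumptions, chosen only to mirror the sequence of operations just listed:

```python
def read_card(card):
    """One activation of the needle system: needles pass freely through
    holes (True) and are restrained by non-holes (False); the warps
    coinciding with the restrained needles are the ones lifted before
    the shuttle passes."""
    return [not hole for hole in card]   # restrained needle -> warp lifted

def weave(cards, repeats=1):
    """Cycle the endless loop of card slats, yielding one row of lifted
    warp positions per shuttle pass; the pattern repeats itself every
    time the loop returns to the start."""
    for _ in range(repeats):
        for card in cards:
            yield read_card(card)

# A two-card 'program': '#' marks a lifted warp, '.' an unlifted one
cards = [[True, False, True, False],
         [False, True, False, True]]
for row in weave(cards, repeats=2):
    print("".join("#" if lifted else "." for lifted in row))
```

Run as written, this prints a four-row alternating check pattern, and the third row is identical to the first: the "program" repeats as soon as the card loop comes round again.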
The Falcon-Jacquard system is cited very frequently in histories of computing, because it involved automation. It reduced a repetitive craft to an essentially numerical process, it brought numerical control to a manufacturing industry, and it involved programming as we nowadays understand it. Indeed, it only remained to point the technology at mathematical problems for it to become computing, and the story of how this happened begins shortly afterwards, when the British engineer Charles Babbage (1791-1871) took Pascal's and Leibniz's cogwheeled calculators to new levels of complexity. In his late twenties, Babbage conceived the idea of a large calculating machine capable of automating the method of differences and printing off the resulting look-up tables. Like de Prony a few years before, Babbage had, as a young man, spent time producing mathematical tables for a living, and had been struck by the highly repetitive nature of the work, and by the underlying simplicity of the individual steps. Basically, even the most complicated tasks boiled down time after time to additions and subtractions done in a very precise order. He called his proposed machine a Difference Engine, had a small prototype (now lost) ready in 1822, and submitted the plans to the British government for funding to build a full-sized version. He was awarded enough money to start work, but progress was so slow that in 1833 funding was withdrawn with the machine still far from complete. The Difference Engine as it stood in December 1832 was as far as it got, and H.M. Treasury had invested nearly £17,500 in it (Swade, 2000).
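The "additions done in a very precise order" that the Difference Engine mechanised can be shown directly. The sketch below is purely illustrative: given the initial differences of a polynomial, every further table entry is produced by repeated addition alone, exactly the operation the engine's cogwheels performed:

```python
# A minimal sketch of the method of differences: once the initial
# differences of a polynomial are known, every further table entry
# needs nothing but additions.

def tabulate(initial_differences, steps):
    """Generate successive polynomial values by repeated addition.

    initial_differences -- [f(0), first difference, second difference, ...]
    """
    diffs = list(initial_differences)
    values = []
    for _ in range(steps):
        values.append(diffs[0])
        # Each difference absorbs the one below it -- pure addition.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return values

# Tabulating f(x) = x**2: f(0)=0, first difference 1, second difference 2.
print(tabulate([0, 1, 2], 6))  # [0, 1, 4, 9, 16, 25]
```

For a polynomial of degree n, the nth difference is constant, which is why a fixed bank of addition wheels suffices for the whole table.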
The root of Babbage's difficulties lay partly in the fact that the design for the Difference Engine was far ahead of its time. Although it was basically a da Vinci clone, full of toothed interlocking wheels and spindles, it was very large (Swade, 2000, estimates 25,000 components). This meant it had to be extraordinarily well-engineered, so as not to lock solid when applying the tens-carry operation to numbers containing lots of nines. More significantly, it was put together in an ingenious fashion to provide four main modules. Firstly there were stores for numbers, secondly there was a "mill" for doing calculations based on those numbers, thirdly there was a controller to ensure that everything was done in the right order, and fourthly (independently reaching the same conclusion as Müller, and conscious of de Prony's difficulties) there was also an automatic link to a typesetting module, so that the tables would be produced ready for printing. Babbage was also guilty of persistently upgrading the specification as new ideas occurred to him, and in 1834 he did so again, in order to incorporate Jacquard's proven ability to "pre-program" a sequence of instructions. Jacquard's cards would allow the sequence of the computation to be preset, thus acting as the first storable and reusable computer program. Babbage called this new machine the Analytical Engine. Sadly the Government was far less enthusiastic now, and adequate funding was not forthcoming.
What happened next is an object lesson in how not to manage the process of innovation. Swade explains how the Swedish engineer Georg Scheutz, having read of Babbage's undertaking in 1834, and understanding the basic principles, was inspired to produce his own design for a difference engine. Scheutz, however, kept it simple, so that when the plans and prototypes were ready in 1837, he was able to set his 15-year-old son Edvard to work at putting it together. It took father and son only six years to complete it (whereupon the British government promptly placed their own order for one). As Swade puts it, "a Swedish teenager had succeeded where the best of British had failed" (p197). Babbage's second machine was still incomplete at the time of his death in 1871, a quarter of a century later, although a working facsimile was built in 1991 to celebrate Babbage's bicentenary and is in the Science Museum in London.
Between 1833 and 1852, Babbage was assisted in his work by Augusta Ada Lovelace (1815-1852), daughter of the poet, Lord Byron, and a woman of great influence in British society. A keen student of mathematics herself, Lovelace soon grasped Babbage's programming concepts, lobbied on behalf of the project in high places, and published a number of historically valuable technical memoranda. This enthusiastic contribution to the newly born software industry was honoured in 1979 when the US Department of Defense named their Ada programming language after her.
4 - Computing 1805-1866 - The Electric Telegraph
The story of the electric telegraph is usually dated to the 1830s, but to get the best flavour out of the later material it helps to go back some way before that. So we shall start in 1731, when the British polymath, Stephen Gray (1666-1736), who for several decades had been carrying out some of the very first empirical investigations of electrical phenomena, demonstrated that an electrical charge would readily conduct itself away from the place where it had been generated. Gray experimented with the transfer of charge along glass and metal rods, fishing poles, pokers, chains of people, thin brass wire, and silken thread, and made the arrival of the charge at the distant end visible by having it electrostatically attract slips of foil or paper. His longest successful transmission seems to have been 886 feet along a length of cord suspended by silk threads. This series of experiments was followed in 1751 by Benjamin Franklin's demonstration that if you flew a kite into an electric storm the lightning would discharge down a suspended wire in much the same way as "the virtue" would travel along Gray's rods and filaments, albeit with a considerably louder bang.
It was against this background that in February 1753 a certain Charles Marshall published a theoretical proposition of no little foresight. This is what he wrote:
"It is well known to all who are conversant in electrical experiments that the electric power may be propagated along a small wire, from one place to another, without being sensibly abated by the length of its progress; let, then, a set of wires equal in number to the letters of the alphabet be extended horizontally between two given places parallel to each other and each of them about an inch distant from that next to it. At every twenty yards' end let them be fixed in glass or Jewelers' cement to some firm body, both to prevent them from touching the earth or any other non-electric, and from breaking from their own gravity." (Marshall, 1753, cited in Naughton, 2002.)
Naughton then goes on to explain Marshall's proposed method of operating the telegraph. Slips of paper bearing the letters of the alphabet were to be placed an eighth of an inch below suspended metallic balls at the receiving station and would be attracted to these balls as each wire was electrostatically charged from the sending station. It then only remained for a human operator at the receiving end to observe and record which letters moved, and in which order, for the message to have been successfully "telegraphed".
It is not recorded that Marshall did more than short-range experimentation with such an electrostatic telegraph, but he had awakened the interest of one of the principal users of signalling technology (such as it then was), namely the military. The next major development was by Citizen Claude Chappe (1763-1805) during the French Revolutionary War. Chappe's ocular telegraph consisted of large pointers mounted on gantries, whose angular position sequentially coded each letter of the message to be sent. Repeater stations were situated in direct line of sight every few miles, and at night lanterns were tied to the arms to maintain the service. There was an operational 15-station link between Paris and Lille (144 miles) by July 1794. A longer link from Paris to Toulon (475 miles) consisted of 120 towers and boasted a transmission time of about 12 minutes. A longer one still, from Paris to Milan, was under construction at the time of Chappe's early death, and before the method was rendered obsolete in the mid-nineteenth century by the various electromagnetic telegraphs (see below), the Chappe network had expanded to 556 stations (Kingsford, 1969) covering 3000 miles (Derry and Williams, 1960).
Not wishing to be outdone by their enemy, Lord George Murray of the British Royal Navy invented the shutter telegraph in 1796. This consisted of a large shielded lantern revealed by six spatially separated slots. Light could be "sent" by opening the slots in appropriately coded combinations. The number of different signals depends upon the number of slots, and with a six-slot system this amounts to 63 combinations. The lights were "read" by observers down the line equipped with telescopes, and then retransmitted by "repeater stations" further down the line until the message reached its final destination. Fifteen hilltop signal stations were set up between London and Deal (70 miles), and the first message was passed on 27th January 1796, taking two minutes to send and acknowledge. A 10-station link was then established between London and Portsmouth (65 miles), and a 22-station extension from Portsmouth to Plymouth was operational in 1806 (which, incidentally, is why there are still so many Telegraph Hills in the south of England). The send-and-acknowledge time to Plymouth was three minutes. In the end, however, the Royal Navy remained unimpressed with the trials of Murray's shutter telegraph, and simply copied the French system. Admiral Sir Home Popham gave the gantry system the name "semaphore" (Greek sema - "sign", plus pherein - "to carry"), and an eight-station link (the two termini, plus six intermediates) opened between London and Chatham Dockyard on 3rd July 1816. (See Holzmann and Pehrson, 1994, for further details, if interested.)
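The 63-combination figure follows from simple combinatorics: six independently opened slots give 2^6 = 64 open/closed patterns, of which the all-closed pattern carries no signal. A quick illustrative check:

```python
# Sketch of the shutter-telegraph signal space: each slot is
# independently open (1) or closed (0); the all-closed pattern is
# indistinguishable from "no message", so it cannot be used.

from itertools import product

def signals(slots):
    """Enumerate every usable (non-blank) shutter pattern."""
    return [p for p in product((0, 1), repeat=slots) if any(p)]

print(len(signals(6)))  # 63
```

More slots buy signals exponentially: a seven-slot system would have offered 127 combinations, a five-slot system only 31.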
Of course it is the nineteenth century electrical telegraphy systems which sit most neatly on the route to modern computing, and the key invention here was the electromagnetic relay .....
Key Inventions - The Electromagnet (1825), the Electromagnetic Telegraph (1830), and the Electromagnetic Relay (1835): In 1825, the British amateur physicist William Sturgeon (1783-1850) noted that if an iron slug was placed inside a coil of wire its magnetic properties could be controlled by passing a current through the coil. It became an "electro-magnet" when a current was passed, and could be used to suspend large blocks of ferrous material, and it reverted to a useless piece of iron when the current was switched off, releasing whatever had been attracted to it. This discovery was then extended and applied by the American Joseph Henry (1795-1878) as an electromagnetic telegraph. Around 1830, Henry found he could significantly increase the output power of an electromagnet by using a number of separate sub-coils wired up in parallel, rather than a single coil. Using such equipment, he could deflect a suitably mounted piece of iron with enough force to sound a bell, and he could do this at a distance of one mile (Hochfelder, 1998). This form of remote reaction was used in the Wheatstone-Cooke telegraph (see next). By 1835, however, Henry had gone an important step further, using the deflected piece of iron to complete an electrical circuit of its own, so that a remote power source could be brought in or out of play on demand. Moreover, that remote switch might itself power up a second electromagnet at an even more remote, third, station, which might switch in a third electromagnet at a fourth station, and so on, thus "relaying" the original signal as far as you wanted it to go. The electromagnetic switch was therefore commonly referred to as a "relay", because a string of them could pass signals the way athletes pass the baton in a relay race. If you now made the wiring of each constituent leg of the system long enough, then you had the basis of a very long distance telegraph. 
You could make information travel down a wire at the speed of electricity, and you could do it between cities, countries, and even continents.
Three rival teams set out to make use of this new technology, and they ended up with two very different systems. In Britain, Charles Wheatstone (1802-1875) and William Cooke (1806-1879) patented the Five Needle Telegraph in 1837. This system consisted of six parallel wires connecting five local switches to five remote electromagnets. Each electromagnet acted upon a carefully mounted compass needle, and could flick this needle left or right from a vertical resting position. When this happened, the needle pointed to a matrix of letters set out on a backing panel, and, when two needles were simultaneously keyed, the operator simply read off the single letter at the point of their intersection, and wrote it down (as it happened, the early systems would only cope with 20 letters, so operators had to guess at the missing ones, C, J, Q, U, X, and Z). The system was operational between Paddington and Slough railway stations by 1839, and members of the public were paying a shilling a time just to see it sending other people's messages. A public system was opened between London and Portsmouth in 1844. A Wheatstone "clone" system was developed in India by William (later Sir William) O'Shaughnessy, of the East India Company, who successfully demonstrated an experimental 13-mile telegraph line near Calcutta in 1839 (John H. Lienhard, "The Engines of our Ingenuity" website, University of Houston, 2002 online).
In the US, meanwhile, Joseph Henry and Samuel Morse (1791-1872) had independently been concentrating their efforts on developing one-wire telegraphs. Henry's approach has already been described, and required that a receiving operator faithfully, but manually, record all arriving signals, be they delivered by bell or whatever. Morse, on the other hand, an art lecturer turned engineer, had resolved that the receiving apparatus should be a "recording telegraph" - it should automatically create its own permanent record of what had been sent to it. Morse had his original ideas in 1832, and, between 1835 and 1838, perfected an electromagnetic "register", capable of remotely producing momentary deflections in an otherwise straight pencil trace on a slowly moving spool of paper. This was soon replaced by an embossed trace on a paper tape which was advanced every time it was activated by the sending signal key (Pope, 1881), and eventually by a fountain pen tracer.
The effective range of the trial system was not impressive, however, due to degradation of the signal over distances in excess of about one mile, and so Morse consulted a New York University colleague of his, a chemistry professor named Leonard Gale, who pointed him to Henry's previously published work on high-power electromagnets. Further development took place at the Speedwell Iron Works in Morristown, NJ, was funded by its proprietor's heir, Alfred Vail (1807-1859), and assisted by one of his technicians, William Baxter. By 1837, under Gale's guidance and thanks in no little part to Vail's and Baxter's enthusiastic development work, Morse had pushed the effective transmission range up to ten miles, and could cope with theoretically infinite distances by using repeater relays to refresh the signal every ten miles. Vail and Baxter were also instrumental in improving the sending key and the receiving register, and in perfecting the dot-dash code which still bears Morse's name (it is a matter of historical dispute whether Morse or Vail had the original idea). The system was demonstrated to the then President, Martin Van Buren, in 1838, and the code which went with it was patented in 1840. It took longer to train a good operator for the Morse system than the Wheatstone-Cooke system, but transmission speeds were higher (a skilled operator could transmit 40-50 words per minute).
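The dot-dash principle is easily illustrated in code. The table below uses the modern International Morse alphabet (which differs in places from the original 1840s American Morse that Morse and Vail devised), and only a sample of letters, purely to show the idea of a variable-length dot-dash code:

```python
# A small encoder using a sample of the modern International Morse
# alphabet (not the original 1840s American Morse) to illustrate the
# dot-dash principle: short letters get short codes, e.g. E is a
# single dot and T a single dash.

MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'H': '....', 'I': '..',   'L': '.-..', 'O': '---',  'S': '...',
    'T': '-',    'W': '.--',
}

def encode(message):
    """Encode a message; characters outside the sample table map to '/'."""
    return ' '.join(MORSE.get(ch, '/') for ch in message.upper())

print(encode('SOS'))  # ... --- ...
```

Giving the commonest letters the shortest codes is what made skilled Morse operators so much faster than the five-needle system's letter-at-a-time pointing.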
In 1843, Congressman Francis O.J. Smith, Chairman of the Commerce Committee, was taken into the partnership in return for speaking on its behalf in high places. His efforts bore fruit, and the US government granted $30,000 to build an experimental line from Washington, DC, to Baltimore, 40 miles away. This was officially opened on 24th May 1844, just after the next critical invention had been made .....
Key Invention - Electromagnetic Synchronisation: Alexander Bain (1811-1877) was a self-educated Scottish clockmaker and inventor, and in 1841 had the idea of using electromagnets to swing the pendulum in a clock. The power came on momentarily on one of the swings, and then went off automatically to allow a backswing. Properly adjusted, this system would deliver just enough impetus at just the right time to maintain the natural oscillation of the pendulum for as long as the battery held out. From 1843, Bain began to extend these techniques into a "chemical telegraph", a method of transmitting entire pages of text or pictures by progressive scanning (the same principle as the modern fax machine). At the heart of this system were two electromagnetic pendulums connected along the telegraph wire, so that the timing pulses from one of them could be applied to both. This clever arrangement allowed the two pendulums to be kept exactly in step at their respective stations. Bain then fitted the home pendulum with a bared electrical brush contact, and positioned the distant pendulum over a sheet of paper impregnated with potassium iodide solution (or similar), such that a visible decomposition took place whenever an electrical current passed through it. He then arranged for the home pendulum to swing back and forth across the face of a page of text typeset with metal type (or, if a picture, a copper etching thereof), so that the circuit was completed only where the brush contact touched the metal, and the carriage was automatically advanced fractionally after every oscillation. The receiving pendulum traced the corresponding contact points in blue on the impregnated paper, and also advanced the drawing carriage by the same distance after each oscillation, thus creating a two-dimensional facsimile copy.
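Bain's progressive-scanning idea, the same principle as the modern fax machine, can be sketched in a few lines. This is a toy model of the information flow only (the real synchronisation was done by the linked pendulums, not by code):

```python
# Toy sketch of Bain's progressive scanning: the "home" station sweeps
# the page line by line, sending one on/off contact per point; the
# "distant" station, kept in step, rebuilds the same two-dimensional
# image from the purely serial stream.

def scan(page):
    """Flatten the page into the serial contact stream sent down the wire."""
    for row in page:              # one pendulum sweep per row
        for cell in row:
            yield cell            # circuit closed (1) or open (0)

def receive(stream, width):
    """Reassemble the serial stream into rows, as the receiving pendulum
    and its advancing carriage did chemically on the treated paper."""
    bits = list(stream)
    return [bits[i:i + width] for i in range(0, len(bits), width)]

original = [[1, 0, 1],
            [0, 1, 0]]
copy = receive(scan(original), width=3)
print(copy == original)  # True
```

The whole scheme stands or falls on the two stations agreeing on the sweep timing and the line width, which is why the synchronised pendulums were the heart of the invention.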
By 1846, the Morse telegraph service was operational between Washington, DC, and New York, and new horses were beginning to join what looked to become a very lucrative race. One of these was the Vermont inventor Royal Earl House, who patented a printing telegraph that same year. What House did was link two 28-key piano-style keyboards by wire. Each piano key represented a letter of the alphabet, and, when depressed, caused the corresponding letter to print at the receiving end. A "shift" key gave each main key two optional values. The working principle of the machine was that the rotation of a 56-character typewheel at the sending end was synchronised (no longer an impossibility, given Bain's work with the linked pendulums) to coincide with a similar wheel at the receiving end. If the key corresponding to a particular character was depressed at the home station, it actuated the typewheel at the distant station just as the same character moved into the printing position [readers familiar with the "daisywheel" word processors of the 1970s will recognise this layout immediately]. It was thus an example of a "synchronous" data transmission system. House's equipment could transmit around 40 instantly readable words per minute, but was difficult to manufacture in bulk.
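The essence of House's synchronous scheme is that nothing but timing travels down the wire. The toy model below (illustrative only; the wheel contents are invented, not House's actual 56-character layout) shows how two typewheels kept in lockstep let a single timed pulse select a whole character:

```python
# Toy model of House's synchronous principle: both typewheels rotate in
# lockstep, so transmitting the tick at which the distant print hammer
# should fire is enough to select the letter -- the character itself is
# never sent.

WHEEL = [chr(c) for c in range(ord('A'), ord('Z') + 1)] + [' ', '.']

def send(message, wheel=WHEEL):
    """For each character, emit the wheel position at which to fire."""
    return [wheel.index(ch) for ch in message]

def receive(ticks, wheel=WHEEL):
    """The distant wheel, kept in step, prints at the same positions."""
    return ''.join(wheel[t] for t in ticks)

print(receive(send('HELLO WORLD')))  # HELLO WORLD
```

The obvious fragility is also visible here: if the two wheels drift out of step by even one position, every received character is wrong, which is why Bain-style synchronisation was the enabling prerequisite.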
The House system was closely followed in 1847 by the Siemens automatic dial telegraph, and in 1848 by the operational version of Bain's chemical telegraph (see above) and by a similar system developed by Frederick Bakewell. Bakewell's approach to full page scanning involved electrically insulating areas on a piece of flexible metal foil using shellac as ink. The prepared foil was then wrapped around a rotating cylinder, and a metal stylus tracked slowly around and along it, transmitting when it crossed a bared metal area and not transmitting when it crossed the shellac. A similar drum at the receiving end carried a sheet of impregnated paper, to receive the image. Bakewell demonstrated this system at the 1851 World's Fair in London, but for one reason or another it was another fifty years before there was much of a market for facsimile technology of this sort.
In 1851, the New York and Mississippi Valley Printing Telegraph Company (renamed the Western Union Telegraph Company in 1856) was incorporated to exploit both Morse and House licences, and by the following year an intense rivalry had developed between it and the many other companies springing up. Indeed, there were now so many companies involved that there followed a period of legal squabbling over patent rights between Morse, Henry, and Bain. The Morse patent at this time was owned by Morse, Gale, Vail, and Smith in equal shares, but the rival patents by House and Bain were beginning to make inroads into their business. Fortunately for the future Western Union, the US Supreme Court declared in 1852 that the Bain system had infringed Morse's patent. As a result, the Bain licences were suddenly worthless, and Western Union simply started to buy out or assimilate all its rivals.
However, the House and Bain systems were followed in 1855 by yet another alternative system devised by David E. Hughes (1831-1900). Hughes used a system of vibrating springs to effect his synchronisation, and achieved considerable commercial success because his machine was quicker and more robust than House's, and its performance in the hands of experienced operators was relatively error-free. Hughes sold the rights to his system to a group of New York businessmen, who founded the American Telegraph Company in 1857.
Progress was also being made in coping with increased customer demand without constantly laying new lines. Needless to say, this meant pushing increased traffic down the existing wires. Attention was therefore directed at the slow points in the existing system, and it turned out that the keying speed of the transmitting operator and the deciphering speed of the receiving operator were not the only limiting factors. For example, considerable time was also spent confirming the integrity of the connection before a transmission, and checking for its safe receipt afterwards. One solution was Morse's permanent record register, because this allowed receiving operators to be away from their equipment when messages arrived. They could, for example, be at their counter, selling a transmission in the opposite direction, or they could be decoding, and arranging for the onward delivery of, previous incoming messages. Other advances speeded up the sending process. One technique was to wipe a stylus across a set of long and short metal studs, so that the dots and the dashes for a particular letter would be generated automatically and without error. Another was to pre-enter the characters as perforations on paper tape and pass it over a brush contact reader which completed the sending circuit every time it encountered a perforation. The advantages here were (a) that each message could be prepared well before transmission (what we would today describe as "off-line"), and (b) that several operators could afford to be coding simultaneously because tape transmission was much faster than manual keying.
Nevertheless, the Holy Grail of telegraphy was not to send the messages faster, but to put more than one of them down the same wire at the same time. This was where the biggest money was to be made, and the first advance was made in 1852 by Moses Gerrish Farmer (1820-1893). He experimented with methods of "duplexing" each telegraph line, that is to say, methods of enabling two messages to travel along it simultaneously, in either the same or opposite directions, but without mutual interference. Much the same research was being carried out in Europe by the Viennese physicist, Julius Wilhelm Gintl (1804-1883), but the early systems remained impractical until further improved by Joseph B. Stearns in 1868, by Thomas A. Edison in 1872 (duplex), by Edison again in 1874 (quadruplex), and by Jean-Maurice-Émile Baudot in 1874 (quadruplex). Indeed, the system is still being improved today (albeit the "wires" are now microwave satellite links), as modern networking technology spreads its tentacles ever wider, and offers ever greater bandwidth per unit investment [to feel this technology at work, and for a glimpse of some modern telecommunications technology, click here].
By the end of 1861, funded in accordance with the Pacific Telegraph Act, 1860, the Western Union telegraph linked the Pacific and Atlantic seaboards, and in 1866 Cyrus W. Field of the Atlantic Telegraph Company successfully laid the first transatlantic telegraph cable (with advice from an ageing Wheatstone), and the Western Union took over the American Telegraph Company, thus putting all the main telegraphic systems under the same owner. And just as Western Union thought they had it made, along came the telephone .....
5 - Computing 1867-1883 - Things Start to Speed Up
The closing third of the nineteenth century saw a veritable explosion of human ingenuity. In the two years after the first transatlantic telegraph cable had been laid, the world invented not just the Nestlé baby formula and dynamite (so please check the labels carefully), but ticker tape and the QWERTY keyboard. Here is a timeline on the developments most closely related to computing, or the demand for it .....
6 - Computing 1884-1924 - Data, Data, Everywhere
In the end, the vigour of the late nineteenth century industrial economies did so much to build the modern Western way of life, and prompted so many inventions, that a corresponding demand for computation in the financial world grew up. More and more inventions meant more sales - more cash across more counters, more cashings up and close of business reconciliations, more invoices and delivery notes, more tax returns, more stockchecks, and more internal and external financial controls behind the scenes. The sheer logistics of running all the new businesses was frightening, so that other new businesses were established to cater for the data processing needs of the existing ones. Here is a timeline on the developments most closely related to computing, or the demand for it .....
The greatest single story from this period, however, is that of the development of punched card technology. This begins in 1884, when Herman Hollerith (1860-1929), a US Census Bureau civil servant and mechanical engineer, obtained his first patent for a punched card data recording system .....
Key Invention - The Hollerith Punched Card: This was a thin rectangle of card, with characters and numbers printed on it. To service these, Hollerith supplied a keyed punch, capable of punching holes through selected character positions on the card. If you moved the punch carriage across to column 1 and keyed "1", a hole would be punched through character position <column 1, row 1>, and so on. Many prospective layouts were tried out, beginning with 24 columns of 12 vertical positions, but one of the most successful layouts was the 1928 IBM variant, which used 80 columns of 12 vertical punching positions per column. The 12 vertical positions were defined as the digits 0-9, plus two additional positions known as X and Y. The clever arrangement was that 0-9 on their own meant 0-9, but 1-9 in combination with 0, X, or Y, meant one of the letters A-Z. This meant that a total of 36 characters could be squeezed out of only 12 punch positions.
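The zone-plus-digit idea can be sketched as follows. This follows the simplified account above (a zone punch X, Y, or 0 combined with a digit punch 1-9 selects a letter); the historical IBM card code differed in detail, for instance S-Z actually paired the 0-zone with digits 2-9:

```python
# Sketch of the zone-plus-digit card code described above (simplified;
# the historical IBM code assigned S-Z to 0-zone plus digits 2-9).
# A lone punch is a digit; a zone punch plus a digit punch is a letter.

ZONES = ['X', 'Y', '0']

def column_for(char):
    """Rows to punch in one 12-row column for a single character."""
    if char.isdigit():
        return {char}                       # a digit is a lone punch
    n = ord(char) - ord('A')                # A..Z -> 0..25
    return {ZONES[n // 9], str(n % 9 + 1)}  # zone punch + digit punch

def char_for(punches):
    """Read one column back: the inverse of column_for."""
    punches = set(punches)
    if len(punches) == 1:
        return punches.pop()                # lone punch: a digit
    zone = (punches & {'X', 'Y'} or {'0'}).pop()
    digit = (punches - {zone}).pop()
    return chr(ord('A') + ZONES.index(zone) * 9 + int(digit) - 1)

print(column_for('A'), char_for({'Y', '9'}))
```

Two punches per column thus cover 26 letters, and one punch per column the 10 digits, which is exactly the 36-character repertoire described above.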
Completed cards thus "stored" the keys which had been pressed, and if you keyed business data then the resulting patterns of holes acted as a business data store (the Falcon-Jacquard loom cards, by contrast, while they might have looked similar, were instruction stores rather than data stores). Three other important factors also contributed to the system's success .....
As noted in the timeline above, it was actually a colleague of Hollerith's, Dr. John Shaw Billings, who had the basic punched card idea around 1881, but he declined the opportunity to develop a product to go with the idea. Hollerith accordingly went ahead on his own, and by 1889 had enough experience for his Tabulating Machine Company to win the contract to process the data from the 1890 US census. The Census Bureau, for its part, was quick to buy, because it had only just finished manually processing the 1880 figures! All went perfectly, and within a year the machines had tabulated over 62 million US citizens (Cortada, 1993). Hollerith went on to win the census contracts for Austria, Canada, and Norway in 1891, and Russia (a massive 129 million citizens) in 1897. He was also quick to open the technology up to the larger private corporations for their operational and business intelligence needs. The New York Central Railroad began processing their freight-handling paperwork in 1895. The 1900 US census earned a gross revenue of $428,000, and in 1911 Hollerith's company merged with several others to become the Computing - Tabulating - Recording Company (CTR). This was the first of many commercial successes, and the punched card system dominated commercial data processing until well into the 1950s. [In fact, British Telecom were still arranging for programming trainees to use a manual card punch into the 1980s - if only for experience's sake. The present author used one in August 1980, and it worked perfectly (but painfully slowly).] In 1914, Thomas J. Watson joined CTR from NCR as general manager, and in 1924 Watson had CTR renamed as part of a corporate makeover, choosing the name International Business Machines (IBM), and we shall be hearing much more about them in Part 2 .....