A Complete History Of Mainframe Computing
Our trip down mainframe lane starts and ends, not so surprisingly, with IBM. Back in the 1930s, when a "computer" was actually a fellow with a slide rule who did computations for you, IBM was known mainly for its punched-card machines. The transformation of IBM from one of many sellers of business machines into the company that later became a computing monopoly was due in large part to the forward-looking leadership of Thomas Watson, Sr.
The Harvard machine was a manifestation of his vision, although in practical terms it was not a technological starting point for what followed. Still, it is worth a look, just to see how far things have come.
It all began in 1936, when Howard Aiken, a Harvard researcher, was trying to work through a problem relating to the design of vacuum tubes (a little ironic, as you will see). To make progress, he needed to solve a set of non-linear equations, and there was nothing available that could do it for him. Aiken proposed to his colleagues at Harvard that they build a large-scale calculator capable of solving such problems. His request was not well received.
Aiken then approached the Monroe Calculating Machine Company, which declined the proposal, so he took it to IBM. Aiken's proposal was essentially a requirements document, not a true design; it was up to IBM to figure out how to meet those requirements. The initial cost was estimated at $15,000, but that had ballooned to $100,000 by the time the proposal was formally accepted in 1939. It eventually cost IBM roughly $200,000 to build.
It was not until 1943 that the five-ton, 51-ft. long, mechanical beast ran its first calculation. Because the computer needed mechanical synchronization between its different calculating units, a shaft driven by a five-horsepower motor ran its entire length. The computer's "program" was created by inserting wire links into a plug board. Data was read from punched cards, and the results were punched onto cards or printed by electric typewriters. Even by the standards of the day, it was slow: it could manage only three additions or subtractions per second, and a single multiplication took a rather ponderous six seconds. Logarithms and trigonometric functions took over a minute each.
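Those timing figures make it easy to estimate how punishing even a small job was. A rough back-of-the-envelope sketch, using only the operation times quoted above (the polynomial example is my own illustration, not from the original article):

```python
# Rough timing model for the Harvard Mark I, using the figures quoted
# above: ~3 additions/subtractions per second, 6 s per multiplication,
# and over a minute for a logarithm or trig function.
ADD_TIME = 1.0 / 3   # seconds per addition or subtraction
MUL_TIME = 6.0       # seconds per multiplication
LOG_TIME = 60.0      # seconds per log/trig call ("over a minute")

def mark1_seconds(adds=0, muls=0, logs=0):
    """Estimated Mark I running time, in seconds, for a mix of operations."""
    return adds * ADD_TIME + muls * MUL_TIME + logs * LOG_TIME

# Evaluating a degree-10 polynomial by Horner's rule takes 10 multiplies
# and 10 additions: over a minute of machine time for a single data point.
print(round(mark1_seconds(adds=10, muls=10), 1))  # 63.3
```

At that rate, a table of a few hundred polynomial values was an overnight run.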
As mentioned, the Harvard Mark I was a technological dead-end, and it did not do much important work during the 15 years it was in use. Still, it was the first fully automated computing machine ever built. While it was very slow, mechanical, and lacked necessities like conditional branching, it was a computer, and it offered a tiny glimpse of what was yet to come.
Killed a good hour of my day, and I very much enjoyed it.
But will it Blend?
Surely they qualified as Mainframes of their times?
I know that nowadays it's very much dependent on software design, but it would still be nice to follow the progression in terms of the calculation power of the machines.
Out of curiosity, since it's a metric I am more familiar with, what would the teraFLOPS rating be for the newest and bestest from IBM? And how much would one of those bad boys set you back in the wallet?
Was a very educational and interesting article.
Many of these statements are sure to be wrong. 1) For sure, it would not be faster at floating point than integer. 2) Index registers have to do with memory addressing, not branching.
The choice of computers was U.S.-centric, because computers were U.S.-centric. I chose only one mechanical computer, and it was made by IBM, since they were the dominant company. To add more computers would have been boring, and none of them were important technological milestones. So, while they might be specifically interesting to you, I was of the opinion that too many computers from the same time frame would be boring. I almost chose the EDSAC over the EDVAC, but went with the first design over the first implementation.
With regards to the index registers, "the IBM 704 added index registers and a “TSX” instruction that would branch to an address but leave the address of the TSX in an index register. A single unmodified branch could use that index register value to return."
Loops involve branching, branching involves memory addressing.
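For readers unfamiliar with the mechanism, the TSX pattern described above can be sketched as a toy interpreter. The opcodes and program here are hypothetical, for illustration only, not real 704 assembly: the call saves the caller's address in an index register, and the subroutine returns by branching relative to that saved address.

```python
# Toy sketch of a TSX-style subroutine call: a branch that records its
# own address in an index register, so the callee can branch back.
def run(program):
    pc, index_reg, output = 0, None, []
    while pc < len(program):
        op, arg = program[pc]
        if op == "TSX":       # branch to arg, saving the caller's address
            index_reg = pc
            pc = arg
            continue
        if op == "RET":       # return: branch to (saved address + 1)
            pc = index_reg + 1
            continue
        if op == "HALT":
            break
        if op == "PRINT":
            output.append(arg)
        pc += 1
    return output

prog = [
    ("PRINT", "before call"),
    ("TSX", 4),                   # call the subroutine at address 4
    ("PRINT", "after call"),
    ("HALT", None),
    ("PRINT", "in subroutine"),   # subroutine body starts here
    ("RET", None),
]
# run(prog) -> ["before call", "in subroutine", "after call"]
```

The point is that the return address lives in an index register rather than on a stack, which is exactly why a single unmodified branch through that register suffices to return.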
With regards to floating point vis-a-vis integer, you need to be more careful about what you're sure of. For one, multiplies and divides are generally slower, being much more complex. But, more to the point, this information is available directly from IBM.
Any mention of mainframes without the Honeywell H-800 series, the H-200 series, or Multics leaves out systems that have had a large influence on computing as we know it. The H-800 was one of the first multiprocessing systems of the late '50s, the H-200 was Honeywell's answer to the 1401 in the '60s, and Multics merely contributed much of the hardware architecture for the Intel CPU used in today's PCs and foreshadowed UNIX and many of the development tools we use today. I saw no mention of GE and their 600-6000 series. And NCR. (Remember the term "BUNCH" as the competitors to IBM.)
So starting in the '50s, you should also have the history of the BUNCH woven in, even through to their demise. Not every great idea originated from IBM (though many did).
http://abcnews.go.com/Technology/story?id=3951187&page=1
Thanks Rich Arzoomanian for writing this article.