A Complete History Of Mainframe Computing

The 3090

Although not one of the better-known mainframes, the IBM 3090, announced in 1985, offered a solid advance on the System/370 architecture, one that not only continued the improvements in speed, but also increased the number of processors and gave them vector processing options.

Initially available only as the Model 200 and Model 400 (the first digit denoted the number of processors), the line was expanded dramatically over its short four years of existence. A uniprocessor version (the 1xx series) and a six-processor 600 series were added, as well as an enhanced version of each model (denoted with an "E" after the model number; for example, 600E). Even the original models were formidable, running at over 54 MHz and executing instructions almost twice as fast as the 3081s they replaced.

The next year, the 3090 was expanded to include the vector processing feature, which added 171 new instructions and sped up computation-intensive programs by a factor of 1.5 to 3. The "E" versions of the 3090 ran at a brisk 69 MHz and were capable of roughly 25 MIPS per processor. By comparison, the x86 processor of that era, the 80386, ran at 20 MHz, was capable of roughly 4 MIPS, supported only uniprocessor configurations, and had no vector instructions.
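To get a feel for what vector instructions buy you, here is a minimal Python sketch, using NumPy as a modern stand-in (this is not 3090 code): the point is simply that one vector operation replaces a whole scalar loop, which is how the 3090's vector facility accelerated number-crunching programs.

```python
# Rough illustration of scalar vs. vector processing.
# NumPy arrays stand in for a hardware vector facility: one operation
# applies to many elements at once, instead of one element per loop pass.
import numpy as np

a = np.random.rand(100_000)
b = np.random.rand(100_000)

# Scalar style: one multiply per loop iteration.
c_scalar = np.empty_like(a)
for i in range(len(a)):
    c_scalar[i] = a[i] * b[i]

# Vector style: a single operation over the whole arrays.
c_vector = a * b

assert np.allclose(c_scalar, c_vector)
```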

The 3090 was replaced after four short years by the ES/9000 line. With local area networks (LANs) gaining popularity and powerful new processors like the 80486 and the many RISC designs (including IBM's own POWER) arriving, it was becoming increasingly clear that these technologies would soon render the mainframe obsolete and extinct, just as they were doing to the minicomputer. The handwriting was on the wall for anyone who wanted to read it. Or was it?

The ES/9000

In late 1990, IBM replaced the illustrious 3090 with the ES/9000 line, which ushered in the era of fiber optics with a technology IBM called ESCON, or Enterprise Systems Connection. Naturally, this was not the only new thing about these systems. In fact, Thomas J. Watson Jr. considered the ES/9000 the most important release in the company's history. Even more important than the System/360, you ask? Well, Mr. Watson thought so.

So let us assume he was lucid and not simply indulging in hyperbole. Certainly ESCON was an important technology: a serial, fiber optic channel that, at release, could transmit data at 10 MB/s over distances of up to nine kilometers. Or maybe he was referring to the massive 9 GB of memory these systems could use? Or perhaps it was the ability to harness eight processors in one sysplex, which allowed them to be treated as one logical unit? Then again, for the first time, one could create multiple logical partitions, allocate processor resources to each, and run any of the new (and compatible) Enterprise System Architecture/390 operating systems on them. Maybe that was it.
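Logical partitioning is easier to picture with a toy model. The sketch below is purely conceptual Python with invented names, not anything IBM-specific: a fixed pool of processor capacity is divided among partitions according to configured weights, each partition then behaving like its own machine running its own operating system image.

```python
# Toy model of logical partitioning: divide a fixed pool of processor
# capacity among partitions in proportion to configured weights.
# Purely illustrative; real partition management is far more sophisticated.

def allocate_capacity(total_cpus, weights):
    """Split total_cpus among partitions in proportion to their weights."""
    total_weight = sum(weights.values())
    return {name: total_cpus * w / total_weight for name, w in weights.items()}

# Three hypothetical partitions, each of which could run its own OS image.
shares = allocate_capacity(8, {"PROD": 5, "TEST": 2, "DEV": 1})
print(shares)  # {'PROD': 5.0, 'TEST': 2.0, 'DEV': 1.0}
```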

I doubt it was the performance, which was roughly 1.7 to 1.9 times that of the 3090/600J (the previous fastest IBM mainframe) in commercial applications, 2.0 to 2.7 times in scalar work, and 2.0 to 2.8 times in vector performance. Although impressive, we've seen similar jumps between generations before. None of this sounds so earth-shattering that it should be the most important release in the most important computer company's history, does it? Yes, by today's standards 9 GB is a lot, and 10 MB/s over nine kilometers is faster than the Internet speeds to which most of us have access. But serial transmission has been around for a few years now, and virtualization is becoming more common all the time. Eight processors is a good number, but dual-socket, quad-core machines are not that rare anymore, and we'll soon have single processors with that many cores. So, I just don't know.

Maybe it had something to do with it being released in 1990. You know, when the 486 was hot and George H.W. Bush was in the first part of his term; before Yahoo! existed, and about six years before the first article appeared on Tom's Hardware about Softmenu BIOS features for Socket 7 motherboards. Taken in the context of its time, it was a monumental achievement, with so many important advances in so many aspects of the system. All in all, it's very hard to disagree with Mr. Watson. Would you have expected otherwise from such a distinguished and accomplished person?

But although this marvel has technology that hardly seems old even by today's standards, our story is surely not done. What can top the ES/9000? It's hard to imagine, but then again, it's even harder to imagine a computer line staying the same for 19 years. So, let's take a look at the latest and greatest from Big Blue.

The System z10 EC

While this article is supposed to be a history of big computers, this last entry is about a computer that is still being sold today. But it was sold yesterday too, and that's history, right? So, let's take a look at IBM's biggest and baddest computer on the planet, the System z10 EC.

In this day and age, it's hard to imagine a physically large computer, but IBM did manage to create a 30-square-foot beast that weighs in at over 5,000 pounds and consumes 27,500 watts of power. Still not impressed? How about 1,520 GB of memory? Yes, that's a bit more than the 6 GB of most Core i7-based enthusiast boxes. In fact, it's more than the capacity of the average hard disk in a Nehalem-based PC. It can also have 1,024 ESCON, 336 FICON Express4, 336 FICON Express2, 120 FICON Express, 96 OSA-Express3, and 48 OSA-Express2 channels. That's more I/O than the X58, wouldn't you agree? Maybe several orders of magnitude more? This amazing machine can even host up to 16 virtual LANs in one box.

Needless to say, these computers far exceed your normal server and, in fact, can consolidate many smaller x86 machines. Rather than fading into oblivion, mainframes are finding customers that never used them before and that wish to consolidate their x86 servers for space and energy savings. The flexibility of these servers is truly impressive: one can stock them with up to 64 Integrated Facility for Linux (IFL) processors if Linux is the operating system of choice, or add up to 32 zAAP processors to assist with integrating Web applications using Java or XML with back-end databases. There can also be up to 32 zIIP processors for data, transaction processing, and network workloads, which are often used for ERP, CRM, and XML applications, as well as IPSec data encryption.

The main processor, the z10 processor unit chip, has a rich CISC design that can execute 894 instructions, 668 of which are hardwired. In a nod to the ENIAC, the processor even supports hardware decimal floating point operations, which limit rounding errors and are much faster than computing in binary and converting. On top of all this, it can still run software written for the System/360, which is now 45 years old, as well as the amazingly solid MVS operating system, though it's now called z/OS. One can have up to 64 of these 4.4 GHz quad-core monsters running, designed for 99.999% uptime. It is no wonder these machines are selling well, as they offer incredible reliability, excellent and flexible performance, capacity that is hard to imagine, and very advanced, yet rock-solid, software.
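The rounding problem that decimal hardware addresses is easy to demonstrate. Here is a minimal Python sketch, using the software decimal module as a stand-in for the z10's hardware support: binary floating point cannot represent 0.10 exactly, so a simple monetary sum drifts, while decimal arithmetic gets it exactly right.

```python
# Binary floating point cannot represent 0.10 exactly, so small errors
# accumulate in money math; decimal arithmetic avoids this. Python's
# decimal module is a software stand-in for what the z10 does in hardware.
from decimal import Decimal

# Sum one thousand ten-cent charges both ways.
binary_total = sum(0.10 for _ in range(1000))
decimal_total = sum(Decimal("0.10") for _ in range(1000))

print(binary_total)   # e.g. 99.9999999999986 (accumulated rounding error)
print(decimal_total)  # 100.00 (exact)
```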

As suggested, the virtualization capabilities of these machines are far beyond those of mere mortal servers. Naturally, they can run multiple operating systems, including Linux, z/OS (which includes a full version of UNIX), z/VM, and OpenSolaris, but more than that, they can shift capacity between partitions on the fly, without disruption, when one partition needs more than it has. One can even bring extra processors online for short periods of burst activity and, if there are known peaks, schedule them for certain times of the day.

These remarkable machines have capabilities so advanced that it might be difficult to get your mind around them. Setting aside for a moment their remarkable performance and flexibility, it is still dumbfounding how reliable they are. They feature, for example, something called "lock-stepping," in which each result-producing instruction is run twice and the results are compared to make sure they match. If they do not, the instruction is re-executed and the computer attempts to locate where the error occurred. It can even move in-flight instructions to another processor, hiding any negative effects of the error from the user's perspective. More than this, when the machines are used in a parallel sysplex (clustering up to 32 mainframes into one logical image), one can update all the software and hardware on any mainframe without any downtime or disruption at all.
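The idea behind lock-stepping fits in a few lines. The sketch below is conceptual Python with invented names; on the real machine this happens in silicon, per instruction, with transparent failover to a spare processor. The logic is simply: execute twice, compare, and retry or escalate on a mismatch.

```python
# Conceptual sketch of lock-stepped execution: run each operation twice,
# compare the results, and recover on a mismatch. The z10 does this in
# hardware for every instruction; ordinary Python callables stand in here.

def lockstep_execute(op, *args, retries=1):
    """Run op twice and compare; retry on mismatch, escalate if persistent."""
    for _ in range(retries + 1):
        first = op(*args)
        second = op(*args)
        if first == second:
            return first
        # Mismatch: assume a transient fault and try again.
    # Persistent mismatch: a real machine would move the in-flight work
    # to a spare processor; here we just signal the failure.
    raise RuntimeError("unrecoverable mismatch; fail over to spare processor")

result = lockstep_execute(lambda a, b: a + b, 2, 3)
print(result)  # 5
```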

These magnificent machines are dinosaurs only in the sense that they make the average desktop look small by comparison. They are far more advanced, powerful, flexible, capacious, and useful than the PCs we all know and love, not only in hardware, but in the incredible stability of their system software. They are still very much part of the backbone of computing and show absolutely no signs of dying. On the contrary, their sales increase every year. In fact, how could it be any other way?

Mainframes arguably express man's highest achievement, not only in the amazing amount of thought and intelligence invested in them, but also in the sublime role they have had, and still have, in human life and the endeavors of our kind. Perhaps rather than dinosaurs, they are like something even older. Like diamonds, they are a combination of many ordinary parts that, when brought together in a certain way, through nature or through extraordinary thought, become something far greater than the sum of the ordinary.

  • seboj
    Mainframes arguably express man's highest achievement

    But will it Blend?
  • Ramar
    Wonderful article, thanks Tom's. =]
    Killed a good hour of my day, and I very much enjoyed it.
  • 1ce
Really cool. One observation: on page 7, I think the magnetic drum is rotating at 12,500 revolutions per minute, not per second... If my hard drive could spin at 12,500 revolutions per second, I'm sure it could do all sorts of amazing things, like flying or running Crysis.
  • pugwash
Good article, although not quite "Complete". There is no mention of Colossus (which was used to break Enigma codes from 1944) or the Manchester Small-Scale Experimental Machine (SSEM), nicknamed Baby, the world's first stored-program computer, which ran its first program in June 1948.
  • neiroatopelcc
    So the ABC was in fact the first mobile computer? The picture does show wheels under the table at least :) But I guess netbooks are easier to handle, and have batteries
  • dunnody
I am with pugwash - it's a good article, but why does it seem a bit US-centric? No mention of Alan Turing or "Baby" and the Enigma code-cracking machines of Bletchley Park.
  • Err what about the Zuse Z3?
  • candide08
    I agree with others, in that I am surprised that there was not even a mention of a Turing machine or other very early "computers".

    Surely they qualified as Mainframes of their times?
  • It's a shame that multiplication, addition and division benchmarks are not persistently noted throughout the article.

I know that nowadays it's very much dependent on software design, but it would still be nice to follow the progression in terms of the calculation power of the machines.
  • theholylancer
    25 pages??? i love ad block but damn this is annoying