How Intel expects Macintosh software to change

Santa Clara (CA) - When Apple CEO Steve Jobs presented his company's new Intel Core Duo-based Macintosh computers at Macworld Expo in San Francisco last week, behind him on the big screen loomed the imposing characters, 2 - 3x. According to company tests, Mac applications can run two to three times faster on a Core Duo-based Mac than on a PowerPC G5-based Mac. But Jobs was careful to preface that remark with a word of both caution and fairness: those gains, he said, are realized when applications are built for each processor using the best compilers available for it.

For the Intel Core Duo, the compiler used for Apple's test was made by Intel, confirmed James Reinders, Intel's director of marketing for software development products, in an exclusive interview with TG Daily. IBM, Reinders said, made the compiler used for the G5 benchmarks.

The shift in both hardware and software architecture to the Intel platform represents the second great exodus in the Macintosh's wandering, colorful history. The first came in 1994, about a decade after the Mac's introduction. At that time, under the stewardship of Michael Spindler, Apple successfully engineered a move away from the Motorola 680x0 platform to the RISC instruction set-based PowerPC platform, conceived jointly with Motorola and IBM. Accepting IBM - the former enemy - was made easier for Mac supporters by Motorola's co-development of the new processor, even though that company would later back away from the deal as it exited the CPU business altogether.

This year's exodus from PowerPC (or Power, as IBM refers to it) to Intel's Core Duo requires the crossing of a much broader architectural chasm. Helping Mac software along most of the way is a kind of virtual machine called Rosetta, which runs programs compiled to Power machine code in a seamless emulation mode. With the Core Duo being a much faster processor anyway, Mac users shouldn't notice much of a difference. But a difference is coming, Intel's Reinders told us, as the adoption of Intel architecture, coupled with the increased use of Intel compilers by Mac developers, may lead to a long-overdue rethinking of the methodologies developers use to write Macintosh applications.

"I think the most significant thing to be thinking about here is dual-core [architecture]," Reinders told us. "We need to get out of practices that think of a computer as something that can only do one thing at a time" Operating systems that continue to presume a single CPU as their core processor, he explained, tend to manage running tasks as though only one thing can be done at a time. Sophisticated, yet still single-core, systems utilize pre-emptive multitasking, which is a lot like cooking on all four stoves simultaneously, if you accept the notion that the chef can only stir one pot at a time.

"Even with all the pre-emptive multitasking the operating systems have done for us, the fact still was, the computer did one thing at a time," said Reinders. "Now it can do more...and Intel's been pretty clear, I think, that dual-core's not the end. There's nothing magical about stopping at dual-core, [though] that's where we are today."

The fact that multicore processors span the whole of Apple's current development, and will encompass its entire product line once the last of the G5 units are finally sold, means that developers should begin presuming - as opposed to testing for - parallelism and multi-threading. To that end, last week, Intel announced it was extending the beta program for its C++ and Fortran compilers, which integrate with Apple's Xcode integrated development environment (IDE). "The Intel compilers are the best at extracting the performance of the processor," professed Reinders, and his argument seems sensible enough: Intel is in the best position to interpret how machine code can be optimized for execution on the microprocessors it produces. "I think it will re-emphasize people thinking in an event-oriented nature, which I'm very fond of," he added, "and it does free people up to think in parallel very cleanly."

Event-oriented computing is a concept that dates back to the inception of the Common User Access model, inspired by the Xerox PARC laboratory's original experiments with graphical computing. Rather than asking the user a series of questions and processing tasks in response to each one, in sequence, a truly event-driven program presents users with a panel of buttons or options, all the while processing central tasks such as layout and printing on separate threads. While you might think such a model has existed since the first 1984 Macintosh, you may be surprised to learn how many programs actually withhold processing until events are recognized, rather than scheduling work they could be doing while they wait.
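
The difference is easiest to see in miniature. The sketch below is our own illustration of the pattern, not code from any Mac application (it uses POSIX threads, which Mac OS X supports; the function names are hypothetical): the main thread does nothing but respond to events as they arrive, while a separate thread keeps a central task such as layout moving the whole time.

    #include <pthread.h>
    #include <cstdio>

    // Background work that proceeds regardless of what the user is doing.
    static void* do_layout(void*) {
        std::printf("laying out pages...\n");
        return 0;
    }

    int main() {
        pthread_t worker;
        pthread_create(&worker, 0, do_layout, 0);  // start the central task

        // Simplified stand-in for a GUI event loop: handle each event as it
        // arrives, rather than halting all work until the user responds.
        for (int event = 0; event < 3; ++event)
            std::printf("handled event %d\n", event);

        pthread_join(worker, 0);  // collect the background task before exiting
        return 0;
    }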

To help move developers toward true event-orientation, Reinders said, Intel's Mac-based compilers will support the same parallel processing methodologies that its x86- and Itanium-based compilers have used. Developed by an industry consortium championed by Intel, Sun, and SGI, OpenMP is an API for an explicit set of compiler directives, which a programmer inserts at opportune locations in C++ or Fortran source code. Long-time programmers will recognize the #pragma directive as a flag that signals the compiler to perform a task under a specified condition; in this case, the condition is whether the target platform has multiprocessing (MP) capabilities. With pragmas inserted in source code, the developer can change target platforms - creating, for instance, one build for a single-core G5, another for a multicore G5, and another for Core Duo - without altering the source code.
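
In practice, a single directive is often all it takes. The example below is our own minimal illustration, not Intel's sample code: the pragma tells the compiler that the loop's iterations are independent and may be split across however many cores the target offers.

    #include <cstdio>

    int main() {
        const int N = 1000000;
        static float a[N], b[N];  // static: zero-initialized, off the stack

        // One OpenMP directive marks the loop as safe to parallelize;
        // the source is identical for a G5 build and a Core Duo build.
        #pragma omp parallel for
        for (int i = 0; i < N; ++i)
            a[i] = 2.0f * b[i];   // iterations don't depend on one another

        std::printf("done\n");
        return 0;
    }

Built with OpenMP enabled (for instance, with -fopenmp on GCC), the loop is divided across the available cores; built without it, the same file compiles to an ordinary serial loop.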

"Neither C++ nor Fortran were designed to be parallel programming languages," Reinders explained to us. Yet they remain heavily used, even in the face of arguments that a multiprocessing-oriented language might be better suited...if only someone would create one. "So OpenMP is a set of language extensions that give the compiler a little extra information about an application, so that the compiler can safely do optimization to run parts of the code in parallel."

The curious little matter of how any automatic system should know when it's safe or practical or desirable to split a thread of execution into two or four or eight paths is a familiar problem for everyone at Intel. Its Itanium processor architecture was built around the notion that explicit multithreading, such as the kind OpenMP facilitates, would become common practice among developers once they realized the performance payoffs. That hasn't happened, and Intel found itself inventing "hyperthreading" as a kind of stop-gap measure for simulated, automatic ("implicit") multithreading until it could start producing dual-core processors like the Core Duo.

A multicore processor's capability to divide threads among its cores on its own certainly can lead to noticeable performance gains for Mac OS X applications (and Windows and Linux apps, for that matter). But those gains can't quite eclipse what could be realized if developers simply learned to multithread as a matter of common practice. OpenMP, Intel's Reinders said, offers developers a way to adopt that practice with zero tradeoffs: a target platform that doesn't multithread, or a different brand of C++ compiler, can simply ignore any pragma directives it doesn't recognize, without errors.
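
That graceful fallback is easy to demonstrate. In the sketch below (ours; the _OPENMP macro is defined by the OpenMP specification whenever a compiler builds with the extensions enabled), the same source file builds and runs correctly either way - in parallel when OpenMP is switched on, serially when it isn't:

    #include <cstdio>

    int main() {
        #pragma omp parallel             // silently ignored by non-OpenMP compilers
        std::printf("hello from a thread\n");  // once per thread, or just once

    #ifdef _OPENMP
        std::printf("built with OpenMP support\n");
    #else
        std::printf("built as plain serial C++\n");
    #endif
        return 0;
    }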