The previous posters are correct in pointing out that "it's the application" that counts. A little history lesson... back in the old days (yes, pre-PC, pre-Mac) computers had an ALU (Arithmetic Logic Unit) and dynamic memory that had to be reloaded every time the system powered up. Static memory was so expensive that only a very small amount, maybe 1 KB, was used to "bootstrap" the loader program, which of course got shortened to "boot". As technology advanced, the CPU (Central Processing Unit) was introduced. CPUs had built-in instructions that programmers could manipulate a little more easily. Instructions like "branch on condition", "add", and "subtract" were much more comprehensible than "load register A", "store register A", "load register B", "exclusive OR register B", store results... yada yada.
Even with the new architecture, "programming" was still done at the machine-language level. A complete program that read and wrote files and totaled sales for a million-dollar company would be written in less than 32 KB of memory. Remember, memory was extremely expensive: approximately $2.00 per byte! That's right, 32 KB of memory would cost around $64,000. Why? This was not MOSFET memory (not yet invented) but miniature ferrite toroidal cores that were hand-woven together to make a memory array. You could fit hundreds of these "cores" in a tablespoon. Each core had four wires woven through it: an X and a Y address line, a read line, and an inhibit line. The memory controller had to inhibit against the destructive read of a core's value on a read operation and allow the change on a write operation. This memory operated in the microsecond range.
The one thing each programmer knew was that he or she owned all the memory in the computer. When his or her application was running, it was the only one taking up resources. This programming technique was followed for years. When Apple and IBM came out with their respective desktop computers, not much thought was given to performing multitasking operations. Memory (now MOSFET semiconductor) had come down in price, but was still expensive for the home hobbyist. So with 64 KB on the original IBM PC and DOS as the operating system, it was still prudent to run one application at a time. Again, the person who wrote the program had complete control of the system. Programs as exotic as "VisiCalc" and "Peachtree Accounting" were single-threaded applications.
As users wanted to do more with their desktop computers, programs such as "Top Level" (who else remembers that one?) would manage memory blocks by switching areas of memory between applications. The problem here was that many of the programs were still being written as though the programmer "owned" the machine. A nasty problem of programmer hygiene, where they would not "clean up" after themselves (closing open files, releasing allocated memory, deleting stacks, etc.), produced the so-called "memory leak". These were the zombies left behind in memory that would chew up more resources than existed.
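To make that concrete, here is a minimal C sketch of the kind of leak I mean; the function name, file, and buffer size are made up for illustration, not taken from any real program of the era:

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative only: a routine written as if it "owned the machine".
   It grabs memory and opens a file, then returns without cleaning up.
   Call it over and over (as a resident program might) and the leaked
   blocks and open file handles pile up until the system runs dry. */
void total_sales(const char *path)
{
    double *totals = malloc(1000 * sizeof *totals);  /* never freed  */
    FILE   *fp     = fopen(path, "r");               /* never closed */

    if (totals == NULL || fp == NULL)
        return;  /* bails out and still leaks whatever it did get */

    /* ... read records and accumulate totals here ... */

    /* The missing "hygiene" step: fclose(fp); free(totals); */
}
```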
So when OS/2 and then Windows 95 were developed, a method of using flags and semaphores was created to act as a traffic cop directing memory allocation and services. The problem here was that "old school" programmers didn't pick up on this multithreaded way of thinking/programming for quite a while, so there was quite a bit of finger-pointing as to which application was at fault. (Not that it's that much clearer now.) Multithreaded programming requires a lot of thought in the development process. Thread tuning takes a great working knowledge of the system the application will run on, a bag of chicken bones, and a whole bunch of black magic. But when it's right, you know it.
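For anyone who never saw the "traffic cop" idea in code, here is a small sketch using a POSIX mutex (one common flavor of semaphore); the shared counter, thread count, and names are purely illustrative, not anything from OS/2 or Windows 95 itself:

```c
#include <pthread.h>
#include <stdio.h>

/* Two threads both want to update one shared total. The mutex is the
   traffic cop: only the thread holding it may touch the shared data. */
static long shared_total = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* wait for the road to clear      */
        shared_total++;               /* the only code touching the data */
        pthread_mutex_unlock(&lock);  /* let the other thread through    */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Always 200000; drop the lock/unlock pair and it usually isn't. */
    printf("total = %ld\n", shared_total);
    return 0;
}
```

Build it with something like `gcc -pthread`. Take the lock/unlock pair out and you get exactly the kind of intermittent, who-broke-it bug that fueled all that finger-pointing.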