
8086 code

August 9, 2001 1:13:24 PM

The original 8086 machine code is very tightly packed: only 8 bits for more than 100 instructions (if I got it right).
Why is such an efficiently designed machine code such a no-no today?

I'm merely asking.




August 9, 2001 1:22:45 PM

The cynical answer is Microsoft & Intel.

The truth is that CPUs have seen massive growth in functionality, and therefore growth in instruction sets. People don't seem to worry so much about efficiency in their code anymore - but then, not many people use machine code directly anymore either.

--------------------------------

Look at the size of that thing!
August 9, 2001 1:38:09 PM

The CPU handles only machine code.
August 9, 2001 1:56:45 PM

I'm not sure I understand your question. But if you are asking why people rarely use machine code and assemblers directly, it is because compilers these days are better than they used to be. They are efficient enough to handle everyday tasks. Also, writing Windows in assembler is not exactly a walk in the park.

True, high-level code is not as efficient and fast as assembly instructions, but machines these days are fast enough to make sluggish code look pretty swift.

Games and such use all sorts of hardware acceleration that needs the code going to different places. Utilising that hardware can be pretty mind-boggling in assembler. Also, almost everyone sticks to popular APIs such as OpenGL and DirectX (now did I order the two that way for a reason? :wink: ), which means direct hardware interaction is virtually eliminated.

Some inner loops in games are still written in assembly instructions which are integrated into the C/C++ code. I heard Visual C++ and Borland C++ aren't too good at blending code like that; I think CodeWarrior is supposed to be pretty good at it. For Quake (I think), id Software hired Michael Abrash to do the assembly code, 'cos he's supposed to be the guru. So people still do use it here and there, but it's just not economically viable to develop general software in assembler anymore.


<font color=red><i>Tomorrow I will live, the fool does say
today itself's too late; the wise lived yesterday
August 9, 2001 2:13:09 PM

I do understand the software aspect, but why did they throw away the fundamental idea of efficient information handling?
Anonymous
August 9, 2001 2:31:57 PM

Trying to write software in the most efficient way is something that costs a lot of time, and so also a lot of money.
Compilers are made so that a lot of people can write 'decent' software in a convenient and 'fast' way.
It is then up to the compiler to translate that into the most efficient code. And compilers just don't have the intelligence that people have.
August 9, 2001 3:53:36 PM

OK, I really meant hardware. If information can be put in such a compact form, why demolish it?
August 9, 2001 10:34:11 PM

OK, now I really don't understand your question.


<font color=red><i>Tomorrow I will live, the fool does say
today itself's too late; the wise lived yesterday
August 10, 2001 6:40:01 AM

Today's 32-bit processors handle 32-bit values more quickly than 8- or 16-bit values. In reality, when you load an 8- or 16-bit value into a register it gets widened to a 32-bit value internally and the operation is performed at full width. Performing the operation on the full 32-bit value is actually much faster. This is why it's always best to use full 32-bit integers for everything. The smaller values take less space, but they are slower to handle.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
Anonymous
August 10, 2001 7:12:29 AM

If your question is why CPUs moved from 8-bit to 32-bit (and more), that has a lot of reasons.
Just try to program some calculations with large numbers when you only have 8-bit registers: you must split your numbers over several registers as soon as a number is greater than 255. Even 16-bit registers are still very clumsy to work with.
I can assure you, you make a lot of 'bugs' that way!
August 10, 2001 7:43:59 AM

because people are idiots... they've got faster CPUs and more RAM so they reckon they can afford to waste it... current system requirements would be about the same as they were years ago if machine code was used... the problem is that there are very few commands in assembler (the machine code language, I think...) so you have to do everything... there are no shortcuts, you have to write each command yourself... it becomes very tedious with complex products...

if in doubt blame microsoft...
August 10, 2001 12:07:30 PM

Quote:
This is why it's always best to use full 32-bit integers for everything


Unless of course you care about how much RAM you use. Some of us out here program for embedded systems; you have restricted RAM, so it's best to get in the habit of being as efficient in all aspects of your code as possible.

--------------------------------

Look at the size of that thing!
August 10, 2001 12:09:28 PM

Although you build up libraries of common functions - code re-use is allowed!

--------------------------------

Look at the size of that thing!
August 10, 2001 12:11:30 PM

Here's a vaguely related question: one of the first procs I programmed was the 8088 (an Intel product). How come we're not all using x88-series processors and compatibles??

--------------------------------

Look at the size of that thing!
August 10, 2001 12:36:44 PM

Quote:
because people are idiots... theyve got faster cpus and more ram so the reckon they can afford to waste it

That's my theory too. Once upon a time you HAD to watch how you coded and couldn't allow yourself to be "sloppy"; limited memory and slow CPU power meant you couldn't afford to be a lazy coder, and if it meant rewriting lines of code to make the program smaller or faster then it got rewritten. Now we have gigs of RAM and MHz of CPU, and if a program that could fit into 1 meg of RAM is badly coded and requires 64 meg of RAM then who cares??? Just change the minimum spec from a 486 with 4 meg of RAM to a P3/Athlon with 128 meg and everyone's happy - 'coz no one knows!!



Next time you wave - use all your fingers
August 10, 2001 2:42:33 PM

I know it is, but only specific commands can be used... using libraries will make it slower unless they are very specific...

if in doubt blame microsoft...
August 10, 2001 2:52:47 PM

There are some things that you repeatedly need to use.

I still have libraries for controlling the ACIA from a 6502 - basic functions like sending or receiving a data word or packet. Basic functions like that get massive re-use. I've recently been programming external port addressing on a T80 microcontroller, which is based on the Z80; I've had these routines for 15-plus years and still use them.

--------------------------------

Look at the size of that thing!
August 10, 2001 4:24:07 PM

The 8088 was still an x86 processor!
For software there was no difference, and it ran all the 8086 software just as easily.

It was designed to be used in older 8-bit systems built around the 8085 while still being a 16-bit processor. The difference between the 8086 and the 8088 was that the 8086 had a 16-bit data bus while the 8088's data bus was only 8 bits wide. Another minor difference was that the 8088 had a 4-byte instruction prefetch queue while the 8086 had 6.

And for the hardware guys: the IO/M signal that was used to differentiate between access to memory and an I/O port was inverted on the 8088 to be compatible with that of the 8085. Obviously the 8088 was supposed to work in 8085-style systems while running 8086 software; with this they saved a lot of money on 16-bit design work.

girish

<font color=blue>die-hard fans don't have heat-sinks!</font color=blue>
August 10, 2001 8:01:49 PM

Then you are trading off speed for program size. Making your application smaller will actually make it slower, but it will fit in less RAM. If this reduces your memory consumption enough to let you spend less money on RAM for the embedded system, then go for it. Otherwise it's just a pointless memory saving that reduces your performance.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
August 10, 2001 10:06:31 PM

>> Making your application smaller will actually make it slower, but it will fit in less RAM.

That's not true. In fact, most of the time it's the other way around. In Borland Delphi, Visual C++, Visual Basic and other high-level visual programming environments, form templates are stored in the executable and then loaded when you load the program. (A form is a pre-designed window that you build in the programming environment's interface.) If you were to design a program purely using the Windows API (CreateWindow, etc.), the program would be significantly smaller, faster and a lot more efficient. In fact, the major limitation on performance would be the Windows API itself: all the Microsoft DLLs slow things down significantly. But the problem is, you have to use the Windows API if you want to program for Windows. So I would say Windows itself is the main limitation on PC performance. If Microsoft were to optimize all the Windows API DLLs, all Windows programs would benefit. I believe that's why Windows 2000 is significantly faster than Windows 9x in business apps: it is a lot more optimized because it was rewritten from the ground up. DirectX is also a major bottleneck for Windows. By being a universal API, it adds a lot of overhead on the processor and the graphics card as it attempts to identify the available features of the specific CPU and graphics card.

AMD technology + Intel technology = Intel/AMD Pentathlon IV; the <b>ULTIMATE</b> PC processor
August 10, 2001 10:32:03 PM

In fact, even computer hardware is inefficient. A graphics card's antialiasing ability is a prime example: supersampling and multisampling are way too complicated. Wouldn't it be easier to just add more polygons? Using the T&L engine, it would cost a lot less performance-wise. That's why Unreal Tournament has a lot fewer jaggies and doesn't improve in performance with a graphics card upgrade: Unreal Tournament relies on the CPU to perform T&L! Make a game that fully utilizes the T&L engine of the graphics card and you have a great-looking game with excellent performance on a mid-class processor with a GeForce2 or 3.

AMD technology + Intel technology = Intel/AMD Pentathlon IV; the <b>ULTIMATE</b> PC processor
August 10, 2001 10:52:07 PM

I fail to see what any of this has to do with the discussion we were having. We were discussing the use of 8-, 16-, and 32-bit data types and their effect on the execution speed of your application. 32-bit data types, though larger, are manipulated much more quickly on today's 32-bit processors. Cutting your data down to 8-bit (or 16-bit) sizes would create a smaller application that would fit in less RAM, but would create less efficient code for the processor.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
August 13, 2001 10:26:43 AM

Thanks for the info, girish. As I said, I programmed the 8088 at college and didn't touch x86 processors again until the 386 was around (I spent the time in between on 68000s).



--------------------------------

Look at the size of that thing!
August 13, 2001 10:31:37 AM

I just feel it's a good habit to get into not using more memory than you need. If you're coding assembler then you're right, it's better to use 32-bit words if you've got the RAM.

If you're using C or another HLL, why not specify the word length as required and then use compiler options to optimise for speed or size?

--------------------------------

Look at the size of that thing!
August 13, 2001 12:07:57 PM

Well, it's still dangerous to code in assembler and use multi-byte words! BTW, "word" as I use it here means the width in bytes of the datum concerned, as opposed to the more colloquial assembler term for a 16-bit datum. A 32-bit datum is called a dword and a 64-bit datum a quadword; they are defined with DB (define byte), DW (word), DD (double word) and DQ (quad word).

A compiler will align words to a word boundary and dwords to a 4-byte boundary automatically. It's difficult, using the assembler with a single "ORG 0xxh", to lay out different multi-byte data items in order and still get them aligned to their respective widths.

You might save memory by defining all the data items at once, but then you lose out on performance, since oddly placed data items are accessed slowly. If you want to align them, you need to let the space between them go unused. Try aligning an array to a 16-byte cache line after you have defined enough variables with DB, DW and DDs - it might skip 15 bytes to get the array aligned to the next 16-byte boundary!

A compiler takes care of that automatically, although you can disable/enable it with a switch.

Basically it's not just about using 32 bits in assembler and shorter types in an HLL; it's about making the right combination of speed optimisation and size optimisation to get optimum performance.

girish

<font color=blue>die-hard fans don't have heat-sinks!</font color=blue>
August 13, 2001 12:37:04 PM

BTW, with an 8-bit processor you cannot natively support integers over a byte in size, which limits you to 0..255. With today's processors the limit is over 2 billion. Either way, there are always software tricks to work with larger numbers than the processor supports natively; it's just that they aren't efficient and they waste cycles.

AMD technology + Intel technology = Intel/AMD Pentathlon IV; the <b>ULTIMATE</b> PC processor
August 13, 2001 12:56:29 PM

It's a 16-bit processor we are talking about, which was succeeded by a 32-bit one. It's the same problem the 8086 and 286 faced with misaligned words, and that the 386 and later face with misaligned double words and quad words.

Gog was saying that it's better to use 32-bit variables even when you only need to store 1-bit flags. The 386 supports it anyway, and it's a better trick than keeping track of your variables in a variety of sizes and alignments.

Of course the limitation of the processor word size can be overcome in software, but we are talking about using a 32-bit processor on 8-bit values: whether it's better to use a logical 8-bit value or promote it to a dword and still use just the lower byte.

With today's machines having megabytes of RAM as standard, it might be worthwhile to do it, since it improves performance at the cost of more memory - which is abundantly available anyway, and almost free at that!

girish

<font color=blue>die-hard fans don't have heat-sinks!</font color=blue>
August 13, 2001 1:21:08 PM

I can honestly say I've never used a 16- or 32-bit word to store a one-bit flag, but I've gone as far as an 8-bit word in a 32-bit system. For flags I tend to use a flag variable and logic to find/set flags. As I said in my first comment, I've been programming embedded systems lately with low amounts of RAM, and therefore size wins the size vs speed issue (given that the controllers are based around Z80As, people aren't expecting high performance).

--------------------------------

Look at the size of that thing!
August 13, 2001 3:46:49 PM

Just stay with the type that is optimized for the processor you are programming for. I mean, what's the difference between 1 byte and 4 bytes (the Byte type and the Longint type) nowadays? With many computers today coming with 256MB of RAM, a few bytes more won't make a difference. But stay away from software-emulated types (such as Borland Delphi's Int64 or Real types) because they are not optimized for current 32-bit processors and slow things down.

I'm referring to Delphi because that's the programming language I use the most.

AMD technology + Intel technology = Intel/AMD Pentathlon IV; the <b>ULTIMATE</b> PC processor
August 13, 2001 6:02:27 PM

Well, I work on the Intel MCS-51 - the 8031/32 family of controllers - and National Semiconductor's COP8 controllers.

It's size that matters on these controllers, but my experience with them is that embedded apps are such that you can always have a bit more ROM as an option - that's just upgrading the chip. It's a very rare embedded app that needs and uses all the memory space of the controller.

Frankly, so far I've found the 32-kbyte Flash ROM of the COP8, as well as the 8k or 16k of the 89C51, sufficient for most apps. That is why I make a lot of tables in memory - that speeds things up by a ton.

Of course, on Z80 processors you can have the whole 64k of memory, since all of it is external. But demanding more memory can be costly on these kinds of systems.

girish

<font color=blue>die-hard fans don't have heat-sinks!</font color=blue>
August 13, 2001 6:09:01 PM

Of course, if you are into embedded systems you cannot lose a single byte: ROM is enough, but RAM is in short supply. Almost all embedded controllers have bit-addressable RAM, so implementing flags is easy. But even while I was learning assembly language for the 8088, I used a single word to store all the flags, with bit masks defined as constants: the TEST instruction to check them, and the AND, OR and XOR instructions to reset, set or complement individual bits. If these flags are aligned well, you can implement 32 flags in 4 bytes and pay the least penalty checking them, since all of them will be in the cache at once!

BTW, the 386 natively supports byte-sized flags with the conditional instructions that check a given byte for zero or non-zero. And byte accesses on the 386 are free!

girish

<font color=blue>die-hard fans don't have heat-sinks!</font color=blue>
August 13, 2001 7:01:13 PM

Firstly, this isn't really the point. There is a large difference between data types and APIs.

However, I'd just like to add that while the VCL (Delphi, C++ Builder) is indeed larger (it adds about 300KB to any Windows executable), it is NOT slower. This is a common myth, though, since Visual Basic's library is much slower than the straight API. In fact, some parts of Borland's library actually optimize things and make them faster than doing it the traditional Windows API way. I won't give specific examples because it would take too much space here, but just trust me.

Yes, Windows is pretty slow and unoptimized. DirectX is not, however. In fact, it is pretty well the fastest thing out there (save maybe OpenGL, which is about the same). The added "burden" of checking features is performed at most once per DirectX session, and it is still trivial compared to emulation speeds. The only way anyone could get even close to DirectX performance would be to code specifically for one video card and one processor, etc., and this is simply not feasible nowadays, when people don't all want XTs with a CGA card.
August 13, 2001 7:06:53 PM

I agree in principle with what you're trying to say, but you have some problems with the actual examples.

Firstly, antialiasing is not a bad idea. The only way to get near its quality is to bump up the resolution, not the number of polygons. I think if it were that simple, people would have thought of it and not bothered to develop FSAA! :) 

Secondly, the whole support of T&L and other technologies is really subjective. It just depends on where people want to put the burden. This issue has swung back and forth for years: whether the processor or the video card should handle something. Whatever the case, if a game decides to put more load on the CPU you need a better CPU, and the same goes for the video card. The only advantage of video cards is having hardware optimized for 3D math, but this is being added to CPUs now as well, so it really doesn't matter too much unless you have a slow CPU with a fast video card, or vice versa.
August 13, 2001 8:49:02 PM

I doubt DirectX is as optimized as you say it is. As long as an API such as DirectX uses DLLs, it's not optimized; what that basically means is API = slow. The idea of having a single standard interface through which all programs communicate with hardware is a good one; however, it adds overhead. Managing the loading and unloading of DLLs and maintaining backward compatibility extends the problem. Whenever I decide to write a new version of a program, I try as much as possible not to use old code. One day I hope the computer industry starts over from scratch, leaving behind all the backward compatibility. It is the notion of carrying previous mistakes into new software and hardware that really bugs me.

In all fairness, I believe Intel's approach to the 64-bit market may beat AMD's, because they are designing a radically new processor with new technology and enhancements. The entire x86 family is too old now. What we need is a new start!

Also, the Delphi/C++ Builder VCL is slow. Sure, they tell you it's faster than the Windows API, but how is that possible when it uses the Windows API? All the VCL does beyond the normal WinAPI is cache some pens and brushes. Forms and components are slower, and they consume more RAM and disk space. Nowadays, however, writing a program without a visual library takes forever and increases the chance of making mistakes. Loading and freeing resources is also difficult, because as a program becomes more complex the chance of forgetting to release a resource becomes ever greater. The VCL promises to ease that problem, but the VCL itself has leaks.


AMD technology + Intel technology = Intel/AMD Pentathlon IV; the <b>ULTIMATE</b> PC processor
August 14, 2001 12:03:28 PM

The actual chips I've been working with have been an old stock of T80s: they're Z80s with 8k ROM and 8 or 16k RAM in one chip. 90% of the ones we have in use are the 8k RAM version. Space is tight in there!

--------------------------------

Look at the size of that thing!
August 14, 2001 2:45:03 PM

Well, that's tight!
Why don't you change your inventory? MCS-51 or COP8 parts are cheap and can offer up to 32k of Flash ROM, but limited RAM - of the order of 256-512 bytes!

In such projects you must go for size wherever possible and for performance wherever necessary.

girish

<font color=blue>die-hard fans don't have heat-sinks!</font color=blue>
Anonymous
August 14, 2001 3:10:01 PM

I can see that DLLs eat memory for breakfast, but why are they slow?
I could be wrong about the behaviour of DLLs, but I thought they were a way to store object code using a standard calling convention. If you optimize the code and store it in a DLL, the code would still be optimized.

Skrue


August 14, 2001 3:29:05 PM

DLLs are <b>D</b>ynamic <b>L</b>ink <b>L</b>ibraries, and they actually save you a lot of memory.

Imagine if each of your programs had to include all the code to display windows, read and write files, print docs...! Everybody would write his own code, and all apps would look different, fight over the printer and other devices, and fight among themselves over whom the mouse, the keyboard and the display belong to..!

DLLs are implemented by the OS, and they allow all programs to share the code and thus save a lot of memory, since they are loaded just once. They also give a standardised interface to all apps, so that all your apps display the same type of windows, and their buttons and input boxes look the same and behave the same way.

One downside of DLLs is that there is a lot of overhead in initialisation and linkage (the mechanism by which your program calls these DLLs), and they are generalised and do a lot of error checking on the arguments passed, so they can prove costly in terms of speed. Especially if a DLL is used by just your one app, it may take up a large amount of space and be slower than statically linked code - but it makes distribution easy.

In fact, since you don't even need to include the standard system DLLs, you can distribute a large application on a floppy!

girish

<font color=blue>die-hard fans don't have heat-sinks!</font color=blue>
August 14, 2001 3:43:10 PM

In a perfect world, that's how DLLs would work. But dynamically accessing DLLs via LoadLibrary loads the DLL into memory if it's not loaded already - and what happens if you forget to call FreeLibrary? You get a resource leak! That's just annoying! Also, DLLs consume less overall disk space and memory because they can be shared, but they reduce performance and are harder to manage. You can't distribute a large app on a floppy; it would have to be an extremely small app, especially if it was built with a visual programming language. All the advantages of DLLs are lost, because the visual programming language stores a lot of redundant resources and forms inside the executable, making it huge and making it consume a lot of memory.

AMD technology + Intel technology = Intel/AMD Pentathlon IV; the <b>ULTIMATE</b> PC processor
August 14, 2001 3:59:56 PM

This was a case of "we have 5000+ of these suckers lying around, we'll use them" - effectively they're free (we've had them for years).

--------------------------------

Look at the size of that thing!
August 14, 2001 4:09:49 PM

Too bad!!! :smile:
Can you ship them overseas, since they are free!!! We have a lot of people who know the 8085, and we need to train them on the MCS-51, COP8 and MC6811; with those chips we might save a lot on the training of these guys.... :smile:

girish

<font color=blue>die-hard fans don't have heat-sinks!</font color=blue>
August 14, 2001 4:15:53 PM

Why do programmers "forget" to unload a DLL? They also forget to free memory, close files and release resources. Although Windows is supposed to release them when the app exits (the DLLs are not unloaded automatically), I haven't seen it happen; especially if a program crashes, all its resources are owned by its ghost and nobody is able to access them.

And why do people use visual tools? They produce hefty code that could be almost halved in size, with performance doubled, using the plain SDK. With forms, I guess you are using Visual Basic... too bad!

girish

<font color=blue>die-hard fans don't have heat-sinks!</font color=blue>
August 14, 2001 6:26:22 PM

I used to use Visual Basic; now I use Borland Delphi, which is based on Pascal. Programmers forget to unload resources because coding can get really complicated with hundreds of subroutines. A huge program can really get messy: there is a lot of jumping around in the code, and more than one thing can happen at a time. Users want responsive apps, so a lot of work gets done during the cycles when the user is not interacting with the program and the CPU is free.

Without visual programming, designing a user interface takes forever, and the chance of making a mistake increases. Visual programming eases the task of designing the user interface and helps you focus on the actual functionality of the program.

It is very difficult to manage resources. I wish Microsoft would do its job and let Windows handle this stuff automatically. Managing resources is supposed to be the operating system's job, not an app's job.

AMD technology + Intel technology = Intel/AMD Pentathlon IV; the <b>ULTIMATE</b> PC processor
August 14, 2001 6:47:24 PM

Well!
Quote:
Without visual programming, designing a user interface takes forever and the chance of making a mistake increases

I've seen many good apps written really well in plain Visual C with the SDK (not even MFC - I don't really appreciate more than 50% of my code being written by somebody else, especially Microsoft!). I use VB just for database apps; I hate databases, so it's good for me to finish them off as fast as possible.

I haven't ventured much into it yet, but I have even written a small Windows app in assembler! And I haven't made any mistakes that were difficult for me to debug.

If there is some discipline in developing the code, you don't run into such errors. Of course I hate these ISO and CMM standards - they make a clerk out of a programmer, who has to write more documents than code - but I have a system of my own, and so far it works for me.

girish

<font color=blue>die-hard fans don't have heat-sinks!</font color=blue>
Anonymous
a b à CPUs
August 15, 2001 7:26:58 AM

"DLLs are Dynamic Link Libraries, and they actually save you a lot of memory."
I agree, but they also could eat memory.
If u have to load 10 big dll's to get to 10 small functions
you have a big overhead.

The speed penalty from using dll's are not that high.
If you use static linking with the dll, all initialisation
would be done at startup. The cost of parameter checking is not that high for functions of reasonable size.

regards
Skrue
August 15, 2001 4:09:04 PM

Yes, that could be. Windows collects a set of functions into one DLL, and then you need to load the complete DLL even if you are using only a few of the functions implemented in it. In that way, they are memory eaters for sure!

But the most common DLLs are shared, and chances are that more than one program is using them. Anyway, if you are writing your own DLLs, you collect all your functions into one or two.

I don't see this as much of a problem, since the alternatives are not too good either.

girish

<font color=blue>die-hard fans don't have heat-sinks!</font color=blue>