Your intuition is correct. "Bits" alone for video RAM is not enough to tell you how good the performance will be.
There's a better metric (number) to look at which combines "bits" with "MHz" for the VRAM: bandwidth. When a video card has very good bandwidth, the data stream to the GPU (the chip itself) lets it work at its fullest (it won't be starved for data to process, in simple terms). Sometimes a bigger bandwidth needs a bigger "bit" count for very specific data types, and likewise for "MHz". There's also the "GDDR tech" to look at: "GDDR3" has half the bandwidth of "GDDR5" VRAM at the same speed (MHz), but it probably has the same (or very nearly the same) latency (which in simple words is "the time it takes to grab info"), so there's a trade-off between "how quickly we bring the data" (latency) and "how much data we bring per cycle" (bandwidth).
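To make that concrete, here's a rough back-of-the-envelope sketch (my own simplification; real cards vary) of how bus width and memory clock combine into the usual peak-bandwidth figure, and why GDDR5 roughly doubles GDDR3 at the same MHz (it moves about twice as many transfers per clock):

```python
# Rough sketch: peak bandwidth = (bus width in bytes) x (effective transfer rate).
# The transfers_per_clock values below are the commonly quoted ones
# (GDDR3 ~ 2, GDDR5 ~ 4); this is an approximation, not a spec quote.
def bandwidth_gb_s(bus_width_bits, memory_clock_mhz, transfers_per_clock):
    """Approximate peak memory bandwidth in GB/s."""
    bytes_per_transfer = bus_width_bits / 8          # e.g. 256-bit bus -> 32 bytes
    transfers_per_sec = memory_clock_mhz * 1e6 * transfers_per_clock
    return bytes_per_transfer * transfers_per_sec / 1e9

# Same 256-bit bus, same 1000 MHz memory clock:
print(bandwidth_gb_s(256, 1000, 2))  # GDDR3-style: 64.0 GB/s
print(bandwidth_gb_s(256, 1000, 4))  # GDDR5-style: 128.0 GB/s
```

This is why a card with a narrower bus can still match a wider-bus card on bandwidth if its memory runs fast enough, and vice versa.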
The width of the memory bus and the speed of the RAM combine to tell you a card's memory bandwidth, which is the important thing. Even that is overshadowed by the strength of the GPU in regards to overall performance, which of course is the only thing that really matters. If your current card is weaker than the 8500GT then it's simply not a good card; the memory bus width may be part of the reason why, but it's almost definitely not the main reason.
What you want to look at when choosing a card is not bus width or memory speed or memory amount or clock speed or shader count or anything other than how well it plays games, because that is the whole point. For a rough guide, there are some charts that do an OK job of ranking the cards based on general performance: http://www.overclock.net/graphics-cards-general/502403-... http://www.tomshardware.com/reviews/best-graphics-card,...
Those can be used to figure out which card you should be looking at. Then check prices and look for reviews with benchmarks at your monitor's native resolution to make a final decision.
Alright then. Now, what is a stream processor? Does a higher count mean better performance? And what is the difference between core and memory clock? Which one affects performance the most?
Explaining "stream processors" is a little more complex (for me at least), but you can think of them as the way ATI does its magic with the GPU (chip) to calculate stuff. nVidia's equivalent would be CUDA cores (that's being very simplistic). You might want to read the specifics in the GTX285 launch and HD4870 launch articles from Tom's, where they explain each part of the architecture they use in detail. And yes, that's a rule of thumb: the more stream processors or CUDA cores, the better they ought to perform.
Now, on your other question: core speed is like your CPU speed, and memory speed is like your RAM speed. The higher each individual value, the better that part will perform. And which one is more important? Both are. If you have a very fast GPU coupled with low-speed memory, the data output will be limited by the memory side, and vice versa. So there has to be a balance between chip speed and RAM speed. For a fast card, BOTH have to be fast; that's another rule of thumb. How much difference does each make? That's a very complex question too... But if you like to play with (for example) 4xAA and 8xAF all the time, then you need a lot of memory and memory speed at the same core clock for it to perform better, and so on with the different filters and options out there. Each option affects a different part of the pie.
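The "both have to be fast" point can be sketched as a toy bottleneck model (purely illustrative, not a real benchmark; the numbers are made up): whichever side is slower, core or memory, caps what the card can deliver.

```python
# Toy model: the frame rate a card can sustain is capped by whichever
# side runs out first -- the GPU core's compute rate or the memory's
# ability to feed it data. Heavy AA/AF settings mostly load the memory side.
def fps_estimate(core_capacity_fps, memory_capacity_fps):
    # The slower of the two sides sets the pace.
    return min(core_capacity_fps, memory_capacity_fps)

print(fps_estimate(120, 60))   # fast core + slow memory -> memory-limited: 60
print(fps_estimate(60, 120))   # slow core + fast memory -> core-limited: 60
print(fps_estimate(110, 120))  # balanced card -> 110, little is wasted
```

A card with one side much faster than the other wastes that extra speed, which is why balanced core and memory clocks matter more than a single big number.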