The frequency is the same: each core gets a clock signal that alternates four billion times a second in a 4 GHz processor. Having four cores doesn't magically produce a signal that alternates 16 billion times a second; they are all feeding from the same clock signal.
What has increased is your MIPS and FLOPS, which are the actual metrics of computing performance. For a perfectly parallel task, four cores let you perform four times as many instructions per clock cycle, but each clock cycle still takes one four-billionth of a second (that's what 4 GHz means).
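If it helps, here's that point as a quick back-of-envelope sketch (the 1.0 instructions-per-cycle figure is just an assumption for illustration, not a real CPU's number):

```python
# Back-of-envelope sketch, not a benchmark: numbers are illustrative.
clock_hz = 4e9                 # 4 GHz clock, shared by every core
cycle_time_s = 1 / clock_hz    # one cycle lasts 1/4,000,000,000 s, regardless of core count

ipc_per_core = 1.0             # assumed: one instruction per cycle per core
for cores in (1, 4):
    ips = cores * clock_hz * ipc_per_core
    print(f"{cores} core(s): cycle time {cycle_time_s} s, {ips:.1e} instructions/s")
```

The cycle time never changes; only the total instructions per second scales with the core count.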
Avoid confusing clock speed with how much a CPU can do; as soon as you cross between microarchitectures, clock speed becomes a meaningless number. A good example of this is how 3.3 GHz Core 2 Quads were notably outperformed by the 2.66 GHz i7 920 when it first launched. The Core 2s had a roughly 25% advantage in clock speed, but the i7 architecture was about 30% more efficient per clock, so it easily closed the gap.
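You can sanity-check that with a rough model where performance is proportional to clock times per-clock efficiency. The 1.30 factor below is just the ~30% figure from above treated as an assumption, not a measured value:

```python
# Rough model: relative performance ~ clock (GHz) x per-clock efficiency.
def relative_perf(clock_ghz, efficiency):
    return clock_ghz * efficiency

core2quad = relative_perf(3.3, 1.00)   # baseline efficiency
i7_920 = relative_perf(2.66, 1.30)     # assumed ~30% more work per clock

print(i7_920 > core2quad)  # the i7 comes out ahead despite the lower clock
```

Under this model the i7's efficiency more than cancels the Core 2's clock advantage, which matches how the launch benchmarks played out.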
That's a good layman's summation.
Look at it like this...
Within the same manufacturer and architecture generation, clock speed is a relative comparison of processor speed.
For example, compare the FX 6300 and FX 8350: the 6300 runs at 3.5 GHz and the 8350 at 4.0 GHz, so the 8350 will be notably faster.
Now compare the FX 6200 (Bulldozer) to the FX 6300 (Piledriver). The 6200 runs at 3.9 GHz, giving it a clock speed advantage. However, the 6300 is about 15% more efficient at processing instructions, which overcomes the 6200's 11.4% clock speed advantage. The 6300 will be faster despite the lower clock speed.
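The arithmetic there is easy to verify; the 15% efficiency figure is the rough one quoted above, used here purely for illustration:

```python
# Illustrative arithmetic, not benchmark data.
fx6200 = 3.9 * 1.00   # Bulldozer at 3.9 GHz, baseline efficiency
fx6300 = 3.5 * 1.15   # Piledriver at 3.5 GHz, assumed ~15% more per clock

clock_advantage = (3.9 - 3.5) / 3.5
print(f"6200 clock advantage: {clock_advantage:.1%}")  # ~11.4%
print(fx6300 > fx6200)  # the 6300's efficiency wins anyway
```

A 15% per-clock gain beats an 11.4% clock deficit, so the lower-clocked chip ends up ahead.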
You can do the same with Intel as well.
Use clock speed as a reference point within the same generation of the same architecture, as a relative measure of performance.