As for the broader question of clock speed vs. core count (assuming a program that can actually make use of the cores), you need to consider the total aggregated performance, something like this:
Clock Speed x Instructions Per Clock Per Core (IPCC) x Number of Cores x Scaling Factor Per Core (depends on the program)
You can gauge the IPCC part with a single-threaded (single-core) benchmark such as Cinebench's single-core option; the per-core scaling factor can also be obtained by benchmarking, and it is less than 1.
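The formula above can be sketched as a quick back-of-the-envelope calculation. The inputs below (clock, IPC, core count, scaling factor) are made-up illustrative values, not measured benchmark data:

```python
# Back-of-the-envelope aggregated-performance estimate.
# All inputs are hypothetical illustrative values, not real measurements.

def aggregate_perf(clock_ghz, ipcc, cores, scaling):
    """Clock x IPCC x cores x per-core scaling factor (0 < scaling <= 1)."""
    return clock_ghz * ipcc * cores * scaling

# Example: 3.5 GHz, IPC of 2.0, 4 cores, 90% per-core scaling.
print(aggregate_perf(3.5, 2.0, 4, 0.9))  # roughly 25.2
```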
Generally, AMD's IPCC is lower than Intel's for performance-category chips. It depends on the benchmark you use, but with Cinebench (a rendering benchmark) Intel's IPC on the Ivy Bridge series is easily 60% higher:
The total aggregated performance across all cores can likewise be measured with a multi-core benchmark such as Cinebench's multi-core option:
So you see, if the program is well multi-threaded (though many workloads are inherently single-threaded) and the scaling factor is therefore high, the AMD design is nearly as good as Ivy Bridge.
But for games, which tend to be heavily single-threaded with a lot of branching, AMD's design isn't quite as good:
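One common way to model that program-dependent scaling factor is Amdahl's law, which caps the benefit of extra cores by the fraction of the work that can actually run in parallel. The sketch below uses entirely hypothetical chip numbers (a high-IPC quad-core standing in for Ivy Bridge, a lower-IPC eight-core standing in for the AMD design) to show how the winner flips as the parallel fraction drops:

```python
# Amdahl's-law sketch: how the parallel fraction p shifts the balance
# between a high-IPC quad-core and a lower-IPC eight-core.
# All chip numbers are hypothetical, for illustration only.

def speedup(cores, p):
    """Amdahl's law: speedup over one core for parallel fraction p."""
    return 1.0 / ((1.0 - p) + p / cores)

def perf(clock_ghz, ipcc, cores, p):
    return clock_ghz * ipcc * speedup(cores, p)

for p in (0.95, 0.5):  # well-threaded renderer vs branchy game code
    quad = perf(3.5, 1.6, 4, p)  # high-IPC quad-core (Ivy-Bridge-like)
    octo = perf(4.0, 1.0, 8, p)  # lower-IPC eight-core (AMD-like)
    print(f"p={p}: quad={quad:.1f}, octo={octo:.1f}")
```

At p = 0.95 (a renderer-like workload) the eight-core comes out ahead; at p = 0.5 (game-like code) the quad-core wins, which is the trade-off described above.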
I'm sure the situation will improve for AMD in the future, given the general push towards exploiting more (albeit weaker) cores (as with the new consoles) and towards offloading massive computations to the integrated GPU portion via HSA.