Kenny Sau Fung summed it up well. I have a few more technical details to add (if you're interested):
1: CPU-based rendering/computing is the old-fashioned way to compute things. CPUs support a wide range of instructions, including vector instructions (so each core can compute a set of values in parallel). CPU cores can also handle any loops and conditions. Compared to that, GPUs are much more "stupid": they handle conditions and many types of loops poorly, and cores that run out of work sit idle rather than moving on to the next step - but there are massively more of them than CPU cores.
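To make the difference concrete, here is a minimal sketch (the pixel values and the `brighten`/`collatz_steps` functions are my own illustrations, not from any real renderer): a GPU-friendly workload applies the same short operation to every element independently, while a branch-heavy loop with an unpredictable iteration count is exactly what GPUs handle badly.

```python
def brighten(pixel, amount=30):
    """Same few instructions for every pixel -- ideal for a GPU,
    where each of the many cores runs this on one element."""
    return min(pixel + amount, 255)

pixels = [0, 100, 200, 250]
print([brighten(p) for p in pixels])  # every element is independent

def collatz_steps(n):
    """Data-dependent branches, unknown loop length -- CPU territory."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps -- impossible to predict up front
```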
The best example is image processing or neural networks: say you have a million instances of the same task (one per pixel or node/synapse), and each task is just 20-100 CPU cycles. For a CPU that is a piece of cake, and it can even calculate 2-4 of them in parallel (or even 8 single-precision values with AVX2 instructions), but then it has a load latency. So you can do [number of cores] * [supported vector size] tasks at once. Say 4 cores and a vector size of 4 doubles (64-bit each): 16 tasks done every 50-150 CPU cycles.
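That envelope math, written out (the cycle count and clock speed are assumed round numbers from the paragraph above, not measurements):

```python
cores = 4               # physical CPU cores
vector_width = 4        # doubles per 256-bit AVX2 register
cycles_per_batch = 100  # rough midpoint of the 50-150 cycle estimate
clock_hz = 3e9          # assume a 3 GHz CPU

tasks_per_batch = cores * vector_width
print(tasks_per_batch)  # 16 tasks in flight per batch

tasks_per_second = clock_hz / cycles_per_batch * tasks_per_batch
print(f"{tasks_per_second:.0e} tasks/s")  # ~5e+08 on these assumptions
```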
On a GPU core you can do only 1 operation (talking about 64-bit doubles or 32-64 bit pixels), but you have 32 to 480 cores (depending on the card) to do it on. On top of that, if each task uses a lot of memory: a load from RAM to the CPU costs 300-500 CPU cycles, while a load from graphics RAM to the GPU costs much less.
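The same envelope math for the GPU side (the graphics-RAM latency is a hypothetical figure I picked for the "much less" claim; the rest comes from the numbers above):

```python
gpu_cores = 480            # high end of the 32-480 range above
cpu_tasks = 4 * 4          # cores * vector width, as above
gpu_tasks = gpu_cores * 1  # one 64-bit operation per core per batch

print(gpu_tasks / cpu_tasks)  # 30.0 -- raw parallelism advantage

ram_to_cpu_cycles = 400    # midpoint of the 300-500 cycle estimate
vram_to_gpu_cycles = 100   # hypothetical "much less" value
print(ram_to_cpu_cycles / vram_to_gpu_cycles)  # 4.0x cheaper loads
```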
So even if current rendering engines don't support GPU/OpenCL rendering, they will within 1-3 years, and the performance of renderers that support it will be at least an order of magnitude higher. It's just a relatively young parallelization approach, and complex software is hard to adapt or needs fundamental structural changes.
2: As long as some renderers don't support the GPU, you need more CPU power. That means: more cores, higher clock speed, more cache, better vector instructions, better memory bandwidth (faster RAM/QPI), in that priority order. I'd recommend a 4th-gen i7, as it has AVX2 (256-bit wide vector support). Also watch out: an xxxxU (ultrabook/ultra-low-power) CPU has half or even less the clock speed of an xxxxM (standard mobile) CPU.
Alternatively you can simply "hope" that GPU/OpenCL support for renderers comes out in the near future, and buy a laptop with a good dedicated graphics card instead.
3: HT (hyperthreading, i.e. 4 cores / 8 threads) does NOT mean double performance. It just means one CPU core has two instruction streams. That is good when one thread has to wait for data from RAM (300-500 CPU cycles): the other thread becomes active, so the CPU still has something to do.
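A toy utilization model of that latency hiding (all cycle counts are assumptions, and it deliberately shows the best case): a second hardware thread only helps to the extent the first one stalls.

```python
stall_cycles = 400    # RAM load, midpoint of the 300-500 estimate
compute_cycles = 400  # useful work available between loads (assumed)

# One thread: the core idles during every stall.
single_utilization = compute_cycles / (compute_cycles + stall_cycles)
print(single_utilization)  # 0.5 -- core busy only half the time

# Two threads: thread B computes while thread A waits, so stalls
# overlap with work -- but utilization can never exceed 1.0.
two_thread_utilization = min(1.0, 2 * compute_cycles / (compute_cycles + stall_cycles))
print(two_thread_utilization)  # 1.0 -- a "2x" gain only because this
# workload stalls half the time; pure compute would gain almost nothing
```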
Your operating system with a graphical frontend (Windows, OS X, Linux with X) already runs around 200-800 threads, which are mostly idle but still need some CPU time. Your application will be regularly interrupted to handle them: each interrupt costs from a few hundred to a few thousand CPU cycles, and they fire a few hundred times per second.
Also, just to clarify: a 3 GHz processor executes around 1-3 billion CPU cycles per second depending on idle states. Most CPU instructions take 1 or 2 cycles, but vector instructions can take 5-10 times more, so they are not as much "better" as they seem...
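A rough cost model for that caveat (the 5-cycle vector latency is an assumed pessimistic figure, not from any datasheet): the extra latency of a wide instruction eats into the speedup from its width.

```python
scalar_cycles = 1  # typical simple instruction
vector_cycles = 5  # assumed 5x cost for a wide vector instruction
vector_width = 4   # doubles per AVX2 register

# Results produced per cycle:
scalar_rate = 1 / scalar_cycles
vector_rate = vector_width / vector_cycles
print(vector_rate / scalar_rate)  # 0.8 -- actually slower than scalar
# in this pessimistic case; with 2-cycle vector ops it would be 2.0x
```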
The memory load takes almost constant wall-clock time depending on the architecture, so on a low-clock CPU a memory load costs far fewer cycles than on a high-clock one. Still, in most cases memory access is the bottleneck in today's computing. That's also one of the reasons GPU/OpenCL-based operations get more and more focus: they have dedicated graphics RAM, which is quicker, less fragmented, needs no memory-access protection, and doesn't spam the CPU cache.
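The constant-time point, worked out (the ~80 ns DRAM round-trip is a hypothetical figure for illustration): since wall-clock latency is roughly fixed, the cycles lost per load scale with the clock speed.

```python
dram_latency_ns = 80  # assumed round-trip time to RAM

for clock_ghz in (1.5, 3.0, 4.0):
    cycles_lost = dram_latency_ns * clock_ghz  # ns * cycles-per-ns
    print(f"{clock_ghz} GHz -> {cycles_lost:.0f} cycles per load")
# 1.5 GHz -> 120, 3.0 GHz -> 240, 4.0 GHz -> 320: the faster the
# clock, the more cycles every cache miss wastes.
```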
regards,
jan