What's not really clear from that is the increased number of devices and architectures that were added to make it one of the most flexible OSes around (if not the most flexible). And sure, that's a lot of lines of code, but how much of it is actually parsed and built into a normally-configured kernel?
This issue was brought up recently in the brouhaha and sharp words from Linus himself about the sad state of ARM additions. The typical "I can do better, let me reinvent the wheel for my version of the ARM" software-engineering mindset of devs from TI, Marvell, etc. caused huge code bloat, because they hacked together "board support" files instead of properly rearchitecting the ARM-specific bits of the kernel to share the same (or similar) code. Things have since gotten better. It's nice to know that there are still people at the helm who care about that sort of thing.
It is larger, but then it includes far more running in kernel space than in user space. The comparison is with Vista. The Windows kernel got noticeably smaller going to Win7 and is set to shrink again with Win8 when it comes out; it's certainly smaller in the developer preview.
Surely this is graphing the size of the kernel sources, not the kernel itself. What does the size of the sources matter? I can easily find 100 MB of disk.
You could easily produce a smaller source tree by limiting it to one architecture with a limited set of devices supported. Also cut out all comments and obfuscate the code. Would that make a better kernel?
One could compare the size of kernels built from the same basic configuration across different source releases. My point is that 100-odd MB of source code matters little; the memory footprint of the running kernel is what's important. I'm not sure that this has increased unduly.
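A rough sketch of that comparison, assuming you have a kernel source tree and build toolchain on hand (the release directory names here are illustrative):

```shell
# Size of the sources on disk -- the number the graph is measuring.
du -sh linux-3.1/

# Size of the kernel actually built from those sources.
cd linux-3.1
make defconfig               # same baseline configuration for each release
make -j"$(nproc)" vmlinux
size vmlinux                 # text/data/bss of the built kernel image

# Repeat for another release and compare the `size` output, not the
# source-tree size; only the former reflects the kernel's footprint.
```

Comparing the `size` output across releases built with the same `defconfig` would show whether the in-memory kernel is growing anywhere near as fast as its source tree.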