Unfortunately it's a little complicated.
http://ntfs.com/ntfs-mft.htm gets at it. Each record of the NTFS MFT represents a file. Really small files have their data stored right in the MFT (neat). Bigger files have one or more pointers to extents (runs of contiguous clusters). A contiguous file has exactly one pointer to a single extent; a fragmented file has two or more extents. Defragmenting a file means moving its non-contiguous extents around until they form one contiguous run. A file with a few extents doesn't really hurt anything. A file with zillions of tiny extents scattered all over the place is a performance problem.
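To make the extent idea concrete, here's a toy sketch in Python. The tuples and function names are my own illustration, not the real NTFS on-disk structures:

```python
# Toy model (NOT real NTFS structures): a file's data is a list of
# (start_cluster, length) extents, as pointed to from its MFT record.

def is_contiguous(extents):
    """A file is contiguous when all its data sits in a single extent."""
    return len(extents) == 1

def defragment(extents, free_start):
    """Simulate a defragmenter: copy every extent, in file order, into one
    contiguous run beginning at cluster free_start."""
    total = sum(length for _, length in extents)
    return [(free_start, total)]

fragmented = [(100, 4), (900, 2), (37, 10)]  # three extents scattered around
print(is_contiguous(fragmented))             # False
print(defragment(fragmented, 5000))          # [(5000, 16)]
```

A real defragmenter does this cluster shuffling on disk and then rewrites the MFT record to hold the single new pointer.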
SSDs don't suffer from seek latency, *but* that is not the end of the story for them. As many have described thoroughly elsewhere, at first all of the capacity of an SSD is ready for immediate use. Later, when an extent is released (e.g. after a file is deleted), the drive requires an extra step to make the released space writable again, which is no big deal unless you are waiting for it right now.
Unfortunately folks tend to get these topics confused. They think SSDs don't suffer from fragmentation because they don't suffer from seek latency, but that's not the whole story. For a file with zillions of extents, even a mighty SSD can't hide the extra work the file system has to do.
To evaluate a drive, run this from a command prompt opened as Administrator:
C:\Windows\system32>defrag C: /a /v
PerfectDisk from http://raxco.com/ (as one example) goes further and reports the most fragmented files.
In my personal opinion, *if* an SSD is fragmented badly enough then it is worth defragmenting occasionally, even though doing so will nominally wear it out faster. Gratuitous defragmenting is to be avoided.