To put it simply, because write operations are more complex than read operations.
In the "read" case, the system merely needs to access the file: it talks to the drive, and the drive controller simply looks up the physical location(s) of the data and returns it. No cell states change.
In the "write/overwrite" case, the system needs to actively change the state of the corresponding cells. Here, the drive needs to:
A. (in the case of a write) point to empty or marked-empty space, then apply voltage to all the corresponding cells in order to change their state.
B. (in the case of an overwrite) mark the cells containing the old file(s) as unused (TRIM and the controller's native garbage collection will likely erase those cells [again, by applying voltage] at a later time), then carry out step A.
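The two-step overwrite above can be sketched as a toy flash translation layer. This is purely illustrative (the class, page counts, and "STALE"/"EMPTY" markers are all made up); real controllers juggle pages, erase blocks, and wear leveling with far more bookkeeping:

```python
# Toy sketch of steps A and B: reads are a lookup, overwrites mark the
# old location stale and program a fresh page. Not a real FTL.

class ToyFlash:
    def __init__(self, num_pages=8):
        self.pages = ["EMPTY"] * num_pages   # physical page states / contents
        self.mapping = {}                    # logical address -> physical page

    def _find_empty(self):
        # Step A, part 1: locate empty or marked-empty space.
        return self.pages.index("EMPTY")

    def write(self, logical_addr, data):
        # Step B: on overwrite, mark the old page as unused instead of
        # erasing it in place; garbage collection reclaims it later.
        if logical_addr in self.mapping:
            self.pages[self.mapping[logical_addr]] = "STALE"
        # Step A, part 2: program the data into the free page.
        page = self._find_empty()
        self.pages[page] = data
        self.mapping[logical_addr] = page

    def read(self, logical_addr):
        # Reading changes nothing -- just follow the mapping.
        return self.pages[self.mapping[logical_addr]]

flash = ToyFlash()
flash.write(0, "v1")
flash.write(0, "v2")               # overwrite: old page goes STALE
print(flash.read(0))               # -> v2
print(flash.pages.count("STALE"))  # -> 1
```

Note how the overwrite never touches the old cells directly; it just leaves a stale page behind, which is exactly why sustained write workloads slow down once the drive runs out of pre-erased space.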
Oh, and as far as the USB 2.0 standard goes, 480 Mbps (~60 MB/s) is the theoretical maximum for the interface. Effective speeds hover around 300 Mbps (~40 MB/s), and that's for sustained sequential reads. Large sequential writes are noticeably lower, at around 20-25 MB/s. Random performance is lower still, often all the way down to sub-5 MB/s. That said, lackluster random performance really isn't an issue for USB 2.0 media, since it is primarily meant for storing much larger files (much larger than 4 KB, that is).
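The unit conversions above are easy to sanity-check. A quick back-of-the-envelope script (the 700 MB example file size is my own assumption, not from the spec):

```python
# Rough USB 2.0 throughput arithmetic: 8 bits = 1 byte, so Mbps / 8 = MB/s.
# The effective-rate figures are the ballpark numbers quoted above;
# real drives vary with the controller and flash quality.

line_rate_mbps = 480                      # theoretical signalling rate
theoretical_MBps = line_rate_mbps / 8     # ~60 MB/s ceiling
seq_read_MBps = 300 / 8                   # ~37.5 MB/s sustained sequential read
seq_write_MBps = 22                       # assumed midpoint of the 20-25 MB/s range

file_MB = 700                             # hypothetical example: one large video file
print(f"theoretical max : {theoretical_MBps:.1f} MB/s")
print(f"copy off drive  : {file_MB / seq_read_MBps:.0f} s")
print(f"copy onto drive : {file_MB / seq_write_MBps:.0f} s")
```

So even in the best case, writing a big file to a USB 2.0 stick takes roughly twice as long as reading it back, which matches everyday experience.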