What causes fragmentation is the write process. When a file is being written, the filesystem looks for a contiguous area on the spinner drive in which to write it. Absent that contiguous space, it starts writing into the largest free area available, then jumps to the next largest open area, and continues until the whole file is written. That is what fragmentation is: pieces and parts of files scattered across multiple locations. The heads have to move and then wait for the right sectors to arrive under them before they can be written to or read from, so a fragmented file is slower to both read and write. Spinner drives are the slowest bottleneck in most modern computers, so defragmenting a drive would often improve performance.
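Here's a toy Python sketch of that "largest hole first" behavior. The extent list, the allocation strategy, and the numbers are all made up for illustration; no real filesystem allocator is this simple.

```python
# Toy model of the allocation described above: try to place a file in one
# contiguous free extent; if none is big enough, split it across the largest
# free extents in turn. Extents are (start_block, length_in_blocks) pairs.
def write_file(free_extents, file_size):
    """Return (fragments, remaining_free_extents) for a file of file_size blocks."""
    free = sorted(free_extents, key=lambda e: e[1], reverse=True)  # largest first
    # Case 1: one extent can hold the whole file -> no fragmentation.
    for i, (start, length) in enumerate(free):
        if length >= file_size:
            fragments = [(start, file_size)]
            remaining = free[:i] + free[i + 1:]
            if length > file_size:
                remaining.append((start + file_size, length - file_size))
            return fragments, remaining
    # Case 2: no single extent is big enough -> fill the largest extents in turn.
    fragments, remaining, needed = [], [], file_size
    for start, length in free:
        if needed <= 0:
            remaining.append((start, length))
        elif length <= needed:
            fragments.append((start, length))   # consume the whole extent
            needed -= length
        else:
            fragments.append((start, needed))   # take part of this extent
            remaining.append((start + needed, length - needed))
            needed = 0
    return fragments, remaining

# 170 blocks free in total, but no single hole big enough for a 120-block file:
free_space = [(0, 50), (200, 80), (500, 40)]
frags, free_space = write_file(free_space, 120)
print(len(frags), "fragments:", frags)
# -> 2 fragments: [(200, 80), (0, 40)] - the file gets split across two areas
```

The point of the toy is just that total free space isn't what matters; it's whether there is a single free hole big enough for the file being written.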
However, that performance kick was usually short-lived, particularly if the user manipulated large files as a regular part of their work. Once the contiguous free space was again too small to hold a file that needed to be written, fragmentation would return. Add to that the fact that, at least in Windows, there were some files that HAD to be contiguous and were unmovable; whenever they got written, they became a wall around which every other file had to maneuver for space.
Two factors are at play in fragmentation: file size is one, and total free disk space is the other. For the OP of this thread, this was the description:
My iMac has a 1TB HDD, with over 800GB free space. I may have mentioned that I don't keep large files on my hard drive, on any of my computers. They are stored on external hard drives which are connected to my router and form my internal network.
So we have lots of free space and generally small files, both of which suggest very little fragmentation at all. That's why I suggested there was no need for defragging and that doing so wouldn't really have a big impact. Now, if the opposite were true, say there was only 100GB free and the user was editing large video files, even if those files were well under the 100GB of free space, fragmentation would soon bring performance to its knees. In the case of the external drives on which the large files are stored, I suspect the bottleneck is the interface and not the drive's read/write speed. However, if the files are really large and the free space on those drives is tight, defragging may help some, but as I said, the fragmentation will return if the files are at all volatile.
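To put some rough numbers on why fragmentation hurts large files on a spinner, here's a quick back-of-envelope calculation. The seek time, rotational latency, and transfer rate below are assumed values for a generic 7200 rpm desktop drive, not measurements from any particular machine:

```python
# Rough estimate of how fragment count affects the time to read one big file.
# All figures are assumptions for a typical desktop HDD.
avg_seek_ms = 9.0          # assumed average seek time
avg_rot_latency_ms = 4.2   # roughly half a rotation at 7200 rpm
transfer_mb_s = 120.0      # assumed sustained sequential transfer rate

file_mb = 2048             # a 2GB video file

def read_time_s(fragments):
    transfer = file_mb / transfer_mb_s
    # each fragment beyond the first costs roughly one seek plus rotational latency
    overhead = (fragments - 1) * (avg_seek_ms + avg_rot_latency_ms) / 1000.0
    return transfer + overhead

for frags in (1, 100, 2000):
    print(f"{frags:5d} fragments -> about {read_time_s(frags):6.1f} s to read")
# 1 fragment ~17 s, 100 fragments ~18 s, 2000 fragments ~43 s
```

A handful of fragments is noise, but thousands of fragments on a big file can more than double the time to read it back, which is exactly the "performance on its knees" scenario.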
I will say that there are some obvious exceptions to the small-files/lots-of-free-space general rule. I remember one user in particular who complained about his machine getting slower and slower over the course of each workday, and nothing we did fixed it for him. Finally, we asked what he was doing, and the answer was that he was editing large graphic files in Photoshop with the history level set to the maximum. So basically, for every edit there were 10 copies of the graphic file being saved on the drive in the background, driving up disk usage and driving down free space. Then when the file was finalized and saved at the end of the process, the final save would be terribly fragmented into the nooks and corners of the drive, because the history was still kept until the write was finalized. Only after the final write was done and the file was closed in PS did those history versions get released. In the meantime, the final file was fragmented everywhere and caused problems for him when he opened the next graphic file to do the same thing. After one or two of these editing sessions, the drive would be hopelessly fragmented and performance would fall off the table.

We "fixed" the problem by getting him to back down from the max history level to something more workable, and the fragmentation slowed down to the point where his drive didn't need defragging every day, just a couple of times a month. The deceptive part of this story is that the final files were relatively small (compared to his available free space), but the workspace PS needed to hold the history was consuming drive space in large chunks, leading to fragmentation.
All of this discussion has confirmed in my mind that when it comes to drives, bigger is better. Size matters.