I tried this program and, after experimenting with it for a while, ran into one difficulty that left me looking for something else.
The problem was that I could not work out how it decided which of two identical files was the duplicate and which was the original.
It sounds trivial, but I think most people trying to resolve duplicates between two (or more) directory structures have a view that one directory is the 'original' and the other the duplicate.
In my case I had sorted a large number of files into subdirectories and had a pile more to sort. But there were duplicates between the unsorted ones and the already sorted ones. Clearly when I found these duplicates I wanted to delete the ones in the unsorted pile and leave the ones in the sorted pile as the originals.
But there seems to be no reliable way to do this.
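For what it's worth, the sorted-versus-unsorted workflow described above can be scripted by hand. Here is a minimal sketch (the function name and directory arguments are my own, not part of the reviewed program): it treats one tree as the originals, scans the other tree, and uses a byte-for-byte comparison to list the files that can safely be deleted from the unsorted side.

```python
import filecmp
import os

def find_duplicates(unsorted_dir, sorted_dir):
    """Return paths under unsorted_dir that duplicate a file under sorted_dir."""
    # Index the already-sorted files by size so we only byte-compare
    # files that could possibly be identical.
    by_size = {}
    for root, _, names in os.walk(sorted_dir):
        for name in names:
            path = os.path.join(root, name)
            by_size.setdefault(os.path.getsize(path), []).append(path)

    duplicates = []
    for root, _, names in os.walk(unsorted_dir):
        for name in names:
            path = os.path.join(root, name)
            for candidate in by_size.get(os.path.getsize(path), []):
                # shallow=False forces a full byte-for-byte comparison
                # rather than trusting size and timestamp alone.
                if filecmp.cmp(path, candidate, shallow=False):
                    duplicates.append(path)
                    break
    return duplicates
```

Because the sorted tree is only ever read, the originals are never at risk; the caller decides what to do with the returned paths (print them, move them, or delete them).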
The software developer responded to this review on Dec 10, 2007:
It uses byte-for-byte comparison, for 100% accuracy.