This weekend I was copying data from my laptop to one of my servers and wanted to double-check everything transferred successfully, without any corruption.

My go-to tool for copying files between machines is rsync. It is easy to use, reliable, and checks file integrity as it transfers. For each transferred file, rsync computes a checksum on both the sending and receiving sides to make sure all went well.

But I was curious to find an easy way to compute the checksum of all my files, in all their subdirectories, and then validate those hashes later. The usual checksumming tools (e.g. md5sum, sha256sum, xxhsum, etc.) have a -c flag that reads checksums and filenames from a file and verifies that they match the actual files. This works great for files in a single directory, but doesn't work well with nested directories:

$ tree
.
├── file1
├── file2
└── foo
    └── file3

2 directories, 3 files

$ sha256sum *
c1ab0663e764e040de477f9e3de4f733eb7787dbbfbfbfacb4dd0bb1c6705125  file1
478d1c63ffef35b04daec894bbd2fb7989dd53920af896f76b30a35ee8c81c1f  file2
sha256sum: foo: Is a directory

These tools don’t know how to handle directories.

Smart people wrote Bash and Python scripts that recursively calculate checksums and generate a checkfile. And these same scripts can work in the other direction: verify that the files' hashes match the checkfile.
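The core of that approach can be sketched in two lines with find and sha256sum (the checkfile path is just an example):

```shell
# Recursively checksum every regular file under the current directory
# and save the results to a checkfile outside the tree being hashed.
find . -type f -exec sha256sum {} + > ../checksums.sha256

# Verify later: sha256sum -c re-reads each listed file and compares hashes.
sha256sum -c ../checksums.sha256
```

The real scripts tend to add niceties like progress output and error handling, but this is essentially what they do.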

I just learned that those scripts are not needed: rclone can do it all. rclone is a command-line tool to sync files to and from cloud storage, but it also has a surprisingly capable hashsum subcommand.

You feed this command the hashing algorithm you want to use and the base directory for your files. For example:

$ rclone hashsum xxh128 .
c57b19b76f3500d2cac3e09f94063a67  foo/file3
bdbc5b9efcfa50cc98865b8795e4ef79  file1
5e71cb0a7b6f5be79a981284868fcc48  file2

I chose xxh128 as it is the fastest in my benchmarks (see my post about Benchmarking checksum tools).

And voilà! You get a checksum for every file in every subdirectory. Of course, remember to save these hashes somewhere; it can be as simple as redirecting to a file: rclone hashsum xxh128 . >/tmp/files.xxh128.

You can also verify an existing checkfile:

$ rclone hashsum xxh128 -C /tmp/files.xxh128 .
= file1
= file2
= foo/file3
2026/04/18 15:01:03 NOTICE: Local file system at /tmp/example: 0 differences found
2026/04/18 15:01:03 NOTICE: Local file system at /tmp/example: 3 matching files

And if rclone finds a file whose hash doesn't match, you get a clear error message:

$ rclone hashsum xxh128 -C /tmp/files.xxh128 .
= file1
= file2
2026/04/18 15:01:46 ERROR : foo/file3: files differ
* foo/file3
2026/04/18 15:01:46 NOTICE: Local file system at /tmp/example: 1 differences found
2026/04/18 15:01:46 NOTICE: Local file system at /tmp/example: 1 errors while checking
2026/04/18 15:01:46 NOTICE: Local file system at /tmp/example: 2 matching files
2026/04/18 15:01:46 NOTICE: Failed to hashsum: 1 differences found

No need for obscure scripts. One tool indistinguishable from magic does the job.