Short answer is no. At one time the filename matched the docid and the folders were calculated by bit shifting the id - a common technique, but one whose implementation varies. Some storage areas use 2 folder levels and others use 3, so you would have to decode both. In later versions, however, the UUID parts of the filename are also byte-swapped, so the filename is no longer a direct match for the docid, and enumerating the files on disk will no longer work. Even if you can decode the path and name, if the storage area uses encryption (shame on you if not), the documents will still need to be retrieved through FileNet to decrypt them. That renders MD5 and other file-level approaches moot.
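To give a feel for why you'd have to decode both layouts, here is a rough Python sketch of the general technique. Everything concrete in it is an assumption for illustration: the "FN" prefix, the bits per level, and the exact byte swap all vary by version, which is exactly the problem.

```python
import uuid

def swap_guid(guid_str):
    # Reinterpret the canonical GUID bytes as little-endian, which
    # byte-swaps the first three fields. Whether this is the exact
    # transform your FileNet version applies on disk is an assumption.
    u = uuid.UUID(guid_str)
    return str(uuid.UUID(bytes_le=u.bytes)).upper()

def folder_for_id(doc_id, levels=2, bits=8):
    # Derive folder names by bit shifting the numeric id. The number
    # of levels (2 vs 3), the bits per level, and the "FN" prefix are
    # placeholders; real implementations differ.
    mask = (1 << bits) - 1
    parts = [f"FN{(doc_id >> (i * bits)) & mask}"
             for i in reversed(range(levels))]
    return "/".join(parts)

print(folder_for_id(0x12345, levels=2))  # FN35/FN69
print(folder_for_id(0x12345, levels=3))  # FN1/FN35/FN69
print(swap_guid("00112233-4455-6677-8899-AABBCCDDEEFF"))
# -> 33221100-5544-7766-8899-AABBCCDDEEFF: no longer a direct match
```

Note how the same id lands in different folders depending on the level count, and how the swapped name diverges from the raw GUID.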
The ClipID Gerold mentions is used with Fixed Content Devices such as Centera or DELL ECS, which function as a 'black box' rather than a filesystem; documents are retrieved through a CAS API using the ClipID.

For filesystem storage areas, the consistency checker, xcheck, does log the path when it finds a problem file. That can be handy for confirming whether any files are missing prior to a migration, but the XML report is too verbose to be useful directly. You can build a parser to extract the paths, or use mine (see: https://www.applied-logic.com/reformat-consistency-check-reports/). I have used that approach to account for missing docs in a storage area before migrating, but I would not use it to enumerate files for export: on a Fibre Channel SAN we could check about 80 docs/second, but over NFS it tends to time out and never finish, so it wouldn't work for you. Where there are several storage areas, I find running a separate process for each is the simplest way to get better performance.
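If you do want to roll your own parser rather than use the reformatter above, a streaming parse is the way to cope with the verbosity. This is only a minimal sketch: the element and attribute names ('error', 'path') are guesses, so check them against an actual xcheck report before relying on it.

```python
import sys
import xml.etree.ElementTree as ET

def problem_paths(report_file):
    # Stream the (large, verbose) consistency-check report and yield
    # the path of each problem file without loading the whole XML.
    for event, elem in ET.iterparse(report_file, events=("end",)):
        if elem.tag.lower().endswith("error"):   # tag name is a guess
            path = elem.get("path") or elem.findtext("path")
            if path:
                yield path
        elem.clear()  # free memory as we go; these reports get huge

if __name__ == "__main__":
    for p in problem_paths(sys.argv[1]):
        print(p)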