OK, it appears that there's some kind of networking involved (NFS, SMB, …). That's more likely the culprit. Generally, merely attempting to open or read an HDF5 file that's being written does not corrupt it; there is no simple causal relationship between the two. For example, on a local file system, when opening such a file w/ HDFView in read-only mode, it is much more likely that HDFView itself will crash, because it's reading what temporarily appears to be an inconsistent HDF5 file (e.g., metadata pointing to data that hasn't been flushed yet). In a networked scenario, things are much trickier when it comes to locking, caching, timeouts, etc. Such a setup is not supported even w/ SWMR, which depends on POSIX write() semantics.
It is conceivable that HDFView by default might open a file w/ H5F_ACC_RDONLY | H5F_ACC_SWMR_READ, but that would have an effect only with a "cooperating SWMR writer." There might already be a corresponding JIRA issue/improvement request.
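For reference, here is a minimal sketch of what such a cooperating pair looks like in the HDF5 C API. The file and dataset names are made up, and writer and reader would normally be separate processes; this just shows the handshake:

```c
#include "hdf5.h"

int main(void)
{
    /* --- Writer side (sketch) --- */
    /* SWMR requires the latest file-format version bounds. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_libver_bounds(fapl, H5F_LIBVER_LATEST, H5F_LIBVER_LATEST);

    hid_t file = H5Fcreate("data.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
    /* ... create extensible datasets here, BEFORE switching to SWMR ... */
    H5Fstart_swmr_write(file);   /* from now on, SWMR readers may attach */
    /* ... H5Dwrite / H5Dset_extent / H5Dflush in the writer's loop ... */
    H5Fclose(file);
    H5Pclose(fapl);

    /* --- Reader side (sketch), i.e., what HDFView would have to do --- */
    hid_t rfile = H5Fopen("data.h5",
                          H5F_ACC_RDONLY | H5F_ACC_SWMR_READ,
                          H5P_DEFAULT);
    /* ... H5Dread, possibly after H5Drefresh() on the dataset ... */
    H5Fclose(rfile);

    return 0;
}
```

The essential ingredients are the latest-libver bounds plus H5Fstart_swmr_write() on the writer side, and the H5F_ACC_SWMR_READ flag on the reader side. Without the writer's cooperation, the flag buys the reader nothing.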
Yes, as long as the file is on a file system that SWMR supports, i.e., one with POSIX write() semantics. Local file systems qualify; networked file systems such as NFS or SMB do not.
Yes, because there is no guarantee that the bytes you’ve copied represent a consistent HDF5 file. The writing process may still hold unflushed (meta-)data not reflected in the bytes on disk.
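If you control the writer, you can narrow (though not close) that window by flushing explicitly before the copy is taken. A minimal sketch, assuming a hypothetical file copy_me.h5:

```c
#include "hdf5.h"

int main(void)
{
    /* Hypothetical file/dataset names, for illustration only. */
    hid_t file = H5Fcreate("copy_me.h5", H5F_ACC_TRUNC,
                           H5P_DEFAULT, H5P_DEFAULT);

    hsize_t dims[1] = {100};
    hid_t space = H5Screate_simple(1, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "values", H5T_NATIVE_INT, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    int buf[100] = {0};
    H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

    /* Push HDF5's metadata and raw-data caches out to the OS. A copy
     * taken right after this call has a chance of being consistent;
     * without it, the bytes on disk may lack metadata still held in
     * the writer's caches. */
    H5Fflush(file, H5F_SCOPE_GLOBAL);

    /* <-- a file-level copy (cp, rsync, ...) taken here is safest */

    H5Dclose(dset);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}
```

Keep in mind that H5Fflush() only hands the buffered bytes to the operating system; the OS (or a network file system) may still cache them, and any write issued after the flush reopens the window.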