Following the discussion of sparse structured chunks, one consideration was whether 32-bit offsets are sufficient, given that HDF5 chunks are currently limited to 4 GiB.
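The arithmetic behind that question is worth making explicit: an unsigned 32-bit offset can address exactly 2^32 bytes, which coincides with the current 4 GiB chunk limit, so 32-bit offsets only barely cover today's maximum and leave no headroom for larger chunks. A quick sanity check:

```python
# A 32-bit unsigned integer spans 2**32 distinct values (0 .. 2**32 - 1),
# so it can address at most 2**32 bytes within a chunk.
U32_ADDRESS_SPACE = 2**32

# HDF5's current chunk-size ceiling of 4 GiB, in bytes.
HDF5_CHUNK_LIMIT = 4 * 1024**3

# The two are exactly equal: 32-bit offsets fit today's limit
# with zero margin for any future increase.
print(U32_ADDRESS_SPACE == HDF5_CHUNK_LIMIT)  # True
```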
Is a convergence of contiguous and chunked layouts on the horizon (e.g., as raised in the "What do you want to see in HDF5 2.0?" thread)? I think this convergence would be the main use case for very large chunks.
I’ll also note that the Zarr shard proposal currently describes 64-bit chunk sizes. https://zarr.dev/zeps/draft/ZEP0002.html#binary-shard-format
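For comparison, here is a minimal sketch of how a reader might parse the shard index as the ZEP0002 draft describes it at the time of writing: a trailing index of little-endian `(offset, nbytes)` pairs, one per inner chunk, each field a 64-bit unsigned integer. The function name and the two-chunk example data are my own illustration, not part of the draft.

```python
import struct

def read_shard_index(shard: bytes, nchunks: int):
    """Parse the trailing index of a shard (per the ZEP0002 draft):
    one little-endian (offset, nbytes) uint64 pair per inner chunk."""
    index_size = nchunks * 16  # two 8-byte uint64 fields per chunk
    index = shard[-index_size:]
    return [struct.unpack_from("<QQ", index, i * 16) for i in range(nchunks)]

# Hypothetical shard holding two inner chunks of 8 and 16 bytes.
chunks = b"A" * 8 + b"B" * 16
index = struct.pack("<QQ", 0, 8) + struct.pack("<QQ", 8, 16)
print(read_shard_index(chunks + index, 2))  # [(0, 8), (8, 16)]
```

The point of the 64-bit fields is that both the offset into the shard and the size of each inner chunk can exceed 4 GiB, which is exactly the headroom a 32-bit scheme would lack.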