In the HDF5 User's guide, under the 'Creating a simple dataspace' heading of section 3.2 of the Dataspace chapter, it says:
In this example, a dataspace with current dimensions of 20 by 100 is created. The first dimension can be extended only up to 30. The second dimension, however, is declared unlimited; it can be extended up to the largest available integer value on the system. Recall that any dimension can be declared unlimited, and if a dataset uses a dataspace with any unlimited dimension, chunking has to be used (see the "Data Transfer" section in the "Datasets" chapter).
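For reference, the dataspace described in the quoted passage can be sketched with h5py's low-level bindings, which map one-to-one onto the C calls the guide is written against (the use of h5py here is my own, not from the guide):

```python
import h5py

# Dataspace from the quoted passage: current dims 20 x 100,
# first dim extendible up to 30, second dim unlimited.
space = h5py.h5s.create_simple((20, 100), (30, h5py.h5s.UNLIMITED))

print(space.get_simple_extent_dims())              # current dims: (20, 100)
print(space.get_simple_extent_dims(maxdims=True))  # max dims: (30, UNLIMITED)
```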
To me, the quoted passage implies that I should be able to create a contiguous dataset with current dims of (20, 100) and maxdims of (30, 200) (i.e. larger than the current dims, but not unlimited). That, however, is not the case.
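A minimal reproduction via h5py's low-level bindings, mirroring the equivalent C calls (the file and dataset names are mine): H5Dcreate refuses an extendible dataspace whenever the layout is contiguous, even though no dimension is unlimited.

```python
import os
import tempfile

import h5py

path = os.path.join(tempfile.mkdtemp(), 'contig.h5')
with h5py.File(path, 'w') as f:
    # Current dims (20, 100), max dims (30, 200): extendible, but not unlimited.
    space = h5py.h5s.create_simple((20, 100), (30, 200))

    # Dataset creation property list with an explicitly contiguous layout.
    dcpl = h5py.h5p.create(h5py.h5p.DATASET_CREATE)
    dcpl.set_layout(h5py.h5d.CONTIGUOUS)

    error = None
    try:
        h5py.h5d.create(f.id, b'contig', h5py.h5t.NATIVE_INT32, space, dcpl)
    except Exception as exc:  # HDF5 rejects extendible contiguous datasets
        error = exc

print('H5Dcreate failed:' if error else 'created', error)
```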
HDFView 2.6 also confuses the issue. If I:
- open HDFView
- create a new HDF5 file
- create a new dataset
- enter dataset name
- ensure 'Contiguous' radio button is selected
- select 1 for No of dimensions
- enter 10 for current size
- click 'Set Max Size' button
- enter 100 for max size
- click OK
- Verify that the 'Contiguous' radio button is still selected
- click 'OK' on the 'New Dataset...' dialog
- dataset is created
Despite the 'Contiguous' radio button still being selected, the dataset is actually created as a Chunked dataset.
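For what it's worth, h5py's high-level API behaves the same way HDFView does: ask for an extendible dataset and you silently get a chunked one, because HDF5 requires chunking for any extendible dataset (the names and shapes below are mine):

```python
import os
import tempfile

import h5py

path = os.path.join(tempfile.mkdtemp(), 'layout.h5')
with h5py.File(path, 'w') as f:
    # Fixed-shape dataset: contiguous by default.
    fixed = f.create_dataset('fixed', shape=(10,), dtype='i4')
    # Extendible dataset (maxshape > shape): silently switched to chunked.
    ext = f.create_dataset('ext', shape=(10,), maxshape=(100,), dtype='i4')

    fixed_chunks = fixed.chunks  # None -> contiguous layout
    ext_chunks = ext.chunks      # auto-chosen chunk shape -> chunked layout

print(fixed_chunks)
print(ext_chunks)
```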