Rhys, thanks for the comment and sorry for the delay. (Just got back from …)
On the data side, using a skeletal HDF5/XML dump of a datafile to check
that it is valid according to some domain-specific schema would be handy.
I suggest adding recommended HDF5 attributes with particular names for
validation purposes (e.g. an XML schema against which '/' or some other
object should validate), so that tools like h5diff could perform such a
verification.
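For concreteness, such a recommended attribute might look like the following hypothetical HDF5/XML excerpt. The attribute name, element names, and schema URL are illustrative assumptions, not an agreed convention:

```xml
<HDF5-File>
  <RootGroup>
    <!-- A reserved, by-convention attribute on '/' naming the domain
         schema against which this file should validate. -->
    <Attribute Name="xml_schema_location"
               Value="http://example.org/schemas/my-domain.xsd"/>
    <!-- ... rest of the file ... -->
  </RootGroup>
</HDF5-File>
```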
This is an interesting idea, although some users may perceive "recommended
HDF5 attributes" as too prescriptive and as stepping on their toes.
(And supporting that through the tool chain would be a big investment.)
We were thinking of a more constraint-based approach, i.e., a domain expert
would supply an XQuery (or XSLT) transform that converts an HDF5/XML document
into a Boolean-valued checklist so that non-compliance can be easily detected.
The XQuery transform would consist, more or less, of a list of user-defined
predicates (Boolean functions) that check, e.g., for the presence of certain
groups or attributes, for certain sizes, etc.
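To sketch the idea, here is a minimal checklist of such predicates in Python, with the standard library's ElementTree standing in for XQuery/XSLT. The element and attribute names (HDF5-File, Group, Dataset, Size, ...) follow the general shape of an HDF5/XML dump but are illustrative assumptions, not the actual vocabulary:

```python
# Sketch: constraint-based validation of an HDF5/XML dump as a list of
# Boolean predicates, using ElementTree in place of an XQuery transform.
import xml.etree.ElementTree as ET

# A toy stand-in for an HDF5/XML dump (names are illustrative).
SAMPLE = """\
<HDF5-File>
  <RootGroup>
    <Group Name="measurements">
      <Dataset Name="temperature" Rank="1" Size="1024"/>
    </Group>
    <Attribute Name="version" Value="1.2"/>
  </RootGroup>
</HDF5-File>
"""

def has_group(root, name):
    # Predicate: a group with the given name exists somewhere in the file.
    return root.find(f".//Group[@Name='{name}']") is not None

def has_attribute(root, name):
    # Predicate: an attribute with the given name exists somewhere.
    return root.find(f".//Attribute[@Name='{name}']") is not None

def dataset_size_at_most(root, name, limit):
    # Predicate: the named dataset exists and its size does not exceed limit.
    ds = root.find(f".//Dataset[@Name='{name}']")
    return ds is not None and int(ds.get("Size", "0")) <= limit

def checklist(root):
    # The "Boolean-valued checklist": predicate description -> pass/fail.
    return {
        "has 'measurements' group": has_group(root, "measurements"),
        "has 'version' attribute": has_attribute(root, "version"),
        "'temperature' fits in 4096": dataset_size_at_most(root, "temperature", 4096),
    }

root = ET.fromstring(SAMPLE)
for check, ok in checklist(root).items():
    print(f"{check}: {'PASS' if ok else 'FAIL'}")
```

A real XQuery version would express each predicate as a Boolean function over the dump, but the non-compliance report comes out the same way: a flat list of named checks, each true or false.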
It would be a huge win to toss a plugin into Firefox, point it at an HDF5
file, provide a stylesheet, and find the datasets genuinely browsable.
I personally love the utility and quality of the un*x CLI toolset, but I
can imagine browser support easing HDF5 adoption for many folks.
Good point. I hope HDFView will eventually let you do that but, in the
absence of that, a browser will most likely be at hand.
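Even without a plugin, plain browsers already get partway there: an HDF5/XML dump can reference an XSL stylesheet via the standard xml-stylesheet processing instruction, and the browser will render the transformed result. The stylesheet name below is hypothetical:

```xml
<?xml-stylesheet type="text/xsl" href="hdf5-browse.xsl"?>
<HDF5-File>
  <!-- ... dump contents rendered through the stylesheet ... -->
</HDF5-File>
```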
From where did you pull the fooT, barG, etc. naming convention?
I find the Hungarian-like notation a bit distracting compared to,
say, just spelling out "Type". There's no need for brevity; it's XML.
No deep philosophical reason here and, no, brevity wasn't the goal.
I just felt odd calling something 'datatypeType'.
There's a certain terminological overload here, because we are dealing with
HDF5 attributes and XML attributes, with HDF5 datatypes and XML Schema types.
Consistency and avoiding ambiguity, maybe at the expense of aesthetics,
were the main goals.
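To illustrate the naming choice (the type names below are examples, not the actual schema):

```xml
<!-- Hungarian-like suffix: avoids the doubled-up word ... -->
<xs:complexType name="datatypeT"> ... </xs:complexType>

<!-- ... that the spelled-out convention would produce: -->
<xs:complexType name="datatypeType"> ... </xs:complexType>
```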
Thanks for those comments and let's keep up the discussion!