Error when converting a jax numpy pre-trained weight file to an h5 weight file

Hi all,
I have downloaded a jax numpy weight file with the npz suffix, but when I try to convert it to an h5 file I receive an error. Here is my code:


import jax.numpy as jnp
import h5py
import tensorflow as tf


BASE_URL = "https://github.com/faustomorales/vit-keras/releases/download/dl"

size = "B_16"
weights = "imagenet21k"
fname = f"ViT-{size}_{weights}.npz"
origin = f"{BASE_URL}/{fname}"

# save the weight file under the local path "~/.keras/weights/"
local_filepath = tf.keras.utils.get_file(fname, origin, cache_subdir="weights")

# Load the npz and convert to h5
jax_file = jnp.load('ViT-B_16_imagenet21k.npz')
with h5py.File('ViT-B_16_imagenet21k.h5', 'w') as hf:
    hf.create_dataset('weights', data=jax_file)

The npz file is downloaded to the local path "~/.keras/weights/".

The error I get is:


*** TypeError: No conversion path for dtype: dtype('<U71')
Traceback (most recent call last):
  File "/home/javaneh/jb_env/lib/python3.8/site-packages/h5py/_hl/group.py", line 161, in create_dataset
    dsid = dataset.make_new_dset(group, shape, dtype, data, name, **kwds)
  File "/home/javaneh/jb_env/lib/python3.8/site-packages/h5py/_hl/dataset.py", line 88, in make_new_dset
    tid = h5t.py_create(dtype, logical=1)
  File "h5py/h5t.pyx", line 1663, in h5py.h5t.py_create
  File "h5py/h5t.pyx", line 1687, in h5py.h5t.py_create
  File "h5py/h5t.pyx", line 1753, in h5py.h5t.py_create

Note 1: the output of dir(jax_file) is:


dir(jax_file)
['__abstractmethods__', '__class__', '__contains__', '__del__', '__delattr__', '__dict__', '__dir__', '__doc__', '__enter__', '__eq__', '__exit__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__reversed__', '__setattr__', '__sizeof__', '__slots__', '__str__', '__subclasshook__', '__weakref__', '_abc_impl', '_files', 'allow_pickle', 'close', 'f', 'fid', 'files', 'get', 'items', 'keys', 'pickle_kwargs', 'values', 'zip']

Note 2: the type of jax_file is:


type(jax_file)
<class 'numpy.lib.npyio.NpzFile'>

jax_file itself has no dtype attribute:

 jax_file.dtype
*** AttributeError: 'NpzFile' object has no attribute 'dtype'

Note 3: my TensorFlow version is 2.9.1.

The challenge is that the result must be a single, unified h5 file (not multiple files), because I need to load the h5 file once and feed it into my TensorFlow model.
jax_file.files contains more than 200 items; how can I combine all of them into one weight file in h5 format?

Is there any way to create such a unified h5 file from the jax numpy file?

Any help will be appreciated.

What does jax_file.dtype say?

-Aleksandar

 jax_file.dtype
*** AttributeError: 'NpzFile' object has no attribute 'dtype'

You cannot store jax_file because it is not a NumPy array. HDF5 datasets are equivalent to NumPy arrays.
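The individual entries of the NpzFile are NumPy arrays, though, so each of them can be stored. As a tiny sketch (the key name 'some_array' is just a placeholder; use one of the actual names listed in jax_file.files):


import numpy as np
import h5py

jax_file = np.load('ViT-B_16_imagenet21k.npz')

arr = jax_file['some_array']    # a plain numpy.ndarray with a real dtype
print(arr.dtype, arr.shape)

with h5py.File('one_array.h5', 'w') as hf:
    hf.create_dataset('some_array', data=arr)   # storing one array works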

Can you list the content of the jax_file object?

-Aleksandar

The dir(jax_file) output is already shown in my question above; the output of jax_file.__dict__ is as follows:

jax_file.__dict__
{'_files': ['Transformer/encoder_norm/bias.npy', 'Transformer/encoder_norm/scale.npy', 'Transformer/encoderblock_0/LayerNorm_0/bias.npy', 'Transformer/encoderblock_0/LayerNorm_0/scale.npy', 'Transformer/encoderblock_0/LayerNorm_2/bias.npy', 'Transformer/encoderblock_0/LayerNorm_2/scale.npy', 'Transformer/encoderblock_0/MlpBlock_3/Dense_0/bias.npy', 'Transformer/encoderblock_0/MlpBlock_3/Dense_0/kernel.npy', 'Transformer/encoderblock_0/MlpBlock_3/Dense_1/bias.npy', 'Transformer/encoderblock_0/MlpBlock_3/Dense_1/kernel.npy', 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/key/bias.npy', 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/key/kernel.npy', 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/out/bias.npy', 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/out/kernel.npy', 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/query/bias.npy', 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/query/kernel.npy', 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/value/bias.npy', 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/value/kernel.npy', 'Transformer/encoderblock_1/LayerNorm_0/bias.npy', 'Transformer/encoderblock_1/LayerNorm_0/scale.npy', 'Transformer/encoderblock_1/LayerNorm_2/bias.npy', 'Transformer/encoderblock_1/LayerNorm_2/scale.npy', 'Transformer/encoderblock_1/MlpBlock_3/Dense_0/bias.npy', 'Transformer/encoderblock_1/MlpBlock_3/Dense_0/kernel.npy', 'Transformer/encoderblock_1/MlpBlock_3/Dense_1/bias.npy', 'Transformer/encoderblock_1/MlpBlock_3/Dense_1/kernel.npy', 'Transformer/encoderblock_1/MultiHeadDotProductAttention_1/key/bias.npy', 'Transformer/encoderblock_1/MultiHeadDotProductAttention_1/key/kernel.npy', 'Transformer/encoderblock_1/MultiHeadDotProductAttention_1/out/bias.npy', 'Transformer/encoderblock_1/MultiHeadDotProductAttention_1/out/kernel.npy', 'Transformer/encoderblock_1/MultiHeadDotProductAttention_1/query/bias.npy', 'Transformer/encoderblock_1/MultiHeadDotProductAttention_1/query/kernel.npy', 'Transformer/encoderblock_1/MultiHeadDotProductAttention_1/value/bias.npy', 'Transformer/encoderblock_1/MultiHeadDotProductAttention_1/value/kernel.npy', 'Transformer/encoderblock_10/LayerNorm_0/bias.npy', 'Transformer/encoderblock_10/LayerNorm_0/scale.npy', 'Transformer/encoderblock_10/LayerNorm_2/bias.npy', 'Transformer/encoderblock_10/LayerNorm_2/scale.npy', 'Transformer/encoderblock_10/MlpBlock_3/Dense_0/bias.npy', 'Transformer/encoderblock_10/MlpBlock_3/Dense_0/kernel.npy', 'Transformer/encoderblock_10/MlpBlock_3/Dense_1/bias.npy', 'Transformer/encoderblock_10/MlpBlock_3/Dense_1/kernel.npy', 'Transformer/encoderblock_10/MultiHeadDotProductAttention_1/key/bias.npy', 'Transformer/encoderblock_10/MultiHeadDotProductAttention_1/key/kernel.npy', 'Transformer/encoderblock_10/MultiHeadDotProductAttention_1/out/bias.npy', 'Transformer/encoderblock_10/MultiHeadDotProductAttention_1/out/kernel.npy', 'Transformer/encoderblock_10/MultiHeadDotProductAttention_1/query/bias.npy', 'Transformer/encoderblock_10/MultiHeadDotProductAttention_1/query/kernel.npy', 'Transformer/encoderblock_10/MultiHeadDotProductAttention_1/value/bias.npy', 'Transformer/encoderblock_10/MultiHeadDotProductAttention_1/value/kernel.npy', 'Transformer/encoderblock_11/LayerNorm_0/bias.npy', 'Transformer/encoderblock_11/LayerNorm_0/scale.npy', 'Transformer/encoderblock_11/LayerNorm_2/bias.npy', 'Transformer/encoderblock_11/LayerNorm_2/scale.npy', 'Transformer/encoderblock_11/MlpBlock_3/Dense_0/bias.npy', 
'Transformer/encoderblock_11/MlpBlock_3/Dense_0/kernel.npy', 'Transformer/encoderblock_11/MlpBlock_3/Dense_1/bias.npy', 'Transformer/encoderblock_11/MlpBlock_3/Dense_1/kernel.npy', 'Transformer/encoderblock_11/MultiHeadDotProductAttention_1/key/bias.npy', 'Transformer/encoderblock_11/MultiHeadDotProductAttention_1/key/kernel.npy', 'Transformer/encoderblock_11/MultiHeadDotProductAttention_1/out/bias.npy', 'Transformer/encoderblock_11/MultiHeadDotProductAttention_1/out/kernel.npy', 'Transformer/encoderblock_11/MultiHeadDotProductAttention_1/query/bias.npy', 'Transformer/encoderblock_11/MultiHeadDotProductAttention_1/query/kernel.npy', 'Transformer/encoderblock_11/MultiHeadDotProductAttention_1/value/bias.npy', 'Transformer/encoderblock_11/MultiHeadDotProductAttention_1/value/kernel.npy', 'Transformer/encoderblock_2/LayerNorm_0/bias.npy', 'Transformer/encoderblock_2/LayerNorm_0/scale.npy', 'Transformer/encoderblock_2/LayerNorm_2/bias.npy', 'Transformer/encoderblock_2/LayerNorm_2/scale.npy', 'Transformer/encoderblock_2/MlpBlock_3/Dense_0/bias.npy', 'Transformer/encoderblock_2/MlpBlock_3/Dense_0/kernel.npy', 'Transformer/encoderblock_2/MlpBlock_3/Dense_1/bias.npy', 'Transformer/encoderblock_2/MlpBlock_3/Dense_1/kernel.npy', 'Transformer/encoderblock_2/MultiHeadDotProductAttention_1/key/bias.npy', 'Transformer/encoderblock_2/MultiHeadDotProductAttention_1/key/kernel.npy', 'Transformer/encoderblock_2/MultiHeadDotProductAttention_1/out/bias.npy', 'Transformer/encoderblock_2/MultiHeadDotProductAttention_1/out/kernel.npy', 'Transformer/encoderblock_2/MultiHeadDotProductAttention_1/query/bias.npy', 'Transformer/encoderblock_2/MultiHeadDotProductAttention_1/query/kernel.npy', 'Transformer/encoderblock_2/MultiHeadDotProductAttention_1/value/bias.npy', 'Transformer/encoderblock_2/MultiHeadDotProductAttention_1/value/kernel.npy', 'Transformer/encoderblock_3/LayerNorm_0/bias.npy', 'Transformer/encoderblock_3/LayerNorm_0/scale.npy', 'Transformer/encoderblock_3/LayerNorm_2/bias.npy', 'Transformer/encoderblock_3/LayerNorm_2/scale.npy', 'Transformer/encoderblock_3/MlpBlock_3/Dense_0/bias.npy', 'Transformer/encoderblock_3/MlpBlock_3/Dense_0/kernel.npy', 'Transformer/encoderblock_3/MlpBlock_3/Dense_1/bias.npy', 'Transformer/encoderblock_3/MlpBlock_3/Dense_1/kernel.npy', 'Transformer/encoderblock_3/MultiHeadDotProductAttention_1/key/bias.npy', 'Transformer/encoderblock_3/MultiHeadDotProductAttention_1/key/kernel.npy', 'Transformer/encoderblock_3/MultiHeadDotProductAttention_1/out/bias.npy', 'Transformer/encoderblock_3/MultiHeadDotProductAttention_1/out/kernel.npy', 'Transformer/encoderblock_3/MultiHeadDotProductAttention_1/query/bias.npy', 'Transformer/encoderblock_3/MultiHeadDotProductAttention_1/query/kernel.npy', 'Transformer/encoderblock_3/MultiHeadDotProductAttention_1/value/bias.npy', 'Transformer/encoderblock_3/MultiHeadDotProductAttention_1/value/kernel.npy', 'Transformer/encoderblock_4/LayerNorm_0/bias.npy', 'Transformer/encoderblock_4/LayerNorm_0/scale.npy', 'Transformer/encoderblock_4/LayerNorm_2/bias.npy', 'Transformer/encoderblock_4/LayerNorm_2/scale.npy', 'Transformer/encoderblock_4/MlpBlock_3/Dense_0/bias.npy', 'Transformer/encoderblock_4/MlpBlock_3/Dense_0/kernel.npy', 'Transformer/encoderblock_4/MlpBlock_3/Dense_1/bias.npy', 'Transformer/encoderblock_4/MlpBlock_3/Dense_1/kernel.npy', 'Transformer/encoderblock_4/MultiHeadDotProductAttention_1/key/bias.npy', 'Transformer/encoderblock_4/MultiHeadDotProductAttention_1/key/kernel.npy', 
'Transformer/encoderblock_4/MultiHeadDotProductAttention_1/out/bias.npy', 'Transformer/encoderblock_4/MultiHeadDotProductAttention_1/out/kernel.npy', 'Transformer/encoderblock_4/MultiHeadDotProductAttention_1/query/bias.npy', 'Transformer/encoderblock_4/MultiHeadDotProductAttention_1/query/kernel.npy', 'Transformer/encoderblock_4/MultiHeadDotProductAttention_1/value/bias.npy', 'Transformer/encoderblock_4/MultiHeadDotProductAttention_1/value/kernel.npy', 'Transformer/encoderblock_5/LayerNorm_0/bias.npy', 'Transformer/encoderblock_5/LayerNorm_0/scale.npy', 'Transformer/encoderblock_5/LayerNorm_2/bias.npy', 'Transformer/encoderblock_5/LayerNorm_2/scale.npy', 'Transformer/encoderblock_5/MlpBlock_3/Dense_0/bias.npy', 'Transformer/encoderblock_5/MlpBlock_3/Dense_0/kernel.npy', 'Transformer/encoderblock_5/MlpBlock_3/Dense_1/bias.npy', 'Transformer/encoderblock_5/MlpBlock_3/Dense_1/kernel.npy', 'Transformer/encoderblock_5/MultiHeadDotProductAttention_1/key/bias.npy', 'Transformer/encoderblock_5/MultiHeadDotProductAttention_1/key/kernel.npy', 'Transformer/encoderblock_5/MultiHeadDotProductAttention_1/out/bias.npy', 'Transformer/encoderblock_5/MultiHeadDotProductAttention_1/out/kernel.npy', 'Transformer/encoderblock_5/MultiHeadDotProductAttention_1/query/bias.npy', 'Transformer/encoderblock_5/MultiHeadDotProductAttention_1/query/kernel.npy', 'Transformer/encoderblock_5/MultiHeadDotProductAttention_1/value/bias.npy', 'Transformer/encoderblock_5/MultiHeadDotProductAttention_1/value/kernel.npy', 'Transformer/encoderblock_6/LayerNorm_0/bias.npy', 'Transformer/encoderblock_6/LayerNorm_0/scale.npy', 'Transformer/encoderblock_6/LayerNorm_2/bias.npy', 'Transformer/encoderblock_6/LayerNorm_2/scale.npy', 'Transformer/encoderblock_6/MlpBlock_3/Dense_0/bias.npy', 'Transformer/encoderblock_6/MlpBlock_3/Dense_0/kernel.npy', 'Transformer/encoderblock_6/MlpBlock_3/Dense_1/bias.npy', 'Transformer/encoderblock_6/MlpBlock_3/Dense_1/kernel.npy', 'Transformer/encoderblock_6/MultiHeadDotProductAttention_1/key/bias.npy', 'Transformer/encoderblock_6/MultiHeadDotProductAttention_1/key/kernel.npy', 'Transformer/encoderblock_6/MultiHeadDotProductAttention_1/out/bias.npy', 'Transformer/encoderblock_6/MultiHeadDotProductAttention_1/out/kernel.npy', 'Transformer/encoderblock_6/MultiHeadDotProductAttention_1/query/bias.npy', 'Transformer/encoderblock_6/MultiHeadDotProductAttention_1/query/kernel.npy', 'Transformer/encoderblock_6/MultiHeadDotProductAttention_1/value/bias.npy', 'Transformer/encoderblock_6/MultiHeadDotProductAttention_1/value/kernel.npy', 'Transformer/encoderblock_7/LayerNorm_0/bias.npy', 'Transformer/encoderblock_7/LayerNorm_0/scale.npy', 'Transformer/encoderblock_7/LayerNorm_2/bias.npy', 'Transformer/encoderblock_7/LayerNorm_2/scale.npy', 'Transformer/encoderblock_7/MlpBlock_3/Dense_0/bias.npy', 'Transformer/encoderblock_7/MlpBlock_3/Dense_0/kernel.npy', 'Transformer/encoderblock_7/MlpBlock_3/Dense_1/bias.npy', 'Transformer/encoderblock_7/MlpBlock_3/Dense_1/kernel.npy', 'Transformer/encoderblock_7/MultiHeadDotProductAttention_1/key/bias.npy', 'Transformer/encoderblock_7/MultiHeadDotProductAttention_1/key/kernel.npy', 'Transformer/encoderblock_7/MultiHeadDotProductAttention_1/out/bias.npy', 'Transformer/encoderblock_7/MultiHeadDotProductAttention_1/out/kernel.npy', 'Transformer/encoderblock_7/MultiHeadDotProductAttention_1/query/bias.npy', 'Transformer/encoderblock_7/MultiHeadDotProductAttention_1/query/kernel.npy', 'Transformer/encoderblock_7/MultiHeadDotProductAttention_1/value/bias.npy', 
'Transformer/encoderblock_7/MultiHeadDotProductAttention_1/value/kernel.npy', 'Transformer/encoderblock_8/LayerNorm_0/bias.npy', 'Transformer/encoderblock_8/LayerNorm_0/scale.npy', 'Transformer/encoderblock_8/LayerNorm_2/bias.npy', 'Transformer/encoderblock_8/LayerNorm_2/scale.npy', 'Transformer/encoderblock_8/MlpBlock_3/Dense_0/bias.npy', 'Transformer/encoderblock_8/MlpBlock_3/Dense_0/kernel.npy', 'Transformer/encoderblock_8/MlpBlock_3/Dense_1/bias.npy', 'Transformer/encoderblock_8/MlpBlock_3/Dense_1/kernel.npy', 'Transformer/encoderblock_8/MultiHeadDotProductAttention_1/key/bias.npy', 'Transformer/encoderblock_8/MultiHeadDotProductAttention_1/key/kernel.npy', 'Transformer/encoderblock_8/MultiHeadDotProductAttention_1/out/bias.npy', 'Transformer/encoderblock_8/MultiHeadDotProductAttention_1/out/kernel.npy', 'Transformer/encoderblock_8/MultiHeadDotProductAttention_1/query/bias.npy', 'Transformer/encoderblock_8/MultiHeadDotProductAttention_1/query/kernel.npy', 'Transformer/encoderblock_8/MultiHeadDotProductAttention_1/value/bias.npy', 'Transformer/encoderblock_8/MultiHeadDotProductAttention_1/value/kernel.npy', 'Transformer/encoderblock_9/LayerNorm_0/bias.npy', 'Transformer/encoderblock_9/LayerNorm_0/scale.npy', 'Transformer/encoderblock_9/LayerNorm_2/bias.npy', 'Transformer/encoderblock_9/LayerNorm_2/scale.npy', 'Transformer/encoderblock_9/MlpBlock_3/Dense_0/bias.npy', 'Transformer/encoderblock_9/MlpBlock_3/Dense_0/kernel.npy', 'Transformer/encoderblock_9/MlpBlock_3/Dense_1/bias.npy', 'Transformer/encoderblock_9/MlpBlock_3/Dense_1/kernel.npy', 'Transformer/encoderblock_9/MultiHeadDotProductAttention_1/key/bias.npy', 'Transformer/encoderblock_9/MultiHeadDotProductAttention_1/key/kernel.npy', 'Transformer/encoderblock_9/MultiHeadDotProductAttention_1/out/bias.npy', 'Transformer/encoderblock_9/MultiHeadDotProductAttention_1/out/kernel.npy', 'Transformer/encoderblock_9/MultiHeadDotProductAttention_1/query/bias.npy', 'Transformer/encoderblock_9/MultiHeadDotProductAttention_1/query/kernel.npy', 'Transformer/encoderblock_9/MultiHeadDotProductAttention_1/value/bias.npy', 'Transformer/encoderblock_9/MultiHeadDotProductAttention_1/value/kernel.npy', 'Transformer/posembed_input/pos_embedding.npy', 'cls.npy', 'embedding/bias.npy', 'embedding/kernel.npy', 'head/bias.npy', 'head/kernel.npy', 'pre_logits/bias.npy', 'pre_logits/kernel.npy'], 'files': ['Transformer/encoder_norm/bias', 'Transformer/encoder_norm/scale', 'Transformer/encoderblock_0/LayerNorm_0/bias', 'Transformer/encoderblock_0/LayerNorm_0/scale', 'Transformer/encoderblock_0/LayerNorm_2/bias', 'Transformer/encoderblock_0/LayerNorm_2/scale', 'Transformer/encoderblock_0/MlpBlock_3/Dense_0/bias', 'Transformer/encoderblock_0/MlpBlock_3/Dense_0/kernel', 'Transformer/encoderblock_0/MlpBlock_3/Dense_1/bias', 'Transformer/encoderblock_0/MlpBlock_3/Dense_1/kernel', 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/key/bias', 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/key/kernel', 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/out/bias', 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/out/kernel', 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/query/bias', 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/query/kernel', 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/value/bias', 'Transformer/encoderblock_0/MultiHeadDotProductAttention_1/value/kernel', 'Transformer/encoderblock_1/LayerNorm_0/bias', 'Transformer/encoderblock_1/LayerNorm_0/scale', 
'Transformer/encoderblock_1/LayerNorm_2/bias', 'Transformer/encoderblock_1/LayerNorm_2/scale', 'Transformer/encoderblock_1/MlpBlock_3/Dense_0/bias', 'Transformer/encoderblock_1/MlpBlock_3/Dense_0/kernel', 'Transformer/encoderblock_1/MlpBlock_3/Dense_1/bias', 'Transformer/encoderblock_1/MlpBlock_3/Dense_1/kernel', 'Transformer/encoderblock_1/MultiHeadDotProductAttention_1/key/bias', 'Transformer/encoderblock_1/MultiHeadDotProductAttention_1/key/kernel', 'Transformer/encoderblock_1/MultiHeadDotProductAttention_1/out/bias', 'Transformer/encoderblock_1/MultiHeadDotProductAttention_1/out/kernel', 'Transformer/encoderblock_1/MultiHeadDotProductAttention_1/query/bias', 'Transformer/encoderblock_1/MultiHeadDotProductAttention_1/query/kernel', 'Transformer/encoderblock_1/MultiHeadDotProductAttention_1/value/bias', 'Transformer/encoderblock_1/MultiHeadDotProductAttention_1/value/kernel', 'Transformer/encoderblock_10/LayerNorm_0/bias', 'Transformer/encoderblock_10/LayerNorm_0/scale', 'Transformer/encoderblock_10/LayerNorm_2/bias', 'Transformer/encoderblock_10/LayerNorm_2/scale', 'Transformer/encoderblock_10/MlpBlock_3/Dense_0/bias', 'Transformer/encoderblock_10/MlpBlock_3/Dense_0/kernel', 'Transformer/encoderblock_10/MlpBlock_3/Dense_1/bias', 'Transformer/encoderblock_10/MlpBlock_3/Dense_1/kernel', 'Transformer/encoderblock_10/MultiHeadDotProductAttention_1/key/bias', 'Transformer/encoderblock_10/MultiHeadDotProductAttention_1/key/kernel', 'Transformer/encoderblock_10/MultiHeadDotProductAttention_1/out/bias', 'Transformer/encoderblock_10/MultiHeadDotProductAttention_1/out/kernel', 'Transformer/encoderblock_10/MultiHeadDotProductAttention_1/query/bias', 'Transformer/encoderblock_10/MultiHeadDotProductAttention_1/query/kernel', 'Transformer/encoderblock_10/MultiHeadDotProductAttention_1/value/bias', 'Transformer/encoderblock_10/MultiHeadDotProductAttention_1/value/kernel', 'Transformer/encoderblock_11/LayerNorm_0/bias', 'Transformer/encoderblock_11/LayerNorm_0/scale', 'Transformer/encoderblock_11/LayerNorm_2/bias', 'Transformer/encoderblock_11/LayerNorm_2/scale', 'Transformer/encoderblock_11/MlpBlock_3/Dense_0/bias', 'Transformer/encoderblock_11/MlpBlock_3/Dense_0/kernel', 'Transformer/encoderblock_11/MlpBlock_3/Dense_1/bias', 'Transformer/encoderblock_11/MlpBlock_3/Dense_1/kernel', 'Transformer/encoderblock_11/MultiHeadDotProductAttention_1/key/bias', 'Transformer/encoderblock_11/MultiHeadDotProductAttention_1/key/kernel', 'Transformer/encoderblock_11/MultiHeadDotProductAttention_1/out/bias', 'Transformer/encoderblock_11/MultiHeadDotProductAttention_1/out/kernel', 'Transformer/encoderblock_11/MultiHeadDotProductAttention_1/query/bias', 'Transformer/encoderblock_11/MultiHeadDotProductAttention_1/query/kernel', 'Transformer/encoderblock_11/MultiHeadDotProductAttention_1/value/bias', 'Transformer/encoderblock_11/MultiHeadDotProductAttention_1/value/kernel', 'Transformer/encoderblock_2/LayerNorm_0/bias', 'Transformer/encoderblock_2/LayerNorm_0/scale', 'Transformer/encoderblock_2/LayerNorm_2/bias', 'Transformer/encoderblock_2/LayerNorm_2/scale', 'Transformer/encoderblock_2/MlpBlock_3/Dense_0/bias', 'Transformer/encoderblock_2/MlpBlock_3/Dense_0/kernel', 'Transformer/encoderblock_2/MlpBlock_3/Dense_1/bias', 'Transformer/encoderblock_2/MlpBlock_3/Dense_1/kernel', 'Transformer/encoderblock_2/MultiHeadDotProductAttention_1/key/bias', 'Transformer/encoderblock_2/MultiHeadDotProductAttention_1/key/kernel', 'Transformer/encoderblock_2/MultiHeadDotProductAttention_1/out/bias', 
'Transformer/encoderblock_2/MultiHeadDotProductAttention_1/out/kernel', 'Transformer/encoderblock_2/MultiHeadDotProductAttention_1/query/bias', 'Transformer/encoderblock_2/MultiHeadDotProductAttention_1/query/kernel', 'Transformer/encoderblock_2/MultiHeadDotProductAttention_1/value/bias', 'Transformer/encoderblock_2/MultiHeadDotProductAttention_1/value/kernel', 'Transformer/encoderblock_3/LayerNorm_0/bias', 'Transformer/encoderblock_3/LayerNorm_0/scale', 'Transformer/encoderblock_3/LayerNorm_2/bias', 'Transformer/encoderblock_3/LayerNorm_2/scale', 'Transformer/encoderblock_3/MlpBlock_3/Dense_0/bias', 'Transformer/encoderblock_3/MlpBlock_3/Dense_0/kernel', 'Transformer/encoderblock_3/MlpBlock_3/Dense_1/bias', 'Transformer/encoderblock_3/MlpBlock_3/Dense_1/kernel', 'Transformer/encoderblock_3/MultiHeadDotProductAttention_1/key/bias', 'Transformer/encoderblock_3/MultiHeadDotProductAttention_1/key/kernel', 'Transformer/encoderblock_3/MultiHeadDotProductAttention_1/out/bias', 'Transformer/encoderblock_3/MultiHeadDotProductAttention_1/out/kernel', 'Transformer/encoderblock_3/MultiHeadDotProductAttention_1/query/bias', 'Transformer/encoderblock_3/MultiHeadDotProductAttention_1/query/kernel', 'Transformer/encoderblock_3/MultiHeadDotProductAttention_1/value/bias', 'Transformer/encoderblock_3/MultiHeadDotProductAttention_1/value/kernel', 'Transformer/encoderblock_4/LayerNorm_0/bias', 'Transformer/encoderblock_4/LayerNorm_0/scale', 'Transformer/encoderblock_4/LayerNorm_2/bias', 'Transformer/encoderblock_4/LayerNorm_2/scale', 'Transformer/encoderblock_4/MlpBlock_3/Dense_0/bias', 'Transformer/encoderblock_4/MlpBlock_3/Dense_0/kernel', 'Transformer/encoderblock_4/MlpBlock_3/Dense_1/bias', 'Transformer/encoderblock_4/MlpBlock_3/Dense_1/kernel', 'Transformer/encoderblock_4/MultiHeadDotProductAttention_1/key/bias', 'Transformer/encoderblock_4/MultiHeadDotProductAttention_1/key/kernel', 'Transformer/encoderblock_4/MultiHeadDotProductAttention_1/out/bias', 'Transformer/encoderblock_4/MultiHeadDotProductAttention_1/out/kernel', 'Transformer/encoderblock_4/MultiHeadDotProductAttention_1/query/bias', 'Transformer/encoderblock_4/MultiHeadDotProductAttention_1/query/kernel', 'Transformer/encoderblock_4/MultiHeadDotProductAttention_1/value/bias', 'Transformer/encoderblock_4/MultiHeadDotProductAttention_1/value/kernel', 'Transformer/encoderblock_5/LayerNorm_0/bias', 'Transformer/encoderblock_5/LayerNorm_0/scale', 'Transformer/encoderblock_5/LayerNorm_2/bias', 'Transformer/encoderblock_5/LayerNorm_2/scale', 'Transformer/encoderblock_5/MlpBlock_3/Dense_0/bias', 'Transformer/encoderblock_5/MlpBlock_3/Dense_0/kernel', 'Transformer/encoderblock_5/MlpBlock_3/Dense_1/bias', 'Transformer/encoderblock_5/MlpBlock_3/Dense_1/kernel', 'Transformer/encoderblock_5/MultiHeadDotProductAttention_1/key/bias', 'Transformer/encoderblock_5/MultiHeadDotProductAttention_1/key/kernel', 'Transformer/encoderblock_5/MultiHeadDotProductAttention_1/out/bias', 'Transformer/encoderblock_5/MultiHeadDotProductAttention_1/out/kernel', 'Transformer/encoderblock_5/MultiHeadDotProductAttention_1/query/bias', 'Transformer/encoderblock_5/MultiHeadDotProductAttention_1/query/kernel', 'Transformer/encoderblock_5/MultiHeadDotProductAttention_1/value/bias', 'Transformer/encoderblock_5/MultiHeadDotProductAttention_1/value/kernel', 'Transformer/encoderblock_6/LayerNorm_0/bias', 'Transformer/encoderblock_6/LayerNorm_0/scale', 'Transformer/encoderblock_6/LayerNorm_2/bias', 'Transformer/encoderblock_6/LayerNorm_2/scale', 
'Transformer/encoderblock_6/MlpBlock_3/Dense_0/bias', 'Transformer/encoderblock_6/MlpBlock_3/Dense_0/kernel', 'Transformer/encoderblock_6/MlpBlock_3/Dense_1/bias', 'Transformer/encoderblock_6/MlpBlock_3/Dense_1/kernel', 'Transformer/encoderblock_6/MultiHeadDotProductAttention_1/key/bias', 'Transformer/encoderblock_6/MultiHeadDotProductAttention_1/key/kernel', 'Transformer/encoderblock_6/MultiHeadDotProductAttention_1/out/bias', 'Transformer/encoderblock_6/MultiHeadDotProductAttention_1/out/kernel', 'Transformer/encoderblock_6/MultiHeadDotProductAttention_1/query/bias', 'Transformer/encoderblock_6/MultiHeadDotProductAttention_1/query/kernel', 'Transformer/encoderblock_6/MultiHeadDotProductAttention_1/value/bias', 'Transformer/encoderblock_6/MultiHeadDotProductAttention_1/value/kernel', 'Transformer/encoderblock_7/LayerNorm_0/bias', 'Transformer/encoderblock_7/LayerNorm_0/scale', 'Transformer/encoderblock_7/LayerNorm_2/bias', 'Transformer/encoderblock_7/LayerNorm_2/scale', 'Transformer/encoderblock_7/MlpBlock_3/Dense_0/bias', 'Transformer/encoderblock_7/MlpBlock_3/Dense_0/kernel', 'Transformer/encoderblock_7/MlpBlock_3/Dense_1/bias', 'Transformer/encoderblock_7/MlpBlock_3/Dense_1/kernel', 'Transformer/encoderblock_7/MultiHeadDotProductAttention_1/key/bias', 'Transformer/encoderblock_7/MultiHeadDotProductAttention_1/key/kernel', 'Transformer/encoderblock_7/MultiHeadDotProductAttention_1/out/bias', 'Transformer/encoderblock_7/MultiHeadDotProductAttention_1/out/kernel', 'Transformer/encoderblock_7/MultiHeadDotProductAttention_1/query/bias', 'Transformer/encoderblock_7/MultiHeadDotProductAttention_1/query/kernel', 'Transformer/encoderblock_7/MultiHeadDotProductAttention_1/value/bias', 'Transformer/encoderblock_7/MultiHeadDotProductAttention_1/value/kernel', 'Transformer/encoderblock_8/LayerNorm_0/bias', 'Transformer/encoderblock_8/LayerNorm_0/scale', 'Transformer/encoderblock_8/LayerNorm_2/bias', 'Transformer/encoderblock_8/LayerNorm_2/scale', 'Transformer/encoderblock_8/MlpBlock_3/Dense_0/bias', 'Transformer/encoderblock_8/MlpBlock_3/Dense_0/kernel', 'Transformer/encoderblock_8/MlpBlock_3/Dense_1/bias', 'Transformer/encoderblock_8/MlpBlock_3/Dense_1/kernel', 'Transformer/encoderblock_8/MultiHeadDotProductAttention_1/key/bias', 'Transformer/encoderblock_8/MultiHeadDotProductAttention_1/key/kernel', 'Transformer/encoderblock_8/MultiHeadDotProductAttention_1/out/bias', 'Transformer/encoderblock_8/MultiHeadDotProductAttention_1/out/kernel', 'Transformer/encoderblock_8/MultiHeadDotProductAttention_1/query/bias', 'Transformer/encoderblock_8/MultiHeadDotProductAttention_1/query/kernel', 'Transformer/encoderblock_8/MultiHeadDotProductAttention_1/value/bias', 'Transformer/encoderblock_8/MultiHeadDotProductAttention_1/value/kernel', 'Transformer/encoderblock_9/LayerNorm_0/bias', 'Transformer/encoderblock_9/LayerNorm_0/scale', 'Transformer/encoderblock_9/LayerNorm_2/bias', 'Transformer/encoderblock_9/LayerNorm_2/scale', 'Transformer/encoderblock_9/MlpBlock_3/Dense_0/bias', 'Transformer/encoderblock_9/MlpBlock_3/Dense_0/kernel', 'Transformer/encoderblock_9/MlpBlock_3/Dense_1/bias', 'Transformer/encoderblock_9/MlpBlock_3/Dense_1/kernel', 'Transformer/encoderblock_9/MultiHeadDotProductAttention_1/key/bias', 'Transformer/encoderblock_9/MultiHeadDotProductAttention_1/key/kernel', 'Transformer/encoderblock_9/MultiHeadDotProductAttention_1/out/bias', 'Transformer/encoderblock_9/MultiHeadDotProductAttention_1/out/kernel', 'Transformer/encoderblock_9/MultiHeadDotProductAttention_1/query/bias', 
'Transformer/encoderblock_9/MultiHeadDotProductAttention_1/query/kernel', 'Transformer/encoderblock_9/MultiHeadDotProductAttention_1/value/bias', 'Transformer/encoderblock_9/MultiHeadDotProductAttention_1/value/kernel', 'Transformer/posembed_input/pos_embedding', 'cls', 'embedding/bias', 'embedding/kernel', 'head/bias', 'head/kernel', 'pre_logits/bias', 'pre_logits/kernel'], 'allow_pickle': False, 'pickle_kwargs': {'encoding': 'ASCII', 'fix_imports': True}, 'zip': <zipfile.ZipFile file=<_io.BufferedReader name='ViT-B_16_imagenet21k.npz'> mode='r'>, 'f': <numpy.lib.npyio.BagObj object at 0x7f86359bd790>, 'fid': <_io.BufferedReader name='ViT-B_16_imagenet21k.npz'>}

You have a number of NumPy arrays in the jax_file object, and for each of them you will need a separate hf.create_dataset(<array_name>, data=<numpy array>) statement. See the Notes section of https://numpy.org/doc/stable/reference/generated/numpy.savez.html for how to access the NumPy arrays inside the jax_file object.
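For example, a minimal sketch of such a loop (assuming the npz file is in the current directory; adjust the paths and output filename to your setup):


import numpy as np
import h5py

# np.load on an .npz archive returns an NpzFile object that maps
# array names (e.g. 'embedding/kernel') to NumPy arrays.
jax_file = np.load('ViT-B_16_imagenet21k.npz')

# Write every array into one unified HDF5 file, one dataset per array.
# h5py treats '/' in a dataset name as a path, so the nested names from
# the npz archive become nested groups inside the single .h5 file.
with h5py.File('ViT-B_16_imagenet21k.h5', 'w') as hf:
    for name in jax_file.files:
        hf.create_dataset(name, data=jax_file[name])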

Note that using the create_dataset() method without any other storage settings may produce an HDF5 file with poor performance. That may be acceptable if the files will only be used by you or a small group, but I recommend learning more about HDF5 storage options if your files will be shared more widely.
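For instance, per-dataset chunking and compression can be requested when each dataset is created; a sketch of that idea (the gzip level 4 is only an illustrative value, not a recommendation):


with h5py.File('ViT-B_16_imagenet21k.h5', 'w') as hf:
    for name in jax_file.files:
        # chunks=True lets h5py pick a chunk shape; compression requires
        # chunked storage, so the two options go together.
        hf.create_dataset(name, data=jax_file[name], chunks=True,
                          compression='gzip', compression_opts=4)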

-Aleksandar