ccbunney

Reputation: 2762

NetCDF file - why is file 1/3 size after fixing record dimension?

I am struggling to get to grips with this.

I create a netCDF4 file with the following dimensions and variables (note in particular the unlimited point dimension):

dimensions:
    point = UNLIMITED ; // (275935 currently)
    realization = 24 ;
variables:
    short mod_hs(realization, point) ;
        mod_hs:scale_factor = 0.01 ;
    short mod_ws(realization, point) ;
        mod_ws:scale_factor = 0.01 ;
    short obs_hs(point) ;
        obs_hs:scale_factor = 0.01 ;
    short obs_ws(point) ;
        obs_ws:scale_factor = 0.01 ;
    short fchr(point) ;
    float obs_lat(point) ;
    float obs_lon(point) ;
    double obs_datetime(point) ;

I have a Python program that populates this file with data in a loop (hence the unlimited record dimension - I don't know a priori how big the file will be).

After populating the file, it is 103MB in size.

My issue is that reading data from this file is quite slow. I guess this has something to do with chunking and the unlimited point dimension?

I ran ncks --fix_rec_dmn on the file and (after a lot of churning) it produced a new netCDF file that is only 32MB in size (which is about the right size for the data it contains).

This is a massive difference in size - why is the original file so bloated? Also - accessing the data in this file is orders of magnitude quicker. For example, in Python, to read in the contents of the hs variable takes 2 seconds on the original file and 40 milliseconds on the fixed record dimension file.
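
A quick way to check both of these from Python is to print the chunk shape the library chose for the record variable and time a full read (a sketch; the file path is a placeholder for the 103MB file):

import time

import netCDF4 as nc

# Placeholder path for the file with the unlimited 'point' dimension:
with nc.Dataset("collocations_unlimited.nc", mode="r") as ds:
    v = ds.variables["mod_hs"]

    # Chunk shape chosen by the library, or 'contiguous' if unchunked:
    print("chunking:", v.chunking())

    t0 = time.time()
    data = v[:]  # full read of the variable
    print("read %d values in %.3f s" % (data.size, time.time() - t0))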

The problem I have is that some of my files contain a lot of points and seem to be too big to run ncks on (my machine has 8GB and runs out of memory), so I can't convert all the data to a fixed record dimension.

Can anyone explain why the file sizes are so different and how I can make the original files smaller and more efficient to read?

By the way - I am not using zlib compression (I have opted for scaling floating-point values to short integers).
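
For reference, this is roughly how the scale_factor packing behaves in netCDF4-python (a sketch with a hypothetical file name; auto-scaling is on by default in recent versions, and set_auto_scale makes it explicit):

import numpy as np
import netCDF4 as nc

# Hypothetical demo file:
ds = nc.Dataset("packing_demo.nc", mode='w', format="NETCDF4")
ds.createDimension("point", size=None)
v = ds.createVariable("obs_hs", np.short, dimensions=("point",))
v.scale_factor = 0.01
v.set_auto_scale(True)       # explicit here; the default in recent versions
v[0:3] = [1.23, 4.56, 7.89]  # packed to shorts (value / scale_factor) on write
ds.close()

with nc.Dataset("packing_demo.nc") as ds:
    print(ds.variables["obs_hs"][:])  # unpacked back to floats on read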

Chris

EDIT: My Python code is essentially building up a single time-series file of collocated model and observation data from multiple individual model forecast files over 3 months. My forecast model runs 4 times a day, and I am aggregating 3 months of data, so that is ~120 files.

The program extracts a subset of the forecast period from each file (e.g. T+24h -> T+48h), so it is not a simple matter of concatenating the files.

This is a rough approximation of what my code is doing (it actually reads/writes more variables, but I am just showing 2 here for clarity):

import datetime as dt

import numpy as np
import netCDF4 as nc

# Create output file:
dout = nc.Dataset(fn, mode='w', clobber=True, format="NETCDF4")

dout.createDimension('point', size=None)
dout.createDimension('realization', size=24)

for varname in ['mod_hs', 'mod_ws']:
    v = dout.createVariable(varname, np.short,
            dimensions=('point', 'realization'), zlib=False)
    v.scale_factor = 0.01

# Cycle over dates
date = <some start date>
end_date = <some end date>

# Keep track of record dimension ('point') size:
n = 0

while date < end_date: 
    din = nc.Dataset("<path to input file>", mode='r')
    fchr = din.variables['fchr'][:]

    # get mask for specific forecast hour range
    m = np.logical_and(fchr >= 24, fchr < 48)
    sz = np.count_nonzero(m)

    if sz == 0:
        # Nothing in this forecast hour range; skip to the next file.
        din.close()
        date += dt.timedelta(hours=6)
        continue

    dout.variables['mod_hs'][n:n+sz,:] = din.variables['mod_hs'][:][m,:]
    dout.variables['mod_ws'][n:n+sz,:] = din.variables['mod_wspd'][:][m,:]

    # Increment record dimension count:
    n += sz

    din.close()

    # Go to next file
    date += dt.timedelta(hours=6)

dout.close()

Interestingly, if I make the output file format NETCDF3_CLASSIC rather than NETCDF4, the output is the size that I would expect. The NETCDF4 output seems to be bloated.

Upvotes: 1

Views: 1565

Answers (3)

Charlie Zender

Reputation: 6352

My experience has been that the default chunksize for record dimensions depends on the version of the netCDF library underneath. For 4.3.3.1, it is 524288. 275935 records is about half a record-chunk. ncks automatically chooses (without telling you) more sensible chunksizes than netCDF defaults, so the output is better optimized. I think this is what is happening. See http://nco.sf.net/nco.html#cnk
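
To see which library version (and hence which default chunking policy) you are getting, netCDF4-python can report the underlying C library and the chunk shape it actually assigned (a sketch; the path is a placeholder):

import netCDF4 as nc

# Version of the netCDF C library the Python module was built against:
print("netCDF C library:", nc.getlibversion())

with nc.Dataset("collocations_unlimited.nc", mode="r") as ds:  # placeholder path
    print("mod_hs chunk shape:", ds.variables["mod_hs"].chunking())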

Upvotes: 2

John Caron

Reputation: 1387

I think your problem is that the default chunk size for unlimited dimensions is 1, which creates a huge number of internal HDF5 structures. By setting the chunksize explicitly (obviously ok for unlimited dimensions), the second example does much better in space and time.

Unlimited dimensions require chunking in HDF5/netCDF4, so if you want unlimited dimensions you have to think about chunking performance, as you have discovered.

More here:

https://www.unidata.ucar.edu/software/netcdf/docs/netcdf_perf_chunking.html
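
A minimal sketch of what setting the chunksize explicitly looks like in netCDF4-python (the chunk shape here is just an illustration; the answer below has a fuller working example):

import numpy as np
import netCDF4 as nc

ds = nc.Dataset("chunked_demo.nc", mode='w', format="NETCDF4")  # hypothetical file
ds.createDimension('point', size=None)        # unlimited, so storage must be chunked
ds.createDimension('realization', size=24)
ds.createVariable('mod_hs', np.short,
                  dimensions=('point', 'realization'),
                  chunksizes=(1000, 24))       # explicit chunk shape
ds.close()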

Upvotes: 0

kakk11

Reputation: 918

Please try to provide code that works without modification if possible; I had to edit it to get it working, but it wasn't too difficult.

import netCDF4 as nc
import numpy as np
dout = nc.Dataset('testdset.nc4', mode='w', clobber=True, format="NETCDF4")
dout.createDimension('point', size=None)
dout.createDimension('realization', size=24)
for varname in ['mod_hs', 'mod_ws']:
    # The key change: give each record variable an explicit chunk shape
    # instead of relying on the default for the unlimited dimension.
    v = dout.createVariable(varname, np.short,
            dimensions=('point', 'realization'), zlib=False, chunksizes=[1000, 24])
    v.scale_factor = 0.01
date = 1
end_date = 5000
n = 0
while date < end_date: 
    sz=100
    dout.variables['mod_hs'][n:n+sz,:] = np.ones((sz,24))
    dout.variables['mod_ws'][n:n+sz,:] = np.ones((sz,24))
    n += sz
    date += 1
dout.close()

The main difference is in the createVariable command. For file size, without providing "chunksizes" when creating the variable, I also got a file twice as large compared to when I added it, so for file size it should do the trick. For reading variables from the file, I did not notice any difference actually; maybe I should add more variables? Anyway, it should be clear how to add the chunk size now. You probably need to test a bit to get a good configuration for your problem. Feel free to ask more if it still does not work for you, and if you want to understand more about chunking, read the HDF5 docs.
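
To confirm that the requested chunk shape actually ended up in the file written above, you can read it back (a short usage check):

import netCDF4 as nc

with nc.Dataset('testdset.nc4', mode='r') as ds:
    for name in ('mod_hs', 'mod_ws'):
        print(name, "chunking:", ds.variables[name].chunking())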

Upvotes: 0
