Codebase list netcdf4-python / 55c8736
New upstream version 1.3.1 Bas Couwenberg 6 years ago
18 changed file(s) with 601 addition(s) and 127 deletion(s).
1010
1111 env:
1212 global:
13 - DEPENDS="numpy cython setuptools==18.0.1"
13 - DEPENDS="numpy>=1.9.0 cython>=0.21 setuptools>=18.0"
1414 - NO_NET=1
15 - MPI=0
1516
1617 python:
1718 - "2.7"
2526 # Absolute minimum dependencies.
2627 - python: 2.7
2728 env:
28 - DEPENDS="numpy==1.9.0 cython==0.19 ordereddict==1.1 setuptools==18.0"
29 - DEPENDS="numpy==1.9.0 cython==0.21 ordereddict==1.1 setuptools==18.0"
30 # test MPI
31 - python: 2.7
32 env:
33 - MPI=1
34 - CC=mpicc
35 - DEPENDS="numpy>=1.9.0 cython>=0.21 setuptools>=18.0 mpi4py>=1.3.1"
36 - NETCDF_VERSION=4.4.1.1
37 - NETCDF_DIR=$HOME
38 - PATH=${NETCDF_DIR}/bin:${PATH} # pick up nc-config here
39 addons:
40 apt:
41 packages:
42 - openmpi-bin
43 - libopenmpi-dev
44 - libhdf5-openmpi-dev
2945
3046 notifications:
3147 email: false
3450 - pip install $DEPENDS
3551
3652 install:
53 - if [ $MPI -eq 1 ] ; then ci/travis/build-parallel-netcdf.sh; fi
3754 - python setup.py build
3855 - python setup.py install
3956
4057 script:
58 - |
59 if [ $MPI -eq 1 ] ; then
60 cd examples
61 mpirun -np 4 python mpi_example.py
62 cd ..
63 fi
4164 - cd test
4265 - python run_all.py
0 version 1.3.1 (tag v1.3.1rel)
1 =============================
2 * Add parallel IO capabilities. netcdf-c and hdf5 must be compiled with MPI
3 support, and mpi4py must be installed. To open a file for parallel access,
4 use `parallel=True` in `Dataset.__init__` and optionally pass the mpi4py Comm instance
5 using the `comm` kwarg and the mpi4py Info instance using the `info` kwarg.
6 IO can be toggled between collective and independent using `Variable.set_collective`.
7 See `examples/mpi_example.py`. Issue #717, pull request #716.
8 Minimum cython dependency bumped from 0.19 to 0.21.
9 * Add optional `MFTime` calendar overload to use across all files, for example,
10 `'standard'` or `'gregorian'`. If `None` (the default), the calendar
11 attribute must be present on each variable and identical across files;
12 otherwise a `ValueError` is raised.
13 * Allow _FillValue to be set for vlen string variables (issue #730).
14
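The `MFTime` calendar rule above (explicit override, or one consistent calendar attribute across all files) can be sketched in plain Python. `resolve_calendar` and the attribute-bearing stand-in objects are invented for illustration; this is not netCDF4's actual API.

```python
# Hypothetical sketch of the MFTime calendar rule described above.
def resolve_calendar(variables, calendar=None):
    """Return the calendar to use for a set of per-file time variables.

    An explicit `calendar` overrides whatever the files say. Otherwise
    every variable must carry the same `calendar` attribute, or a
    ValueError is raised.
    """
    if calendar is not None:
        return calendar
    calendars = {getattr(v, "calendar", None) for v in variables}
    if None in calendars or len(calendars) != 1:
        raise ValueError("calendar attribute missing or not unique across files")
    return calendars.pop()
```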
015 version 1.3.0 (tag v1.3.0rel)
116 ==============================
217 * always search for HDF5 headers when building, even when nc-config is used
00 Metadata-Version: 1.1
11 Name: netCDF4
2 Version: 1.3.0
2 Version: 1.3.1
33 Author: Jeff Whitaker
44 Author-email: jeffrey s whitaker at noaa gov
55 Home-page: https://github.com/Unidata/netcdf4-python
66
77 ## News
88 For details on the latest updates, see the [Changelog](https://github.com/Unidata/netcdf4-python/blob/master/Changelog).
9
10 11/01/2017: Version 1.3.1 released. Parallel IO support with MPI!
11 Requires that netcdf-c and hdf5 be built with MPI support, and [mpi4py](http://mpi4py.readthedocs.io/en/stable).
12 To open a file for parallel access in a program running in an MPI environment
13 using mpi4py, just use `parallel=True` when creating
14 the `Dataset` instance. See [`examples/mpi_example.py`](https://github.com/Unidata/netcdf4-python/blob/master/examples/mpi_example.py)
15 for a demonstration. For more info, see the tutorial [section](http://unidata.github.io/netcdf4-python/#section13).
916
1017 9/25/2017: Version [1.3.0](https://pypi.python.org/pypi/netCDF4/1.3.0) released. Bug fixes
1118 for `netcdftime` and optimizations for reading strided slices. `encoding` kwarg added to
3333
3434 # Add path, activate `conda` and update conda.
3535 - cmd: set "PATH=%CONDA_INSTALL_LOCN%\\Scripts;%CONDA_INSTALL_LOCN%\\Library\\bin;%PATH%"
36 - cmd: conda update --yes --quiet conda
36 - cmd: set PYTHONUNBUFFERED=1
3737 - cmd: call %CONDA_INSTALL_LOCN%\Scripts\activate.bat
38
39 - cmd: set PYTHONUNBUFFERED=1
40
41 # Ensure defaults and conda-forge channels are present.
42 - cmd: conda config --set show_channel_urls true
43 - cmd: conda config --remove channels defaults
44 - cmd: conda config --add channels defaults
45 - cmd: conda config --add channels conda-forge
46
47 # Conda build tools.
48 - cmd: conda install -n root --quiet --yes obvious-ci
49 - cmd: obvci_install_conda_build_tools.py
50 - cmd: conda info
38 # for obvci_appveyor_python_build_env.cmd
39 - cmd: conda update --all --yes
40 - cmd: conda install anaconda-client=1.6.3 --yes
41 - cmd: conda install -c conda-forge --yes obvious-ci
42 # for msinttypes and newer stuff
43 - cmd: conda config --prepend channels conda-forge
44 - cmd: conda config --set show_channel_urls yes
45 - cmd: conda config --set always_yes true
46 # For building conda packages
47 - cmd: conda install --yes conda-build jinja2 anaconda-client
48 # this is now the downloaded conda...
49 - cmd: conda info -a
5150
5251 # Skip .NET project specific build phase.
5352 build: off
0 #!/bin/bash
1
2 set -e
3
4 echo "Using downloaded netCDF version ${NETCDF_VERSION} with parallel capabilities enabled"
5 pushd /tmp
6 wget ftp://ftp.unidata.ucar.edu/pub/netcdf/netcdf-${NETCDF_VERSION}.tar.gz
7 tar -xzvf netcdf-${NETCDF_VERSION}.tar.gz
8 pushd netcdf-${NETCDF_VERSION}
9 ./configure --prefix $NETCDF_DIR --enable-netcdf-4 --enable-shared --disable-dap --enable-parallel
10 make -j 2
11 make install
12 popd
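The script above reads `NETCDF_VERSION` and `NETCDF_DIR` (and a working MPI compiler) from the environment; a sketch of the setup the Travis MPI job provides, with the invocation left commented for context:

```shell
# Environment assumed by build-parallel-netcdf.sh; values mirror the
# Travis MPI job above (CC=mpicc so configure picks the MPI wrapper).
export NETCDF_VERSION=4.4.1.1
export NETCDF_DIR=$HOME
export CC=mpicc
export PATH=${NETCDF_DIR}/bin:${PATH}   # nc-config will be found here
# ci/travis/build-parallel-netcdf.sh    # then run the script itself
```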
33 <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1" />
44
55 <title>netCDF4 API documentation</title>
6 <meta name="description" content="Version 1.3.0
6 <meta name="description" content="Version 1.3.1
77 -------------
88 - - -
99
12451245 <li class="mono"><a href="#netCDF4.Variable.set_auto_mask">set_auto_mask</a></li>
12461246 <li class="mono"><a href="#netCDF4.Variable.set_auto_maskandscale">set_auto_maskandscale</a></li>
12471247 <li class="mono"><a href="#netCDF4.Variable.set_auto_scale">set_auto_scale</a></li>
1248 <li class="mono"><a href="#netCDF4.Variable.set_collective">set_collective</a></li>
12481249 <li class="mono"><a href="#netCDF4.Variable.set_var_chunk_cache">set_var_chunk_cache</a></li>
12491250 <li class="mono"><a href="#netCDF4.Variable.setncattr">setncattr</a></li>
12501251 <li class="mono"><a href="#netCDF4.Variable.setncattr_string">setncattr_string</a></li>
12681269
12691270 <header id="section-intro">
12701271 <h1 class="title"><span class="name">netCDF4</span> module</h1>
1271 <h2>Version 1.3.0</h2>
1272 <h2>Version 1.3.1</h2>
12721273 <hr />
12731274 <h1>Introduction</h1>
12741275 <p>netcdf4-python is a Python interface to the netCDF C library. </p>
12981299 <ul>
12991300 <li>Python 2.7 or later (python 3 works too).</li>
13001301 <li><a href="http://numpy.scipy.org">numpy array module</a>, version 1.9.0 or later.</li>
1301 <li><a href="http://cython.org">Cython</a>, version 0.19 or later.</li>
1302 <li><a href="http://cython.org">Cython</a>, version 0.21 or later.</li>
13021303 <li><a href="https://pypi.python.org/pypi/setuptools">setuptools</a>, version 18.0 or
13031304 later.</li>
13041305 <li>The HDF5 C library version 1.8.4-patch1 or higher (1.8.x recommended)
13201321 If you want <a href="http://opendap.org">OPeNDAP</a> support, add <code>--enable-dap</code>.
13211322 If you want HDF4 SD support, add <code>--enable-hdf4</code> and add
13221323 the location of the HDF4 headers and library to <code>$CPPFLAGS</code> and <code>$LDFLAGS</code>.</li>
1324 <li>for MPI parallel IO support, MPI-enabled versions of the HDF5 and netcdf
1325 libraries are required, as is the <a href="http://mpi4py.scipy.org">mpi4py</a> python
1326 module.</li>
13231327 </ul>
13241328 <h1>Install</h1>
13251329 <ul>
13361340 <li>run <code>python setup.py build</code>, then <code>python setup.py install</code> (as root if
13371341 necessary).</li>
13381342 <li><a href="https://pip.pypa.io/en/latest/reference/pip_install.html"><code>pip install</code></a> can
1339 also be used, with library paths set with environment variables. To make
1340 this work, the <code>USE_SETUPCFG</code> environment variable must be used to tell
1341 setup.py not to use <code>setup.cfg</code>.
1342 For example, <code>USE_SETUPCFG=0 HDF5_INCDIR=/usr/include/hdf5/serial
1343 HDF5_LIBDIR=/usr/lib/x86_64-linux-gnu/hdf5/serial pip install</code> has been
1344 shown to work on an Ubuntu/Debian linux system. Similarly, environment variables
1345 (all capitalized) can be used to set the include and library paths for
1346 <code>hdf5</code>, <code>netCDF4</code>, <code>hdf4</code>, <code>szip</code>, <code>jpeg</code>, <code>curl</code> and <code>zlib</code>. If the
1347 libraries are installed in standard places (e.g. <code>/usr</code> or <code>/usr/local</code>),
1348 the environment variables do not need to be set.</li>
13491353 <li>run the tests in the 'test' directory by running <code>python run_all.py</code>.</li>
13501354 </ul>
13511355 <h1>Tutorial</h1>
13621366 <li><a href="#section10">Beyond homogeneous arrays of a fixed type - compound data types.</a></li>
13631367 <li><a href="#section11">Variable-length (vlen) data types.</a></li>
13641368 <li><a href="#section12">Enum data type.</a></li>
1369 <li><a href="#section13">Parallel IO.</a></li>
13651370 </ol>
13661371 <h2><div id='section1'>1) Creating/Opening/Closing a netCDF file.</h2>
13671372 <p>To create a netCDF file from python, you simply call the <a href="#netCDF4.Dataset"><code>Dataset</code></a>
21162121 </pre></div>
21172122
21182123
2119 <p>All of the code in this tutorial is available in <code>examples/tutorial.py</code>,
2124 <h2><div id='section13'>13) Parallel IO.</h2>
2125 <p>If MPI-enabled versions of the netcdf and hdf5 libraries are detected, and
2126 <a href="https://mpi4py.scipy.org">mpi4py</a> is installed, netcdf4-python will
2127 be built with parallel IO capabilities enabled. To use parallel IO,
2128 your program must be running in an MPI environment using
2129 <a href="https://mpi4py.scipy.org">mpi4py</a>.</p>
2130 <div class="codehilite"><pre><span></span><span class="o">&gt;&gt;&gt;</span> <span class="kn">from</span> <span class="nn">mpi4py</span> <span class="kn">import</span> <span class="n">MPI</span>
2131 <span class="o">&gt;&gt;&gt;</span> <span class="kn">import</span> <span class="nn">numpy</span> <span class="kn">as</span> <span class="nn">np</span>
2132 <span class="o">&gt;&gt;&gt;</span> <span class="kn">from</span> <span class="nn">netCDF4</span> <span class="kn">import</span> <span class="n">Dataset</span>
2133 <span class="o">&gt;&gt;&gt;</span> <span class="n">rank</span> <span class="o">=</span> <span class="n">MPI</span><span class="o">.</span><span class="n">COMM_WORLD</span><span class="o">.</span><span class="n">rank</span> <span class="c1"># The process ID (integer 0-3 for 4-process run)</span>
2134 </pre></div>
2135
2136
2137 <p>To run an MPI-based parallel program like this, you must use <code>mpiexec</code> to launch several
2138 parallel instances of Python (for example, using <code>mpiexec -np 4 python mpi_example.py</code>).
2139 The parallel features of netcdf4-python are mostly transparent -
2140 when a new dataset is created or an existing dataset is opened,
2141 use the <code>parallel</code> keyword to enable parallel access.</p>
2142 <div class="codehilite"><pre><span></span><span class="o">&gt;&gt;&gt;</span> <span class="n">nc</span> <span class="o">=</span> <span class="n">Dataset</span><span class="p">(</span><span class="s1">&#39;parallel_test.nc&#39;</span><span class="p">,</span><span class="s1">&#39;w&#39;</span><span class="p">,</span><span class="n">parallel</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
2143 </pre></div>
2144
2145
2146 <p>The optional <code>comm</code> keyword may be used to specify a particular
2147 MPI communicator (<code>MPI_COMM_WORLD</code> is used by default). Each process (or rank)
2148 can now write to the file independently. In this example the process rank is
2149 written to a different variable index on each task:</p>
2150 <div class="codehilite"><pre><span></span><span class="o">&gt;&gt;&gt;</span> <span class="n">d</span> <span class="o">=</span> <span class="n">nc</span><span class="o">.</span><span class="n">createDimension</span><span class="p">(</span><span class="s1">&#39;dim&#39;</span><span class="p">,</span><span class="mi">4</span><span class="p">)</span>
2151 <span class="o">&gt;&gt;&gt;</span> <span class="n">v</span> <span class="o">=</span> <span class="n">nc</span><span class="o">.</span><span class="n">createVariable</span><span class="p">(</span><span class="s1">&#39;var&#39;</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">int</span><span class="p">,</span> <span class="s1">&#39;dim&#39;</span><span class="p">)</span>
2152 <span class="o">&gt;&gt;&gt;</span> <span class="n">v</span><span class="p">[</span><span class="n">rank</span><span class="p">]</span> <span class="o">=</span> <span class="n">rank</span>
2153 <span class="o">&gt;&gt;&gt;</span> <span class="n">nc</span><span class="o">.</span><span class="n">close</span><span class="p">()</span>
2154
2155 <span class="o">%</span> <span class="n">ncdump</span> <span class="n">parallel_test</span><span class="o">.</span><span class="n">nc</span>
2156 <span class="n">netcdf</span> <span class="n">parallel_test</span> <span class="p">{</span>
2157 <span class="n">dimensions</span><span class="p">:</span>
2158 <span class="n">dim</span> <span class="o">=</span> <span class="mi">4</span> <span class="p">;</span>
2159 <span class="n">variables</span><span class="p">:</span>
2160 <span class="n">int64</span> <span class="n">var</span><span class="p">(</span><span class="n">dim</span><span class="p">)</span> <span class="p">;</span>
2161 <span class="n">data</span><span class="p">:</span>
2162
2163 <span class="n">var</span> <span class="o">=</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">3</span> <span class="p">;</span>
2164 <span class="p">}</span>
2165 </pre></div>
2166
2167
2168 <p>There are two types of parallel IO, independent (the default) and collective.
2169 Independent IO means that each process can do IO independently. It should not
2170 depend on or be affected by other processes. Collective IO is a way of doing
2171 IO defined in the MPI-IO standard; unlike independent IO, all processes must
2172 participate in doing IO. To toggle back and forth between
2173 the two types of IO, use the <a href="#netCDF4.Variable.set_collective"><code>set_collective</code></a>
2174 <a href="#netCDF4.Variable"><code>Variable</code></a> method. All metadata
2175 operations (such as creation of groups, types, variables, dimensions, or attributes)
2176 are collective. There are a few important limitations of parallel IO:</p>
2177 <ul>
2178 <li>If a variable has an unlimited dimension, appending data must be done in collective mode.
2179 If the write is done in independent mode, the operation will fail with
2180 a generic "HDF Error".</li>
2181 <li>You cannot write compressed data in parallel (although
2182 you can read it).</li>
2183 <li>You cannot use variable-length (VLEN) data types. </li>
2184 </ul>
2185 <p>All of the code in this tutorial is available in <code>examples/tutorial.py</code>, except
2186 the parallel IO example, which is in <code>examples/mpi_example.py</code>.
21202187 Unit tests are in the <code>test</code> directory.</p>
21212188 <p><strong>contact</strong>: Jeffrey Whitaker <a href="&#109;&#97;&#105;&#108;&#116;&#111;&#58;&#106;&#101;&#102;&#102;&#114;&#101;&#121;&#46;&#115;&#46;&#119;&#104;&#105;&#116;&#97;&#107;&#101;&#114;&#64;&#110;&#111;&#97;&#97;&#46;&#103;&#111;&#118;">&#106;&#101;&#102;&#102;&#114;&#101;&#121;&#46;&#115;&#46;&#119;&#104;&#105;&#116;&#97;&#107;&#101;&#114;&#64;&#110;&#111;&#97;&#97;&#46;&#103;&#111;&#118;</a></p>
21222189 <p><strong>copyright</strong>: 2008 by Jeffrey Whitaker.</p>
26992766 <p><strong><code>memory</code></strong>: if not <code>None</code>, open file with contents taken from this block of memory.
27002767 Must be a sequence of bytes. Note this only works with "r" mode.</p>
27012768 <p><strong><code>encoding</code></strong>: encoding used to encode filename string into bytes.
2702 Default is None (<code>sys.getdefaultfileencoding()</code> is used).</p></div>
2769 Default is None (<code>sys.getdefaultfileencoding()</code> is used).</p>
2770 <p><strong><code>parallel</code></strong>: open for parallel access using MPI (requires mpi4py and
2771 parallel-enabled netcdf-c and hdf5 libraries). Default is <code>False</code>. If
2772 <code>True</code>, <code>comm</code> and <code>info</code> kwargs may also be specified.</p>
2773 <p><strong><code>comm</code></strong>: MPI_Comm object for parallel access. Default <code>None</code>, which
2774 means MPI_COMM_WORLD will be used. Ignored if <code>parallel=False</code>.</p>
2775 <p><strong><code>info</code></strong>: MPI_Info object for parallel access. Default <code>None</code>, which
2776 means MPI_INFO_NULL will be used. Ignored if <code>parallel=False</code>.</p></div>
27032777 <div class="source_cont">
27042778 </div>
27052779
62966370
62976371
62986372 <div class="item">
6373 <div class="name def" id="netCDF4.Variable.set_collective">
6374 <p>def <span class="ident">set_collective</span>(</p><p>self,True_or_False)</p>
6375 </div>
6376
6377
6378
6379
6380 <div class="desc"><p>turn on or off collective parallel IO access. Ignored if file is not
6381 open for parallel access.</p></div>
6382 <div class="source_cont">
6383 </div>
6384
6385 </div>
6386
6387
6388 <div class="item">
62996389 <div class="name def" id="netCDF4.Variable.set_var_chunk_cache">
63006390 <p>def <span class="ident">set_var_chunk_cache</span>(</p><p>self,size=None,nelems=None,preemption=None)</p>
63016391 </div>
0 # to run: mpirun -np 4 python mpi_example.py
1 from mpi4py import MPI
2 import numpy as np
3 from netCDF4 import Dataset
4 rank = MPI.COMM_WORLD.rank # The process ID (integer 0-3 for 4-process run)
5 nc = Dataset('parallel_test.nc', 'w', parallel=True, comm=MPI.COMM_WORLD,
6 info=MPI.Info())
7 # below should work also - MPI_COMM_WORLD and MPI_INFO_NULL will be used.
8 #nc = Dataset('parallel_test.nc', 'w', parallel=True)
9 d = nc.createDimension('dim',4)
10 v = nc.createVariable('var', np.int, 'dim')
11 v[rank] = rank
12 # switch to collective mode, rewrite the data.
13 v.set_collective(True)
14 v[rank] = rank
15 nc.close()
16 # reopen the file read-only, check the data
17 nc = Dataset('parallel_test.nc', parallel=True, comm=MPI.COMM_WORLD,
18 info=MPI.Info())
19 assert rank==nc['var'][rank]
20 nc.close()
21 # reopen the file in append mode, modify the data on the last rank.
22 nc = Dataset('parallel_test.nc', 'a',parallel=True, comm=MPI.COMM_WORLD,
23 info=MPI.Info())
24 if rank == 3: nc['var'][rank] = 2*rank # re-fetch 'var' from the reopened file
25 nc.close()
26 # reopen the file read-only again, check the data.
27 # leave out the comm and info kwargs to check that the defaults
28 # (MPI_COMM_WORLD and MPI_INFO_NULL) work.
29 nc = Dataset('parallel_test.nc', parallel=True)
30 if rank == 3:
31 assert 2*rank==nc['var'][rank]
32 else:
33 assert rank==nc['var'][rank]
34 nc.close()
0 /* Author: Lisandro Dalcin */
1 /* Contact: dalcinl@gmail.com */
2
3 #ifndef MPI_COMPAT_H
4 #define MPI_COMPAT_H
5
6 #include <mpi.h>
7
8 #if (MPI_VERSION < 3) && !defined(PyMPI_HAVE_MPI_Message)
9 typedef void *PyMPI_MPI_Message;
10 #define MPI_Message PyMPI_MPI_Message
11 #endif
12
13 #endif/*MPI_COMPAT_H*/
695695 cdef extern from "netcdf_mem.h":
696696 int nc_open_mem(const char *path, int mode, size_t size, void* memory, int *ncidp)
697697
698 IF HAS_NC_PAR:
699 cdef extern from "mpi-compat.h": pass
700 cdef extern from "netcdf_par.h":
701 ctypedef int MPI_Comm
702 ctypedef int MPI_Info
703 int nc_create_par(char *path, int cmode, MPI_Comm comm, MPI_Info info, int *ncidp);
704 int nc_open_par(char *path, int mode, MPI_Comm comm, MPI_Info info, int *ncidp);
705 int nc_var_par_access(int ncid, int varid, int par_access);
706 cdef enum:
707 NC_COLLECTIVE
708 NC_INDEPENDENT
709 cdef extern from "netcdf.h":
710 cdef enum:
711 NC_MPIIO
712 NC_PNETCDF
713
698714 # taken from numpy.pxi in numpy 1.0rc2.
699715 cdef extern from "numpy/arrayobject.h":
700716 ctypedef int npy_intp
44 from ._netCDF4 import __doc__, __pdoc__
55 from ._netCDF4 import (__version__, __netcdf4libversion__, __hdf5libversion__,
66 __has_rename_grp__, __has_nc_inq_path__,
7 __has_nc_inq_format_extended__, __has_nc_open_mem__)
7 __has_nc_inq_format_extended__, __has_nc_open_mem__,
8 __has_cdf5_format__,__has_nc_par__)
89 __all__ =\
910 ['Dataset','Variable','Dimension','Group','MFDataset','MFTime','CompoundType','VLType','date2num','num2date','date2index','stringtochar','chartostring','stringtoarr','getlibversion','EnumType']
00 """
1 Version 1.3.0
1 Version 1.3.1
22 -------------
33 - - -
44
3737
3838 - Python 2.7 or later (python 3 works too).
3939 - [numpy array module](http://numpy.scipy.org), version 1.9.0 or later.
40 - [Cython](http://cython.org), version 0.19 or later.
40 - [Cython](http://cython.org), version 0.21 or later.
4141 - [setuptools](https://pypi.python.org/pypi/setuptools), version 18.0 or
4242 later.
4343 - The HDF5 C library version 1.8.4-patch1 or higher (1.8.x recommended)
5959 If you want [OPeNDAP](http://opendap.org) support, add `--enable-dap`.
6060 If you want HDF4 SD support, add `--enable-hdf4` and add
6161 the location of the HDF4 headers and library to `$CPPFLAGS` and `$LDFLAGS`.
62 - for MPI parallel IO support, MPI-enabled versions of the HDF5 and netcdf
63 libraries are required, as is the [mpi4py](http://mpi4py.scipy.org) python
64 module.
6265
6366
6467 Install
7780 - run `python setup.py build`, then `python setup.py install` (as root if
7881 necessary).
7982 - [`pip install`](https://pip.pypa.io/en/latest/reference/pip_install.html) can
80 also be used, with library paths set with environment variables. To make
81 this work, the `USE_SETUPCFG` environment variable must be used to tell
82 setup.py not to use `setup.cfg`.
83 For example, `USE_SETUPCFG=0 HDF5_INCDIR=/usr/include/hdf5/serial
84 HDF5_LIBDIR=/usr/lib/x86_64-linux-gnu/hdf5/serial pip install` has been
85 shown to work on an Ubuntu/Debian linux system. Similarly, environment variables
86 (all capitalized) can be used to set the include and library paths for
87 `hdf5`, `netCDF4`, `hdf4`, `szip`, `jpeg`, `curl` and `zlib`. If the
88 libraries are installed in standard places (e.g. `/usr` or `/usr/local`),
89 the environment variables do not need to be set.
9093 - run the tests in the 'test' directory by running `python run_all.py`.
9194
9295 Tutorial
104107 10. [Beyond homogeneous arrays of a fixed type - compound data types.](#section10)
105108 11. [Variable-length (vlen) data types.](#section11)
106109 12. [Enum data type.](#section12)
110 13. [Parallel IO.](#section13)
107111
108112
109113 ## <div id='section1'>1) Creating/Opening/Closing a netCDF file.
892896 [0 2 4 -- 1]
893897 >>> nc.close()
894898
895 All of the code in this tutorial is available in `examples/tutorial.py`,
899 ## <div id='section13'>13) Parallel IO.
900
901 If MPI-enabled versions of the netcdf and hdf5 libraries are detected, and
902 [mpi4py](https://mpi4py.scipy.org) is installed, netcdf4-python will
903 be built with parallel IO capabilities enabled. To use parallel IO,
904 your program must be running in an MPI environment using
905 [mpi4py](https://mpi4py.scipy.org).
906
907 :::python
908 >>> from mpi4py import MPI
909 >>> import numpy as np
910 >>> from netCDF4 import Dataset
911 >>> rank = MPI.COMM_WORLD.rank # The process ID (integer 0-3 for 4-process run)
912
913 To run an MPI-based parallel program like this, you must use `mpiexec` to launch several
914 parallel instances of Python (for example, using `mpiexec -np 4 python mpi_example.py`).
915 The parallel features of netcdf4-python are mostly transparent -
916 when a new dataset is created or an existing dataset is opened,
917 use the `parallel` keyword to enable parallel access.
918
919 :::python
920     >>> nc = Dataset('parallel_test.nc','w',parallel=True)
921
922 The optional `comm` keyword may be used to specify a particular
923 MPI communicator (`MPI_COMM_WORLD` is used by default). Each process (or rank)
924 can now write to the file independently. In this example the process rank is
925 written to a different variable index on each task:
926
927 :::python
928 >>> d = nc.createDimension('dim',4)
929 >>> v = nc.createVariable('var', np.int, 'dim')
930 >>> v[rank] = rank
931 >>> nc.close()
932
933 % ncdump parallel_test.nc
934 netcdf parallel_test {
935 dimensions:
936 dim = 4 ;
937 variables:
938 int64 var(dim) ;
939 data:
940
941 var = 0, 1, 2, 3 ;
942 }
943
944 There are two types of parallel IO, independent (the default) and collective.
945 Independent IO means that each process can do IO independently. It should not
946 depend on or be affected by other processes. Collective IO is a way of doing
947 IO defined in the MPI-IO standard; unlike independent IO, all processes must
948 participate in doing IO. To toggle back and forth between
949 the two types of IO, use the `netCDF4.Variable.set_collective`
950 `netCDF4.Variable` method. All metadata
951 operations (such as creation of groups, types, variables, dimensions, or attributes)
952 are collective. There are a few important limitations of parallel IO:
953
954 - If a variable has an unlimited dimension, appending data must be done in collective mode.
955 If the write is done in independent mode, the operation will fail with
956 a generic "HDF Error".
957 - You cannot write compressed data in parallel (although
958 you can read it).
959 - You cannot use variable-length (VLEN) data types.
960
961 All of the code in this tutorial is available in `examples/tutorial.py`, except
962 the parallel IO example, which is in `examples/mpi_example.py`.
896963 Unit tests are in the `test` directory.
897964
898965 **contact**: Jeffrey Whitaker <jeffrey.s.whitaker@noaa.gov>
9351002 # python3: zip is already python2's itertools.izip
9361003 pass
9371004
938 __version__ = "1.3.0"
1005 __version__ = "1.3.1"
9391006
9401007 # Initialize numpy
9411008 import posixpath
9511018 import_array()
9521019 include "constants.pyx"
9531020 include "netCDF4.pxi"
1021 IF HAS_NC_PAR:
1022 cimport mpi4py.MPI as MPI
1023 from mpi4py.libmpi cimport MPI_Comm, MPI_Info, MPI_Comm_dup, MPI_Info_dup, \
1024 MPI_Comm_free, MPI_Info_free, MPI_INFO_NULL,\
1025 MPI_COMM_WORLD
1026 ctypedef MPI.Comm Comm
1027 ctypedef MPI.Info Info
1028 ELSE:
1029 ctypedef object Comm
1030 ctypedef object Info
9541031
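The compile-time `IF HAS_NC_PAR` branch above declares real mpi4py-backed `Comm`/`Info` types only when parallel support was compiled in, and inert placeholders otherwise. A runtime Python analogue of the same fallback pattern, illustrative only (this is not what the Cython build actually does):

```python
# Optional-dependency fallback, analogous to the IF/ELSE ctypedefs above.
try:
    from mpi4py import MPI          # real MPI types when available
    Comm, Info = MPI.Comm, MPI.Info
    HAS_MPI = True
except ImportError:
    Comm = Info = object            # inert placeholders keep signatures valid
    HAS_MPI = False
```

Either way, code can declare parameters of type `Comm`/`Info` and gate actual MPI use on the flag.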
9551032 # check for required version of netcdf-4 and hdf5.
9561033
9761053 __has_rename_grp__ = HAS_RENAME_GRP
9771054 __has_nc_inq_path__ = HAS_NC_INQ_PATH
9781055 __has_nc_inq_format_extended__ = HAS_NC_INQ_FORMAT_EXTENDED
979 __has_cdf5__ = HAS_CDF5_FORMAT
1056 __has_cdf5_format__ = HAS_CDF5_FORMAT
9801057 __has_nc_open_mem__ = HAS_NC_OPEN_MEM
1058 __has_nc_par__ = HAS_NC_PAR
9811059 _needsworkaround_issue485 = __netcdf4libversion__ < "4.4.0" or \
9821060 (__netcdf4libversion__.startswith("4.4.0") and \
9831061 "-development" in __netcdf4libversion__)
15491627 free(varids) # free pointer holding variable ids.
15501628 return variables
15511629
1552 cdef _ensure_nc_success(ierr, err_cls=RuntimeError):
1630 cdef _ensure_nc_success(ierr, err_cls=RuntimeError, filename=None):
15531631 # print netcdf error message, raise error.
15541632 if ierr != NC_NOERR:
1555 raise err_cls((<char *>nc_strerror(ierr)).decode('ascii'))
1633 err_str = (<char *>nc_strerror(ierr)).decode('ascii')
1634 if issubclass(err_cls, EnvironmentError):
1635 raise err_cls(ierr, err_str, filename)
1636 else:
1637 raise err_cls(err_str)
15561638
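The widened error path above can be exercised without the C library by stubbing `nc_strerror`; everything here except the dispatch logic itself is a stand-in.

```python
# Plain-Python sketch of the _ensure_nc_success dispatch above.
NC_NOERR = 0

def nc_strerror(ierr):                       # stub for the netcdf-c routine
    return "NetCDF: error %d" % ierr

def ensure_nc_success(ierr, err_cls=RuntimeError, filename=None):
    if ierr != NC_NOERR:
        err_str = nc_strerror(ierr)
        if issubclass(err_cls, EnvironmentError):
            # OSError-style constructor: (errno, strerror, filename),
            # so callers get e.errno and e.filename for free.
            raise err_cls(ierr, err_str, filename)
        raise err_cls(err_str)
```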
15571639 # these are class attributes that
15581640 # only exist at the python level (not in the netCDF file).
16901772 the parent Dataset or Group."""
16911773
16921774 def __init__(self, filename, mode='r', clobber=True, format='NETCDF4',
1693 diskless=False, persist=False, keepweakref=False,
1694 memory=None, encoding=None, **kwargs):
1775 diskless=False, persist=False, keepweakref=False,
1776 memory=None, encoding=None, parallel=False,
1777 Comm comm=None, Info info=None, **kwargs):
16951778 """
16961779 **`__init__(self, filename, mode="r", clobber=True, diskless=False,
16971780 persist=False, keepweakref=False, format='NETCDF4')`**
17621845
17631846 **`encoding`**: encoding used to encode filename string into bytes.
17641847 Default is None (`sys.getdefaultfileencoding()` is used).
1848
1849 **`parallel`**: open for parallel access using MPI (requires mpi4py and
1850 parallel-enabled netcdf-c and hdf5 libraries). Default is `False`. If
1851 `True`, `comm` and `info` kwargs may also be specified.
1852
1853 **`comm`**: MPI_Comm object for parallel access. Default `None`, which
1854 means MPI_COMM_WORLD will be used. Ignored if `parallel=False`.
1855
1856 **`info`**: MPI_Info object for parallel access. Default `None`, which
1857 means MPI_INFO_NULL will be used. Ignored if `parallel=False`.
17651858 """
17661859 cdef int grpid, ierr, numgrps, numdims, numvars
17671860 cdef char *path
17681861 cdef char namstring[NC_MAX_NAME+1]
1862 IF HAS_NC_PAR:
1863 cdef MPI_Comm mpicomm
1864 cdef MPI_Info mpiinfo
17691865
17701866 memset(&self._buffer, 0, sizeof(self._buffer))
17711867
17831879
17841880 if memory is not None and (mode != 'r' or type(memory) != bytes):
17851881 raise ValueError('memory mode only works with \'r\' modes and must be `bytes`')
1882 if parallel:
1883 IF HAS_NC_PAR != 1:
1884 msg='parallel mode requires MPI enabled netcdf-c'
1885 raise ValueError(msg)
1886 if format != 'NETCDF4':
1887 msg='parallel mode only works with format=NETCDF4'
1888 raise ValueError(msg)
1889 if comm is not None:
1890 mpicomm = comm.ob_mpi
1891 else:
1892 mpicomm = MPI_COMM_WORLD
1893 if info is not None:
1894 mpiinfo = info.ob_mpi
1895 else:
1896 mpiinfo = MPI_INFO_NULL
17861897
17871898 if mode == 'w':
17881899 _set_default_format(format=format)
17891900 if clobber:
1790 if diskless:
1901 if parallel:
1902 IF HAS_NC_PAR:
1903 ierr = nc_create_par(path, NC_CLOBBER | NC_MPIIO, \
1904 mpicomm, mpiinfo, &grpid)
1905 ELSE:
1906 pass
1907 elif diskless:
17911908 if persist:
17921909 ierr = nc_create(path, NC_WRITE | NC_CLOBBER | NC_DISKLESS , &grpid)
17931910 else:
17951912 else:
17961913 ierr = nc_create(path, NC_CLOBBER, &grpid)
17971914 else:
1798 if diskless:
1915 if parallel:
1916 IF HAS_NC_PAR:
1917 ierr = nc_create_par(path, NC_NOCLOBBER | NC_MPIIO, \
1918 mpicomm, mpiinfo, &grpid)
1919 ELSE:
1920 pass
1921 elif diskless:
17991922 if persist:
18001923 ierr = nc_create(path, NC_WRITE | NC_NOCLOBBER | NC_DISKLESS , &grpid)
18011924 else:
18211944 nc_open_mem method not enabled. To enable, install Cython, make sure you have
18221945 version 4.4.1 or higher of the netcdf C lib, and rebuild netcdf4-python."""
18231946 raise ValueError(msg)
1947 elif parallel:
1948 IF HAS_NC_PAR:
1949 ierr = nc_open_par(path, NC_NOWRITE | NC_MPIIO, \
1950 mpicomm, mpiinfo, &grpid)
1951 ELSE:
1952 pass
18241953 elif diskless:
18251954 ierr = nc_open(path, NC_NOWRITE | NC_DISKLESS, &grpid)
18261955 else:
18271956 ierr = nc_open(path, NC_NOWRITE, &grpid)
18281957 elif mode == 'r+' or mode == 'a':
1829 if diskless:
1958 if parallel:
1959 IF HAS_NC_PAR:
1960 ierr = nc_open_par(path, NC_WRITE | NC_MPIIO, \
1961 mpicomm, mpiinfo, &grpid)
1962 ELSE:
1963 pass
1964 elif diskless:
18301965 ierr = nc_open(path, NC_WRITE | NC_DISKLESS, &grpid)
18311966 else:
18321967 ierr = nc_open(path, NC_WRITE, &grpid)
18331968 elif mode == 'as' or mode == 'r+s':
1834 if diskless:
1969 if parallel:
1970 # NC_SHARE ignored
1971 IF HAS_NC_PAR:
1972 ierr = nc_open_par(path, NC_WRITE | NC_MPIIO, \
1973 mpicomm, mpiinfo, &grpid)
1974 ELSE:
1975 pass
1976 elif diskless:
18351977 ierr = nc_open(path, NC_SHARE | NC_DISKLESS, &grpid)
18361978 else:
18371979 ierr = nc_open(path, NC_SHARE, &grpid)
18381980 elif mode == 'ws':
18391981 if clobber:
1840 if diskless:
1982 if parallel:
1983 # NC_SHARE ignored
1984 IF HAS_NC_PAR:
1985 ierr = nc_create_par(path, NC_CLOBBER | NC_MPIIO, \
1986 mpicomm, mpiinfo, &grpid)
1987 ELSE:
1988 pass
1989 elif diskless:
18411990 if persist:
18421991 ierr = nc_create(path, NC_WRITE | NC_SHARE | NC_CLOBBER | NC_DISKLESS , &grpid)
18431992 else:
18451994 else:
18461995 ierr = nc_create(path, NC_SHARE | NC_CLOBBER, &grpid)
18471996 else:
1848 if diskless:
1997 if parallel:
1998 # NC_SHARE ignored
1999 IF HAS_NC_PAR:
2000 ierr = nc_create_par(path, NC_NOCLOBBER | NC_MPIIO, \
2001 mpicomm, mpiinfo, &grpid)
2002 ELSE:
2003 pass
2004 elif diskless:
18492005 if persist:
18502006 ierr = nc_create(path, NC_WRITE | NC_SHARE | NC_NOCLOBBER | NC_DISKLESS , &grpid)
18512007 else:
18552011 else:
18562012 raise ValueError("mode must be 'w', 'r', 'a' or 'r+', got '%s'" % mode)
18572013
1858 _ensure_nc_success(ierr, IOError)
2014 _ensure_nc_success(ierr, err_cls=IOError, filename=path)
18592015
18602016 # data model and file format attributes
18612017 self.data_model = _get_format(grpid)
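The new parallel open path above reduces to passing `parallel=True` (NETCDF4 format only) plus optional mpi4py `comm` and `info` objects. A minimal sketch, assuming mpi4py and netcdf-c/hdf5 built with MPI support; the helper name `open_parallel` is hypothetical:

```python
def open_parallel(path, mode='r'):
    # Hypothetical helper sketching the parallel open path above.
    # Requires mpi4py and parallel-enabled netcdf-c/hdf5 libraries.
    from mpi4py import MPI
    from netCDF4 import Dataset
    # parallel=True only works with format='NETCDF4'; comm/info default
    # to MPI_COMM_WORLD and MPI_INFO_NULL when left as None.
    return Dataset(path, mode, format='NETCDF4', parallel=True,
                   comm=MPI.COMM_WORLD, info=MPI.Info.Create())
```

A script using this would typically be launched with `mpirun -np 4 python script.py`, as in `examples/mpi_example.py`.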
33123468 if grp.data_model != 'NETCDF4': grp._enddef()
33133469 _ensure_nc_success(ierr)
33143470 else:
3315 # cast fill_value to type of variable.
3316 # also make sure it is written in native byte order
3317 # (the same as the data)
3318 if self._isprimitive or self._isenum:
3319 fillval = numpy.array(fill_value, self.dtype)
3320 if not fillval.dtype.isnative: fillval.byteswap(True)
3321 _set_att(self._grp, self._varid, '_FillValue',\
3322 fillval, xtype=xtype)
3471 if self._isprimitive or self._isenum or \
3472 (self._isvlen and self.dtype == str):
3473 if self._isvlen and self.dtype == str:
3474 _set_att(self._grp, self._varid, '_FillValue',\
3475 _tostr(fill_value), xtype=xtype, force_ncstring=True)
3476 else:
3477 fillval = numpy.array(fill_value, self.dtype)
3478 if not fillval.dtype.isnative: fillval.byteswap(True)
3479 _set_att(self._grp, self._varid, '_FillValue',\
3480 fillval, xtype=xtype)
33233481 else:
33243482 raise AttributeError("cannot set _FillValue attribute for VLEN or compound variable")
33253483 if least_significant_digit is not None:
43194477 The default value of `chartostring` is `True`
43204478 (automatic conversions are performed).
43214479 """
4322 if chartostring:
4323 self.chartostring = True
4324 else:
4325 self.chartostring = False
4480 self.chartostring = bool(chartostring)
43264481
43274482 def use_nc_get_vars(self,use_nc_get_vars):
43284483 """
43334488 `nc_get_vars` is not used since it is slower than multiple calls
43344489 to the unstrided read routine `nc_get_vara` in most cases.
43354490 """
4336 if not use_nc_get_vars:
4337 self._no_get_vars = True
4338 else:
4339 self._no_get_vars = False
4340
4491 self._no_get_vars = not bool(use_nc_get_vars)
4492
43414493 def set_auto_maskandscale(self,maskandscale):
43424494 """
43434495 **`set_auto_maskandscale(self,maskandscale)`**
43894541 The default value of `maskandscale` is `True`
43904542 (automatic conversions are performed).
43914543 """
4392 if maskandscale:
4393 self.scale = True
4394 self.mask = True
4395 else:
4396 self.scale = False
4397 self.mask = False
4544 self.scale = self.mask = bool(maskandscale)
43984545
43994546 def set_auto_scale(self,scale):
44004547 """
44334580 The default value of `scale` is `True`
44344581 (automatic conversions are performed).
44354582 """
4436 if scale:
4437 self.scale = True
4438 else:
4439 self.scale = False
4440
4583 self.scale = bool(scale)
4584
44414585 def set_auto_mask(self,mask):
44424586 """
44434587 **`set_auto_mask(self,mask)`**
44624606 The default value of `mask` is `True`
44634607 (automatic conversions are performed).
44644608 """
4465 if mask:
4466 self.mask = True
4467 else:
4468 self.mask = False
4469
4609 self.mask = bool(mask)
4610
44704611
44714612 def _put(self,ndarray data,start,count,stride):
44724613 """Private method to put data into a netCDF variable"""
47454886 else:
47464887 return data
47474888
4889 def set_collective(self, value):
4890 """
4891 **`set_collective(self,True_or_False)`**
4892
4893 turn on or off collective parallel IO access. Ignored if file is not
4894 open for parallel access.
4895 """
4896 IF HAS_NC_PAR:
4897 # set collective MPI IO mode on or off
4898 if value:
4899 ierr = nc_var_par_access(self._grpid, self._varid,
4900 NC_COLLECTIVE)
4901 else:
4902 ierr = nc_var_par_access(self._grpid, self._varid,
4903 NC_INDEPENDENT)
4904 _ensure_nc_success(ierr)
4905 ELSE:
4906 pass # does nothing
4907
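A usage sketch for `set_collective`: toggle collective mode around a write so all MPI ranks participate in the same call, then return to independent mode (the default). This assumes a Variable from a Dataset opened with `parallel=True`; the function name is hypothetical:

```python
def write_slab(var, data, start):
    # Collective mode: every MPI rank takes part in the same write call.
    var.set_collective(True)
    var[start:start + len(data)] = data
    # Switch back to independent IO (the default) afterwards.
    var.set_collective(False)
```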
47484908 def __reduce__(self):
47494909 # raise error if user tries to pickle a Variable object.
47504910 raise NotImplementedError('Variable is not picklable')
60806240 class MFTime(_Variable):
60816241 """
60826242 Class providing an interface to a MFDataset time Variable by imposing a unique common
6083 time unit to all files.
6243 time unit and/or calendar to all files.
60846244
60856245 Example usage (See `netCDF4.MFTime.__init__` for more details):
60866246
61126272 32
61136273 """
61146274
6115 def __init__(self, time, units=None):
6116 """
6117 **`__init__(self, time, units=None)`**
6275 def __init__(self, time, units=None, calendar=None):
6276 """
6277 **`__init__(self, time, units=None, calendar=None)`**
61186278
61196279 Create a time Variable with units consistent across a multifile
61206280 dataset.
61216281
61226282 **`time`**: Time variable from a `netCDF4.MFDataset`.
61236283
6124 **`units`**: Time units, for example, `days since 1979-01-01`. If None, use
6125 the units from the master variable.
6284 **`units`**: Time units, for example, `'days since 1979-01-01'`. If `None`,
6285 use the units from the master variable.
6286
6287 **`calendar`**: Calendar overload to use across all files, for example,
6288 `'standard'` or `'gregorian'`. If `None`, check that the calendar attribute
6289 is present on each variable and values are unique across files raising a
6290 `ValueError` otherwise.
61266291 """
61276292 import datetime
61286293 self.__time = time
61316296 for name, value in time.__dict__.items():
61326297 self.__dict__[name] = value
61336298
6134 # make sure calendar attribute present in all files.
6135 for t in self._recVar:
6136 if not hasattr(t,'calendar'):
6137 raise ValueError('MFTime requires that the time variable in all files have a calendar attribute')
6138
6139 # Check that calendar is the same in all files.
6140 if len(set([t.calendar for t in self._recVar])) > 1:
6141 raise ValueError('MFTime requires that the same time calendar is used by all files.')
6299 # Make sure a calendar attribute is present in all files if no default
6300 # calendar is provided, and check that its value is the same across files.
6301 if calendar is None:
6302 calendars = [None] * len(self._recVar)
6303 for idx, t in enumerate(self._recVar):
6304 if not hasattr(t, 'calendar'):
6305 msg = 'MFTime requires that the time variable in all files ' \
6306 'have a calendar attribute if no default calendar is provided.'
6307 raise ValueError(msg)
6308 else:
6309 calendars[idx] = t.calendar
6310 calendars = set(calendars)
6311 if len(calendars) > 1:
6312 msg = 'MFTime requires that the same time calendar is ' \
6313 'used by all files if no default calendar is provided.'
6314 raise ValueError(msg)
6315 else:
6316 calendar = list(calendars)[0]
6317
6318 # Set calendar using the default or the unique calendar value across all files.
6319 self.calendar = calendar
61426320
61436321 # Override units if units is specified.
61446322 self.units = units or time.units
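The calendar resolution added to `MFTime.__init__` above can be sketched as a standalone function: use the explicit override if given, otherwise require every file to carry a calendar attribute and all values to agree. Pure Python; the name `resolve_calendar` is illustrative:

```python
def resolve_calendar(calendars, override=None):
    # 'calendars' holds one calendar value (or None) per file.
    if override is not None:
        return override
    if any(c is None for c in calendars):
        raise ValueError('MFTime requires a calendar attribute in all '
                         'files if no default calendar is provided.')
    unique = set(calendars)
    if len(unique) > 1:
        raise ValueError('MFTime requires the same calendar in all '
                         'files if no default calendar is provided.')
    return unique.pop()
```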
55 # Usually, nothing else is needed.
66 use_ncconfig=True
77 # path to nc-config script (use if not found in unix PATH).
8 #ncconfig=/usr/local/bin/nc-config
8 #ncconfig=/usr/local/bin/nc-config
99 [directories]
1010 #
1111 # If nc-config doesn't do the trick, you can specify the locations
4545 # If the libraries and include files are installed in separate locations,
4646 # use curl_libdir and curl_incdir.
4747 #curl_dir = /usr/local
48 # location of mpi.h (needed for parallel support)
49 #mpi_incdir=/opt/local/include/mpich-mp
5454 has_nc_inq_format_extended = False
5555 has_cdf5_format = False
5656 has_nc_open_mem = False
57 has_nc_par = False
5758
5859 for d in inc_dirs:
5960 try:
6263 continue
6364
6465 has_nc_open_mem = os.path.exists(os.path.join(d, 'netcdf_mem.h'))
66 has_nc_par = os.path.exists(os.path.join(d, 'netcdf_par.h'))
6567
6668 for line in f:
6769 if line.startswith('nc_rename_grp'):
7274 has_nc_inq_format_extended = True
7375 if line.startswith('#define NC_FORMAT_64BIT_DATA'):
7476 has_cdf5_format = True
77
78 ncmetapath = os.path.join(d,'netcdf_meta.h')
79 if os.path.exists(ncmetapath):
80 has_cdf5 = False
81 for line in open(ncmetapath):
82 if line.startswith('#define NC_HAS_CDF5'):
83 has_cdf5 = True
7584 break
7685
7786 return has_rename_grp, has_nc_inq_path, has_nc_inq_format_extended, \
78 has_cdf5_format, has_nc_open_mem
87 has_cdf5_format, has_nc_open_mem, has_nc_par
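The `has_nc_par` probe above simply checks for `netcdf_par.h`, which netcdf-c installs only when built with MPI support (it declares `nc_create_par`/`nc_open_par`). The same check in isolation:

```python
import os

def detect_parallel_netcdf(inc_dirs):
    # netcdf_par.h is installed only by MPI-enabled netcdf-c builds,
    # so its presence in any include directory signals parallel support.
    return any(os.path.exists(os.path.join(d, 'netcdf_par.h'))
               for d in inc_dirs)
```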
7988
8089
8190 def getnetcdfvers(libdirs):
138147 curl_dir = os.environ.get('CURL_DIR')
139148 curl_libdir = os.environ.get('CURL_LIBDIR')
140149 curl_incdir = os.environ.get('CURL_INCDIR')
150 mpi_incdir = os.environ.get('MPI_INCDIR')
141151
142152 USE_NCCONFIG = os.environ.get('USE_NCCONFIG')
143153 if USE_NCCONFIG is not None:
229239 pass
230240 try:
231241 curl_incdir = config.get("directories", "curl_incdir")
242 except:
243 pass
244 try:
245 mpi_incdir = config.get("directories","mpi_incdir")
232246 except:
233247 pass
234248 try:
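Each optional setup.cfg key above (including the new `mpi_incdir`) is read inside its own bare try/except. A safer equivalent of that pattern, sketched with a hypothetical helper name `get_cfg_option` that catches only the expected exceptions:

```python
try:
    import configparser                    # Python 3
except ImportError:
    import ConfigParser as configparser    # Python 2

def get_cfg_option(config, section, option, default=None):
    # setup.cfg options are all optional; fall back to the default
    # instead of swallowing every exception with a bare 'except:'.
    try:
        return config.get(section, option)
    except (configparser.NoSectionError, configparser.NoOptionError):
        return default
```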
441455 else:
442456 # append numpy include dir.
443457 import numpy
444
445458 inc_dirs.append(numpy.get_include())
446459
447460 # get netcdf library version.
454467 cmdclass = {}
455468 netcdf4_src_root = osp.join('netCDF4', '_netCDF4')
456469 netcdf4_src_c = netcdf4_src_root + '.c'
470 netcdftime_src_root = osp.join('netcdftime', '_netcdftime')
471 netcdftime_src_c = netcdftime_src_root + '.c'
457472 if 'sdist' not in sys.argv[1:] and 'clean' not in sys.argv[1:]:
458473 sys.stdout.write('using Cython to compile netCDF4.pyx...\n')
459 # remove netCDF4.c file if it exists, so cython will recompile netCDF4.pyx.
474 # remove _netCDF4.c file if it exists, so cython will recompile _netCDF4.pyx.
460475 # run for build *and* install (issue #263). Otherwise 'pip install' will
461 # not regenerate netCDF4.c, even if the C lib supports the new features.
462 if len(sys.argv) >= 2 and os.path.exists(netcdf4_src_c):
463 os.remove(netcdf4_src_c)
476 # not regenerate _netCDF4.c, even if the C lib supports the new features.
477 if len(sys.argv) >= 2:
478 if os.path.exists(netcdf4_src_c):
479 os.remove(netcdf4_src_c)
480 # same for _netcdftime.c
481 if os.path.exists(netcdftime_src_c):
482 os.remove(netcdftime_src_c)
464483 # this determines whether renameGroup and filepath methods will work.
465484 has_rename_grp, has_nc_inq_path, has_nc_inq_format_extended, \
466 has_cdf5_format, has_nc_open_mem = check_api(inc_dirs)
485 has_cdf5_format, has_nc_open_mem, has_nc_par = check_api(inc_dirs)
486 try:
487 import mpi4py
488 except ImportError:
489 has_nc_par = False
467490
468491 f = open(osp.join('include', 'constants.pyx'), 'w')
469492 if has_rename_grp:
502525 sys.stdout.write('netcdf lib does not have cdf-5 format capability\n')
503526 f.write('DEF HAS_CDF5_FORMAT = 0\n')
504527
528 if has_nc_par:
529 sys.stdout.write('netcdf lib has netcdf4 parallel functions\n')
530 f.write('DEF HAS_NC_PAR = 1\n')
531 else:
532 sys.stdout.write('netcdf lib does not have netcdf4 parallel functions\n')
533 f.write('DEF HAS_NC_PAR = 0\n')
534
505535 f.close()
536
537 if has_nc_par:
538 inc_dirs.append(mpi4py.get_include())
539 # mpi_incdir should not be needed if using nc-config
540 # (should be included in nc-config --cflags)
541 if mpi_incdir is not None: inc_dirs.append(mpi_incdir)
542
506543 ext_modules = [Extension("netCDF4._netCDF4",
507544 [netcdf4_src_root + '.pyx'],
508545 libraries=libs,
510547 include_dirs=inc_dirs + ['include'],
511548 runtime_library_dirs=runtime_lib_dirs),
512549 Extension('netcdftime._netcdftime',
513 ['netcdftime/_netcdftime.pyx'])]
550 [netcdftime_src_root + '.pyx'])]
514551 else:
515552 ext_modules = None
516553
517554 setup(name="netCDF4",
518555 cmdclass=cmdclass,
519 version="1.3.0",
520 long_description="netCDF version 4 has many features not found in earlier versions of the library, such as hierarchical groups, zlib compression, multiple unlimited dimensions, and new data types. It is implemented on top of HDF5. This module implements most of the new features, and can read and write netCDF files compatible with older versions of the library. The API is modelled after Scientific.IO.NetCDF, and should be familiar to users of that module.\n\nThis project has a `Subversion repository <http://code.google.com/p/netcdf4-python/source>`_ where you may access the most up-to-date source.",
556 version="1.3.1",
557 long_description="netCDF version 4 has many features not found in earlier versions of the library, such as hierarchical groups, zlib compression, multiple unlimited dimensions, and new data types. It is implemented on top of HDF5. This module implements most of the new features, and can read and write netCDF files compatible with older versions of the library. The API is modelled after Scientific.IO.NetCDF, and should be familiar to users of that module.\n\nThis project is hosted on a `GitHub repository <https://github.com/Unidata/netcdf4-python>`_ where you may access the most up-to-date source.",
521558 author="Jeff Whitaker",
522559 author_email="jeffrey.s.whitaker@noaa.gov",
523560 url="http://github.com/Unidata/netcdf4-python",
0 import glob, os, sys, unittest, netCDF4
0 import glob, os, sys, unittest
11 from netCDF4 import getlibversion,__hdf5libversion__,__netcdf4libversion__,__version__
2 from netCDF4 import __has_cdf5_format__, __has_nc_inq_path__, __has_nc_par__
23
34 # can also just run
45 # python -m unittest discover . 'tst*py'
1314 else:
1415 test_files.remove('tst_unicode3.py')
1516 sys.stdout.write('not running tst_unicode3.py ...\n')
16 if __netcdf4libversion__ < '4.2.1':
17 if __netcdf4libversion__ < '4.2.1' or __has_nc_par__:
1718 test_files.remove('tst_diskless.py')
1819 sys.stdout.write('not running tst_diskless.py ...\n')
19 if __netcdf4libversion__ < '4.1.2':
20 if not __has_nc_inq_path__:
2021 test_files.remove('tst_filepath.py')
2122 sys.stdout.write('not running tst_filepath.py ...\n')
22 if __netcdf4libversion__ < '4.4.0' or sys.maxsize < 2**32:
23 if not __has_cdf5_format__:
2324 test_files.remove('tst_cdf5.py')
2425 sys.stdout.write('not running tst_cdf5.py ...\n')
2526
4041 runner.run(testsuite)
4142
4243 if __name__ == '__main__':
43 import numpy
44 import numpy, cython
4445 sys.stdout.write('\n')
4546 sys.stdout.write('netcdf4-python version: %s\n' % __version__)
4647 sys.stdout.write('HDF5 lib version: %s\n' % __hdf5libversion__)
4748 sys.stdout.write('netcdf lib version: %s\n' % __netcdf4libversion__)
4849 sys.stdout.write('numpy version %s\n' % numpy.__version__)
50 sys.stdout.write('cython version %s\n' % cython.__version__)
4951 runner = unittest.TextTestRunner(verbosity=1)
5052 result = runner.run(testsuite)
5153 if not result.wasSuccessful():
11 import tempfile
22 import unittest
33 import netCDF4
4
45
56 class test_filepath(unittest.TestCase):
67
2223 nc.close()
2324 shutil.rmtree(tmpdir)
2425
26 def test_no_such_file_raises(self):
27 fname = 'not_a_nc_file.nc'
28 with self.assertRaisesRegexp(IOError, fname):
29 netCDF4.Dataset(fname, 'r')
30
31
2532 if __name__ == '__main__':
2633 unittest.main()
44 import numpy as np
55 from numpy import ma
66 from numpy.testing import assert_array_equal
7 from netCDF4 import Dataset
7 from netCDF4 import Dataset, __netcdf4libversion__
88
99 # Test use of vector of missing values.
1010
2222 f = Dataset(self.testfile, 'w')
2323 d = f.createDimension('x',6)
2424 v = f.createVariable('v', "i2", 'x')
25 # issue 730: set fill_value for vlen str vars
26 v2 = f.createVariable('v2',str,'x',fill_value=u'<missing>')
2527
2628 v.missing_value = self.missing_values
2729 v[:] = self.v
30 v2[0]='first'
2831
2932 f.close()
3033
4043
4144 f = Dataset(self.testfile)
4245 v = f.variables["v"]
46 v2 = f.variables["v2"]
4347 self.assertTrue(isinstance(v[:], ma.core.MaskedArray))
4448 assert_array_equal(v[:], self.v_ma)
4549 assert_array_equal(v[2],self.v[2]) # issue #624.
4751 self.assertTrue(isinstance(v[:], np.ndarray))
4852 assert_array_equal(v[:], self.v)
4953
54 # issue 730
55 # this part fails with netcdf 4.1.3
56 # a bug in vlen strings?
57 if __netcdf4libversion__ >= '4.4.0':
58 assert (v2[0]==u'first')
59 assert (v2[1]==u'<missing>')
60
61
5062 f.close()
5163
5264
8585 yr = 1979+nfile
8686 time.units = 'days since %s-01-01' % yr
8787
88 time.calendar = 'standard'
88 # Do not set the calendar attribute on the created files to test calendar
89 # overload.
90 # time.calendar = 'standard'
8991
9092 x = f.createVariable('x','f',('time', 'y', 'z'))
9193 x.units = 'potatoes per square mile'
105107
106108
107109 def runTest(self):
110 # The test files have no calendar attribute on the time variable.
111 calendar = 'standard'
112
108113 # Get the real dates
109114 dates = []
110115 for file in self.files:
111116 f = Dataset(file)
112117 t = f.variables['time']
113 dates.extend(num2date(t[:], t.units, t.calendar))
118 dates.extend(num2date(t[:], t.units, calendar))
114119 f.close()
115120
116121 # Compare with the MF dates
117122 f = MFDataset(self.files,check=True)
118123 t = f.variables['time']
119 mfdates = num2date(t[:], t.units, t.calendar)
120124
121 T = MFTime(t)
125 T = MFTime(t, calendar=calendar)
126 assert_equal(T.calendar, calendar)
122127 assert_equal(len(T), len(t))
123128 assert_equal(T.shape, t.shape)
124129 assert_equal(T.dimensions, t.dimensions)
127132 assert_equal(date2index(datetime.datetime(1980, 1, 2), T), 366)
128133 f.close()
129134
135 # Test exception is raised when no calendar attribute is available on the
136 # time variable.
137 with MFDataset(self.files, check=True) as ds:
138 with self.assertRaises(ValueError):
139 MFTime(ds.variables['time'])
140
141 # Test exception is raised when the calendar attribute is different on the
142 # variables. First, add calendar attributes to file. Note this will modify
143 # the files inplace.
144 calendars = ['standard', 'gregorian']
145 for idx, f in enumerate(self.files):
146 with Dataset(f, 'a') as ds:
147 ds.variables['time'].calendar = calendars[idx]
148 with MFDataset(self.files, check=True) as ds:
149 with self.assertRaises(ValueError):
150 MFTime(ds.variables['time'])
151
152
130153 if __name__ == '__main__':
131154 unittest.main()