Dask Arrays with Xarray
The scientific Python package known as Dask provides Dask Arrays: parallel, larger-than-memory, n-dimensional arrays that make use of blocked algorithms. They are analogous to NumPy arrays, but are distributed. These terms are defined below:
Parallel code uses many or all of the cores on the computer running the code.
Larger-than-memory refers to algorithms that break up data arrays into small pieces, operate on these pieces in an optimized fashion, and stream data from a storage device. This allows a user or programmer to work with datasets of a size larger than the available memory.
A blocked algorithm speeds up large computations by converting them into a series of smaller computations.
In this tutorial, we cover the use of Xarray to wrap Dask arrays. By using Dask arrays instead of Numpy arrays in Xarray data objects, it becomes possible to execute analysis code in parallel with much less code and effort.
Learning Objectives
Learn the distinction between eager and lazy execution, and how to perform both types of execution with Xarray
Understand key features of Dask Arrays
Learn to perform operations with Dask Arrays in similar ways to performing operations with NumPy arrays
Understand the use of Xarray DataArrays and Datasets as “Dask collections”, and the use of top-level Dask functions such as dask.visualize() on such collections
Understand the ability to use Dask transparently in all built-in Xarray operations
Prerequisites
| Concepts | Importance | Notes |
| --- | --- | --- |
| Familiarity with Data Arrays | Necessary | |
| Familiarity with Xarray Data Structures | Necessary | |
Time to learn: 30-40 minutes
Imports
For this tutorial, as we are working with Dask, there are a number of Dask packages that must be imported. In addition, since this is an Xarray tutorial, Xarray and NumPy must be imported as well. Finally, the Pythia datasets package is imported, allowing access to the Project Pythia example data library.
import dask
import dask.array as da
import numpy as np
import xarray as xr
from dask.diagnostics import ProgressBar
from dask.utils import format_bytes
from pythia_datasets import DATASETS
Blocked algorithms
As described above, a blocked algorithm is one that replaces a single large operation with many smaller operations. In the case of datasets, this means that a blocked algorithm separates a dataset into chunks and performs an operation on each.
As an example of how blocked algorithms work, consider a dataset containing a billion numbers, and assume that the sum of the numbers is needed. Using a non-blocked algorithm, all of the numbers are added in a single pass, which requires holding the entire dataset in memory at once and offers no opportunity for parallelism. By using a blocked algorithm instead, the dataset is broken into chunks. (For the purposes of this example, assume that 1,000 chunks are created, with 1,000,000 numbers each.) The sum of the numbers in each chunk is taken, most likely in parallel, and then each of those sums are summed to obtain the final result.
By using blocked algorithms, we achieve the result, in this case one sum of one billion numbers, through the results of many smaller operations, in this case one thousand sums of one million numbers each. (Also note that each of the one thousand sums must then be summed, making the total number of sums 1,001.) This allows for a much greater degree of parallelism, potentially speeding up the code execution dramatically.
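To make this concrete, here is a minimal, purely sequential sketch of a blocked sum in plain NumPy; the sizes are scaled down for illustration, and Dask applies this same pattern while running the per-chunk sums in parallel:
import numpy as np

# A scaled-down stand-in for the billion-number dataset.
data = np.random.random(1_000_000)

# Break the dataset into 1,000 chunks.
chunks = np.array_split(data, 1_000)

# One small sum per chunk (Dask would run these in parallel)...
partial_sums = [chunk.sum() for chunk in chunks]

# ...plus one final sum over the partial results: 1,001 sums in total.
total = sum(partial_sums)

assert np.isclose(total, data.sum())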
dask.array contains these algorithms
The main object type used in Dask is dask.array, which implements a subset of the ndarray (NumPy array) interface. However, unlike ndarray, dask.array uses blocked algorithms, which break up the array into smaller arrays, as described above. This allows for the execution of computations on arrays larger than memory, by using parallelism to divide the computation among multiple cores. Dask manages and coordinates blocked algorithms for any given computation by using Dask graphs, which lay out in detail the steps Dask takes to solve a problem. In addition, dask.array objects, known as Dask Arrays, are lazy; in other words, any computation performed on them is delayed until a specific method is called.
Create a dask.array object
As stated earlier, Dask Arrays are loosely based on NumPy arrays. In the next set of examples, we illustrate the main differences between Dask Arrays and NumPy arrays. In order to illustrate the differences, we must have both a Dask Array object and a NumPy array object. Therefore, this first example creates a 3-D NumPy array of random data:
shape = (600, 200, 200)
arr = np.random.random(shape)
arr
array([[[0.70779841, 0.28061749, 0.83048023, ..., 0.89867656,
0.84436136, 0.91196346],
[0.26665085, 0.10029881, 0.20806093, ..., 0.52704294,
0.1859516 , 0.63087039],
[0.2229878 , 0.92398193, 0.88034212, ..., 0.04046696,
0.61218389, 0.5097986 ],
...,
[0.31578539, 0.64630712, 0.2040543 , ..., 0.52724195,
0.34308869, 0.01273355],
[0.88395073, 0.27216906, 0.53964105, ..., 0.78675465,
0.26469026, 0.84634013],
[0.57091391, 0.30532079, 0.36719211, ..., 0.12087925,
0.4918076 , 0.18922193]],
[[0.2103409 , 0.25417962, 0.83128983, ..., 0.45762868,
0.74844488, 0.54116346],
[0.14402619, 0.89953606, 0.5969845 , ..., 0.64165787,
0.99341571, 0.8647843 ],
[0.86615325, 0.76254859, 0.65377482, ..., 0.6122808 ,
0.2603268 , 0.23885823],
...,
[0.54626888, 0.82707281, 0.02987539, ..., 0.76769487,
0.01364736, 0.75572541],
[0.34594162, 0.48785491, 0.06364969, ..., 0.69107762,
0.2277181 , 0.14392505],
[0.5703986 , 0.8056349 , 0.83081297, ..., 0.37641165,
0.39922453, 0.22403617]],
[[0.20291046, 0.6774729 , 0.05677663, ..., 0.91892004,
0.86683551, 0.48404201],
[0.24470933, 0.47067116, 0.87002417, ..., 0.42609136,
0.91221637, 0.93053886],
[0.73904077, 0.06639988, 0.4412007 , ..., 0.36713664,
0.28521483, 0.9296752 ],
...,
[0.16260656, 0.39356632, 0.09382441, ..., 0.16201428,
0.2647925 , 0.21822879],
[0.61003516, 0.34268298, 0.92607418, ..., 0.47392578,
0.60874036, 0.50914546],
[0.99130847, 0.2501501 , 0.26049323, ..., 0.28960788,
0.91285312, 0.10934265]],
...,
[[0.18147605, 0.94568554, 0.09449721, ..., 0.52831383,
0.84715045, 0.08179484],
[0.93363769, 0.40436928, 0.43312103, ..., 0.31977142,
0.29394532, 0.84247381],
[0.87041958, 0.03637841, 0.43193719, ..., 0.21067278,
0.3032195 , 0.15301586],
...,
[0.29401464, 0.50712464, 0.48398807, ..., 0.88555735,
0.04300938, 0.44452155],
[0.81977755, 0.92209603, 0.0887396 , ..., 0.05146497,
0.13516942, 0.09761548],
[0.10615471, 0.68969113, 0.90774402, ..., 0.23078073,
0.22570853, 0.68467505]],
[[0.85909204, 0.77654888, 0.93176451, ..., 0.37857493,
0.27800711, 0.61153363],
[0.60769757, 0.90150123, 0.46962334, ..., 0.22385267,
0.7572969 , 0.16494546],
[0.19203774, 0.23162561, 0.12195898, ..., 0.86795227,
0.17911356, 0.00133847],
...,
[0.56354648, 0.43036141, 0.44777469, ..., 0.32422039,
0.22404491, 0.13891162],
[0.97652795, 0.36475606, 0.67803939, ..., 0.80214015,
0.53195345, 0.77818648],
[0.31188964, 0.7701487 , 0.95596295, ..., 0.62504092,
0.74451281, 0.29593383]],
[[0.72864448, 0.05150956, 0.88670521, ..., 0.90379065,
0.31856825, 0.2899122 ],
[0.76636537, 0.04007467, 0.28144395, ..., 0.30719216,
0.37547016, 0.31468012],
[0.86749378, 0.97160656, 0.84917914, ..., 0.27362108,
0.11304553, 0.69428799],
...,
[0.21043948, 0.28610044, 0.88889816, ..., 0.38647392,
0.74293984, 0.85467266],
[0.47724731, 0.65641355, 0.8408336 , ..., 0.78657512,
0.93156735, 0.01914926],
[0.68720365, 0.99841021, 0.29111153, ..., 0.68060243,
0.19731524, 0.60834903]]])
format_bytes(arr.nbytes)
'183.11 MiB'
As shown above, this NumPy array contains about 183 MiB of data.
As stated above, we must also create a Dask Array. This next example creates a Dask Array with the same dimension sizes as the existing NumPy array:
darr = da.random.random(shape, chunks=(300, 100, 200))
By specifying values to the chunks keyword argument, we can specify the array pieces that Dask’s blocked algorithms break the array into; in this case, we specify (300, 100, 200).
Specifying Chunks
In this tutorial, we specify Dask Array chunks in a block shape. However, there are many additional ways to specify chunks; see this documentation for more details.
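For example, the following alternatives are all accepted by Dask (a sketch; the specific values here are arbitrary):
# A uniform chunk length applied along every dimension.
a1 = da.random.random(shape, chunks=100)

# Let Dask pick a reasonable size for the second dimension only.
a2 = da.random.random(shape, chunks=(300, 'auto', 200))

# Let Dask choose all chunk sizes automatically.
a3 = da.random.random(shape, chunks='auto')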
If you are viewing this page as a Jupyter Notebook, the next Jupyter cell will produce a rich information graphic giving in-depth details about the array and each individual chunk.
darr
The above graphic contains a symbolic representation of the array, including shape, dtype, and chunksize. (Your view may be different, depending on how you are accessing this page.) Notice that there is no data shown for this array; this is because Dask Arrays are lazy, as described above. Before we call a compute method for this array, we first illustrate the structure of a Dask graph. In this example, we show the Dask graph by calling .visualize() on the array:
darr.visualize()
As shown in the above Dask graph, our array has four chunks, each one created by a call to NumPy’s random method (np.random.random). These chunks are concatenated into a single array after the calculation is performed.
Manipulate a dask.array object as you would a NumPy array
We can perform computations on the Dask Array created above in a similar fashion to NumPy arrays. These computations include arithmetic, slicing, and reductions, among others.
Although the code for performing these computations is similar between NumPy arrays and Dask Arrays, the process by which they are performed is quite different. For example, it is possible to call sum() on both a NumPy array and a Dask Array; however, these two sum() calls are definitely not the same, as shown below.
What’s the difference?
When sum() is called on a Dask Array, the computation is not performed; instead, an expression of the computation is built. Neither the sum() computation nor any other computation method called on the same Dask Array is performed until a specific method (known as a compute method) is called on the array. (This is known as lazy execution.) On the other hand, calling sum() on a NumPy array performs the calculation immediately; this is known as eager execution.
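The contrast is visible directly in the return values, using the arr and darr arrays created above:
# Eager: NumPy performs the addition immediately and returns a scalar.
eager_result = arr.sum()     # a plain NumPy float

# Lazy: Dask only records the operation; nothing is computed yet.
lazy_result = darr.sum()     # an unevaluated dask.array object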
Why the difference?
As described earlier, a Dask Array is divided into chunks, and any computation run on the Dask Array runs on each chunk individually. If the final result can be obtained without processing every chunk, Dask can skip the unneeded work, saving CPU time and memory resources.
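As a small illustration of this (a sketch using the darr array from above), slicing before a reduction means that tasks are generated only for the chunks that overlap the slice; the remaining chunks never enter the graph:
# Only the chunks overlapping the first 100 rows appear in this graph;
# the other chunks are never read or computed.
darr[:100].sum().visualize()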
This example illustrates calling sum() on a Dask Array; it also includes a demonstration of lazy execution, as well as another Dask graph display:
total = darr.sum()
total
total.visualize()
Compute the result
As described above, Dask Array objects make use of lazy execution. Therefore, operations performed on a Dask Array wait to execute until a compute method is called. As more operations are queued in this way, the Dask Array’s Dask graph increases in complexity, reflecting the steps Dask will take to perform all of the queued operations.
In this example, we call a compute method, simply called .compute(), to run all of the stored computations on the Dask Array:
%%time
total.compute()
CPU times: user 326 ms, sys: 37.3 ms, total: 363 ms
Wall time: 181 ms
np.float64(11999691.109575726)
Exercise with dask.arrays
In this section of the page, the examples are hands-on exercises pertaining to Dask Arrays. If these exercises are not interesting to you, this section can be used strictly as examples regardless of how the page is viewed. However, if you wish to participate in the exercises, make sure that you are viewing this page as a Jupyter Notebook.
For the first exercise, modify the chunk size or shape of the Dask Array created earlier. Call .sum() on the modified Dask Array, and visualize the Dask graph to view the changes.
da.random.random(shape, chunks=(50, 200, 400)).sum().visualize()
As the above exercise shows, Dask quickly and easily determines a strategy for performing the operations, in this case a sum. This illustrates the appeal of Dask: automatic task-graph generation that scales from simple arithmetic problems to highly complex scientific computations involving large datasets and many operations.
In this next set of examples, we demonstrate that increasing the complexity of the operations performed also increases the complexity of the Dask graph.
In this example, we use randomly selected functions, arguments and Python slices to create a complex set of operations. We then visualize the Dask graph to illustrate the increased complexity:
z = darr.dot(darr.T).mean(axis=0)[::2, :].std(axis=1)
z
z.visualize()
Testing a bigger calculation
While the earlier examples in this tutorial covered the basics of Dask well, the data in those examples, about 180 MiB, is far too small to represent a realistic use of Dask.
In this example, we create a much larger array, more indicative of the data actually used with Dask:
darr = da.random.random((4000, 100, 4000), chunks=(1000, 100, 500)).astype('float32')
darr
The array created in the previous example is much larger, approximately 6 GB. Depending on how many programs are running on your computer, this may exceed the amount of free RAM available. However, because Dask supports larger-than-memory computation, a shortage of free RAM does not prevent Dask from working on this dataset.
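As with the NumPy array earlier, we can confirm the size with format_bytes(); because a Dask Array's size is pure metadata, this does not allocate or compute any data:
# Reports roughly 6 GiB (4000 x 100 x 4000 four-byte floats),
# without ever materializing the array.
format_bytes(darr.nbytes)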
In this example, we again perform randomly selected operations, but this time on the much larger dataset. We also visualize the Dask graph, and then run the compute method. However, as computing complex functions on large datasets is inherently time-consuming, we show a progress bar to track the progress of the computation.
z = (darr + darr.T)[::2, :].mean(axis=2)
z.visualize()
with ProgressBar():
computed_ds = z.compute()
[                                        ] | 0% Completed | 182.53 us
[########################################] | 100% Completed | 6.78 s
Dask Arrays with Xarray
While directly interacting with Dask Arrays can be useful on occasion, more often than not Dask Arrays are interacted with through Xarray. Since Xarray wraps NumPy arrays, and Dask Arrays contain most of the functionality of NumPy arrays, Xarray can also wrap Dask Arrays, allowing anyone with knowledge of Xarray to easily start using the Dask interface.
Reading data with Dask and Xarray
As demonstrated in previous examples, a Dask Array consists of many smaller arrays, called chunks:
darr
As shown in the following example, to read data into Xarray as Dask Arrays, simply specify the chunks keyword argument when calling the open_dataset() function:
ds = xr.open_dataset(DATASETS.fetch('CESM2_sst_data.nc'), chunks={})
ds.tos
/home/runner/miniconda3/envs/pythia-book-dev/lib/python3.11/site-packages/xarray/conventions.py:287: SerializationWarning: variable 'tos' has multiple fill values {np.float32(1e+20), np.float64(1e+20)} defined, decoding all values to NaN.
var = coder.decode(var, name=name)
<xarray.DataArray 'tos' (time: 180, lat: 180, lon: 360)> Size: 47MB dask.array<open_dataset-tos, shape=(180, 180, 360), dtype=float32, chunksize=(1, 180, 360), chunktype=numpy.ndarray> Coordinates: * time (time) object 1kB 2000-01-15 12:00:00 ... 2014-12-15 12:00:00 * lat (lat) float64 1kB -89.5 -88.5 -87.5 -86.5 ... 86.5 87.5 88.5 89.5 * lon (lon) float64 3kB 0.5 1.5 2.5 3.5 4.5 ... 356.5 357.5 358.5 359.5 Attributes: (12/19) cell_measures: area: areacello cell_methods: area: mean where sea time: mean comment: Model data on the 1x1 grid includes values in all cells f... description: This may differ from "surface temperature" in regions of ... frequency: mon id: tos ... ... time_label: time-mean time_title: Temporal mean title: Sea Surface Temperature type: real units: degC variable_id: tos
While it is valid to pass an empty dictionary to the chunks keyword argument, this technique does not specify how the data should be chunked; Xarray instead falls back on the engine’s preferred chunking (as shown above, where each time slice became its own chunk).
Correct usage of the chunks keyword argument specifies how many values in each dimension are contained in a single chunk. In this example, specifying the chunks keyword argument as chunks={'time': 90} indicates to Xarray and Dask that 90 time slices are allocated to each chunk on the temporal axis.
Since this dataset contains 180 total time slices, the data variable tos (holding the sea surface temperature data) is now split into two chunks in the temporal dimension.
ds = xr.open_dataset(
DATASETS.fetch('CESM2_sst_data.nc'),
engine="netcdf4",
chunks={"time": 90, "lat": 180, "lon": 360},
)
ds.tos
/home/runner/miniconda3/envs/pythia-book-dev/lib/python3.11/site-packages/xarray/conventions.py:287: SerializationWarning: variable 'tos' has multiple fill values {np.float32(1e+20), np.float64(1e+20)} defined, decoding all values to NaN.
var = coder.decode(var, name=name)
<xarray.DataArray 'tos' (time: 180, lat: 180, lon: 360)> Size: 47MB dask.array<open_dataset-tos, shape=(180, 180, 360), dtype=float32, chunksize=(90, 180, 360), chunktype=numpy.ndarray> Coordinates: * time (time) object 1kB 2000-01-15 12:00:00 ... 2014-12-15 12:00:00 * lat (lat) float64 1kB -89.5 -88.5 -87.5 -86.5 ... 86.5 87.5 88.5 89.5 * lon (lon) float64 3kB 0.5 1.5 2.5 3.5 4.5 ... 356.5 357.5 358.5 359.5 Attributes: (12/19) cell_measures: area: areacello cell_methods: area: mean where sea time: mean comment: Model data on the 1x1 grid includes values in all cells f... description: This may differ from "surface temperature" in regions of ... frequency: mon id: tos ... ... time_label: time-mean time_title: Temporal mean title: Sea Surface Temperature type: real units: degC variable_id: tos
It is fairly straightforward to retrieve a list of the chunks and their sizes for each dimension; simply access the .chunks attribute of an Xarray DataArray. In this example, we show that the tos DataArray now contains two chunks on the time dimension, with each chunk containing 90 time slices.
ds.tos.chunks
((90, 90), (180,), (360,))
Xarray data structures are first-class dask collections
If an Xarray Dataset or DataArray object uses a Dask Array, rather than a NumPy array, it counts as a first-class Dask collection. This means that you can pass such an object to dask.visualize() and dask.compute(), in the same way as an individual Dask Array.
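For instance (a minimal sketch using the dataset opened above), dask.compute() accepts an Xarray object directly and returns a tuple of computed results:
# dask.compute() works on Xarray objects just as it does on bare Dask Arrays;
# the result is a tuple holding the computed (NumPy-backed) DataArray.
(mean_tos,) = dask.compute(ds.tos.mean('time'))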
In this example, we call dask.visualize() on our Xarray Dataset, displaying a Dask graph for the entire Dataset object:
dask.visualize(ds)
Parallel and lazy computation using dask.array with Xarray
As described above, Xarray Datasets and DataArrays containing Dask Arrays are first-class Dask collections. Therefore, computations performed on such objects are deferred until a compute method is called. (This is the definition of lazy computation.)
z = ds.tos.mean(['lat', 'lon']).dot(ds.tos.T)
z
<xarray.DataArray 'tos' (lon: 360, lat: 180)> Size: 259kB dask.array<sum-aggregate, shape=(360, 180), dtype=float32, chunksize=(360, 180), chunktype=numpy.ndarray> Coordinates: * lat (lat) float64 1kB -89.5 -88.5 -87.5 -86.5 ... 86.5 87.5 88.5 89.5 * lon (lon) float64 3kB 0.5 1.5 2.5 3.5 4.5 ... 356.5 357.5 358.5 359.5
As shown in the above example, the result of the applied operations is an Xarray DataArray that contains a Dask Array, the same type of object as the one the operations were performed on. This is true for any operations that can be applied to Xarray DataArrays, including subsetting operations, as this next example illustrates:
z.isel(lat=0)
<xarray.DataArray 'tos' (lon: 360)> Size: 1kB dask.array<getitem, shape=(360,), dtype=float32, chunksize=(360,), chunktype=numpy.ndarray> Coordinates: lat float64 8B -89.5 * lon (lon) float64 3kB 0.5 1.5 2.5 3.5 4.5 ... 356.5 357.5 358.5 359.5
Because the data subset created above is also a first-class Dask collection, we can view its Dask graph using the dask.visualize() function, as shown in this example:
dask.visualize(z)
Since this object is a first-class Dask collection, the computations performed on it have been deferred. To run these computations, we must call a compute method, in this case .compute(). This example also uses a progress bar to track the computation progress.
with ProgressBar():
computed_ds = z.compute()
[ ] | 0% Completed | 113.95 us
[########################################] | 100% Completed | 101.06 ms
Summary
This tutorial covered the use of Xarray to access Dask Arrays, and the use of the chunks keyword argument to open datasets with Dask data instead of NumPy data. Another important concept introduced in this tutorial is the usage of Xarray Datasets and DataArrays as Dask collections, allowing Xarray objects to be manipulated in a similar manner to Dask Arrays. Finally, the concepts of larger-than-memory datasets, lazy computation, and parallel computation, and how they relate to Xarray and Dask, were covered.
Dask Shortcomings
Although Dask Arrays and NumPy arrays are generally interchangeable, NumPy offers some functionality that is lacking in Dask Arrays. The usage of Dask Arrays comes with the following relevant issues:
Operations where the resulting shape depends on the array values can produce erratic behavior, or fail altogether, when used on a Dask Array. If the operation succeeds, the resulting Dask Array will have unknown chunk sizes, which can cause other sections of code to fail. (A small sketch of this appears after this list.)
Operations that are by nature difficult to parallelize, or that are less useful on very large datasets, such as sort, are not included in the Dask Array interface. Some of these operations have supported alternatives that are more amenable to parallelism, such as topk.
Development of new Dask functionality is only initiated when such functionality is required; therefore, some lesser-used NumPy functions, such as np.sometrue, are not yet implemented in Dask. However, many of these functions can be added as community contributions, or have already been added in this manner.
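As a small sketch of the first point above, boolean (value-dependent) indexing on the darr array from earlier produces an array whose chunk sizes Dask cannot know in advance:
# The output shape depends on how many values exceed 0.5, so the
# resulting chunk sizes are reported as nan (unknown).
filtered = darr[darr > 0.5]
filtered.chunks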
Learn More
For more in-depth information on Dask Arrays, visit the official documentation page. In addition, this screencast reinforces the concepts covered in this tutorial. (If you are viewing this page as a Jupyter Notebook, the screencast will appear below as an embedded YouTube video.)
from IPython.display import YouTubeVideo
YouTubeVideo(id="9h_61hXCDuI", width=600, height=300)
Resources and references
To find specific reference information about Dask and Xarray, see their official documentation pages:
If you require assistance with a specific issue involving Xarray or Dask, the following resources may be of use:
Dask tag on StackOverflow, for usage questions
GitHub discussions: dask, for general (non-bug) discussion and usage questions
GitHub issues: dask, for bug reports and feature requests
GitHub discussions: xarray, for general (non-bug) discussion and usage questions
GitHub issues: xarray, for bug reports and feature requests
Certain sections of this tutorial are adapted from the following existing tutorials: