
Dask Arrays with Xarray

The scientific Python package known as Dask provides Dask Arrays: parallel, larger-than-memory, n-dimensional arrays that make use of blocked algorithms. They are analogous to NumPy arrays, but can be distributed across many cores or machines. These terms are defined below:

  • Parallel code uses many or all of the cores on the computer running the code.

  • Larger-than-memory refers to algorithms that break up data arrays into small pieces, operate on these pieces in an optimized fashion, and stream data from a storage device. This allows a user or programmer to work with datasets of a size larger than the available memory.

  • A blocked algorithm speeds up large computations by converting them into a series of smaller computations.

In this tutorial, we cover the use of Xarray to wrap Dask arrays. By using Dask arrays instead of NumPy arrays in Xarray data objects, it becomes possible to execute analysis code in parallel with much less code and effort.

Learning Objectives

  • Learn the distinction between eager and lazy execution, and performing both types of execution with Xarray

  • Understand key features of Dask Arrays

  • Learn to perform operations with Dask Arrays in similar ways to performing operations with NumPy arrays

  • Understand the use of Xarray DataArrays and Datasets as “Dask collections”, and the use of top-level Dask functions such as dask.visualize() on such collections

  • Understand the ability to use Dask transparently in all built-in Xarray operations

Prerequisites

Concepts                 Importance   Notes
Introduction to NumPy    Necessary    Familiarity with Data Arrays
Introduction to Xarray   Necessary    Familiarity with Xarray Data Structures

  • Time to learn: 30-40 minutes

Imports

For this tutorial, as we are working with Dask, there are a number of Dask packages that must be imported. Also, this is technically an Xarray tutorial, so Xarray and NumPy must also be imported. Finally, the Pythia datasets package is imported, allowing access to the Project Pythia example data library.

import dask
import dask.array as da
import numpy as np
import xarray as xr
from dask.diagnostics import ProgressBar
from dask.utils import format_bytes
from pythia_datasets import DATASETS

Blocked algorithms

As described above, a blocked algorithm replaces one large operation with many small operations. In the case of datasets, this means that a blocked algorithm separates a dataset into chunks and performs the operation on each chunk.

As an example of how blocked algorithms work, consider a dataset containing a billion numbers, and assume that the sum of the numbers is needed. Using a non-blocked algorithm, all of the numbers are added in a single sequential pass, which cannot be parallelized and requires the entire dataset to fit in memory. By using a blocked algorithm instead, the dataset is broken into chunks. (For the purposes of this example, assume that 1,000 chunks are created, with 1,000,000 numbers each.) The sum of the numbers in each chunk is taken, most likely in parallel, and then those partial sums are summed to obtain the final result.

By using blocked algorithms, we achieve the result, in this case one sum of one billion numbers, through the results of many smaller operations, in this case one thousand sums of one million numbers each. (Also note that each of the one thousand sums must then be summed, making the total number of sums 1,001.) This allows for a much greater degree of parallelism, potentially speeding up the code execution dramatically.
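The blocked summation described above can be sketched directly with Dask. This is a scaled-down, self-contained illustration (one million numbers in ten chunks rather than a billion in a thousand); the sizes are chosen only for demonstration:

```python
import dask.array as da

# One million random numbers, split into 10 chunks of 100,000 each.
x = da.random.random(1_000_000, chunks=100_000)

# Builds a task graph: 10 partial (per-chunk) sums, then one final sum
# of the partial results. No arithmetic has happened yet.
total = x.sum()

# Executes the chunked sums, potentially in parallel across cores.
result = total.compute()
print(result)
```

Since the values are uniform on [0, 1), the result lands near 500,000; the key point is that the final answer is assembled from independent per-chunk sums.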

dask.array contains these algorithms

The main object type used in Dask is the Dask Array (dask.array.Array), which implements a subset of the ndarray (NumPy array) interface. However, unlike ndarray, a Dask Array uses blocked algorithms, which break up the array into smaller arrays, as described above. This allows for the execution of computations on arrays larger than memory, by using parallelism to divide the computation among multiple cores. Dask manages and coordinates blocked algorithms for any given computation by using Dask graphs, which lay out in detail the steps Dask takes to solve a problem. In addition, Dask Arrays are lazy; in other words, any computation performed on them is delayed until a compute method is called.

Create a dask.array object

As stated earlier, Dask Arrays are loosely based on NumPy arrays. In the next set of examples, we illustrate the main differences between Dask Arrays and NumPy arrays. In order to illustrate the differences, we must have both a Dask Array object and a NumPy array object. Therefore, this first example creates a 3-D NumPy array of random data:

shape = (600, 200, 200)
arr = np.random.random(shape)
arr
array([[[0.60753676, 0.90372524, 0.21112022, ..., 0.68438681,
         0.52425763, 0.02685732],
        ...,
        [0.13985753, 0.91413822, 0.89342226, ..., 0.88044587,
         0.39142216, 0.95355895]],
       ...,
       [[0.05055221, 0.56572699, 0.75036305, ..., 0.36865315,
         0.35576539, 0.08165764],
        ...,
        [0.38777003, 0.61793705, 0.50492581, ..., 0.13442124,
         0.57368211, 0.01105735]]])
format_bytes(arr.nbytes)
'183.11 MiB'

As shown above, this NumPy array contains about 183 MB of data.

As stated above, we must also create a Dask Array. This next example creates a Dask Array with the same dimension sizes as the existing NumPy array:

darr = da.random.random(shape, chunks=(300, 100, 200))

The chunks keyword argument sets the shape of the pieces that Dask's blocked algorithms break the array into; in this case, we specify (300, 100, 200).

Specifying Chunks

In this tutorial, we specify Dask Array chunks in a block shape. However, there are many additional ways to specify chunks; see this documentation for more details.
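As a quick illustration of a few of those alternatives, the following sketch creates arrays of the tutorial's shape with chunks given as an explicit block shape, as a single integer applied to every dimension, and as "auto" (letting Dask pick sizes). The arrays here are zero-filled purely for demonstration:

```python
import dask.array as da

shape = (600, 200, 200)

a = da.zeros(shape, chunks=(300, 100, 200))  # explicit block shape
b = da.zeros(shape, chunks=100)              # 100 elements along every dimension
c = da.zeros(shape, chunks="auto")           # let Dask choose chunk sizes

print(a.chunksize)  # (300, 100, 200)
print(b.chunksize)  # (100, 100, 100)
```

The .chunksize attribute reports the (largest) chunk shape actually chosen, which is a convenient way to check what a given chunks specification produced.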

If you are viewing this page as a Jupyter Notebook, the next Jupyter cell will produce a rich information graphic giving in-depth details about the array and each individual chunk.

darr
Array Chunk
Bytes 183.11 MiB 45.78 MiB
Shape (600, 200, 200) (300, 100, 200)
Dask graph 4 chunks in 1 graph layer
Data type float64 numpy.ndarray

The above graphic contains a symbolic representation of the array, including shape, dtype, and chunksize. (Your view may be different, depending on how you are accessing this page.) Notice that there is no data shown for this array; this is because Dask Arrays are lazy, as described above. Before we call a compute method for this array, we first illustrate the structure of a Dask graph. In this example, we show the Dask graph by calling .visualize() on the array:

darr.visualize()

As shown in the above Dask graph, our array has four chunks, each one created by a call to NumPy’s “random” method (np.random.random). These chunks are concatenated into a single array after the calculation is performed.

Manipulate a dask.array object as you would a NumPy array

We can perform computations on the Dask Array created above in a similar fashion to NumPy arrays. These computations include arithmetic, slicing, and reductions, among others.

Although the code for performing these computations is similar between NumPy arrays and Dask Arrays, the process by which they are performed is quite different. For example, it is possible to call sum() on both a NumPy array and a Dask Array; however, these two sum() calls are definitely not the same, as shown below.

What’s the difference?

When sum() is called on a Dask Array, the computation is not performed; instead, an expression of the computation is built. The sum() computation, as well as any other computation methods called on the same Dask Array, are not performed until a specific method (known as a compute method) is called on the array. (This is known as lazy execution.) On the other hand, calling sum() on a NumPy array performs the calculation immediately; this is known as eager execution.
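The eager/lazy distinction can be seen directly by comparing the two calls on small, self-contained arrays (the sizes here are illustrative):

```python
import numpy as np
import dask.array as da

nx = np.random.random((100, 100))
dx = da.random.random((100, 100), chunks=(50, 50))

eager = nx.sum()  # NumPy: the additions happen immediately (eager execution)
lazy = dx.sum()   # Dask: only a task graph is built (lazy execution)

print(type(eager))    # a concrete NumPy scalar
print(type(lazy))     # a dask.array.Array -- no numbers computed yet
print(lazy.compute())  # calling a compute method finally runs the chunked sums
```

Until .compute() is called, `lazy` holds only the recipe for the sum, not its value.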

Why the difference?

As described earlier, a Dask Array is divided into chunks. Any computations run on the Dask Array run on each chunk individually. If the result of the computation is obtained before the computation runs through all of the chunks, Dask can stop the computation to save CPU time and memory resources.

This example illustrates calling sum() on a Dask Array; it also includes a demonstration of lazy execution, as well as another Dask graph display:

total = darr.sum()
total
Array Chunk
Bytes 8 B 8 B
Shape () ()
Dask graph 1 chunks in 3 graph layers
Data type float64 numpy.ndarray
total.visualize()

Compute the result

As described above, Dask Array objects make use of lazy execution. Therefore, operations performed on a Dask Array wait to execute until a compute method is called. As more operations are queued in this way, the Dask Array’s Dask graph increases in complexity, reflecting the steps Dask will take to perform all of the queued operations.

In this example, we call a compute method, simply called .compute(), to run all of the queued computations on the Dask Array:

%%time
total.compute()
CPU times: user 405 ms, sys: 52.2 ms, total: 457 ms
Wall time: 253 ms
11998968.955273133

Exercise with dask.arrays

This section of the page contains hands-on exercises with Dask Arrays. If you prefer not to work through them yourself, they can simply be read as additional examples. To participate interactively, make sure that you are viewing this page as a Jupyter Notebook.

For the first exercise, modify the chunk size or shape of the Dask Array created earlier. Call .sum() on the modified Dask Array, and visualize the Dask graph to view the changes.

da.random.random(shape, chunks=(50, 200, 400)).sum().visualize()

As the above exercise shows, Dask quickly and easily determines a strategy for performing the operations, in this case a sum. This illustrates the appeal of Dask: automatic task-graph generation that scales from simple arithmetic to complex scientific computations over large datasets with many operations.

In this next set of examples, we demonstrate that increasing the complexity of the operations performed also increases the complexity of the Dask graph.

In this example, we use randomly selected functions, arguments and Python slices to create a complex set of operations. We then visualize the Dask graph to illustrate the increased complexity:

z = darr.dot(darr.T).mean(axis=0)[::2, :].std(axis=1)
z
Array Chunk
Bytes 468.75 kiB 117.19 kiB
Shape (100, 600) (50, 300)
Dask graph 4 chunks in 12 graph layers
Data type float64 numpy.ndarray
z.visualize()

Testing a bigger calculation

While the earlier examples in this tutorial covered the basics of Dask well, the size of the data in those examples, about 180 MB, is far too small to be representative of real Dask workloads.

In this example, we create a much larger array, more indicative of data actually used in Dask:

darr = da.random.random((4000, 100, 4000), chunks=(1000, 100, 500)).astype('float32')
darr
Array Chunk
Bytes 5.96 GiB 190.73 MiB
Shape (4000, 100, 4000) (1000, 100, 500)
Dask graph 32 chunks in 2 graph layers
Data type float32 numpy.ndarray

The dataset created in the previous example is much larger, approximately 6 GB. Depending on how many programs are running on your computer, this may be greater than the amount of free RAM on your computer. However, because Dask supports larger-than-memory computation, the amount of free RAM does not prevent Dask from working on this dataset.

In this example, we again perform randomly selected operations, but this time on the much larger dataset. We also visualize the Dask graph, and then run the compute method. However, as computing complex functions on large datasets is inherently time-consuming, we show a progress bar to track the progress of the computation.

z = (darr + darr.T)[::2, :].mean(axis=2)
z.visualize()
with ProgressBar():
    computed_ds = z.compute()
[                                        ] | 0% Completed | 250.71 us
...
[########################################] | 100% Completed | 7.48 s

Dask Arrays with Xarray

While directly interacting with Dask Arrays can be useful on occasion, more often than not Dask Arrays are interacted with through Xarray. Since Xarray wraps NumPy arrays, and Dask Arrays contain most of the functionality of NumPy arrays, Xarray can also wrap Dask Arrays, allowing anyone with knowledge of Xarray to easily start using the Dask interface.

Reading data with Dask and Xarray

As demonstrated in previous examples, a Dask Array consists of many smaller arrays, called chunks:

darr
Array Chunk
Bytes 5.96 GiB 190.73 MiB
Shape (4000, 100, 4000) (1000, 100, 500)
Dask graph 32 chunks in 2 graph layers
Data type float32 numpy.ndarray

As shown in the following example, to read data into Xarray as Dask Arrays, simply specify the chunks keyword argument when calling the open_dataset() function:

ds = xr.open_dataset(DATASETS.fetch('CESM2_sst_data.nc'), chunks={})
ds.tos
/home/runner/miniconda3/envs/pythia-book-dev/lib/python3.12/site-packages/xarray/conventions.py:286: SerializationWarning: variable 'tos' has multiple fill values {1e+20, 1e+20} defined, decoding all values to NaN.
  var = coder.decode(var, name=name)
<xarray.DataArray 'tos' (time: 180, lat: 180, lon: 360)> Size: 47MB
dask.array<open_dataset-tos, shape=(180, 180, 360), dtype=float32, chunksize=(1, 180, 360), chunktype=numpy.ndarray>
Coordinates:
  * time     (time) object 1kB 2000-01-15 12:00:00 ... 2014-12-15 12:00:00
  * lat      (lat) float64 1kB -89.5 -88.5 -87.5 -86.5 ... 86.5 87.5 88.5 89.5
  * lon      (lon) float64 3kB 0.5 1.5 2.5 3.5 4.5 ... 356.5 357.5 358.5 359.5
Attributes: (12/19)
    cell_measures:  area: areacello
    cell_methods:   area: mean where sea time: mean
    comment:        Model data on the 1x1 grid includes values in all cells f...
    description:    This may differ from "surface temperature" in regions of ...
    frequency:      mon
    id:             tos
    ...             ...
    time_label:     time-mean
    time_title:     Temporal mean
    title:          Sea Surface Temperature
    type:           real
    units:          degC
    variable_id:    tos

While it is valid to pass an empty dictionary to the chunks keyword argument, this technique does not specify how to chunk the data; Xarray then falls back on the preferred chunking of the underlying storage engine, which here results in one chunk per time step.

Correct usage of the chunks keyword argument specifies how many values in each dimension are contained in a single chunk. In this example, specifying the chunks keyword argument as chunks={'time':90} indicates to Xarray and Dask that 90 time slices are allocated to each chunk on the temporal axis.

Since this dataset contains 180 total time slices, the data variable tos (holding the sea surface temperature data) is now split into two chunks in the temporal dimension.

ds = xr.open_dataset(
    DATASETS.fetch('CESM2_sst_data.nc'),
    engine="netcdf4",
    chunks={"time": 90, "lat": 180, "lon": 360},
)
ds.tos
<xarray.DataArray 'tos' (time: 180, lat: 180, lon: 360)> Size: 47MB
dask.array<open_dataset-tos, shape=(180, 180, 360), dtype=float32, chunksize=(90, 180, 360), chunktype=numpy.ndarray>
Coordinates:
  * time     (time) object 1kB 2000-01-15 12:00:00 ... 2014-12-15 12:00:00
  * lat      (lat) float64 1kB -89.5 -88.5 -87.5 -86.5 ... 86.5 87.5 88.5 89.5
  * lon      (lon) float64 3kB 0.5 1.5 2.5 3.5 4.5 ... 356.5 357.5 358.5 359.5
Attributes: (12/19)
    cell_measures:  area: areacello
    cell_methods:   area: mean where sea time: mean
    comment:        Model data on the 1x1 grid includes values in all cells f...
    description:    This may differ from "surface temperature" in regions of ...
    frequency:      mon
    id:             tos
    ...             ...
    time_label:     time-mean
    time_title:     Temporal mean
    title:          Sea Surface Temperature
    type:           real
    units:          degC
    variable_id:    tos

It is fairly straightforward to retrieve the chunks and their sizes for each dimension; simply access the .chunks attribute on an Xarray DataArray. In this example, we show that the tos DataArray now contains two chunks along the time dimension, with each chunk containing 90 time slices.

ds.tos.chunks
((90, 90), (180,), (360,))
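An already-open DataArray can also be chunked (or rechunked) after the fact with the .chunk() method. The following self-contained sketch uses synthetic random data with the same dimension names and sizes as the tutorial's sea surface temperature variable, standing in for the real file:

```python
import numpy as np
import xarray as xr

# Synthetic stand-in for the tutorial's 'tos' variable: 180 monthly
# time steps on a 180 x 360 lat/lon grid.
tos = xr.DataArray(
    np.random.random((180, 180, 360)),
    dims=("time", "lat", "lon"),
    name="tos",
)

# .chunk() wraps the NumPy data in a Dask Array with the requested chunking.
chunked = tos.chunk({"time": 90})  # 180 time steps -> two chunks of 90

print(chunked.chunks)  # ((90, 90), (180,), (360,))
```

Dimensions left out of the dictionary (here lat and lon) each remain a single chunk, matching the behavior seen with open_dataset above.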

Xarray data structures are first-class dask collections

If an Xarray Dataset or DataArray object uses a Dask Array, rather than a NumPy array, it counts as a first-class Dask collection. This means that you can pass such an object to dask.visualize() and dask.compute(), in the same way as an individual Dask Array.
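As a minimal, self-contained demonstration of this (using a small synthetic DataArray rather than the tutorial dataset), a Dask-backed Xarray object can be handed straight to the top-level dask.compute() function:

```python
import dask
import numpy as np
import xarray as xr

# A Dask-backed DataArray: .chunk() swaps the NumPy data for a Dask Array.
arr = xr.DataArray(np.arange(100.0), dims="x").chunk({"x": 25})
mean = arr.mean()  # lazy: nothing computed yet

# dask.compute accepts the Xarray object directly, just like a raw
# Dask Array, and returns a tuple of concrete results.
(result,) = dask.compute(mean)
print(float(result))  # 49.5
```

Calling dask.compute(mean) here is equivalent to mean.compute(); the point is that top-level Dask functions treat the Xarray object as a Dask collection.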

In this example, we call dask.visualize on our Xarray DataArray, displaying a Dask graph for the DataArray object:

dask.visualize(ds)

Parallel and lazy computation using dask.array with Xarray

As described above, Xarray Datasets and DataArrays containing Dask Arrays are first-class Dask collections. Therefore, computations performed on such objects are deferred until a compute method is called. (This is the definition of lazy computation.)

z = ds.tos.mean(['lat', 'lon']).dot(ds.tos.T)
z
<xarray.DataArray 'tos' (lon: 360, lat: 180)> Size: 259kB
dask.array<sum-aggregate, shape=(360, 180), dtype=float32, chunksize=(360, 180), chunktype=numpy.ndarray>
Coordinates:
  * lat      (lat) float64 1kB -89.5 -88.5 -87.5 -86.5 ... 86.5 87.5 88.5 89.5
  * lon      (lon) float64 3kB 0.5 1.5 2.5 3.5 4.5 ... 356.5 357.5 358.5 359.5

As shown in the above example, the result of the applied operations is an Xarray DataArray that contains a Dask Array: the same object type that the operations were performed on. This holds for any operation that can be applied to an Xarray DataArray, including subsetting, as the next example illustrates:

z.isel(lat=0)
<xarray.DataArray 'tos' (lon: 360)> Size: 1kB
dask.array<getitem, shape=(360,), dtype=float32, chunksize=(360,), chunktype=numpy.ndarray>
Coordinates:
    lat      float64 8B -89.5
  * lon      (lon) float64 3kB 0.5 1.5 2.5 3.5 4.5 ... 356.5 357.5 358.5 359.5

Because the data subset created above is also a first-class Dask collection, we can view its Dask graph using the dask.visualize() function, as shown in this example:

dask.visualize(z)

Since this object is a first-class Dask collection, the computations performed on it have been deferred. To run these computations, we must call a compute method, in this case .compute(). This example also uses a progress bar to track the computation progress.

with ProgressBar():
    computed_ds = z.compute()
[                                        ] | 0% Completed | 128.86 us
[########################################] | 100% Completed | 101.24 ms


Summary

This tutorial covered the use of Xarray to access Dask Arrays, and the use of the chunks keyword argument to open datasets with Dask data instead of NumPy data. Another important concept introduced in this tutorial is the usage of Xarray Datasets and DataArrays as Dask collections, allowing Xarray objects to be manipulated in a similar manner to Dask Arrays. Finally, the concepts of larger-than-memory datasets, lazy computation, and parallel computation, and how they relate to Xarray and Dask, were covered.

Dask Shortcomings

Although Dask Arrays and NumPy arrays are generally interchangeable, NumPy offers some functionality that is lacking in Dask Arrays. The usage of Dask Array comes with the following relevant issues:

  1. Operations where the resulting shape depends on the array values can produce erratic behavior, or fail altogether, when used on a Dask Array. If the operation succeeds, the resulting Dask Array will have unknown chunk sizes, which can cause other sections of code to fail.

  2. Operations that are by nature difficult to parallelize or less useful on very large datasets, such as sort, are not included in the Dask Array interface. Some of these operations have supported versions that are inherently more intuitive to parallelize, such as topk.

  3. Development of new Dask functionality is only initiated when such functionality is required; therefore, some lesser-used NumPy functions, such as np.sometrue, are not yet implemented in Dask. However, many of these functions can be added as community contributions, or have already been added in this manner.
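The first issue above, value-dependent output shapes, can be seen with boolean indexing. This sketch (with illustrative sizes) shows the unknown chunk sizes it produces, and dask.array's compute_chunk_sizes() method for resolving them:

```python
import math
import dask.array as da

x = da.random.random(100_000, chunks=10_000)

# Boolean indexing: the output length depends on the data values, so Dask
# cannot know the resulting chunk sizes ahead of time.
y = x[x > 0.5]
print(y.chunks)  # chunk sizes appear as nan (unknown)

# compute_chunk_sizes() evaluates just enough of the graph to fill them in.
y = y.compute_chunk_sizes()
print(y.chunks)  # now concrete element counts
```

Until the chunk sizes are resolved, operations that need a known shape (such as reshaping) will fail on `y`, which is the downstream breakage the list item describes.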

Learn More

For more in-depth information on Dask Arrays, visit the official documentation page. In addition, this screencast reinforces the concepts covered in this tutorial. (If you are viewing this page as a Jupyter Notebook, the screencast will appear below as an embedded YouTube video.)

from IPython.display import YouTubeVideo

YouTubeVideo(id="9h_61hXCDuI", width=600, height=300)

Resources and references