# NumPy Broadcasting

## Overview

Broadcasting is a valuable part of the power that NumPy provides, but there’s no looking past the fact that it can be conceptually difficult to digest. It is a helpful and very powerful technique; that said, we also suggest moving on to take a look at some of the label-based corners of the Python ecosystem, namely Pandas and Xarray, for the ways that they make some of these concepts simpler and easier to use for real-world data.

In this section we cover:

- An introduction to broadcasting
- Avoiding loops with vectorization

## Using broadcasting to implicitly loop over data

### What is broadcasting?

Broadcasting is a useful NumPy tool that allows us to perform operations between arrays with different shapes, provided that they are compatible with each other in certain ways. To start, we can create an array below and add 5 to it:

```
import numpy as np
a = np.array([10, 20, 30, 40])
a + 5
```

```
array([15, 25, 35, 45])
```

This works even though 5 is not an array; it works like we would expect, adding 5 to each of the elements in `a`. This also works if 5 is an array:

```
b = np.array([5])
a + b
```

```
array([15, 25, 35, 45])
```

This takes the single element in `b` and adds it to each of the elements in `a`. This won’t work for just any `b`, though; for instance, the following:

```
b = np.array([5, 6, 7])
a + b
```

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[4], line 2
      1 b = np.array([5, 6, 7])
----> 2 a + b

ValueError: operands could not be broadcast together with shapes (4,) (3,)
```

won’t work. It does work if `a` and `b` are the same shape:

```
b = np.array([5, 5, 10, 10])
a + b
```

```
array([15, 25, 40, 50])
```

What if what we really want is the pairwise addition of `a` and `b`, i.e. every element of `b` added to every element of `a`? Without broadcasting, we could accomplish this by looping:

```
b = np.array([1, 2, 3, 4, 5])
```

```
result = np.empty((5, 4), dtype=np.int32)
for row, valb in enumerate(b):
    for col, vala in enumerate(a):
        result[row, col] = vala + valb
result
```

```
array([[11, 21, 31, 41],
       [12, 22, 32, 42],
       [13, 23, 33, 43],
       [14, 24, 34, 44],
       [15, 25, 35, 45]], dtype=int32)
```

We can also do this by manually repeating the arrays to the proper shape for the result, using `np.tile`. This avoids the need to manually loop:

```
aa = np.tile(a, (5, 1))
aa
```

```
array([[10, 20, 30, 40],
       [10, 20, 30, 40],
       [10, 20, 30, 40],
       [10, 20, 30, 40],
       [10, 20, 30, 40]])
```

```
# Turn b into a column array, then tile it
bb = np.tile(b.reshape(5, 1), (1, 4))
bb
```

```
array([[1, 1, 1, 1],
       [2, 2, 2, 2],
       [3, 3, 3, 3],
       [4, 4, 4, 4],
       [5, 5, 5, 5]])
```

```
aa + bb
```

```
array([[11, 21, 31, 41],
       [12, 22, 32, 42],
       [13, 23, 33, 43],
       [14, 24, 34, 44],
       [15, 25, 35, 45]])
```

### Giving NumPy room for broadcasting

We can also do this using broadcasting, which is where NumPy implicitly repeats the array without using additional memory. With broadcasting, NumPy takes care of the repeating for you, provided the dimensions are “compatible”. This works as follows:

1. Check the number of dimensions of the arrays. If they are different, *prepend* size-one dimensions to the array with fewer.
2. Check that each of the dimensions is compatible: either the same size, or one of them is 1.
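
To make these two steps concrete, here is a small helper of our own (an illustration, not a NumPy API, though NumPy does expose `np.broadcast_shapes` for this) that applies the rules to a pair of shapes:

```python
import numpy as np

def broadcast_shape(shape_a, shape_b):
    """Sketch of NumPy's broadcasting rules for two shapes."""
    # Step 1: prepend size-one dimensions to the shorter shape
    ndim = max(len(shape_a), len(shape_b))
    a = (1,) * (ndim - len(shape_a)) + tuple(shape_a)
    b = (1,) * (ndim - len(shape_b)) + tuple(shape_b)
    # Step 2: each pair of dimensions must match, or one must be 1
    result = []
    for da, db in zip(a, b):
        if da == db or da == 1 or db == 1:
            result.append(max(da, db))
        else:
            raise ValueError(f"incompatible dimensions {da} and {db}")
    return tuple(result)

broadcast_shape((5, 1), (4,))  # (5, 4)
```

This mirrors what NumPy decides internally before any arithmetic happens; `np.broadcast_shapes((5, 1), (4,))` gives the same answer.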

```
a.shape
```

```
(4,)
```

```
b.shape
```

```
(5,)
```

Right now, they have the same number of dimensions, 1, but the sizes of those dimensions (4 and 5) are incompatible. We can solve this by adding a size-one dimension to `b` using `np.newaxis` when indexing:

```
bb = b[:, np.newaxis]
bb.shape
```

```
(5, 1)
```

```
a + bb
```

```
array([[11, 21, 31, 41],
       [12, 22, 32, 42],
       [13, 23, 33, 43],
       [14, 24, 34, 44],
       [15, 25, 35, 45]])
```

```
(a + bb).shape
```

```
(5, 4)
```

This can be written more directly in one line:

```
a + b[:, np.newaxis]
```

```
array([[11, 21, 31, 41],
       [12, 22, 32, 42],
       [13, 23, 33, 43],
       [14, 24, 34, 44],
       [15, 25, 35, 45]])
```

### Extending to higher dimensions

This also works for higher dimensions. Here `x`, `y`, and `z` lie along different dimensions, and we can use broadcasting to compute \(x^2 + y^2 + z^2\):

```
x = np.array([1, 2])
y = np.array([3, 4, 5])
z = np.array([6, 7, 8, 9])
```

First, let’s extend `x` (and square it) by one dimension, onto which we can broadcast the vector `y ** 2`:

```
d_2d = x[:, np.newaxis] ** 2 + y**2
```

```
d_2d.shape
```

```
(2, 3)
```

and then further extend this new 2-D array by one more dimension before using broadcasting to add `z ** 2` across all other dimensions.

```
d_3d = d_2d[..., np.newaxis] + z**2
```

```
d_3d.shape
```

```
(2, 3, 4)
```

Or in one line:

```
h = x[:, np.newaxis, np.newaxis] ** 2 + y[np.newaxis, :, np.newaxis] ** 2 + z**2
```

We can see this one-line result has the same shape and same values as the other multi-step calculation.

```
h.shape
```

```
(2, 3, 4)
```

and we can confirm that the results here are identical,

```
np.all(h == d_3d)
```

```
True
```

Broadcasting is often useful when you want to do calculations with coordinate values, which are often given as 1-D arrays corresponding to positions along a particular array dimension. For example, taking range and azimuth values for radar data (1-D separable polar coordinates) and converting to (x, y) pairs relative to the radar location.
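
As a sketch of that radar use case (with made-up range and azimuth values, and assuming the meteorological convention of azimuth measured clockwise from north):

```python
import numpy as np

# Hypothetical radar coordinates: 1-D range gates (m) and azimuth angles (deg)
rng = np.array([1000.0, 2000.0, 3000.0])
az = np.array([0.0, 90.0, 180.0, 270.0])

# Broadcast the (4, 1) azimuths against the (3,) ranges to get all (4, 3) pairs
az_rad = np.deg2rad(az)[:, np.newaxis]
x = rng * np.sin(az_rad)  # east-west distance from the radar
y = rng * np.cos(az_rad)  # north-south distance from the radar
x.shape, y.shape  # both (4, 3)
```

Every combination of range and azimuth gets its Cartesian coordinates without writing a single loop.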

Given the 3-D temperature field and 1-D pressure coordinates below, let’s calculate \(T \cdot \exp(P / 1000)\). We will need to use broadcasting to make the arrays compatible!

```
pressure = np.array([1000, 850, 500, 300])
temps = np.linspace(20, 30, 24).reshape(4, 3, 2)
pressure.shape, temps.shape
```

```
((4,), (4, 3, 2))
```

```
pressure[:, np.newaxis, np.newaxis].shape
```

```
(4, 1, 1)
```

```
temps * np.exp(pressure[:, np.newaxis, np.newaxis] / 1000)
```

```
array([[[54.36563657, 55.54749823],
        [56.7293599 , 57.91122156],
        [59.09308323, 60.27494489]],

       [[52.89636361, 53.91360137],
        [54.93083913, 55.94807689],
        [56.96531466, 57.98255242]],

       [[41.57644944, 42.29328477],
        [43.01012011, 43.72695544],
        [44.44379078, 45.16062611]],

       [[37.56128856, 38.14818369],
        [38.73507883, 39.32197396],
        [39.90886909, 40.49576423]]])
```

## Vectorize calculations to avoid explicit loops

When working with arrays of data, looping over the individual array elements is a fact of life. However, for improved runtime performance, it is important to avoid performing these loops in Python as much as possible and let NumPy handle the looping for you. Avoiding these loops frequently, but not always, results in shorter and clearer code as well.
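
As a rough illustration of the performance difference (absolute timings vary by machine, and this example is ours, not part of the lesson data), compare a pure-Python loop against the equivalent vectorized operation:

```python
import time

import numpy as np

a = np.arange(1_000_000, dtype=np.float64)

# Sum of squares with a pure-Python loop over the array elements
t0 = time.perf_counter()
total = 0.0
for val in a:
    total += val * val
loop_time = time.perf_counter() - t0

# The same calculation vectorized: NumPy does the looping in compiled code
t0 = time.perf_counter()
total_vec = np.sum(a * a)
vec_time = time.perf_counter() - t0

print(f"loop: {loop_time:.4f} s, vectorized: {vec_time:.4f} s")
```

On typical hardware the vectorized version is orders of magnitude faster, in addition to being a single line.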

### Look ahead/behind

One common pattern for vectorizing is converting loops that work over the current point as well as the previous and/or next point. This comes up when doing finite-difference calculations, e.g. approximating derivatives:

```
a = np.linspace(0, 20, 6)
a
```

```
array([ 0., 4., 8., 12., 16., 20.])
```

We can calculate the forward difference for this array with a manual loop as:

```
d = np.zeros(a.size - 1)
for i in range(len(a) - 1):
    d[i] = a[i + 1] - a[i]
d
```

```
array([4., 4., 4., 4., 4.])
```

It would be nice to express this calculation without a loop, if possible. To see how to go about this, let’s consider the values involved in calculating `d[i]`: `a[i+1]` and `a[i]`. The values over the loop iterations are:

| `i` | `a[i+1]` | `a[i]` |
|-----|----------|--------|
| 0   | 4        | 0      |
| 1   | 8        | 4      |
| 2   | 12       | 8      |
| 3   | 16       | 12     |
| 4   | 20       | 16     |

We can express the series of values for `a[i+1]` then as:

```
a[1:]
```

```
array([ 4., 8., 12., 16., 20.])
```

and `a[i]` as:

```
a[:-1]
```

```
array([ 0., 4., 8., 12., 16.])
```

This means that we can express the forward difference as:

```
a[1:] - a[:-1]
```

```
array([4., 4., 4., 4., 4.])
```
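
This forward-difference pattern is common enough that NumPy bundles it as `np.diff`, which computes exactly the slice expression above:

```python
import numpy as np

a = np.linspace(0, 20, 6)

# np.diff(a) is equivalent to a[1:] - a[:-1]
np.diff(a)
```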

It should be noted that using slices in this way returns only a **view** on the original array. This means not only can you use the slices to modify the original data (even accidentally), but that this is also a quick operation that does not involve a copy and does not bloat memory usage.

#### 2nd Derivative

A finite difference estimate of the 2nd derivative is given by:

\(f''(x) \approx 2\,f(x) - f(x + \Delta x) - f(x - \Delta x)\)

(we’re ignoring the \(\Delta x^2\) denominator here)

Let’s write some vectorized code to calculate this finite difference for `a` (using slices). What values should we be expecting to get for the 2nd derivative?

```
2 * a[1:-1] - a[:-2] - a[2:]
```

```
array([0., 0., 0., 0.])
```

### Blocking

Another application where vectorization makes operations more efficient is operating on blocks of data. Let’s start by creating some temperature data (rounding to make it easier to see and recognize the values).

```
temps = np.round(20 + np.random.randn(10) * 5, 1)
temps
```

```
array([17.4, 17.7, 17.6, 4.2, 9.6, 19.1, 15.8, 20.2, 25.9, 4.6])
```

Let’s start by writing a loop to take a 3-point running mean of the data. We’ll do this by iterating over all points in the array and averaging the 3 points centered on each one. We’ll simplify the problem by not dealing with the cases at the edges of the array.

```
avg = np.zeros_like(temps)
for i in range(1, len(temps) - 1):
    sub = temps[i - 1 : i + 2]
    avg[i] = sub.mean()
```

```
avg
```

```
array([ 0.        , 17.56666667, 13.16666667, 10.46666667, 10.96666667,
       14.83333333, 18.36666667, 20.63333333, 16.9       ,  0.        ])
```

As with the case of doing finite differences, we can express this using slices of the original array:

```
#   i - 1         i        i + 1
(temps[:-2] + temps[1:-1] + temps[2:]) / 3
```

```
array([17.56666667, 13.16666667, 10.46666667, 10.96666667, 14.83333333,
       18.36666667, 20.63333333, 16.9       ])
```

Another option to solve this is not using slicing but a powerful NumPy tool: `as_strided`. This tool can result in some odd behavior, so take care when using it; the trade-off is that it can be used to do some powerful operations. What we’re doing here is altering how NumPy interprets the values in the memory that underpins the array. So for this array:

```
temps
```

```
array([17.4, 17.7, 17.6, 4.2, 9.6, 19.1, 15.8, 20.2, 25.9, 4.6])
```

we can create a view of the array with a new, bigger shape, with rows made up of overlapping values. We do this by specifying a new shape of 8x3, one row for each of the length-3 blocks we can fit in the original 1-D array of data. We then use the `strides` argument to control how NumPy walks between items in each dimension. The last item in the strides tuple is just as normal: it says the number of bytes to walk between items is just the size of an item. (Increasing this would skip items.) The first item says that when we advance to a new row, we only advance the size of a single item, which is what gives us overlapping rows.

```
block_size = 3
new_shape = (len(temps) - block_size + 1, block_size)
bytes_per_item = temps.dtype.itemsize
temps_strided = np.lib.stride_tricks.as_strided(
    temps, shape=new_shape, strides=(bytes_per_item, bytes_per_item)
)
temps_strided
```

```
array([[17.4, 17.7, 17.6],
       [17.7, 17.6,  4.2],
       [17.6,  4.2,  9.6],
       [ 4.2,  9.6, 19.1],
       [ 9.6, 19.1, 15.8],
       [19.1, 15.8, 20.2],
       [15.8, 20.2, 25.9],
       [20.2, 25.9,  4.6]])
```

Now that we have this view of the array, with rows representing overlapping blocks, we can operate across the rows with `mean` and the `axis=-1` argument to get our running average:

```
temps_strided.mean(axis=-1)
```

```
array([17.56666667, 13.16666667, 10.46666667, 10.96666667, 14.83333333,
       18.36666667, 20.63333333, 16.9       ])
```

It should be noted that there are no copies going on here, so if we change a value at a single indexed location, the change actually shows up in multiple locations:

```
temps_strided[0, 2] = 2000
temps_strided
```

```
array([[  17.4,   17.7, 2000. ],
       [  17.7, 2000. ,    4.2],
       [2000. ,    4.2,    9.6],
       [   4.2,    9.6,   19.1],
       [   9.6,   19.1,   15.8],
       [  19.1,   15.8,   20.2],
       [  15.8,   20.2,   25.9],
       [  20.2,   25.9,    4.6]])
```
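
As an aside, and assuming NumPy 1.20 or later, `np.lib.stride_tricks.sliding_window_view` builds the same overlapping windows without computing strides by hand, and returns a read-only view by default, which guards against this kind of accidental aliasing:

```python
import numpy as np

temps = np.array([17.4, 17.7, 17.6, 4.2, 9.6, 19.1, 15.8, 20.2, 25.9, 4.6])

# Overlapping length-3 windows as an (8, 3) read-only view
windows = np.lib.stride_tricks.sliding_window_view(temps, 3)
windows.mean(axis=-1)  # the same 3-point running mean
```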

### Finding the location of a value along an axis

Another operation that crops up when slicing and dicing data is identifying a set of indices, along a particular axis, within a larger multidimensional array. For instance, say we have a 3-D array of temperatures and want to identify the location of the \(-10^\circ\mathrm{C}\) isotherm within each column:

```
pressure = np.linspace(1000, 100, 25)
temps = np.random.randn(25, 30, 40) * 3 + np.linspace(25, -100, 25).reshape(-1, 1, 1)
```

NumPy has the function `argmin()`, which returns the index of the minimum value along an axis. We can use this to find the index of the minimum absolute difference between the temperature and -10:

```
# Using axis=0 to tell it to operate along the pressure dimension
inds = np.argmin(np.abs(temps - -10), axis=0)
inds
```

```
array([[7, 7, 7, ..., 7, 6, 7],
       [6, 6, 7, ..., 6, 7, 8],
       [6, 6, 7, ..., 7, 6, 7],
       ...,
       [7, 7, 6, ..., 7, 7, 7],
       [7, 6, 7, ..., 7, 6, 6],
       [7, 7, 7, ..., 7, 7, 6]])
```

```
inds.shape
```

```
(30, 40)
```

Great! We have an array representing the index of the point closest to \(-10^\circ\mathrm{C}\) in each column of data. We could use this to look up into our pressure coordinates to find the pressure level for each column:

```
pressure[inds]
```

```
array([[737.5, 737.5, 737.5, ..., 737.5, 775. , 737.5],
       [775. , 775. , 737.5, ..., 775. , 737.5, 700. ],
       [775. , 775. , 737.5, ..., 737.5, 775. , 737.5],
       ...,
       [737.5, 737.5, 775. , ..., 737.5, 737.5, 737.5],
       [737.5, 775. , 737.5, ..., 737.5, 775. , 775. ],
       [737.5, 737.5, 737.5, ..., 737.5, 737.5, 775. ]])
```

How about using that to find the actual temperature value that was closest?

```
temps[inds, :, :].shape
```

```
(30, 40, 30, 40)
```

Unfortunately, this replaced the pressure dimension (size 25) with the shape of our index array (30 x 40), giving us a 30 x 40 x 30 x 40 array (imagine what would have happened with real data!). One solution here would be to loop:

```
output = np.empty(inds.shape, dtype=temps.dtype)
for (i, j), val in np.ndenumerate(inds):
    output[i, j] = temps[val, i, j]
output
```

```
array([[ -8.78278557, -10.53727122,  -9.10207905, ..., -11.57546987,
         -6.10911926,  -9.70379803],
       [-10.49748592,  -7.470107  , -10.83656215, ...,  -7.22545269,
        -10.47014846, -10.15013026],
       [ -7.52075047,  -9.85980534, -13.86299958, ..., -11.73055992,
         -7.12960412, -10.34395298],
       ...,
       [-10.41282395, -12.22772572, -10.23350586, ..., -13.14716962,
        -11.6303902 ,  -8.2712197 ],
       [-12.2131624 , -10.46875405, -16.71195652, ...,  -7.55715855,
         -6.62508829, -10.35125227],
       [-16.10401907, -12.90198601, -11.07173365, ..., -11.72506402,
         -9.97553919,  -9.46217566]])
```

Of course, what we really want to do is avoid the explicit loop. Let’s temporarily simplify the problem to a single dimension. If we have a 1-D array, we can pass a 1-D array of indices (a full range) and get back the same as the original data array:

```
pressure[np.arange(pressure.size)]
```

```
array([1000. ,  962.5,  925. ,  887.5,  850. ,  812.5,  775. ,  737.5,
        700. ,  662.5,  625. ,  587.5,  550. ,  512.5,  475. ,  437.5,
        400. ,  362.5,  325. ,  287.5,  250. ,  212.5,  175. ,  137.5,
        100. ])
```

```
np.all(pressure[np.arange(pressure.size)] == pressure)
```

```
True
```

We can use this to select all the indices along the other dimensions of our temperature array. We will also need to use the magic of broadcasting to combine arrays of indices across dimensions.

Now let’s consider a vectorized solution:

```
y_inds = np.arange(temps.shape[1])[:, np.newaxis]
x_inds = np.arange(temps.shape[2])
temps[inds, y_inds, x_inds]
```

```
array([[ -8.78278557, -10.53727122,  -9.10207905, ..., -11.57546987,
         -6.10911926,  -9.70379803],
       [-10.49748592,  -7.470107  , -10.83656215, ...,  -7.22545269,
        -10.47014846, -10.15013026],
       [ -7.52075047,  -9.85980534, -13.86299958, ..., -11.73055992,
         -7.12960412, -10.34395298],
       ...,
       [-10.41282395, -12.22772572, -10.23350586, ..., -13.14716962,
        -11.6303902 ,  -8.2712197 ],
       [-12.2131624 , -10.46875405, -16.71195652, ...,  -7.55715855,
         -6.62508829, -10.35125227],
       [-16.10401907, -12.90198601, -11.07173365, ..., -11.72506402,
         -9.97553919,  -9.46217566]])
```

We can confirm that this vectorized result is identical to the one from the explicit loop:

```
np.all(output == temps[inds, y_inds, x_inds])
```

```
True
```
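
If you would rather not build the index arrays by hand, newer NumPy (1.15+) provides `np.take_along_axis`, which packages this same gather operation; a sketch using the arrays from above:

```python
import numpy as np

pressure = np.linspace(1000, 100, 25)
temps = np.random.randn(25, 30, 40) * 3 + np.linspace(25, -100, 25).reshape(-1, 1, 1)
inds = np.argmin(np.abs(temps - -10), axis=0)

# The index array must have the same number of dimensions as temps,
# so add a size-1 leading axis, then drop it from the result
closest = np.take_along_axis(temps, inds[np.newaxis], axis=0)[0]
closest.shape  # (30, 40)
```

This gives the same (30, 40) array of closest temperatures as the broadcasting-based fancy indexing.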

## Summary

We’ve previewed some advanced NumPy capabilities with a focus on *vectorization*, or using clever broadcasting and windows of our data to enhance the speed and readability of our calculations. Doing so can reduce explicit construction of loops in your code and keep calculations running quickly!

### What’s next

This is an advanced NumPy topic, and it is important for designing your own calculations so that they are as scalable and quick as possible. Please check out some of the following links to explore this topic further. We also suggest diving into label-based indexing and subsetting with Pandas and Xarray, where some of this broadcasting can be simplified or given added context.