# NumPy Broadcasting

## Overview

Before we begin, it is important to know that broadcasting is a valuable part of the power that NumPy provides. However, broadcasting can be conceptually difficult to digest. It is a powerful technique, but it may be more prudent to first learn the label-based tools of the Python ecosystem, Pandas and Xarray; working with real-world data in those libraries can make NumPy broadcasting easier to understand. When you are ready to learn about NumPy broadcasting, this section is organized as follows:

- An introduction to broadcasting
- Avoiding loops with vectorization

## Using broadcasting to implicitly loop over data

### What is broadcasting?

Broadcasting is a useful NumPy tool that allows us to perform operations between arrays with different shapes, provided that they are compatible with each other in certain ways. To start, we can create an array below and add 5 to it:

```
import numpy as np

a = np.array([10, 20, 30, 40])
a + 5
```

```
array([15, 25, 35, 45])
```

This works even though 5 is not an array. It behaves as expected, adding 5 to each of the elements in `a`. This also works if 5 is an array:

```
b = np.array([5])
a + b
```

```
array([15, 25, 35, 45])
```

This takes the single element in `b` and adds it to each of the elements in `a`. This won’t work for just any `b`, though; for instance, the following won’t work:

```
b = np.array([5, 6, 7])
a + b
```

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[4], line 2
      1 b = np.array([5, 6, 7])
----> 2 a + b

ValueError: operands could not be broadcast together with shapes (4,) (3,)
```

It does work if `a` and `b` are the same shape:

```
b = np.array([5, 5, 10, 10])
a + b
```

```
array([15, 25, 40, 50])
```

What if what we really want is pairwise addition of `a` and `b`? Without broadcasting, we could accomplish this by looping:

```
b = np.array([1, 2, 3, 4, 5])
```

```
result = np.empty((5, 4), dtype=np.int32)
for row, valb in enumerate(b):
    for col, vala in enumerate(a):
        result[row, col] = vala + valb
result
```

```
array([[11, 21, 31, 41],
       [12, 22, 32, 42],
       [13, 23, 33, 43],
       [14, 24, 34, 44],
       [15, 25, 35, 45]], dtype=int32)
```

We can also do this by manually repeating the arrays to the proper shape for the result, using `np.tile`. This avoids the need to manually loop:

```
aa = np.tile(a, (5, 1))
aa
```

```
array([[10, 20, 30, 40],
       [10, 20, 30, 40],
       [10, 20, 30, 40],
       [10, 20, 30, 40],
       [10, 20, 30, 40]])
```

```
# Turn b into a column array, then tile it
bb = np.tile(b.reshape(5, 1), (1, 4))
bb
```

```
array([[1, 1, 1, 1],
       [2, 2, 2, 2],
       [3, 3, 3, 3],
       [4, 4, 4, 4],
       [5, 5, 5, 5]])
```

```
aa + bb
```

```
array([[11, 21, 31, 41],
       [12, 22, 32, 42],
       [13, 23, 33, 43],
       [14, 24, 34, 44],
       [15, 25, 35, 45]])
```

### Giving NumPy room for broadcasting

We can also do this using broadcasting, which is where NumPy implicitly repeats the array without using additional memory. With broadcasting, NumPy takes care of repeating for you, provided dimensions are “compatible”. This works as follows:

1. Check the number of dimensions of the arrays. If they differ, *prepend* dimensions of size one until the arrays have the same number of dimensions.
2. Check whether each of the dimensions is compatible, as follows:
   - Each dimension is checked in turn.
   - If one of the arrays has a size of 1 in the checked dimension, or both arrays have the same size in the checked dimension, the check passes.
   - If all dimension checks pass, the dimensions are compatible.
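As a quick check of these rules, newer versions of NumPy (1.20+) expose them directly via `np.broadcast_shapes`, which computes the resulting shape without creating any arrays:

```python
import numpy as np

# (4,) is prepended with 1 to become (1, 4), then broadcast against (5, 1)
print(np.broadcast_shapes((4,), (5, 1)))  # (5, 4)
print(np.broadcast_shapes((2, 3), (3,)))  # (2, 3)

# Incompatible shapes raise a ValueError, just like the addition above
try:
    np.broadcast_shapes((4,), (5,))
except ValueError as err:
    print(err)
```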

For example, consider the following arrays:

```
a.shape
```

```
(4,)
```

```
b.shape
```

```
(5,)
```

Right now, these arrays have the same number of dimensions (one each), but those single dimensions are incompatible. We can solve this by appending a dimension using `np.newaxis` when indexing, like this:

```
bb = b[:, np.newaxis]
bb.shape
```

```
(5, 1)
```

```
a + bb
```

```
array([[11, 21, 31, 41],
       [12, 22, 32, 42],
       [13, 23, 33, 43],
       [14, 24, 34, 44],
       [15, 25, 35, 45]])
```

```
(a + bb).shape
```

```
(5, 4)
```

We can also make the code more succinct by applying `np.newaxis` and the addition in a single line, like this:

```
a + b[:, np.newaxis]
```

```
array([[11, 21, 31, 41],
       [12, 22, 32, 42],
       [13, 23, 33, 43],
       [14, 24, 34, 44],
       [15, 25, 35, 45]])
```
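As an aside, `np.newaxis` is simply an alias for `None`, and the same shape change can be achieved with `reshape`; a quick sketch comparing the equivalent spellings:

```python
import numpy as np

a = np.array([10, 20, 30, 40])
b = np.array([1, 2, 3, 4, 5])

r1 = a + b[:, np.newaxis]   # the form used above
r2 = a + b[:, None]         # np.newaxis is an alias for None
r3 = a + b.reshape(5, 1)    # reshape achieves the same (5, 1) shape
print(np.array_equal(r1, r2) and np.array_equal(r2, r3))  # True
```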

### Extending to higher dimensions

The same broadcasting ability and rules also apply for arrays of higher dimensions. Consider the following 1-D arrays `x`, `y`, and `z`, which all have different lengths. We can use `np.newaxis` and broadcasting to compute \(x^2 + y^2 + z^2\):

```
x = np.array([1, 2])
y = np.array([3, 4, 5])
z = np.array([6, 7, 8, 9])
```

First, we extend the `x` array using `np.newaxis`, and then square it. Then, we square `y` and broadcast it onto the extended `x` array:

```
d_2d = x[:, np.newaxis] ** 2 + y**2
```

```
d_2d.shape
```

```
(2, 3)
```

Finally, we further extend this new 2-D array to a 3-D array using `np.newaxis`, square the `z` array, and then broadcast `z` onto the newly extended array:

```
d_3d = d_2d[..., np.newaxis] + z**2
```

```
d_3d.shape
```

```
(2, 3, 4)
```

As described above, we can also perform these operations in a single line of code, like this:

```
h = x[:, np.newaxis, np.newaxis] ** 2 + y[np.newaxis, :, np.newaxis] ** 2 + z**2
```

We can use the `shape` attribute to see the shape of the array created by the single line of code above. As you can see, it matches the shape of the array created by the multi-line process:

```
h.shape
```

```
(2, 3, 4)
```

We can also use `np.all` to confirm that both arrays contain the same data:

```
np.all(h == d_3d)
```

```
True
```

Broadcasting is often useful when you want to do calculations with coordinate values, which are often given as 1-D arrays corresponding to positions along a particular array dimension. For example, we can use broadcasting to help with taking range and azimuth values for radar data (1-D separable polar coordinates) and converting to x,y pairs relative to the radar location.
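As a sketch of that radar example (with hypothetical range and azimuth values, and azimuth measured clockwise from north), broadcasting pairs every range with every azimuth in one step:

```python
import numpy as np

rng_m = np.linspace(0, 100_000, 5)                    # ranges along the radar beam (m)
az = np.deg2rad(np.array([0.0, 90.0, 180.0, 270.0]))  # azimuth angles (radians)

# (5, 1) broadcast against (4,) yields full (5, 4) grids of x, y offsets
x = rng_m[:, np.newaxis] * np.sin(az)
y = rng_m[:, np.newaxis] * np.cos(az)
print(x.shape, y.shape)  # (5, 4) (5, 4)
```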

Given the 3-D temperature field and 1-D pressure coordinates below, let’s calculate \(T \cdot \exp(P / 1000)\). We will need broadcasting to make the arrays compatible. The following code demonstrates how to use `np.newaxis` and broadcasting to perform this calculation:

```
pressure = np.array([1000, 850, 500, 300])
temps = np.linspace(20, 30, 24).reshape(4, 3, 2)
pressure.shape, temps.shape
```

```
((4,), (4, 3, 2))
```

```
pressure[:, np.newaxis, np.newaxis].shape
```

```
(4, 1, 1)
```

```
temps * np.exp(pressure[:, np.newaxis, np.newaxis] / 1000)
```

```
array([[[54.36563657, 55.54749823],
        [56.7293599 , 57.91122156],
        [59.09308323, 60.27494489]],

       [[52.89636361, 53.91360137],
        [54.93083913, 55.94807689],
        [56.96531466, 57.98255242]],

       [[41.57644944, 42.29328477],
        [43.01012011, 43.72695544],
        [44.44379078, 45.16062611]],

       [[37.56128856, 38.14818369],
        [38.73507883, 39.32197396],
        [39.90886909, 40.49576423]]])
```

## Vectorize calculations to avoid explicit loops

When working with arrays of data, looping over the individual array elements is a fact of life. However, for improved runtime performance, it is important to avoid performing these loops in Python as much as possible, and to let NumPy handle the looping for you. Avoiding these loops frequently, though not always, results in shorter and clearer code as well.
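As a toy illustration of the idea (a hypothetical sum of squares computed both ways), the vectorized form pushes the loop into NumPy's compiled code:

```python
import numpy as np

a = np.arange(100_000)

# Explicit Python loop: slow, element by element
total = 0
for val in a:
    total += int(val) * int(val)

# Vectorized: the same loop happens inside NumPy
vec_total = int((a.astype(np.int64) ** 2).sum())
print(total == vec_total)  # True
```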

### Look ahead/behind

One common pattern for vectorizing is in converting loops that work over the current point, in addition to the previous point and/or the next point. This comes up when doing finite-difference calculations, e.g., approximating derivatives:

```
a = np.linspace(0, 20, 6)
a
```

```
array([ 0.,  4.,  8., 12., 16., 20.])
```

We can calculate the forward difference for this array using a manual loop, like this:

```
d = np.zeros(a.size - 1)
for i in range(len(a) - 1):
    d[i] = a[i + 1] - a[i]
d
```

```
array([4., 4., 4., 4., 4.])
```

It would be nice to express this calculation without a loop, if possible. To see how to go about this, let’s consider the values that are involved in calculating `d[i]`; in other words, the values `a[i+1]` and `a[i]`. The values over the loop iterations are:

| i | a[i+1] | a[i] |
|---|--------|------|
| 0 | 4 | 0 |
| 1 | 8 | 4 |
| 2 | 12 | 8 |
| 3 | 16 | 12 |
| 4 | 20 | 16 |

We can then express the series of values for `a[i+1]` as follows:

```
a[1:]
```

```
array([ 4.,  8., 12., 16., 20.])
```

We can also express the series of values for `a[i]` as follows:

```
a[:-1]
```

```
array([ 0.,  4.,  8., 12., 16.])
```

This means that we can express the forward difference using the following statement:

```
a[1:] - a[:-1]
```

```
array([4., 4., 4., 4., 4.])
```
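For the forward difference specifically, NumPy already provides this slice-based calculation as `np.diff`:

```python
import numpy as np

a = np.linspace(0, 20, 6)
print(np.diff(a))                                  # array([4., 4., 4., 4., 4.])
print(np.array_equal(np.diff(a), a[1:] - a[:-1]))  # True
```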

It should be noted that using slices in this way returns only a **view** of the original array. In other words, you can use the slices to modify the original data, either intentionally or accidentally. On the plus side, this is a quick operation that does not involve a copy and does not bloat memory usage.
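A small demonstration of that view behavior: modifying the slice changes the array it was taken from.

```python
import numpy as np

a = np.linspace(0, 20, 6)
view = a[1:]       # a view, not a copy
view[0] = -999.0   # writing through the view...
print(a)           # ...modifies the original array
```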

#### 2nd Derivative

A finite-difference estimate of the 2nd derivative is given by the following equation (ignoring \(\Delta x\)):

\(f''(x) \approx 2 f(x) - f(x + \Delta x) - f(x - \Delta x)\)

Let’s write some vectorized code to calculate this finite difference for `a`, using slices. Analyze the code below, and compare the result to the values you would expect to see from the 2nd derivative of `a`.

```
2 * a[1:-1] - a[:-2] - a[2:]
```

```
array([0., 0., 0., 0.])
```

### Blocking

Another application that can become more efficient using vectorization is operating on blocks of data. Let’s start by creating some temperature data (rounding to make it easier to see and recognize the values):

```
temps = np.round(20 + np.random.randn(10) * 5, 1)
temps
```

```
array([24. , 21.1, 20. , 20. , 23.8, 15.7, 14.2, 18.9, 20.2, 20.7])
```

Let’s start by writing a loop to take a 3-point running mean of the data. We’ll do this by iterating over all points in the array and averaging the 3 points centered on each point. We’ll simplify the problem by avoiding dealing with the cases at the edges of the array:

```
avg = np.zeros_like(temps)
for i in range(1, len(temps) - 1):
    sub = temps[i - 1 : i + 2]
    avg[i] = sub.mean()
```

```
avg
```

```
array([ 0.        , 21.7       , 20.36666667, 21.26666667, 19.83333333,
       17.9       , 16.26666667, 17.76666667, 19.93333333,  0.        ])
```

As with the case of doing finite differences, we can express this using slices of the original array instead of loops:

```
#   i - 1        i          i + 1
(temps[:-2] + temps[1:-1] + temps[2:]) / 3
```

```
array([21.7       , 20.36666667, 21.26666667, 19.83333333, 17.9       ,
       16.26666667, 17.76666667, 19.93333333])
```

Another option for solving this type of problem is to use the powerful NumPy tool `as_strided` instead of slicing. This tool can result in some odd behavior, so take care when using it; the trade-off is that `as_strided` can be used to perform very powerful operations. What we’re doing here is altering how NumPy interprets the values in the memory that underpins the array. Take this array, for example:

```
temps
```

```
array([24. , 21.1, 20. , 20. , 23.8, 15.7, 14.2, 18.9, 20.2, 20.7])
```

Using `as_strided`, we can create a view of this array with a new, bigger shape, with rows made up of overlapping values. We do this by specifying a new shape of 8x3: there are 3 columns, for blocks of data containing 3 values each, and 8 rows, corresponding to the 8 blocks of that size that fit in the original 1-D array. We can then use the `strides` argument to control how NumPy walks between items in each dimension. The last item in the strides tuple states that the number of bytes to walk between items within a row is just the size of one item. (Increasing this value would skip items.) The first item says that when we advance to a new row, we move forward only the size of a single item; this is what gives us overlapping rows. The code for these operations looks like this:

```
block_size = 3
new_shape = (len(temps) - block_size + 1, block_size)
bytes_per_item = temps.dtype.itemsize
temps_strided = np.lib.stride_tricks.as_strided(
    temps, shape=new_shape, strides=(bytes_per_item, bytes_per_item)
)
temps_strided
```

```
array([[24. , 21.1, 20. ],
       [21.1, 20. , 20. ],
       [20. , 20. , 23.8],
       [20. , 23.8, 15.7],
       [23.8, 15.7, 14.2],
       [15.7, 14.2, 18.9],
       [14.2, 18.9, 20.2],
       [18.9, 20.2, 20.7]])
```

Now that we have this view of the array, with the rows representing overlapping blocks, we can operate across the rows with `mean` and the `axis=-1` argument to get our running average:

```
temps_strided.mean(axis=-1)
```

```
array([21.7       , 20.36666667, 21.26666667, 19.83333333, 17.9       ,
       16.26666667, 17.76666667, 19.93333333])
```
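If you are using NumPy 1.20 or newer, `np.lib.stride_tricks.sliding_window_view` builds the same overlapping-block view while computing the strides for you, avoiding the sharp edges of `as_strided` (the array here, `temps_demo`, repeats the sample data so the sketch is self-contained):

```python
import numpy as np

temps_demo = np.array([24.0, 21.1, 20.0, 20.0, 23.8, 15.7, 14.2, 18.9, 20.2, 20.7])

# Same (8, 3) overlapping view as the as_strided example above
windows = np.lib.stride_tricks.sliding_window_view(temps_demo, window_shape=3)
print(windows.shape)          # (8, 3)
print(windows.mean(axis=-1))  # the same 3-point running mean
```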

It should be noted that no copies are made here, so if we change a value at a single indexed location, the change actually shows up in multiple locations:

```
temps_strided[0, 2] = 2000
temps_strided
```

```
array([[  24. ,   21.1, 2000. ],
       [  21.1, 2000. ,   20. ],
       [2000. ,   20. ,   23.8],
       [  20. ,   23.8,   15.7],
       [  23.8,   15.7,   14.2],
       [  15.7,   14.2,   18.9],
       [  14.2,   18.9,   20.2],
       [  18.9,   20.2,   20.7]])
```

### Indexing along an axis with an array of indices

Another operation that crops up when slicing and dicing data is identifying a set of indices along a particular axis within a larger multidimensional array. For instance, say we have a 3-D array of temperatures, and we want to identify the location of the \(-10^\circ\mathrm{C}\) isotherm within each column:

```
pressure = np.linspace(1000, 100, 25)
temps = np.random.randn(25, 30, 40) * 3 + np.linspace(25, -100, 25).reshape(-1, 1, 1)
```

NumPy has the function `argmin()`, which returns the index of the minimum value along an axis. We can use this to find, in each column, the index of the minimum absolute difference between the temperature and -10:

```
# Using axis=0 to tell it to operate along the pressure dimension
inds = np.argmin(np.abs(temps - -10), axis=0)
inds
```

```
array([[6, 6, 6, ..., 7, 6, 6],
       [7, 5, 7, ..., 6, 7, 6],
       [6, 7, 7, ..., 8, 7, 7],
       ...,
       [7, 6, 6, ..., 6, 7, 6],
       [6, 7, 6, ..., 6, 7, 6],
       [7, 6, 7, ..., 7, 7, 7]])
```

```
inds.shape
```

```
(30, 40)
```

Great! We now have an array representing the index of the point closest to \(-10^\circ\mathrm{C}\) in each column of data. We can use this new array as a lookup index into our pressure coordinate array to find the pressure level for each column:

```
pressure[inds]
```

```
array([[775. , 775. , 775. , ..., 737.5, 775. , 775. ],
       [737.5, 812.5, 737.5, ..., 775. , 737.5, 775. ],
       [775. , 737.5, 737.5, ..., 700. , 737.5, 737.5],
       ...,
       [737.5, 775. , 775. , ..., 775. , 737.5, 775. ],
       [775. , 737.5, 775. , ..., 775. , 737.5, 775. ],
       [737.5, 775. , 737.5, ..., 737.5, 737.5, 737.5]])
```

Now, we can try to find the closest actual temperature value using the new array:

```
temps[inds, :, :].shape
```

```
(30, 40, 30, 40)
```

Unfortunately, this replaced the pressure dimension (size 25) with the shape of our index array (30 x 40), giving us a 30 x 40 x 30 x 40 array: fancy indexing with `inds` selected whole (30, 40) slices along the first axis, rather than one value from each column. One solution would be to set up a loop with the `ndenumerate` function, like this:

```
output = np.empty(inds.shape, dtype=temps.dtype)
for (i, j), val in np.ndenumerate(inds):
    output[i, j] = temps[val, i, j]
output
```

```
array([[-11.15579172,  -9.56148347,  -6.24480023, ..., -11.14381785,
        -10.51947605,  -8.04133348],
       [ -8.11274792,  -5.63861951,  -7.92727315, ..., -10.53328463,
         -7.43866362, -12.20450045],
       [ -9.28162091, -12.57411167, -11.18511869, ..., -11.3621979 ,
         -8.58291621,  -8.9064357 ],
       ...,
       [-11.78394804, -10.12215939,  -8.55883978, ...,  -7.97893025,
        -12.25480418,  -8.03566453],
       [ -9.44891066,  -9.09691661, -10.42541318, ...,  -7.95080845,
        -10.48765897,  -8.36904393],
       [-11.22496579,  -6.73477857, -10.15724126, ...,  -9.01312472,
         -9.98584235,  -8.29346056]])
```

Of course, what we really want to do is avoid the explicit loop. Let’s temporarily simplify the problem to a single dimension. If we have a 1-D array, we can pass a 1-D array of indices (a full range), and get back the same as the original data array:

```
pressure[np.arange(pressure.size)]
```

```
array([1000. ,  962.5,  925. ,  887.5,  850. ,  812.5,  775. ,  737.5,
        700. ,  662.5,  625. ,  587.5,  550. ,  512.5,  475. ,  437.5,
        400. ,  362.5,  325. ,  287.5,  250. ,  212.5,  175. ,  137.5,
        100. ])
```

```
np.all(pressure[np.arange(pressure.size)] == pressure)
```

```
True
```

We can use this to select all the indices along the other dimensions of our temperature array, using the magic of broadcasting to combine the arrays of indices across dimensions. This can be written as a vectorized solution like this:

```
y_inds = np.arange(temps.shape[1])[:, np.newaxis]
x_inds = np.arange(temps.shape[2])
temps[inds, y_inds, x_inds]
```

```
array([[-11.15579172,  -9.56148347,  -6.24480023, ..., -11.14381785,
        -10.51947605,  -8.04133348],
       [ -8.11274792,  -5.63861951,  -7.92727315, ..., -10.53328463,
         -7.43866362, -12.20450045],
       [ -9.28162091, -12.57411167, -11.18511869, ..., -11.3621979 ,
         -8.58291621,  -8.9064357 ],
       ...,
       [-11.78394804, -10.12215939,  -8.55883978, ...,  -7.97893025,
        -12.25480418,  -8.03566453],
       [ -9.44891066,  -9.09691661, -10.42541318, ...,  -7.95080845,
        -10.48765897,  -8.36904393],
       [-11.22496579,  -6.73477857, -10.15724126, ...,  -9.01312472,
         -9.98584235,  -8.29346056]])
```

Finally, we can confirm that this vectorized selection matches the `output` array built with the explicit loop above:

```
np.all(output == temps[inds, y_inds, x_inds])
```

```
True
```
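An alternative worth knowing is `np.take_along_axis`, which performs this kind of per-column lookup directly; a sketch (with freshly generated data, since the index array must first be padded to match `temps.ndim`):

```python
import numpy as np

rng = np.random.default_rng()
temps = rng.standard_normal((25, 30, 40)) * 3 + np.linspace(25, -100, 25).reshape(-1, 1, 1)
inds = np.argmin(np.abs(temps - -10), axis=0)

# take_along_axis requires the index array to have the same ndim as temps
result = np.take_along_axis(temps, inds[np.newaxis, ...], axis=0)[0]
print(result.shape)  # (30, 40)
```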

## Summary

We’ve previewed some advanced NumPy capabilities, with a focus on *vectorization*; in other words, using clever broadcasting and data-windowing techniques to enhance both the speed and the readability of our calculation code. By making use of vectorization, you can reduce the number of explicit loops in your code and speed up its execution.

### What’s next

This is an advanced NumPy topic, but it is important for designing calculation code that maximizes scalability and speed. If you would like to explore this topic further, please review the links below. We also suggest diving into label-based indexing and subsetting with Pandas and Xarray, where some of this broadcasting can be simplified or given added context.