
Handling Missing Data

The difference between data found in many tutorials and data in the real world is that real-world data is rarely clean and homogeneous. In particular, many interesting datasets will have some amount of data missing. To make matters even more complicated, different data sources may indicate missing data in different ways.

In this section, we will discuss some general considerations for missing data, discuss how Pandas chooses to represent it, and demonstrate some built-in Pandas tools for handling missing data in Python.

None: Pythonic missing data

The first sentinel value used by Pandas is None, a Python singleton object that is often used for missing data in Python code. Because it is a Python object, None cannot be used in any arbitrary NumPy/Pandas array, but only in arrays with data type 'object' (i.e., arrays of Python objects):

In [ ]:
import numpy as np
import pandas as pd
In [ ]:
vals1 = np.array([1, None, 3, 4])
vals1
Out[ ]:
array([1, None, 3, 4], dtype=object)

This dtype=object means that the best common type representation NumPy could infer for the contents of the array is that they are Python objects. While this kind of object array is useful for some purposes, any operations on the data will be done at the Python level, with much more overhead than the typically fast operations seen for arrays with native types:

In [ ]:
for dtype in ['object', 'int']:
    print("dtype =", dtype)
    %timeit np.arange(1E6, dtype=dtype).sum()
    print()
dtype = object
10 loops, best of 5: 72.7 ms per loop

dtype = int
The slowest run took 5.66 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 5: 1.84 ms per loop

The use of Python objects in an array also means that if you perform aggregations like sum() or min() across an array with a None value, you will generally get an error:

In [ ]:
vals1.sum()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-4-30a3fc8c6726> in <module>()
----> 1 vals1.sum()

/usr/local/lib/python3.7/dist-packages/numpy/core/_methods.py in _sum(a, axis, dtype, out, keepdims, initial, where)
     45 def _sum(a, axis=None, dtype=None, out=None, keepdims=False,
     46          initial=_NoValue, where=True):
---> 47     return umr_sum(a, axis, dtype, out, keepdims, initial, where)
     48 
     49 def _prod(a, axis=None, dtype=None, out=None, keepdims=False,

TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'

This reflects the fact that addition between an integer and None is undefined.

NaN: Missing numerical data

The other missing data representation, NaN (acronym for Not a Number), is different; it is a special floating-point value recognized by all systems that use the standard IEEE floating-point representation:

In [ ]:
vals2 = np.array([1, np.nan, 3, 4])
vals2.dtype
Out[ ]:
dtype('float64')

Notice that NumPy chose a native floating-point type for this array: this means that unlike the object array from before, this array supports fast operations pushed into compiled code. You should be aware that NaN is a bit like a data virus: it infects any other object it touches. Regardless of the operation, the result of arithmetic with NaN will be another NaN:

In [ ]:
1 + np.nan
Out[ ]:
nan
In [ ]:
0 * np.nan
Out[ ]:
nan

Note that this means that aggregates over the values are well defined (i.e., they don't result in an error) but not always useful:

In [ ]:
vals2.sum(), vals2.min(), vals2.max()
Out[ ]:
(nan, nan, nan)

NumPy does provide some special aggregations that will ignore these missing values:

In [ ]:
np.nansum(vals2), np.nanmin(vals2), np.nanmax(vals2)
Out[ ]:
(8.0, 1.0, 4.0)

Keep in mind that NaN is specifically a floating-point value; there is no equivalent NaN value for integers, strings, or other types.
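
For example (a small illustration; the exact error text may differ between NumPy versions), trying to build an integer array that contains NaN fails outright rather than silently converting:

In [ ]:
# NaN is a floating-point value, so NumPy refuses to place it
# in an array with an integer dtype.
np.array([1, np.nan, 3], dtype=int)
# Raises ValueError: cannot convert float NaN to integer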

NaN and None in Pandas

NaN and None both have their place, and Pandas is built to handle the two of them nearly interchangeably, converting between them where appropriate:

In [ ]:
pd.Series([1, np.nan, 2, None])
Out[ ]:
0    1.0
1    NaN
2    2.0
3    NaN
dtype: float64

For types that don't have an available sentinel value, Pandas automatically type-casts when NA values are present. For example, if we set a value in an integer array to np.nan, it will automatically be upcast to a floating-point type to accommodate the NA:

In [ ]:
x = pd.Series(range(2), dtype=int)
x
Out[ ]:
0    0
1    1
dtype: int64
In [ ]:
x[0] = None
x
Out[ ]:
0    NaN
1    1.0
dtype: float64

Notice that in addition to casting the integer array to floating point, Pandas automatically converts the None to a NaN value. (Be aware that more recent versions of Pandas offer an optional nullable integer dtype, such as Int64, that stores missing entries as pd.NA without upcasting; it must be requested explicitly, and the default NumPy-backed integer dtype still behaves as shown here.)
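
As a brief sketch (assuming Pandas 1.0 or later, where nullable dtypes are available), opting in to the nullable integer type looks like this:

In [ ]:
# The capitalized 'Int64' dtype is the nullable extension type:
# missing entries are stored as pd.NA and the values stay integers.
pd.Series([1, None, 3], dtype='Int64')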

While this type of magic may feel a bit hackish compared to the more unified approach to NA values in domain-specific languages like R, the Pandas sentinel/casting approach works quite well in practice and in my experience only rarely causes issues.

The following table lists the upcasting conventions in Pandas when NA values are introduced:

Typeclass    Conversion When Storing NAs    NA Sentinel Value
floating     No change                      np.nan
object       No change                      None or np.nan
integer      Cast to float64                np.nan
boolean      Cast to object                 None or np.nan

Keep in mind that in Pandas, string data is stored with an object dtype by default (recent versions also provide a dedicated, opt-in string dtype).
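
To illustrate the Boolean row of the table above (a small sketch separate from the examples in this section), a Boolean-valued Series with a missing entry falls back to the object dtype and keeps None itself as the sentinel:

In [ ]:
# The presence of None prevents a native bool dtype,
# so Pandas stores the values as Python objects.
pd.Series([True, None, False])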

Operating on Null Values

As we have seen, Pandas treats None and NaN as essentially interchangeable for indicating missing or null values. To facilitate this convention, there are several useful methods for detecting, removing, and replacing null values in Pandas data structures. They are:

  • isnull(): Generate a boolean mask indicating missing values
  • notnull(): Opposite of isnull()
  • dropna(): Return a filtered version of the data
  • fillna(): Return a copy of the data with missing values filled or imputed

We will conclude this section with a brief exploration and demonstration of these routines.

Detecting null values

Pandas data structures have two useful methods for detecting null data: isnull() and notnull(). Either one will return a Boolean mask over the data. For example:

In [ ]:
data = pd.Series([1, np.nan, 'hello', None])
In [ ]:
data.isnull()
Out[ ]:
0    False
1     True
2    False
3     True
dtype: bool

Boolean masks can be used directly as a Series or DataFrame index:

In [ ]:
data[data.notnull()]
Out[ ]:
0        1
2    hello
dtype: object

The isnull() and notnull() methods produce similar Boolean results for DataFrames.
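
For instance (a quick sketch with a throwaway DataFrame, not one used elsewhere in this section), isnull() applied to a DataFrame returns a Boolean DataFrame of the same shape:

In [ ]:
# Each cell is marked True where the corresponding value is missing.
pd.DataFrame({'a': [1, np.nan], 'b': [None, 'x']}).isnull()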

Dropping null values

In addition to the masking used before, there are the convenience methods dropna() (which removes NA values) and fillna() (which fills in NA values). For a Series, the result is straightforward:

In [ ]:
data.dropna()
Out[ ]:
0        1
2    hello
dtype: object

For a DataFrame, there are more options. Consider the following DataFrame:

In [ ]:
df = pd.DataFrame([[1,      np.nan, 2],
                   [2,      3,      5],
                   [np.nan, 4,      6]])
df
Out[ ]:
     0    1  2
0  1.0  NaN  2
1  2.0  3.0  5
2  NaN  4.0  6

We cannot drop single values from a DataFrame; we can only drop full rows or full columns. Depending on the application, you might want one or the other, so dropna() gives a number of options for a DataFrame.

By default, dropna() will drop all rows in which any null value is present:

In [ ]:
df.dropna()
Out[ ]:
     0    1  2
1  2.0  3.0  5

Alternatively, you can drop NA values along a different axis; axis=1 drops all columns containing a null value:

In [ ]:
df.dropna(axis='columns')
Out[ ]:
   2
0  2
1  5
2  6

But this drops some good data as well; you might rather be interested in dropping rows or columns with all NA values, or a majority of NA values. This can be specified through the how or thresh parameters, which allow fine control of the number of nulls to allow through.

The default is how='any', such that any row or column (depending on the axis keyword) containing a null value will be dropped. You can also specify how='all', which will only drop rows/columns that are all null values:

In [ ]:
df[3] = np.nan
df
Out[ ]:
     0    1  2    3
0  1.0  NaN  2  NaN
1  2.0  3.0  5  NaN
2  NaN  4.0  6  NaN
In [ ]:
df.dropna(axis='columns', how='all')
Out[ ]:
     0    1  2
0  1.0  NaN  2
1  2.0  3.0  5
2  NaN  4.0  6

For finer-grained control, the thresh parameter lets you specify a minimum number of non-null values for the row/column to be kept:

In [ ]:
df.dropna(axis='rows', thresh=3)
Out[ ]:
     0    1  2    3
1  2.0  3.0  5  NaN

Here the first and last rows have been dropped, because each of them contains only two non-null values.

Filling null values

Sometimes rather than dropping NA values, you'd rather replace them with a valid value. This value might be a single number like zero, or it might be some sort of imputation or interpolation from the good values. You could do this in-place using the isnull() method as a mask, but because it is such a common operation Pandas provides the fillna() method, which returns a copy of the array with the null values replaced.
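
To see what the mask-based approach mentioned above looks like (a quick sketch; fillna() is usually the cleaner choice), you can assign through the Boolean mask returned by isnull():

In [ ]:
s = pd.Series([1, np.nan, 2, None, 3], index=list('abcde'))
s[s.isnull()] = 0   # fill the missing entries in place with zero
s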

Consider the following Series:

In [ ]:
data = pd.Series([1, np.nan, 2, None, 3], index=list('abcde'))
data
Out[ ]:
a    1.0
b    NaN
c    2.0
d    NaN
e    3.0
dtype: float64

We can fill NA entries with a single value, such as zero: