# Introduction

Data preparation is a crucial step in the data science process. Raw data often needs to be cleaned, pre-processed, re-formatted, combined, enriched, corrected, and consolidated before it can be used for analysis or modeling. Feeding our models with good quality data is essential to ensure that we get reliable and accurate results.

Numeric data has its own pre-processing methods, distinct from those used for categorical or text data. In this article, we'll discuss three common techniques for preparing numeric data: mean centering, standardization, and normalization.

# Mean Centering

Mean centering is the process of subtracting the mean value of a variable from each of its values. This shifts the center of the data to zero, but doesn’t change its scale. Mean centering is useful when we want to focus on the variation of the data rather than its absolute values.

In Python, we can perform mean centering using NumPy:

```python
import numpy as np

data_centered = data - np.mean(data)
```
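As a quick sketch (the array values here are purely illustrative), centering a small array shifts its mean to zero while leaving its spread untouched:

```python
import numpy as np

data = np.array([2.0, 4.0, 6.0, 8.0])  # illustrative values

# Subtract the mean (5.0) from every element
data_centered = data - np.mean(data)

print(data_centered)                # [-3. -1.  1.  3.]
print(np.mean(data_centered))       # 0.0
print(np.std(data), np.std(data_centered))  # identical: the scale is unchanged
```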

# Standardization

Standardization, also known as z-score normalization, transforms the data to have a mean of zero and a standard deviation of one. This is done by subtracting the mean from each value and then dividing by the standard deviation. Standardization makes the data unitless and allows for comparison between variables with different scales.

In Python, we can use scikit-learn’s `StandardScaler` for standardization:

```python
from sklearn.preprocessing import StandardScaler

# data must be a 2-D array of shape (n_samples, n_features)
scaler = StandardScaler()
data_standardized = scaler.fit_transform(data)
```

See the scikit-learn documentation’s example on the importance of feature scaling for more details.
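The same transform is easy to write directly in NumPy (a sketch with illustrative values; note that `StandardScaler` uses the population standard deviation, i.e. `ddof=0`, which is also NumPy’s default):

```python
import numpy as np

data = np.array([[1.0], [2.0], [3.0], [4.0]])  # shape (n_samples, n_features)

# z-score: subtract the per-column mean, divide by the per-column std
data_standardized = (data - data.mean(axis=0)) / data.std(axis=0)

print(data_standardized.mean(axis=0))  # ~[0.]
print(data_standardized.std(axis=0))   # [1.]
```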

# Normalization

Normalization scales the data to a fixed range, usually between 0 and 1. This is useful when we want all features to have the same scale but don’t require them to be normally distributed. Min-max scaling is a common normalization technique that subtracts the minimum value and divides by the range (maximum - minimum).

In Python, we can use scikit-learn’s `MinMaxScaler` for normalization:

```python
from sklearn.preprocessing import MinMaxScaler

# data must be a 2-D array of shape (n_samples, n_features)
scaler = MinMaxScaler()
data_normalized = scaler.fit_transform(data)
```
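Min-max scaling can also be expressed by hand (a sketch with illustrative values; it assumes no column is constant, since a constant column has zero range and would cause a division by zero):

```python
import numpy as np

data = np.array([[10.0], [20.0], [30.0], [50.0]])

# Scale each column to [0, 1]: (x - min) / (max - min)
col_min = data.min(axis=0)
col_max = data.max(axis=0)
data_normalized = (data - col_min) / (col_max - col_min)

print(data_normalized.ravel())  # [0.   0.25 0.5  1.  ]
```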

Choosing the right preprocessing technique depends on the nature of your data and the requirements of the model you’re using. It’s often a good idea to try different methods and see which one works best for your specific problem.

In summary, mean centering, standardization, and normalization are three fundamental techniques for preparing numeric data. They help improve the performance and convergence of many machine learning algorithms and should be part of every data scientist’s toolkit.