Handling Character Data for Machine Learning
Learn about different methods of encoding character attributes for creating useful machine learning models, including frequency-based encoding and hash encoding.
Creating machine learning projects with numerical attributes is easy. Most of the open-source data available for building ML models has numerical attributes. However, when we deal with enterprise data, the case is a bit different. Character or string data dominate enterprise datasets, making it hard to create an accurate machine learning model. We have to clean messy strings, pull strings apart, and extract useful strings embedded in a text to bring the data into a form that can be used in a machine learning pipeline.
We will cover this discussion in two parts. This first piece focuses on different methods of encoding character attributes for creating useful machine learning models. The second part will focus on manipulating and extracting useful text from messy strings using R.
Let's speak about categorical data. Categorical data are variables that have a finite number of label values. They cannot be continuous, and generally, the values represent a fixed category. Based on the nature of categorical attributes, they can either be nominal or ordinal. When there is a natural ordering of values for categorical data, it is termed ordinal. For example, categorical values such as high, medium, and low are ordered values. When we cannot derive any relationship or order from the categorical values, it is termed nominal. For example, colors such as red, yellow, and green have no order. The values in a categorical attribute are mostly referred to as classes or categories.
Most of the time, we require categorical attributes to be converted to a numeric format in order for the machine learning algorithm to work. Even for algorithms like decision trees, which do not strictly require encoded values, properly encoded categorical attributes often yield better models than raw ones. Below, we discuss some of the techniques to encode categorical attributes:
Label encoding: As the name suggests, label encoding transforms labels into numerical labels. Label encoding is better suited to the ordinal type of categorical data. The labels always lie between 0 and n-1, where n is the number of classes.
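As a minimal sketch, here is label encoding with scikit-learn's LabelEncoder (the example values are made up). Note that LabelEncoder assigns integers in alphabetical order, so for truly ordinal data you may want an explicit mapping instead:

```python
from sklearn.preprocessing import LabelEncoder

sizes = ["low", "medium", "high", "medium", "low"]

encoder = LabelEncoder()
encoded = encoder.fit_transform(sizes)

print(list(encoder.classes_))  # ['high', 'low', 'medium'] -- alphabetical order
print(encoded)                 # [1 2 0 2 1]

# For a true ordinal mapping, spell out the order explicitly:
order = {"low": 0, "medium": 1, "high": 2}
print([order[s] for s in sizes])  # [0, 1, 2, 1, 0]
```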
One hot encoding: This method is also termed dummy coding. Using this method, a dummy column is created for each class of a categorical attribute. In each dummy column, the presence of the class is represented by 1 and its absence by 0.
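A quick sketch using pandas (the color column is a made-up example):

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "yellow", "green", "red"]})

# One dummy column per class; 1 marks presence, 0 marks absence
dummies = pd.get_dummies(df["color"], prefix="color", dtype=int)
print(dummies)
#    color_green  color_red  color_yellow
# 0            0          1             0
# 1            0          0             1
# 2            1          0             0
# 3            0          1             0
```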
Frequency-based encoding: Using this method, the relative frequency of occurrence is calculated for each class, and that relative frequency is assigned as the encoded value for the class.
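A minimal sketch with pandas:

```python
import pandas as pd

colors = pd.Series(["red", "red", "yellow", "green", "red"])

# The relative frequency of each class becomes its encoded value
freq = colors.value_counts(normalize=True)
print(colors.map(freq).tolist())  # [0.6, 0.6, 0.2, 0.2, 0.6]
```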
Target mean encoding: This can only be used for supervised machine learning problems where there is a target or response attribute. Each class of the categorical input attribute is encoded as a function of the mean of the target.
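A minimal sketch with pandas (the column names are made up). In practice, the means should be computed on the training folds only, to avoid target leakage:

```python
import pandas as pd

df = pd.DataFrame({
    "city":   ["NY", "SF", "NY", "SF", "NY"],
    "target": [1, 0, 1, 1, 0],
})

# Each class is replaced by the mean of the target within that class
means = df.groupby("city")["target"].mean()
df["city_encoded"] = df["city"].map(means)
print(means.to_dict())  # {'NY': 0.666..., 'SF': 0.5}
```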
Binary encoding: The classes are first converted to a numeric format, and then they are converted to their equivalent binary strings. The binary digits are then split into separate columns.
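A sketch using the category_encoders package mentioned below (assuming its BinaryEncoder transformer; exact output column naming may differ across versions):

```python
import pandas as pd
import category_encoders as ce

df = pd.DataFrame({"color": ["red", "yellow", "green", "blue"]})

# Classes become ordinal integers, whose binary digits are split into columns
encoder = ce.BinaryEncoder(cols=["color"])
print(encoder.fit_transform(df))
# Four classes need three binary-digit columns (e.g. 'red' -> 0 0 1)
```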
Hash encoding: This method is also popularly known as feature hashing. A hash function is used to map data to a number. This method may map different classes to the same bucket (a collision), but it is useful when there are hundreds of classes/categories present for an attribute.
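To illustrate the idea, here is a hand-rolled sketch that maps each class to one of a fixed number of buckets via a stable hash (the bucket count is an arbitrary choice; category_encoders also provides a HashingEncoder for this):

```python
import hashlib

def hash_encode(value, n_buckets=8):
    # Map a class label to one of n_buckets using a stable hash
    digest = hashlib.md5(value.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_buckets

for color in ["red", "yellow", "green", "blue"]:
    print(color, "->", hash_encode(color))
```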
Apart from these six techniques, there are some more, like sum, polynomial, backward difference, and Helmert encoding for categorical attributes. You can find more information about these on Will's blog. All of these techniques are also implemented in Python and are available in the package category_encoders. One can run pip install category_encoders to install the package and try out the different encoding methods for a machine learning project. The scikit-learn categorical encoding GitHub page has more information about the various functions in the package.
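As a quick illustration of the package interface (assuming the scikit-learn-style API of category_encoders), a target mean encoder can be used like any other transformer:

```python
import pandas as pd
import category_encoders as ce

df = pd.DataFrame({"color": ["red", "yellow", "green", "red"]})
y = [1, 0, 1, 1]

# Fit on the training data, then transform train and test alike
encoder = ce.TargetEncoder(cols=["color"])
print(encoder.fit_transform(df, y))
```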
There are some more methods, like the leave-one-out encoding method, and you can refer to the videos below to learn more about this technique and feature engineering in general.