Data Preprocessing in R – GeeksforGeeks

Welcome, adventurous data enthusiasts! Today we celebrate an exciting journey filled with twists, turns, and fun, as we dive into the world of data cleaning and visualization with the R Programming Language. Grab your digital backpacks, put on your data detective hats, and get ready to unravel the secrets of a dataset filled with test results and interesting features.

Data Preprocessing in R

Installing and loading the tidyverse package.

The Tidyverse Metapackage – Our journey begins with the meta-package known as "tidyverse." With a simple incantation, library(tidyverse), we unlock powerful tools and unleash the magic of data manipulation and visualization.
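The setup itself is just two lines; a minimal sketch, assuming the package is not yet installed:


install.packages("tidyverse")  # one-time install
library(tidyverse)             # attach dplyr, ggplot2, readr, and friends
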

Listing files in the "../input" directory.

As we explore further, we stumble upon a mystical directory known as "../input/". With a flick of our code wand, we reveal the list of files hidden inside.



list.files(path = "../input")

── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
✔ dplyr     1.1.2     ✔ readr     2.1.4
✔ forcats   1.0.0     ✔ stringr   1.5.0
✔ ggplot2   3.4.2     ✔ tibble    3.2.1
✔ lubridate 1.9.2     ✔ tidyr     1.3.0
✔ purrr     1.0.1
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag()    masks stats::lag()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors

The Magical Powers of "psych" – we stumble upon a rare gem known as "psych." With install.packages('psych') followed by library(psych), we unlock its potent abilities.


install.packages("psych")
library(psych)


Reading the Dataset

We stumbled upon a treasured artifact: a CSV file named "Expanded_data_with_more_features.csv". With a wave of our wand and the invocation of read.csv(), we summon the data into our realm.


data <- read.csv("/kaggle/input/students-exam-scores/Expanded_data_with_more_features.csv")


Data Exploration & Analysis

Our journey begins with a simple task: discovering what lies within our dataset. We load the data and take a sneak peek. With the head() function, we unveil the first few rows. We revel in the pleasure of exploring the dimensions of our dataset using dim().
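The calls themselves did not survive the journey here, but with the functions named above they would look like this:


head(data)  # peek at the first few rows
dim(data)   # number of rows and columns
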


[1] 30641    15

You have summoned the names() function to reveal the secret names of the columns in your dataset. By capturing the output in the variable variable_names and invoking the print() spell, you have successfully unveiled the hidden names of the columns, allowing you to grasp the structure of your dataset.


variable_names <- names(data)
print(variable_names)



 [1] "X"                   "Gender"              "EthnicGroup"        
[4] "ParentEduc" "LunchType" "TestPrep"
[7] "ParentMaritalStatus" "PracticeSport" "IsFirstChild"
[10] "NrSiblings" "TransportMeans" "WklyStudyHours"
[13] "MathScore" "ReadingScore" "WritingScore"

Next, you cast the str() spell upon your dataset. This spell reveals the data type of each variable (column), granting you insight into their mystical nature. By deciphering the output of this spell, you can understand the types of variables present in your dataset, such as character (chr), numeric (num), or factors, among others.
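The spell itself is a one-liner:


str(data)
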


'data.frame':	30641 obs. of  15 variables:
 $ X                  : int  0 1 2 3 4 5 6 7 8 9 ...
 $ Gender             : chr  "female" "female" "female" "male" ...
 $ EthnicGroup        : chr  "" "group C" "group B" "group A" ...
 $ ParentEduc         : chr  "bachelor's degree" "some college" "master's degree" "associate's degree" ...
 $ LunchType          : chr  "standard" "standard" "standard" "free/reduced" ...
 $ TestPrep           : chr  "none" "" "none" "none" ...
 $ ParentMaritalStatus: chr  "married" "married" "single" "married" ...
 $ PracticeSport      : chr  "regularly" "sometimes" "sometimes" "never" ...
 $ IsFirstChild       : chr  "yes" "yes" "yes" "no" ...
 $ NrSiblings         : int  3 0 4 1 0 1 1 1 3 NA ...
 $ TransportMeans     : chr  "school_bus" "" "school_bus" "" ...
 $ WklyStudyHours     : chr  "< 5" "5 - 10" "< 5" "5 - 10" ...
 $ MathScore          : int  71 69 87 45 76 73 85 41 65 37 ...
 $ ReadingScore       : int  71 90 93 56 78 84 93 43 64 59 ...
 $ WritingScore       : int  74 88 91 42 75 79 89 39 68 50 ...

Ah, but wait! Your keen eyes have spotted an anomaly. The variable "WklyStudyHours" has been read in as character (chr) rather than a type you can compute with: it stores ranges such as "< 5" and "5 - 10" as plain text. Fear not, for you possess the power to rectify this discrepancy.
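The original trails off at this point, but one plausible fix is sketched below: the "< 5" and "5 - 10" labels come straight from the str() output above, while the "> 10" bucket is an assumption about the remaining level. Rather than forcing the ranges into numbers, we can treat them as an ordered factor:


# Hypothetical recoding: study-hour ranges as an ordered factor
data$WklyStudyHours <- factor(data$WklyStudyHours,
                              levels = c("< 5", "5 - 10", "> 10"),
                              ordered = TRUE)
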

Data Cleaning & Formatting

The Magical Transformation – Ah, but our dataset has a few quirks, like missing values and an unwanted index column. Fear not: with a dash of code, we bid farewell to the extra index column and wave our wands to convert blank strings into the mystical realm of NA values. Step by step, we breathe life into our dataset, ensuring each column shines brightly with the correct data type.


data$X <- NULL  # drop the unwanted index column


columns <- colnames(data)

# Convert blank strings to NA in every column
for (column in columns) {
  data[[column]] <- ifelse(trimws(data[[column]]) == "",
                           NA, data[[column]])
}


Handling Missing Values

A World of Missing Values – Ahoy! We have stumbled upon missing values! But fret not, we will fill these gaps and restore balance to our dataset. One by one, we rescue columns from the clutches of emptiness, replacing the void with the most common categories. The missing values tremble as we conquer them, transforming our dataset into a complete and harmonious whole.
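A quick way to see where the gaps lie (a one-line check, not shown in the original):


colSums(is.na(data))  # number of missing values in each column
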



Handling Categorical Data

Unveiling the Secrets of Categorical Columns. Let's start with the EthnicGroup column first, since it has 1840 missing values.


unique_values <- unique(data$EthnicGroup)
print(unique_values)



NA 'group C' 'group B' 'group A' 'group D' 'group E'


ggplot(data, aes(x = EthnicGroup)) +
  geom_bar(fill = "steelblue") +
  labs(title = "Distribution of EthnicGroup",
       x = "EthnicGroup", y = "Count")




frequency <- table(data$EthnicGroup)

mode <- names(frequency)[which.max(frequency)]  # most frequent category
print(mode)



[1] "group C"


data$EthnicGroup[is.na(data$EthnicGroup)] <- mode  # impute NAs with the mode


ggplot(data, aes(x = EthnicGroup)) +
  geom_bar(fill = "steelblue") +
  labs(title = "Distribution of EthnicGroup",
       x = "EthnicGroup", y = "Count")



First, we inspect the unique values of "EthnicGroup" to discover the distinct ethnic groups present in our dataset. Then we create a bar plot. After this, we calculate the frequency count for each category; this step lets us understand the distribution of ethnic groups more quantitatively. We then identify the category with the highest count, the mode. To handle the missing values in the "EthnicGroup" column, we fill those gaps with the mode. Finally, we create another bar plot of the "EthnicGroup" column, this time after the missing values have been filled; the visualization serves as a contrast to the previous plot.

By executing the code blocks below, we handle the missing values in ParentEduc and WklyStudyHours (by dropping the affected rows) and in NrSiblings (by imputing the median).


# Drop rows where ParentEduc or WklyStudyHours is missing
# (base R's na.omit() has no cols argument, so we subset explicitly)
data <- data[!is.na(data$ParentEduc) & !is.na(data$WklyStudyHours), ]



median_value <- median(data$NrSiblings,
                       na.rm = TRUE)

# Impute missing sibling counts with the median
data$NrSiblings <- ifelse(is.na(data$NrSiblings),
                          median_value, data$NrSiblings)

Now we will use a slightly more refined method to fill in the missing values in the remaining columns of the data.


columns <- c("TransportMeans", "IsFirstChild", "TestPrep",
             "ParentMaritalStatus", "PracticeSport")

# Fill the missing values in each categorical column with its mode
for (column in columns) {
  frequency <- table(data[[column]])
  most_common <- names(frequency)[which.max(frequency)]
  data[[column]][is.na(data[[column]])] <- most_common
}


Now let's check whether there are any missing values left in the dataset.
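The check itself is missing from the original; a minimal sketch:


sum(is.na(data))  # total remaining NAs; 0 means the dataset is complete
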





Descriptive statistical measures of a dataset help us better understand and explore the data at hand without going through every observation. We can get all the descriptive statistical measures of the dataset using the describe() function from the psych package.
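Assuming psych is attached as shown earlier, the call is simply:


describe(data)  # mean, sd, median, skew, kurtosis, etc. for each column
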



Feature Scaling

There are times when we have different features in the dataset on different scales. While using the gradient descent algorithm for training Machine Learning algorithms, it is advised to use features that are on the same scale, so that training is stable and fast. There are different methods of feature scaling, such as standardization and normalization, the latter also known as Min-Max Scaling.


Standardizing is like giving your stats a makeover! Imagine you have a group of friends, all of different heights and weights. Some are tall, some are heavy; they are hard to compare directly. So what do you do? You bring in a magic tailor with their standards! The tailor takes each friend and measures their height and weight. Then these measurements are converted to a new scale where each person's height and weight are expressed relative to a common mean and spread. Now all your friends are "standardized" in a sense, adjusted for ease of comparison and analysis.

Thus, standardization is all about converting data to a common scale to facilitate comparison and analysis: each value x becomes z = (x - mean(x)) / sd(x), so a standardized column has mean 0 and standard deviation 1. It's like giving your stats a stylish makeover to reveal their true beauty and power!

  1. Create a new dataframe standardized_data as a copy of the original data dataframe.
  2. Standardize the MathScore column in standardized_data using the scale() function.
  3. Print the first few standardized values of MathScore using head().


standardized_data <- data.frame(data)

# z-score standardization: (x - mean) / sd
standardized_math <- scale(standardized_data$MathScore)

head(standardized_math)




           [,1]
[1,]  0.2758342
[2,]  1.3395404
[3,]  0.6082424
[4,]  0.4087975
[5,]  1.2065771
[6,] -1.7186149


Normalization is like giving your data a makeover to make it look its best! It's like dressing up your numbers to make them feel confident and presentable. Just like getting everyone to put on the same-sized t-shirt, normalization adjusts the values so they all fit neatly within a specific range (0-1). It's like bringing harmony to your statistics, making it easier to examine and analyze. So, let's get your data ready to shine with some normalization magic!

  • Create a new dataframe normalized_data as a copy of the original data.
  • Normalize the values in the MathScore column using the formula:

(normalized_data$MathScore - min(normalized_data$MathScore)) / (max(normalized_data$MathScore) - min(normalized_data$MathScore))

  • The normalized MathScore values are stored in the variable normalized_math, ranging from 0 to 1.


normalized_data <- data.frame(data)

# Min-Max scaling: squeeze values into the [0, 1] range
normalized_math <- (normalized_data$MathScore -
                    min(normalized_data$MathScore)) /
                   (max(normalized_data$MathScore) -
                    min(normalized_data$MathScore))

head(normalized_math)




[1] 0.71 0.87 0.76 0.73 0.85 0.41

Feature Encoding

Feature encoding is like teaching your data to speak the same language as your computer. It's like giving your data a crash course in communication. Feature encoding is a technique for converting categorical or non-numeric data into a numeric representation.

One-Hot Encoding

One-hot encoding is a way to represent categorical variables as numeric values. We create a new data frame called "encoded_data" using one-hot encoding for the "PracticeSport" variable.

  1. Each unique value in the "PracticeSport" column is transformed into a separate column with binary values (0s and 1s) indicating the presence or absence of that value.
  2. The original data frame "data" is combined with the encoded columns using the "cbind" function, resulting in the new "encoded_data" data frame.


# One 0/1 indicator column per PracticeSport level (the -1 drops the intercept)
encoded_data <- as.data.frame(model.matrix(~ PracticeSport - 1, data))

encoded_data <- cbind(data, encoded_data)
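To peek at the freshly minted indicator columns (a quick check, not in the original), note that model.matrix() names them by pasting the variable and level together, e.g. PracticeSportnever:


# Show the original column alongside its 0/1 indicator columns
head(encoded_data[grep("^PracticeSport", names(encoded_data))])
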





Ordinal Encoding

  1. Ordinal encoding is used to represent categorical variables with ordered or hierarchical relationships using numerical values.
  2. In the case of the "PracticeSport" column, we assign numerical values to the categories "never", "sometimes", and "regularly" based on their order: "never" is assigned 0, "sometimes" 1, and "regularly" 2.


encoded_data <- data

# Map the ordered categories to integers
mapping <- c("never" = 0, "sometimes" = 1,
             "regularly" = 2)

encoded_data$PracticeSport <- mapping[as.character(encoded_data$PracticeSport)]
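A quick way to verify the mapping (a sketch, not in the original):


table(encoded_data$PracticeSport, useNA = "ifany")  # counts for 0, 1 and 2
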





Handling Outliers

Box Plots: The Guardians of Numerical Columns – MathScore, ReadingScore, and WritingScore hold untold stories within. But behold, outliers lurk in the shadows, threatening to skew our insights. Fear not, for we summon the power of box plots to conquer these outliers and restore balance to our numerical realm.


boxplot(data[c("MathScore", "ReadingScore", "WritingScore")])


The code below calculates the bounds for outlier detection in several columns of the data dataframe using the 1.5 * IQR rule. It then removes the outliers from each column and creates a box plot, letting us find and address the outliers within the data.


columns <- c("MathScore", "WritingScore", "ReadingScore")

outlier_threshold <- 1.5

lower_bounds <- c()
upper_bounds <- c()

# Compute the 1.5 * IQR fences for each score column
for (column in columns) {
  column_iqr <- IQR(data[[column]])
  column_lower_bound <- quantile(data[[column]], 0.25) - outlier_threshold * column_iqr
  column_upper_bound <- quantile(data[[column]], 0.75) + outlier_threshold * column_iqr

  lower_bounds <- c(lower_bounds, column_lower_bound)
  upper_bounds <- c(upper_bounds, column_upper_bound)
}

# Drop the rows that fall outside the fences, column by column
for (i in 1:length(columns)) {
  column <- columns[i]
  lower_bound <- lower_bounds[i]
  upper_bound <- upper_bounds[i]

  data <- data[data[[column]] >= lower_bound & data[[column]] <= upper_bound, ]
}

boxplot_data <- data[, columns, drop = FALSE]
boxplot(boxplot_data, names = columns, main = "Box Plot of Numerical Columns")



And that's a wrap! But remember, this is just the beginning of our data journey. There's a vast world of analysis, modeling, and interpretation awaiting us. So gear up, hold on tight, and let's keep unraveling the mysteries hidden within the data!

Keep exploring and have fun on your data-driven escapades!
