class: title-slide, left, bottom

# Lecture 20

----

## **DANL 200: Introduction to Data Analytics**
### Byeong-Hak Choe
### November 10, 2022

---
# Announcement
### <p style="color:#00449E"> Changes in Syllabus

- The main change in the syllabus is that we will have in-class pop-up quizzes.

- The scores from the quizzes will be added to the total percentage grade as bonus credit.

---
class: inverse, center, middle

# Factors
<html><div style='float:left'></div><hr color='#EB811B' size=1px width=796px></html>

---
# Factors
### <p style="color:#00449E"> Creating factors

- In `R`, factors are categorical variables: variables that have a fixed and known set of possible values.

```r
x1 <- c("Dec", "Apr", "Jan", "Mar")
```

- Using a string to record the variable `x1` has two problems:
  1. There are only twelve possible months, and there's nothing saving us from typos.
  2. It doesn't sort in a useful way.

```r
x2 <- c("Dec", "Apr", "Jam", "Mar")

sort(x1)
```

---
# Factors
### <p style="color:#00449E"> Creating factors with `factor()`

- We can fix both of these problems with `factor()`.
  - To create a factor, we must start by creating a list of the valid `levels`.
  - Any values not in the set will be silently converted to `NA`.
  - If we omit the `levels`, they'll be taken from the data in alphabetical order:

.pull-left[
```r
months <- c(
  "Jan", "Feb", "Mar", "Apr", "May", "Jun",
  "Jul", "Aug", "Sep", "Oct", "Nov", "Dec")

x1 <- c("Dec", "Apr", "Jan", "Mar")
y1 <- factor(x1, levels = months)
sort(y1)
```
]

.pull-right[
```r
x2 <- c("Dec", "Apr", "Jam", "Mar")
y2 <- factor(x2, levels = months)
y2

factor(x1)
```
]

---
# Factors
### <p style="color:#00449E"> Creating factors with `factor()`

- Sometimes we'd prefer that the order of the levels match the order of first appearance in the data.
  - We can do that when creating the factor by setting `levels` to `unique()`.

- If we ever need to access the set of valid levels directly, we can do so with `levels()`.

```r
x1
f1 <- factor(x1, levels = unique(x1))
f1

levels(f1)
```

---
# Factors
### <p style="color:#00449E"> General Social Survey

- We're going to focus on the data frame `forcats::gss_cat`, which is a sample of data from the General Social Survey.

- When factors are stored in a data frame, we can see them with `count()`.

```r
gss_cat

gss_cat %>%
  [?](race)
```

---
# Factors
### <p style="color:#00449E"> Modifying factor order

- It's often useful to change the order of the factor levels in a visualization.

- Imagine we want to explore the average number of hours spent watching TV per day across `relig`:

.pull-left[
```r
relig_summary <- gss_cat %>%
  group_by([?]) %>%
  summarize(
    age = [?](age, [?]),
    tvhours = [?](tvhours, [?]),
    n = [?]
  )
```
]

.pull-right[
```r
ggplot(relig_summary,
       aes(tvhours, relig)) +
  [?]()
```
]

---
# Factors
### <p style="color:#00449E"> Modifying factor order

- We can reorder the levels using `fct_reorder(f, x, fun)`, which takes three arguments:
  - `f`: the factor whose levels we want to modify.
  - `x`: a numeric vector that we want to use to reorder the levels.
  - Optionally, `fun`: a function that's used if there are multiple values of `x` for each value of `f`. The default is the *median*.

```r
relig_summary %>%
  mutate(relig = [?](relig, tvhours)) %>%
  ggplot(aes(tvhours, relig)) +
  [?]()
```
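- As an illustration, here is one plausible way to fill in the blanks above; this is a hedged guess following `dplyr`/`forcats`/`ggplot2` conventions, and the `[?]` blanks themselves are left for the pop-up quizzes:

```r
# One plausible completion of the blanked code above (a hedged guess)
relig_summary <- gss_cat %>%
  group_by(relig) %>%
  summarize(
    age     = mean(age, na.rm = TRUE),
    tvhours = mean(tvhours, na.rm = TRUE),
    n       = n()
  )

relig_summary %>%
  mutate(relig = fct_reorder(relig, tvhours)) %>%  # reorder by tvhours
  ggplot(aes(tvhours, relig)) +
  geom_point()
```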
---
# Factors
### <p style="color:#00449E"> Modifying factor order

- We can use `relevel()` to set the first level (the *reference level*).
  - `relevel(x, ref = ...)` takes at least two arguments:
    - `x`: a factor variable
    - `ref`: the reference (first) level

.pull-left[
```r
rincome_summary <- gss_cat %>%
  group_by([?]) %>%
  summarize(
    age = [?](age, [?]),
    tvhours = [?](tvhours, [?]),
    n = [?]
  )
```
]

.pull-right[
```r
ggplot(rincome_summary,
       aes(age,
           [?](rincome, age)
           )
       ) +
  geom_point()

ggplot(rincome_summary,
       aes(age,
           [?](rincome, "Not applicable")
           )
       ) +
  geom_point()
```
]

---
class: inverse, center, middle

# Dates and Times
<html><div style='float:left'></div><hr color='#EB811B' size=1px width=796px></html>

---
# Dates and times

- The `lubridate` package makes it easier to work with dates and times in `R`.

```r
library(tidyverse)
library(lubridate)
library(nycflights13)
```

---
# Dates and times
### <p style="color:#00449E"> Creating date/times

- To get the current date or date-time, we can use `today()` or `now()`.

- The names of the `lubridate` parsing functions are combinations of the letters y, m, and d (for year, month, and day) and, for date-times, h, m, and s (for hour, minute, and second).

.pull-left[
```r
today()
now()
```
]

.pull-right[
```r
ymd("2017-01-31")
[?]("January 31st, 2017")
[?]("31-Jan-2017")
[?](20170131)

[?]("2017-01-31 20:11:59")
[?]("01/31/2017 08:01")
[?](20170131, [?] = "UTC")
```
]

---
# Dates and times
### <p style="color:#00449E"> Creating date/times

- To create a date/time from individual components, use `make_date()` for dates, or `make_datetime()` for date-times:

```r
flights %>%
  select(year, month, day, hour, minute)

flights %>%
  select(year, month, day, hour, minute) %>%
  mutate(departure = [?](year, month, day, hour, minute))

flights %>%
  select(year, month, day, hour, minute) %>%
  mutate(departure = [?](year, month, day))
```

---
# Dates and times
### <p style="color:#00449E"> Creating date/times

- We can visualize time-series data using `lubridate` functions.

```r
flights_dt %>%
  ggplot(aes(dep_time)) +
  geom_freqpoly(binwidth = 86400)  # 86400 seconds = 1 day

flights_dt %>%
  filter(dep_time < ymd(20130102)) %>%
  ggplot(aes(dep_time)) +
  geom_freqpoly(binwidth = 600)  # 600 seconds = 10 minutes
```

---
# Dates and times
### <p style="color:#00449E"> Creating date/times

- `as_datetime()` and `as_date()` switch between a date-time and a date.

```r
[?](today())
[?](now())
```
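- As a hedged illustration, one plausible completion of the parsing and coercion blanks above, assuming the intended functions are `lubridate`'s `mdy()`, `dmy()`, `ymd()`, `ymd_hms()`, `mdy_hm()`, `as_datetime()`, and `as_date()`:

```r
# One plausible completion (the [?] blanks are quiz prompts)
mdy("January 31st, 2017")       # month-day-year
dmy("31-Jan-2017")              # day-month-year
ymd(20170131)                   # also works on unquoted numbers

ymd_hms("2017-01-31 20:11:59")  # date-time with seconds
mdy_hm("01/31/2017 08:01")      # date-time without seconds
ymd(20170131, tz = "UTC")       # attach a time zone

as_datetime(today())            # date -> date-time
as_date(now())                  # date-time -> date
```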
---
class: inverse, center, middle

# Modeling Methods - Linear Regression
<html><div style='float:left'></div><hr color='#EB811B' size=1px width=796px></html>

---
# Linear Regression Methods
### <p style="color:#00449E"> Applications

- With linear regression, we can predict outcomes and infer relationships between variables:
  - AdWord valuation: how much a company should spend to buy certain AdWords on search engines
  - Estimating the price elasticity (the rate at which a price increase will decrease sales, and vice versa) of various products or product classes
  - Predicting the probability that a loan will default
  - Inferring how much a marketing campaign will increase traffic or sales

---
# Linear Regression
### <p style="color:#00449E">

- Linear regression is the fundamental method we should always try first in empirical analysis. It gives us:
  - The predicted outcome;
  - The relationship between the explanatory variables and the outcome variable.

- Issues related to linear regression methods include:
  - **Collinearity**: too high a correlation between the explanatory variables.
  - **Omitted variable bias**: bias in the model caused by omitting important explanatory variables that are correlated with the included explanatory variables.
  - **Selection bias**: bias in the model that arises because the *selected* sample does not properly represent the entire population.

---
# Linear Regression
### <p style="color:#00449E"> Example

- Suppose we want to predict personal income based on a person's age.
  - In other words, for every person `i`, we want to predict `PINCP[i]` based on `AGEP[i]`.

- We also want to estimate how an increase in `AGEP[i]` is associated with a change in `PINCP[i]`.

---
# Linear Regression
### <p style="color:#00449E"> Linear Relationship

- Linear regression assumes that ...
  - The outcome `PINCP[i]` is linearly related to the input `AGEP[i]`:

`$$\texttt{PINCP[i]} \;=\quad \texttt{b0} \,+\, \texttt{b1*AGEP[i]} \,+\, \texttt{e[i]}$$`

  where `e[i]` is a statistical error term.

---
# Linear Regression
### <p style="color:#00449E"> The Linear Relationship between `PINCP` and `AGEP`

<img src="../lec_figs/reg_pincp_age.png" width="67%" style="display: block; margin: auto;" />

---
# Linear Regression
### <p style="color:#00449E"> Example

- Suppose we also want to estimate how gender affects personal income.

- Linear regression assumes that ...
  - The outcome `PINCP[i]` is linearly related to each of the inputs `AGEP[i]` and `SEX[i]`:

`$$\texttt{PINCP[i]} \;=\quad \texttt{f(AGEP[i], SEX[i])} \,+\, \texttt{e[i]} \qquad\qquad\qquad\qquad\\ \;=\quad \texttt{b0} \,+\, \texttt{b1*AGEP[i]} \,+\, \texttt{b2*SEX[i]}\,+\, \texttt{e[i]}$$`

- The variable on the left-hand side is called an outcome variable or a dependent variable.

- Variables on the right-hand side are called explanatory variables, independent variables, or input variables.

- The coefficients `\(\texttt{b[1]}, ... , \texttt{b[P]}\;\)` on the right-hand side are called beta coefficients.

---
# Linear Regression
### <p style="color:#00449E"> Goals of Linear Regression

- The goals of linear regression are to ...

1. Find the estimated values of `b1` and `b2`: `\(\quad \hat{\texttt{b1}}\)` and `\(\hat{\texttt{b2}}\)`.
2. Make a prediction of `PINCP[i]` for each person `i`: `\(\quad \widehat{\texttt{PINCP}}\texttt{[i]}\)`.

`$$\widehat{\texttt{PINCP}}\texttt{[i]} \;=\quad \hat{\texttt{b0}} \,+\, \hat{\texttt{b1}}\texttt{*AGEP[i]} \,+\, \hat{\texttt{b2}}\texttt{*SEX[i]}$$`

- We use the hat notation `\((\,\hat{\texttt{ }}\,)\)` to distinguish *estimated* beta coefficients and *predicted* outcomes from the *true* values of the beta coefficients and outcome variables, respectively.

---
# Linear Regression
### <p style="color:#00449E"> More Assumptions

- The assumptions of the linear regression model are that ...
  - The outcome variable is a linear combination of the explanatory variables.
  - Errors have a mean value of 0.
  - Errors are *uncorrelated* with the explanatory variables.

---
# Linear Regression
### <p style="color:#00449E"> Beta estimates

- Linear regression finds the beta coefficients `\(( \texttt{b[0]}, ... , \texttt{b[P]} )\)` such that ...
  - The linear function `\(\texttt{f(x[i, ])}\)` is as near as possible to `\(\texttt{y[i]}\)` for all `\(\texttt{(x[i, ], y[i])}\)` pairs in the data.

- In other words, the estimator for the beta coefficients is chosen to minimize the sum of squares of the *residual errors* (SSR):
  - `\(\texttt{Residual_Error[i] = y[i] - } \hat{\texttt{y}}\,\texttt{[i]}\)`.
  - `\(\texttt{SSR} = \texttt{Residual_Error[1]}^{2} + \cdots + \texttt{Residual_Error[N]}^{2}\)`.
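- As a toy illustration with made-up numbers, the SSR can be computed by hand; `lm()` picks the coefficients that minimize exactly this quantity:

```r
# Toy example (hypothetical data): compute the SSR of a fitted line
x <- c(1, 2, 3, 4, 5)
y <- c(2.1, 3.9, 6.2, 7.8, 10.1)

fit <- lm(y ~ x)                    # least-squares fit
residual_error <- y - fitted(fit)   # y[i] - y_hat[i]
SSR <- sum(residual_error^2)        # the quantity lm() minimizes
SSR
```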
---
# Linear Regression
### <p style="color:#00449E"> Example of Prediction

.pull-left[
- Linear regression often does an excellent job, even when the actual relationship between `\(\texttt{x[i, ]}\)` and `\(\texttt{y[i]}\)` is not linear.
- For example, `\(y = x^2\;\)` vs. `\(\;f(x) = -22 + 11x\)`
]

.pull-right[
<img src="../lec_figs/pds_fig7_3.png" width="100%" style="display: block; margin: auto;" />
]

---
# Linear Regression
### <p style="color:#00449E"> Evaluating Models

- **Training data**: When we're building a model to make predictions or to identify relationships, we need *data* to build the model.
- **Testing data**: We also need data to test whether the model works well on *new data*.

.pull-left[
- So we split the data into training and test sets when building a linear regression model.
]

.pull-right[
<img src="../lec_figs/pds_fig4_12.png" width="100%" style="display: block; margin: auto;" />
]

---
# Linear Regression
### <p style="color:#00449E"> Evaluating Models

- We need to ensure that our model will perform well in the real world.

<div class="figure" style="text-align: center">
<img src="../lec_figs/pds_fig6_6.png" alt="Schematic of model construction and evaluation" width="52%" />
<p class="caption">Schematic of model construction and evaluation</p>
</div>

---
# Linear Regression
### <p style="color:#00449E"> A Little Bit of Statistics for the Uniform Distribution

.pull-left[
- The probability density function for the uniform distribution looks like:
  - With the uniform distribution, any value of `\(x\)` between 0 and 1 is equally likely to be drawn.
  - `runif(n)` generates `n` uniform random numbers between 0 and 1.
]

.pull-right[
<img src="../lec_figs/unifpdf.png" width="75%" style="display: block; margin: auto;" />
]

- We will use the uniform distribution when splitting the data into training and testing sets.

---
class: inverse, center, middle

# Linear Regression using **R**
<html><div style='float:left'></div><hr color='#EB811B' size=1px width=796px></html>

---
# Linear Regression
### <p style="color:#00449E"> Example of Linear Regression using **R**

- We will use the 2016 US Census PUMS dataset:
  - Full-time employees between 20 and 50 years of age, with income between $1,000 and $250,000;
  - Personal data recorded includes personal income and demographic variables:
    - `PINCP`: personal income
    - `AGEP`: age
    - `SEX`: sex

---
# Linear Regression
### <p style="color:#00449E"> Splitting Data into Training and Testing Data

.panelset[

.panel[.panel-name[Step 1. set.seed()]
```r
# Importing the cleaned small sample of data
psub <- readRDS(
  url('https://bcdanl.github.io/data/psub.RDS')
  )

# Making the random sampling reproducible by setting the random seed.
set.seed(3454351)  # 3454351 is just any number.

# The set.seed() function sets the starting number
# used to generate a sequence of random numbers.

# With set.seed(), we can replicate the random number generation:
# If we start with the same seed number in set.seed() each time,
# we run the same random process,
# so that we can replicate the same random numbers.
```
]

.panel[.panel-name[Step 2. runif()]
```r
# How many random numbers do we need?
gp <- runif([?])  # draws from a random variable that follows Unif(0, 1)

# Splits 50-50 into training and test sets
# using filter() and gp
dtrain <- filter(psub, gp >= [?])
dtest <- filter(psub, gp < [?])

# A vector can be used for CONDITION in filter(data.frame, CONDITION)
# if the length of the vector is the same as the number of rows
# of the data.frame.
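
# One plausible completion of the blanks above (a hedged guess;
# the [?] blanks are quiz prompts):
#   gp <- runif(nrow(psub))             # one Unif(0,1) draw per row
#   dtrain <- filter(psub, gp >= 0.5)   # roughly half the rows
#   dtest  <- filter(psub, gp < 0.5)    # the remaining rows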
```
]
]

---
# Linear Regression
### <p style="color:#00449E"> Exploratory Data Analysis (EDA)

- Use summary statistics and visualization to explore the data, particularly the following variables:
  - `PINCP`: personal income
  - `AGEP`: age
  - `SEX`: sex

- It's often better to get some sense of how the data behaves through EDA before doing any statistical analysis.

```r
install.packages("GGally")  # to use GGally::ggpairs()
library(GGally)

ggpairs( select(dtrain, PINCP, AGEP, SEX) )  # for a correlogram, or correlation matrix
```

---
# Linear Regression
### <p style="color:#00449E"> Building a linear regression model using `lm()`

```r
model <- lm(formula = PINCP ~ AGEP + SEX, data = dtrain)
model <- lm(PINCP ~ AGEP + SEX, data = dtrain)
```

In the above lines of R commands, ...
- `model`: R object to save the estimation result of the linear regression
- `lm()`: Linear regression modeling function
- `PINCP ~ AGEP + SEX`: Formula for the linear regression
  - `PINCP`: Outcome/Dependent variable
  - `AGEP, SEX`: Input/Independent/Explanatory variables
- `dtrain`: Data frame to use for training

---
# Linear Regression using **R**
### <p style="color:#00449E"> Making predictions with a linear regression model using `predict()`

```r
dtest$pred <- predict(model, newdata = dtest)
```

- In the above line of R commands, ...
  - `dtest$pred`: Adds a new column `pred` to the `dtest` data frame. `mutate()` also works.
  - `predict()`: Function to get the predicted outcome using `model` and `dtest`
  - `model`: R object that saves the estimation result of the linear regression
  - `dtest`: Data frame to use in prediction

- We can make predictions using the `dtrain` data frame, too.

---
# Linear Regression using **R**
### <p style="color:#00449E"> Summary of the regression result

```r
summary(model)  # This produces the output of the linear regression.
```

<img src="../lec_figs/pds_fig7_10a.png" width="60%" style="display: block; margin: auto;" />

---
# Linear Regression using **R**
### <p style="color:#00449E"> Indicator variables

- Linear regression handles a factor variable with `m` possible levels by converting it to `m-1` indicator variables; the remaining category, the first level of the factor variable, becomes the reference level.
  - The value of an indicator variable is either 0 or 1.

- E.g., the indicator variable `SEXFemale` is as follows:

$$
\texttt{SEXFemale[i] }\\
= \begin{cases}
\texttt{1} & \text{if a person } \texttt{i} \text{ is } \texttt{female};\\\\
\texttt{0} & \text{otherwise}.\qquad\qquad\quad\,
\end{cases}
$$

- The level `male` is the reference level when interpreting the beta estimate for `SEXFemale`.

---
# Linear Regression using **R**
### <p style="color:#00449E"> Setting a reference level

- If the independent variables include factor variables, we can set a reference level for each factor variable using `relevel(VARIABLE, ref = "LEVEL")`.

.panelset[

.panel[.panel-name[code]
```r
dtrain$SEX <- relevel(dtrain$SEX, ref = "Female")
model <- lm(log(PINCP) ~ AGEP + SEX, data = dtrain)
summary(model)
```
]

.panel[.panel-name[variable]
- E.g., the indicator variable `SEXMale` is as follows:

$$
\texttt{SEXMale[i] }\\
= \begin{cases}
\texttt{1} & \text{if a person } \texttt{i} \text{ is } \texttt{male};\\\\
\texttt{0} & \text{otherwise}.\qquad\qquad\quad
\end{cases}
$$

- The level `female` now becomes the reference level.

- Note: Changing the reference level does not change the regression result.
]
]
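- To see the 0/1 indicator coding directly, we can inspect the design matrix that `lm()` builds internally; this is a minimal sketch, assuming the `dtrain` data frame from the earlier split:

```r
# Sketch: inspect the indicator (dummy) coding lm() uses internally
head( model.matrix(~ AGEP + SEX, data = dtrain) )
# With "Female" as the reference level, the matrix contains an
# (Intercept) column, AGEP, and a single 0/1 indicator column, SEXMale.
```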
---
# Linear Regression using **R**
### <p style="color:#00449E"> Interpreting Estimated Coefficients

The model is ...

`$$\texttt{PINCP[i]} \;=\quad \texttt{b0} \,+\, \texttt{b1*AGEP[i]} \,+\,\texttt{b2*SEX.Male[i]}\,+\, \texttt{e[i]}$$`

All else being equal, ...

.panelset[

.panel[.panel-name[`AGEP`]
- The model gives an estimated bonus of `b1` to the income for being one year older than an equivalent person (same sex but one year younger).

- All else being equal, an increase in `AGEP` by one unit is associated with an increase in `PINCP` by `b1`.
]

.panel[.panel-name[`SEX.Male`]
- The model gives an estimated bonus of `b2` to the income for being male relative to an equivalent female person (same age but different gender).

- All else being equal, an increase in `SEX.Male` by one unit is associated with an increase in `PINCP` by `b2`.
]
]

---
# Linear Regression using **R**
### <p style="color:#00449E"> Interpreting Estimated Coefficients

Consider the predicted incomes of two male persons, `Ben` and `Bob`, whose ages are 51 and 50, respectively.

`$$\widehat{\texttt{PINCP[Ben]}} \;=\quad \hat{\texttt{b0}} \,+\, \hat{\texttt{b1}}\texttt{ * AGEP[Ben]} \,+\, \hat{\texttt{b2}}\texttt{ * SEX.Male[Ben]}\\ \widehat{\texttt{PINCP[Bob]}} \;=\quad \hat{\texttt{b0}} \,+\, \hat{\texttt{b1}}\texttt{ * AGEP[Bob]} \,+\, \hat{\texttt{b2}}\texttt{ * SEX.Male[Bob]}$$`

`$$\Leftrightarrow\qquad\widehat{\texttt{PINCP[Ben]}} \,-\, \widehat{\texttt{PINCP[Bob]}}\qquad \\ \;=\quad \hat{\texttt{b1}}\texttt{ * }(\texttt{AGEP[Ben]} - \texttt{AGEP[Bob]})\\ \;=\quad \hat{\texttt{b1}}\texttt{ * }\texttt{(51 - 50)}\qquad\qquad\quad\;\;\\ \;=\quad \hat{\texttt{b1}}\qquad\qquad\qquad\qquad\quad\;\;\;\,$$`

---
# Linear Regression using **R**
### <p style="color:#00449E"> Interpreting Estimated Coefficients

Consider the predicted incomes of two persons, `Ben` and `Linda`, who are both aged 50. `Ben` is `male` and `Linda` is `female`.

`$$\widehat{\texttt{PINCP[Ben]}} \;=\quad \hat{\texttt{b0}} \,+\, \hat{\texttt{b1}}\texttt{ * AGEP[Ben]} \,+\, \hat{\texttt{b2}}\texttt{ * SEX.Male[Ben]}\;\;\,\\ \widehat{\texttt{PINCP[Linda]}} \;=\quad \hat{\texttt{b0}} \,+\, \hat{\texttt{b1}}\texttt{ * AGEP[Linda]} \,+\, \hat{\texttt{b2}}\texttt{ * SEX.Male[Linda]}$$`

`$$\Leftrightarrow\qquad\widehat{\texttt{PINCP[Ben]}} \,-\, \widehat{\texttt{PINCP[Linda]}}\qquad\qquad\qquad \\ \;=\quad \hat{\texttt{b2}}\texttt{ * }(\texttt{SEX.Male[Ben]} - \texttt{SEX.Male[Linda]})\\ \;=\quad \hat{\texttt{b2}}\texttt{ * }\texttt{(1 - 0)}\qquad\qquad\quad\qquad\qquad\quad\;\;\\ \;=\quad \hat{\texttt{b2}}\qquad\qquad\qquad\qquad\quad\qquad\qquad\quad\;$$`

---
# Linear Regression using **R**
### <p style="color:#00449E"> Interpreting Estimated Coefficients

- What does it mean for a beta estimate `\(\hat{\texttt{b}}\)` to be statistically significant at the 5% level?
  - It means that the null hypothesis `\(H_{0}: \texttt{b} = 0\)` is rejected at the given 5% significance level.
  - "2 standard error rule" of thumb: the true value of `\(\texttt{b}\)` is roughly 95% likely to be in the confidence interval `\((\, \hat{\texttt{b}} - 2 * \texttt{Std. Error}\;,\; \hat{\texttt{b}} + 2 * \texttt{Std. Error} \,)\)`.
  - The standard error tells us how uncertain our estimate of the coefficient `\(\texttt{b}\)` is.

- We should look for the stars!
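- A quick illustration of the "2 standard error" rule, using made-up numbers (the estimate and standard error here are hypothetical, not from our model):

```r
# Hypothetical estimate and standard error from a summary(model) table
b_hat <- 9000
se    <- 1500

c(lower = b_hat - 2 * se,
  upper = b_hat + 2 * se)   # approximate 95% confidence interval

# For a fitted lm() object, confint(model) computes confidence
# intervals for all coefficients directly.
```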
---
# Linear Regression using **R**
### <p style="color:#00449E"> **R-squared**

- **R-squared** is a measure of how well the model "fits" the data, or its "goodness of fit."

- **R-squared** can be thought of as *what fraction of the `y`'s variation is explained by the independent variables*.

- **R-squared** will be higher for models with more explanatory variables, regardless of whether the additional explanatory variables actually improve the model or not.

- We want **R-squared** to be *fairly* large, and we want the **R-squared** values on the training and testing data to be similar.

- The adjusted **R-squared** is the multiple **R-squared** penalized for the number of input variables.

---
# Linear Regression using **R**
### <p style="color:#00449E"> Visualizations to diagnose the quality of modeling results

.panelset[

.panel[.panel-name[Quality?]
- The following two visualizations from the linear regression are useful for determining the quality of the linear regression:
  1. The actual vs. predicted outcome plot;
  2. The residual plot.
]

.panel[.panel-name[Actual vs. Predicted]
- The following is **the actual vs. predicted outcome plot**.

```r
ggplot( data = dtest,
        aes(x = pred, y = PINCP) ) +
  geom_point( alpha = 0.2, color = "darkgray" ) +
  geom_smooth( color = "darkblue" ) +
  geom_abline( color = "red", linetype = 2 )  # y = x, perfect prediction line
```
]

.panel[.panel-name[Residuals]
- The following is **the residual plot**.
  - `Residual[i] = y[i] - Predicted_y[i]`.

```r
ggplot(data = dtest, aes(x = pred, y = PINCP - pred)) +
  geom_point(alpha = 0.2, color = "darkgray") +
  geom_smooth( color = "darkblue" ) +
  geom_hline( aes( yintercept = 0 ),  # perfect prediction
              color = "red", linetype = 2) +
  labs(x = 'Predicted PINCP', y = "Residual error")
```
]

.panel[.panel-name[Q & A]
- From the plot of actual vs. predicted outcomes and the plot of residuals, we should ask ourselves the following two questions:
  - On average, are the predictions correct?
  - Are there systematic errors?

- A well-behaved plot will bounce *randomly* and form a cloud roughly around the perfect prediction line.
]

.panel[.panel-name[Sys. Errors]
.pull-left[
<img src="../lec_figs/pds_fig7_8.png" width="100%" style="display: block; margin: auto;" />
]
.pull-right[
<div class="figure" style="text-align: center">
<img src="../lec_figs/residual-hetero.png" alt="An example of systematic errors in model predictions" width="90%" />
<p class="caption">An example of systematic errors in model predictions</p>
</div>
]
]
]
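- A small sketch of comparing **R-squared** on the training vs. testing data; `rsq()` here is a hand-rolled helper, not a base R function, and it assumes the `model`, `dtrain`, and `dtest` objects from the earlier slides with `PINCP` as the outcome:

```r
# Hand-rolled R-squared helper (illustrative only)
rsq <- function(y, pred) 1 - sum((y - pred)^2) / sum((y - mean(y))^2)

rsq(dtrain$PINCP, predict(model, newdata = dtrain))  # training R-squared
rsq(dtest$PINCP,  predict(model, newdata = dtest))   # testing R-squared

# Similar values on the two sets suggest the model generalizes;
# a much lower testing value suggests overfitting.
```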
---
# Linear Regression using **R**
### <p style="color:#00449E"> Practical considerations in linear regression

.panelset[

.panel[.panel-name[Correlation vs. Causation]
- Correlation does not imply causation:
  - Just because a coefficient is statistically significant doesn't mean our explanatory variable causes the response of our outcome variable.

- In order to test cause-and-effect relationships through regression, we often need data from (quasi-)experiments to remove *selection bias*.
]

.panel[.panel-name[RCT & A/B testing]
- To achieve causality, researchers conduct experiments such as randomized controlled trials (RCTs) and A/B testing:
  - The treatment group receives the treatment whose effect the researcher is interested in.
  - The control group receives either no treatment or a placebo.
  - The treatment variable indicates the status of treatment and control.

- In linear regression, if all explanatory variables apart from the treatment variable are made equal across the two groups, selection bias is mostly eliminated, so that we may infer *causality* from the beta estimates.
]

.panel[.panel-name[Practical significance]
- There is a difference between practical significance and statistical significance:
  - Whether an association between `x` and `y` is *practically significant* depends heavily on *the unit of measurement*.

- E.g., we regressed income (measured in $) on height, and got a statistically significant beta estimate of 100, with a standard error of 20.

- **Q**. Is 100 a large effect?
]
]

---
# Linear Regression using **R**
### <p style="color:#00449E"> More Explanatory Variables in the Model

- In the 2016 US Census PUMS dataset, the personal data recorded includes occupation, level of education, personal income, and many other demographic variables:
  - `COW`: class of worker
  - `SCHL`: level of education

---
# Linear Regression using **R**
### <p style="color:#00449E"> More Explanatory Variables in the Model

- Suppose we also want to assess how personal income varies with (1) having a bachelor's degree, (2) class of worker, (3) age, and (4) gender. (A hedged sketch of steps 3 and 5 follows this list.)

1. Conduct the exploratory data analysis.
2. Based on the visualization, set a hypothesis regarding the relationship between having a bachelor's degree and `PINCP`.
3. Train the linear regression model.
4. Interpret the beta coefficients from the linear regression result.
5. Calculate the predicted `PINCP` using the testing data.
6. Draw the actual vs. predicted outcome plot and the residual plot.
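- Below is a minimal sketch of steps 3 and 5, assuming `dtrain` and `dtest` come from the earlier split and that `SCHL` and `COW` are factor columns in `psub` (these column encodings are assumptions, not confirmed by the slides):

```r
# Sketch only: extended model with class of worker and education
# (assumes SCHL and COW are factor columns of the psub data)
model2 <- lm(PINCP ~ AGEP + SEX + COW + SCHL, data = dtrain)
summary(model2)

# Step 5: predictions on the testing data
dtest$pred2 <- predict(model2, newdata = dtest)
```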