In this project, the open-source R programming language is used to model the progression of the COVID-19 pandemic across different U.S. counties. R is maintained by an international team of developers who make the language available through The Comprehensive R Archive Network (CRAN). Readers interested in reusing our code and reproducing our results should have R installed locally on their machines. R can be installed on a number of operating systems (see Windows, Mac, and Linux for installation instructions). We also recommend using the RStudio interface for R, which can be downloaded for free by following the instructions at the link. For non-R users, we recommend Hands-On Programming with R for a brief overview of the software’s functionality. Hereafter, we assume that the reader has an introductory understanding of the R programming language.
In the code chunk below, we load the packages used to support our analysis. Note that this and any other code chunk can be hidden by clicking on the ‘Hide’ button to facilitate navigation. The reader can hide all code and/or download the Rmd file associated with this document by clicking on the Code button in the top right corner of this document. Our input and output files can also be accessed/downloaded from fmegahed/covid19.
if(require(pacman)==FALSE) install.packages("pacman") # install the pacman package if it is not already available
if(require(devtools)==FALSE) install.packages("devtools") # install the devtools package if it is not already available
# install the albersusa package from GitHub if it is not found locally on the machine
if(require(albersusa)==FALSE) devtools::install_github('hrbrmstr/albersusa')
# install (via pacman) any of the packages below that are missing, then load them all
pacman::p_load(tidyverse, magrittr, janitor, dataPreparation, lubridate, skimr, # for data analysis
COVID19, rvest, readxl, # for extracting relevant data
DT, pander, stargazer, knitr, # for formatting and nicely printed outputs
scales, RColorBrewer, DataExplorer, tiff, grid,# for plots
plotly, albersusa, tigris, leaflet, tmap, # for maps
zoo, fpp2, NbClust, # for TS analysis and clustering
VIM, nnet, caret, MuMIn, # for multinomial regression modeling
conflicted) # for managing conflicts in functions with same names
# Handling conflicting function names from packages
conflict_prefer('combine', 'dplyr') # Preferring dplyr::combine over any other package
conflict_prefer('select', "dplyr") #Preferring dplyr::select over any other package
conflict_prefer("summarize", "dplyr") # similar to above but with dplyr::summarize
conflict_prefer("filter", "dplyr") # Preferring filter from dplyr
conflict_prefer("dist", "stats") # Preferring dist from stats
conflict_prefer("as.dist", "stats") # Preferring as.dist from stats
# Custom Functions
source_url('https://raw.githubusercontent.com/fmegahed/covid19-deaths/master/Markdown/custom_functions.R')
set.seed(2020) # to assist with reproducibility
sInfo = sessionInfo() # saving all the packages/functions & session info
For our analysis, we fuse data from multiple sources. We describe the process of obtaining and merging each of these sources in the subsections below.
In this section, we utilize the COVID19 package (Guidotti & Ardia, 2020) to obtain the following information:
From this information, we have also computed the new daily and weekly confirmed cases/deaths per county. The data are stored in a tidy (long) format, but can be expanded to a wide format using pivot_wider() from the tidyverse package.
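For readers who prefer a wide layout, the short sketch below (illustrative only and not part of our pipeline; it assumes the counties data frame created in the next code chunk, and countiesWide is simply an illustrative name) shows how pivot_wider() spreads the daily counts into one column per date.
countiesWide = counties %>% # illustrative reshape; assumes the counties object created below
  ungroup() %>% # drop any grouping before reshaping
  select(id, date, confirmed) %>% # one row per county-date combination
  pivot_wider(names_from = date, values_from = confirmed) # one column of cumulative cases per date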
endDate = '2021-01-02'
endDatePrintV = format(ymd(endDate), format = "%b %d, %Y")
counties = covid19(country = "US",
level = 3, # for county
start = "2020-03-01", # First Sunday in March
end = endDate, # end Date
raw = FALSE, # to ensure that all counties have the same grid of dates
amr = NULL, # we are not using the apple mobility data for our analysis
gmr = NULL, # we are not using the Google mobility data for our analysis
wb = NULL, # world bank data not helpful for county level analysis
verbose = FALSE)
counties %<>% # next line removes non-contiguous US states/territories
filter(!administrative_area_level_2 %in% c('Alaska', 'Hawaii', 'Puerto Rico', 'Northern Mariana Islands', 'Virgin Islands')) %>%
fast_filter_variables(verbose = FALSE) %>% #dropping invariant columns or bijections
filter(!is.na(key_numeric)) %>% # these are not counties
group_by(id) %>% # grouping the data by the id column to make computations correct
arrange(id, date) %>% # to ensure correct calculations
mutate(day = wday(date, label = TRUE) %>% factor(ordered = F), # day of week
newCases = c(NA, diff(confirmed)), # computing new daily cases per county
newDeaths = c(NA, diff(deaths)) ) # computing new daily deaths per county
# manually identifying factor variables
factorVars = c("school_closing", "workplace_closing", "cancel_events",
"gatherings_restrictions", "transport_closing", "stay_home_restrictions",
"internal_movement_restrictions", "international_movement_restrictions",
"information_campaigns", "testing_policy", "contact_tracing")
counties %<>% # converting those variables into character and then factor
mutate_at(.vars = vars(any_of(factorVars)), .funs = as.character) %>%
mutate_at(.vars = vars(any_of(factorVars)), .funs = as.factor)
# Saving the data into an RDS file
saveRDS(counties, "../Data/counties.rds")
In the code chunk below, we obtain seven additional datasets, whose variables may help explain the differences among the counties’ COVID-19 case time series:
A. Rural/Underserved Counties: From the Consumer Financial Protection Bureau, we obtained the final 2020 list titled Rural or underserved counties. Per the website, the procedure for determining a county’s classification is as follows: “Beginning in 2020, the rural or underserved counties lists use a methodology for identifying underserved counties described in the Bureau’s interpretive rule: Truth in Lending Act (Regulation Z); Determining “Underserved” Areas Using Home Mortgage Disclosure Act Data.”
B. Based on US Census data, we extracted the land area in square miles for each county and combined it with population to compute each county’s population density, which we hypothesize to be predictive of hotspots for COVID-19 transmission based on the available COVID-19 literature.
C. Based on the MIT Election Data and Science Lab (2018), we obtained the county-level voting results for the 2016 Presidential election. These data were used to compute the percentage of total votes that went to President Trump, with the underlying hypothesis that the politicization of the COVID-19 response (e.g., perception of/willingness to use face masks, policies, and the population’s reaction to the disease) may be explained by party affiliation.
D. We extracted an overall government response index capturing the strength of COVID-19 response policies at the state level (including the District of Columbia) from the Blavatnik School of Government’s GitHub repository. This index combines 13 indicators that capture the “full range of government response”. Details on how this index is computed can be found in BSG-WP-2020/034.
E. Based on the following Kaiser Health News webpage, we extracted county-level information on the percent of the population aged 60+ and the number of ICU beds per 10,000 seniors.
F. We have engineered a region variable based on the CDC’s 10 Regions Framework. While geographic regions are hypothesized to be a factor in disease outbreaks, we chose to utilize the CDC regions specifically based on the following explanation from the aforementioned link:
> “CDC’s National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP) is strengthening the consistency and quality of the guidance, communications, and technical assistance provided to states to improve coordination across our state programs”
G. Based on the Census Bureau’s Small Area Income and Poverty Estimates (SAIPE) Program, we extracted the estimated percent of the population in poverty. The estimate is based on 2018 data (released in December 2019). At the start of our analysis, these estimates were the most up-to-date publicly available data.
crossSectionalData = counties %>% ungroup() %>%
select(id, key_numeric, key_google_mobility, population,
administrative_area_level_2, administrative_area_level_3) %>%
unique()
# [A] Rural or Urban Classification of the County
ru = read.csv("https://www.consumerfinance.gov/documents/8911/cfpb_rural-underserved-list_2020.csv")
ru %<>% transmute(key_numeric = FIPS.Code, #renaming FIPS.Code to key_numeric
countyType = "Rural/Underserved") # creates two vars and drop old vars
crossSectionalData = merge(crossSectionalData, ru, by = "key_numeric", all.x = TRUE) # to define NA counties
crossSectionalData$countyType %<>% replace_na("Other") # for any county not in the Consumer FIN data replace NA by Other
# [B] Population Density of Each County
download.file("https://www2.census.gov/library/publications/2011/compendia/usa-counties/excel/LND01.xls",
destfile = "../Data/LND01.xls", mode = "wb") # downloading Land Area Data Per the 2010 Census
areas = read_excel("../Data/LND01.xls") %>% # reading the Excel file
select(STCOU, LND110210D) #selecting only the FIPS and the Land Area from the 2010 Census variables
colnames(areas) = c("key_numeric", "LandAreaSqMiles2010") # Renaming the columns
areas$key_numeric %<>% as.numeric() # to remove leading 0
crossSectionalData = merge(crossSectionalData, areas, by ="key_numeric", all.x = TRUE) # adding the area to data frame
crossSectionalData$popDensity = crossSectionalData$population / crossSectionalData$LandAreaSqMiles2010 # creating the population density variable
crossSectionalData %<>% select(-c(population, LandAreaSqMiles2010)) #dropping two variables used in creating pop density
# [C] 2016 Presidential Elections County Data from Harvard https://doi.org/10.7910/DVN/VOQCHQ
elections = read.csv("../Data/countypres_2000-2016.csv") %>% # reading the downloaded CSV
filter(year == 2016 & party == "republican") %>% # just keeping data for recent election and republican votes
mutate(key_numeric = FIPS, # renaming FIPS to key_numeric
percRepVotes = 100*(candidatevotes/totalvotes) ) %>% # computing percent of republican votes (from total votes)
select(key_numeric, percRepVotes) # keeping only the key and variable used in merge
crossSectionalData %<>% merge(elections, by = "key_numeric", all.x = TRUE) # merge with the counties data
# [D] Policy Data
policy = read_csv('https://raw.githubusercontent.com/OxCGRT/USA-covid-policy/master/data/OxCGRT_US_latest.csv')
policy = filter(policy, !is.na(RegionName) & !RegionName %in% c('Alaska', 'Hawaii')) # keep state-level rows and drop the non-contiguous states (Alaska and Hawaii)
policy$state = toupper(policy$RegionName) # a state variable = an upper case of existing RegionName
policy$Date %<>% ymd() # converting the Date data to a date format
policySummary = policy %>% # calculating a summary table of median value for the GovernmentResponseIndex per state
filter(Date >= '2020-03-01' & Date <= endDate) %>% # to match our COVID Data timeSeries
group_by(state) %>% # perform computations using the median value, per state, for each index
summarise(GovernmentResponseIndexMedian = median(GovernmentResponseIndex, na.rm = TRUE))
policySummary$state %<>% str_replace('WASHINGTON DC', 'DISTRICT OF COLUMBIA') %>% str_to_title()
crossSectionalData %<>% merge(policySummary, by.x = "administrative_area_level_2", by.y = 'state', all.x = TRUE)
# [E] Kaiser Health News Data on the County Level
hospitals = read.csv("../Data/data-FPBfZ.csv") %>% # downloaded from KHN on 2020-10-26 (~9:30 pm EDT)
transmute(State = State, # keeping the State Variable | transmute drops variables that are not in call
County = County, # keeping the County Variable
PercentSeniors = Percent.of.Population.Aged.60., # Shortening Original Variable Name
icuBedsPer10000Seniors = 10000 * ICU.Beds/Population.Aged.60.) # Computing icuBedsPer10000Seniors
crossSectionalData %<>% merge(hospitals,
by.x = c("administrative_area_level_2", "administrative_area_level_3"),
by.y = c("State", "County"), all.x = TRUE)
# [F] CDC Regions for Each State
regionsCDC = data.frame(States = c('Connecticut', 'Maine', 'Massachusetts', 'New Hampshire', 'Rhode Island' ,
'Vermont', 'New York', # End of Region A
'Delaware', 'District of Columbia', 'Maryland', 'Pennsylvania',
'Virginia', 'West Virginia', 'New Jersey', # End of Region B
'North Carolina', 'South Carolina', 'Georgia', 'Florida', # Region C
'Kentucky', 'Tennessee', 'Alabama', 'Mississippi', # Region D
'Illinois', 'Indiana', 'Michigan', 'Minnesota', 'Ohio',
'Wisconsin', # End of Region E
'Arkansas', 'Louisiana', 'New Mexico', 'Oklahoma', 'Texas', # Region F
'Iowa', 'Kansas', 'Missouri', 'Nebraska', # Region G
'Colorado', 'Montana', 'North Dakota', 'South Dakota',
'Utah', 'Wyoming', # End of Region H
'Arizona', 'California', 'Hawaii', 'Nevada', # Region I
'Alaska', 'Idaho', 'Oregon', 'Washington' # Region J
),
regions = c(rep('A', 7), rep('B', 7), rep('C', 4),
rep('D', 4), rep('E', 6), rep('F', 5),
rep('G', 4), rep('H', 6), rep('I', 4),
rep('J', 4) ) )
crossSectionalData %<>% merge(regionsCDC, by.x = 'administrative_area_level_2', by.y = 'States', all.x = TRUE) # merge
# [G] Poverty Estimates
download.file("https://www2.census.gov/programs-surveys/saipe/datasets/2018/2018-state-and-county/est18all.xls",
destfile = "../Data/est18all.xls", mode = "wb") # downloading the data for poverty estimates (latest 2018)
poverty = read_excel("../Data/est18all.xls", skip = 3) %>% # reading the data in R
transmute(key_numeric = paste0(`State FIPS Code`, `County FIPS Code`) %>% as.numeric, # creating the key from two variables
povertyPercent = as.numeric(`Poverty Percent, All Ages`) ) # shortening povertyPercent Variable's Name
crossSectionalData %<>% merge(poverty, by = "key_numeric", all.x = TRUE) # merge
# Final Transformations before Saving the Counties Data
crossSectionalData %<>% mutate_at(.vars = c('countyType', 'regions'), as.factor) # converting the two vars to factor
# Saving the data into an RDS file
saveRDS(crossSectionalData, "../Data/crossSectionalData.rds")
# Tabulating the results and providing a way to export the table to different formats
datatable(crossSectionalData %>% select(-c(id, key_numeric, administrative_area_level_2, administrative_area_level_3)),
extensions = c('FixedColumns', 'Buttons'), options = list(
dom = 'Bfrtip',
scrollX = TRUE,
buttons = c('copy', 'csv', 'excel', 'pdf'),
fixedColumns = list(leftColumns = 1)),
rownames = FALSE) %>%
formatRound(columns= c('popDensity', 'percRepVotes', 'GovernmentResponseIndexMedian',
'PercentSeniors', 'icuBedsPer10000Seniors', 'povertyPercent'),
digits=1)
In this section, we perform an exploratory analysis of the data obtained from the multiple sources.
noGoogleNAs = filter(crossSectionalData, !is.na(key_google_mobility)) # removing NAs from key_google_mobility
idIndex = sample(noGoogleNAs$id, 9) # sampling 9 counties by id
# Saving the cumulative cases figure to a TIFF file
tiff(filename = '../Figures/sampleCumulativeCases.tiff',
width = 1366, height =768, pointsize = 16)
counties %>% filter(id %in% idIndex) %>%
ggplot(aes(x = date, y = confirmed, group = id, color = key_google_mobility)) +
geom_line(size = 1.25) +
scale_x_date(date_breaks = "1 month", date_labels = "%b") +
facet_wrap(~ key_google_mobility, scales = 'free_y', ncol = 3) +
theme(legend.position = 'none') +
labs(color = '', x = 'Month', y = 'Cumulative Cases By County',
caption = paste0('Based on Data from March 01, 2020 - ', endDatePrintV)) +
scale_color_brewer(type = 'qual', palette = 'Paired')
invisible( dev.off() ) # to suppress the unwanted output from dev.off
# Creating an interactive plot for the markdown
p = ggplot2::last_plot() + geom_line(size = 0.75) + # modifying the plot for plotly
theme_bw(base_size = 9) + theme(legend.position = 'none') # to make margins smaller
ggplotly(p, height = 768) %>% layout_ggplotly()
# Saving the new daily cases figure to a TIFF file
tiff(filename = '../Figures/sampleNewDailyCases.tiff',
width = 1366, height =768, pointsize = 16)
counties %>% filter(id %in% idIndex) %>%
ggplot(aes(x = date, y = newCases, group = id, color = key_google_mobility)) +
geom_line(size = 1.25) +
scale_x_date(date_breaks = "1 month", date_labels = "%b") +
facet_wrap(~ key_google_mobility, scales = 'free_y', ncol = 3) +
theme(legend.position = 'none') +
labs(color = '', x = 'Month', y = 'New Daily Cases By County',
caption = paste0('Based on Data from March 01, 2020 - ', endDatePrintV)) +
scale_color_brewer(type = 'qual', palette = 'Paired')
invisible( dev.off() ) # to suppress the unwanted output from dev.off
# Creating an interactive plot for the markdown
p = ggplot2::last_plot() + geom_line(size = 0.75) + # modifying the plot for plotly
theme_bw(base_size = 9) + theme(legend.position = 'none') # to make margins smaller
ggplotly(p, height = 768) %>% layout_ggplotly()
crossSectionalData$fips = str_pad(crossSectionalData$key_numeric,
width = 5, side = 'left', pad = '0')
# Retrieving the U.S. county composite map as a simple feature
cty_sf = counties_sf("aeqd") %>% filter(!state %in% c('Alaska', 'Hawaii')) # from albersusa
cty_sf %<>% geo_join(crossSectionalData, by_sp= 'fips', by_df= 'fips')
# Saving a higher quality tiff file for use in the paper
tiff(filename = '../Figures/countyTypes.tiff', width = 1366, height =768, pointsize = 16)
tm_shape(cty_sf) + tm_polygons('countyType', title = 'County Type', palette = "Paired")
invisible( dev.off() ) # to suppress the unwanted output from dev.off
# Printing the png output in the Markdown doc
tm_shape(cty_sf) + tm_polygons('countyType', title = 'County Type', palette = "Paired")
# Saving a higher quality tiff file for use in the paper
tiff(filename = '../Figures/popDensity.tiff', width = 1366, height =768, pointsize = 16)
tm_shape(cty_sf) + tm_polygons('popDensity', title = 'Population Density', palette = "Greens",
style = 'quantile')
invisible( dev.off() ) # to suppress the unwanted output from dev.off
# Printing the png output in the Markdown doc
tm_shape(cty_sf) + tm_polygons('popDensity', title = 'Population Density', palette = "Greens",
style = 'quantile')
# Saving a higher quality tiff file for use in the paper
tiff(filename = '../Figures/repVotes.tiff', width = 1366, height =768, pointsize = 16)
tm_shape(cty_sf) + tm_polygons('percRepVotes', title = '% Republican Votes', palette = "Reds")
invisible( dev.off() ) # to suppress the unwanted output from dev.off
# Printing the png output in the Markdown doc
tm_shape(cty_sf) + tm_polygons('percRepVotes', title = '% Republican Votes', palette = "Reds")
state_sf = usa_sf("aeqd") %>% filter(!name %in% c('Alaska', 'Hawaii')) # from albersusa
state_sf %<>% geo_join(crossSectionalData, by_sp= 'name', by_df= 'administrative_area_level_2')
# Saving a higher quality tiff file for use in the paper
tiff(filename = '../Figures/govResponse.tiff', width = 1366, height =768, pointsize = 16)
tm_shape(state_sf) + tm_polygons('GovernmentResponseIndexMedian',
title = 'Median Value of the Government Response Index', palette = "-Greens")
invisible( dev.off() ) # to suppress the unwanted output from dev.off
# Printing the png output in the Markdown doc
tm_shape(state_sf) + tm_polygons('GovernmentResponseIndexMedian',
title = 'Median Value of the Government Response Index', palette = "-Greens")
# Saving a higher quality tiff file for use in the paper
tiff(filename = '../Figures/percSeniors.tiff', width = 1366, height =768, pointsize = 16)
tm_shape(cty_sf) + tm_polygons('PercentSeniors', title = '% Seniors', palette = "Greens")
invisible( dev.off() ) # to suppress the unwanted output from dev.off
# Printing the png output in the Markdown doc
tm_shape(cty_sf) + tm_polygons('PercentSeniors', title = '% Seniors', palette = "Greens")
# Saving a higher quality tiff file for use in the paper
tiff(filename = '../Figures/cdcRegions.tiff', width = 1366, height =768, pointsize = 16)
tm_shape(state_sf) + tm_polygons('regions', title = 'CDC Region', palette = "Paired")
invisible( dev.off() ) # to suppress the unwanted output from dev.off
# Printing the png output in the Markdown doc
tm_shape(state_sf) + tm_polygons('regions', title = 'CDC Region', palette = "Paired")
# Saving a higher quality tiff file for use in the paper
tiff(filename = '../Figures/povertyPercent.tiff', width = 1366, height =768, pointsize = 16)
tm_shape(cty_sf) + tm_polygons('povertyPercent', title = 'Poverty Percent', palette = "Greens")
invisible( dev.off() ) # to suppress the unwanted output from dev.off
# Printing the png output in the Markdown doc
tm_shape(cty_sf) + tm_polygons('povertyPercent', title = 'Poverty Percent', palette = "Greens")
It is important to note that, in our estimation, there are three key decisions to be made when performing time-series clustering:
Preparation of the Different Time-Series to be Clustered: In this section, we have (a) selected the new daily cases per county as the primary variable of interest, (b) smoothed that variable using a seven-day moving median, and (c) scaled each county’s smoothed series by its maximum value so that it is bounded between 0 and 1. This allows us to compare the shape of the time-series/profile across counties with different populations and where the magnitude of the cases is quite different.
Choice of Distance Measure: The Euclidean distance, i.e., the \(l_2\) norm, is the most commonly used distance measure since it is computationally efficient. However, it may not be suitable for applications where the time-series are of different lengths, and it is sensitive to noise, scale, and time shifts (Sardá-Espinosa, 2017). A short illustration of this distance measure is provided after this list.
Choice of Clustering Algorithm: A large number of clustering algorithms have been proposed in the literature. The most common approaches are shape-based, and include \(k-\)means clustering and hierarchical clustering. The reader is referred to Aghabozorgi et al. (2015) for a detailed review. In our preliminary analysis, we chose the hierarchical clustering approach since it provides an easy-to-understand dendrogram and the number of counties was small. However, in our full analysis, we use the \(k-\)means clustering algorithm since it is computationally efficient. Furthermore, we overcame the traditional limitation of having to pre-specify \(k\) by utilizing 26 indices for determining the optimal number of clusters in a data set, based on the excellent approach and package implementation of Charrad et al. (2014).
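To make the distance measure concrete, the short sketch below (illustrative only, using two hypothetical scaled series) shows that the Euclidean distance is simply the \(l_2\) norm of the difference between two equal-length series; applying stats::dist() to the rows of the wide clustering data frame computes the same quantity for every pair of counties.
x = c(0.10, 0.25, 0.60, 1.00) # hypothetical scaled series for county A
y = c(0.05, 0.30, 0.55, 0.90) # hypothetical scaled series for county B
sqrt(sum((x - y)^2)) # the l2 norm of the difference, approximately 0.132
dist(rbind(x, y), method = "euclidean") # stats::dist() returns the same value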
clusteringPrep = counties %>% # from the counties
select(id, date, key_google_mobility, newCases) %>% # selecting minimal amount of cols for visual inspection
arrange(id, date) %>% # arranged to ensure correct calculations
mutate(newMM7 = rollmedianr(newCases, k = 7, fill = NA), # 7-day moving median of new (adjusted) cases
maxMA7 = max(newMM7, na.rm = T), # obtaining the max per county to scale data
scalednewMM7 = pmax(0, newMM7/maxMA7, na.rm = TRUE) ) %>% # scaling data to a 0-1 scale by county
select(id, key_google_mobility, date, scalednewMM7) %>% # dropping the variable newCases
pivot_wider(names_from = date, values_from = scalednewMM7) # converting the data to a wide format for clustering
constantColumns = which_are_constant(clusteringPrep, verbose = F) # identifying constant columns
datesDropped = colnames(clusteringPrep)[constantColumns] # used for printing the names after the code chunk
clusteringPrep %<>% select(-all_of(constantColumns) ) %>% # speeds up clustering by dec length of series
as.data.frame() # data needs to be data frame for clustering
row.names(clusteringPrep) = clusteringPrep[,1] # needed for tsclust
clusteringPrep = clusteringPrep[,-1] # dropping the id column since it is now row.name
The following dates were removed from our data frame since the scalednewMM7
variable was constant across all counties: 2020-03-01, 2020-03-02, 2020-03-03, 2020-03-04, 2020-03-05, 2020-03-06 and 2020-03-07.
clusteringPrep %<>% select(-c(key_google_mobility)) # removing this variable so we can cluster
nc = NbClust(clusteringPrep, distance = "euclidean", # euclidean distance
min.nc = 2, max.nc = 49, # searching for optimal k between k=2 and k=49
method = "kmeans", # using the k-means method
index = "all") # using 26 of the 30 indices in the package
## *** : The Hubert index is a graphical method of determining the number of clusters.
## In the plot of Hubert index, we seek a significant knee that corresponds to a
## significant increase of the value of the measure i.e the significant peak in Hubert
## index second differences plot.
##
## *** : The D index is a graphical method of determining the number of clusters.
## In the plot of D index, we seek a significant knee (the significant peak in Dindex
## second differences plot) that corresponds to a significant increase of the value of
## the measure.
##
## *******************************************************************
## * Among all indices:
## * 2 proposed 2 as the best number of clusters
## * 12 proposed 3 as the best number of clusters
## * 5 proposed 4 as the best number of clusters
## * 1 proposed 24 as the best number of clusters
## * 1 proposed 39 as the best number of clusters
## * 2 proposed 48 as the best number of clusters
##
## ***** Conclusion *****
##
## * According to the majority rule, the best number of clusters is 3
##
##
## *******************************************************************
kclus = nc$Best.partition %>% as.data.frame() %>% #obtaining the best partition/ cluster assignment for optimal k
rename(., cluster_group = .) %>% rownames_to_column("County")
#converting the wide to tall data and adding the cluster groupings
clusters = clusteringPrep %>%
rownames_to_column(var = "County") %>%
pivot_longer(cols = starts_with("202"), names_to = "Date") %>% # all date columns (including those in early January 2021)
inner_join(., kclus, by = "County") %>%
mutate(cluster_group = as.factor(cluster_group))
idClusters = clusters %>% select(c(County, cluster_group)) # creating a look-up table of county and cluster group
colnames(idClusters) = c('id', 'cluster_group') # renaming the columns
idClusters %<>% unique() #removing the duplicates due to different dates (we had that to ensure that the clustering was applied correctly)
# Adding Cluster Grouping to a subset of the counties data frame
clusterCounties = counties %>%
select(c(id, key_numeric, key_google_mobility, administrative_area_level_2, administrative_area_level_3)) %>%
inner_join(., idClusters, by ='id') %>%
mutate(cluster_group = paste0('C', cluster_group)) %>%
unique()
# saving the results as a RDS File
saveRDS(clusterCounties, '../Data/clusterCounties.rds')
In this subsection, we provide three plots:
spaghettiDF = counties %>% # from the counties
select(id, date, newCases, key_google_mobility) %>% # selecting minimal columns
left_join(clusterCounties[, c('id', 'cluster_group')], by = 'id') %>% # to get clusters
arrange(id, date) %>% # arranged to ensure correct calculations
mutate(newMM7 = rollmedianr(newCases, k = 7, fill = NA), # 7-day moving median of new cases
maxMA7 = max(newMM7, na.rm = T), # obtaining the max per county to scale data
scalednewMM7 = pmax(0, newMM7/maxMA7, na.rm = TRUE) ) %>%
ungroup() %>% select(date, cluster_group, scalednewMM7, key_google_mobility) %>%
group_by(date, cluster_group)
spaghettiDF$cluster_group %<>% as.factor()
# Creating a Named Color Scale
colorPal = brewer.pal(n= levels(spaghettiDF$cluster_group) %>% length(), 'Set2')
names(colorPal) = levels(spaghettiDF$cluster_group)
# Saving the spaghetti plot to a TIFF file
tiff(filename = '../Figures/spaghettiPlot.tiff', width = 1366, height =768, pointsize = 16)
spaghettiDF %>%
ggplot(aes(x = date, y = scalednewMM7, color = cluster_group, group = key_google_mobility)) +
scale_x_date(date_breaks = "1 month", date_labels = "%b") +
geom_line(size = 0.25, alpha = 0.1) +
stat_summary(aes(group = 1),
fun= median,
geom = "line",
size = 1.25, col = 'black') +
facet_wrap(~ cluster_group, ncol = 1) +
theme(legend.position = 'none') +
labs(x = 'Month', y = 'Scaled New Cases By Cluster By Day',
caption = paste0('Solid black line represents the median for each cluster |
Based on Data from March 01, 2020 - ', endDatePrintV) ) +
scale_color_manual(values = colorPal)
invisible( dev.off() ) # to suppress the unwanted output from dev.off
# Printing a png version of the plot in Markdown (lower quality image for quicker compilation of HTML)
readTIFF("../Figures/spaghettiPlot.tiff") %>% grid.raster()
# Creating a data frame containing statistical summaries of the time series by cluster_group
summaryDf = spaghettiDF %>%
summarise(Median = median(scalednewMM7, na.rm= TRUE),
`First Quartile` = quantile(scalednewMM7, probs = 0.25, na.rm= TRUE),
`Third Quartile` = quantile(scalednewMM7, probs = 0.75, na.rm= TRUE)) %>%
pivot_longer(cols = c(`First Quartile`, Median, `Third Quartile`),
names_to = 'Statistic')
tiff(filename = '../Figures/summaryPlot.tiff', width = 1366, height =768, pointsize = 16)
summaryDf %>%
ggplot(aes(x = date, y = value, color = cluster_group, linetype = Statistic)) +
scale_x_date(date_breaks = "1 month", date_labels = "%b") +
geom_line(size = 1.25) +
scale_linetype_manual(values = c('dotted', 'solid', 'twodash')) +
facet_wrap(~ cluster_group, ncol = 1) +
theme(legend.position = 'top') +
labs(color = '', x = 'Month', y = 'Quartiles of Scaled New Cases By Cluster By Day',
caption = paste0('Based on Data from March 01, 2020 - ', endDatePrintV)) +
scale_color_manual(values = colorPal)
invisible( dev.off() ) # to suppress the unwanted output from dev.off
# Printing a png version of the plot in Markdown (lower quality image for quicker compilation of HTML)
readTIFF("../Figures/summaryPlot.tiff") %>% grid.raster()
# Joining the clusterCounties results with the existing county simple features object (cty_sf)
clusterCounties$fips = str_pad(clusterCounties$key_numeric, width = 5, side = 'left', pad = '0')
clusterCounties %<>% ungroup()
cty_sf %<>% left_join(clusterCounties[, c('fips', 'cluster_group')], by = 'fips') # adding cluster_group to cty_sf
# Creating a static visual for the paper
tiff(filename = '../Figures/clusterMap.tiff', width = 1366, height =768, pointsize = 16)
tm_shape(cty_sf) + tm_polygons('cluster_group', title = 'Cluster #', palette = colorPal) +
tm_credits(paste0('Based on Data from March 01, 2020 - ', endDatePrintV), position=c("right", "bottom"))
invisible( dev.off() ) # to suppress the unwanted output from dev.off
# Creating an interactive visual Using the Leaflet Package
#### Creating a longlat projection (required by leaflet)
leaflet_sf = counties_sf("longlat") %>% filter(!state %in% c('Alaska', 'Hawaii')) # from albersusa
leaflet_sf %<>% geo_join(crossSectionalData, by_sp= 'fips', by_df= 'fips') %>%
left_join(clusterCounties[, c('fips', 'cluster_group')], by = 'fips')
#### Setting the Color Scheme
leafletPal = colorFactor('Set2', domain = leaflet_sf$cluster_group, na.color = "white")
#### The visual
leaflet(height=500) %>% # initializing the leaflet map
setView(lng = -96, lat = 37.8, zoom = 3.8) %>% # setting the view on Continental US
addTiles() %>% # adding the default tiles
addPolygons(data = leaflet_sf, stroke = FALSE, fillColor = ~leafletPal(leaflet_sf$cluster_group), # adding the data
weight = 2, opacity = 1, color = "white", dashArray = "3", fillOpacity = 0.7, # adding color specs
popup = paste("County:", leaflet_sf$name, '<br>',
"Cluster #:", leaflet_sf$cluster_group, '<br>',
"Population Density:", round(leaflet_sf$popDensity, 1), '<br>')) %>% #pop-up Menu
addLegend(position = "bottomleft", pal = leafletPal, values = leaflet_sf$cluster_group,
title = "Cluster #", opacity = 1) # legend formatting
In the previous section, we showed that, using solely a scaled and smoothed time series of daily cases per county, the counties can be grouped into categories whose time series have distinct shapes (based on the Euclidean distance measure). In this section, we attempt to model the factors that are associated with the cluster assignment.
multiClassDF = select(clusterCounties, id, cluster_group) %>%
left_join(crossSectionalData, by = 'id') %>%
select(-c(administrative_area_level_2, administrative_area_level_3, id, key_numeric))
saveRDS(multiClassDF, '../Data/multiClassDF.rds') # saving the data
skim(multiClassDF) # printing a nice summary table of the data
| Name | multiClassDF |
|---|---|
| Number of rows | 3108 |
| Number of columns | 11 |
| Column type frequency: | |
| character | 3 |
| factor | 2 |
| numeric | 6 |
| Group variables | None |
Variable type: character
skim_variable | n_missing | complete_rate | min | max | empty | n_unique | whitespace |
---|---|---|---|---|---|---|---|
cluster_group | 0 | 1.00 | 2 | 2 | 0 | 3 | 0 |
key_google_mobility | 195 | 0.94 | 14 | 35 | 0 | 2913 | 0 |
fips | 0 | 1.00 | 5 | 5 | 0 | 3108 | 0 |
Variable type: factor
skim_variable | n_missing | complete_rate | ordered | n_unique | top_counts |
---|---|---|---|---|---|
countyType | 0 | 1 | FALSE | 2 | Rur: 1596, Oth: 1512 |
regions | 0 | 1 | FALSE | 10 | E: 524, F: 503, G: 412, C: 372 |
Variable type: numeric
skim_variable | n_missing | complete_rate | mean | sd | p0 | p25 | p50 | p75 | p100 | hist |
---|---|---|---|---|---|---|---|---|---|---|
popDensity | 1 | 1.00 | 369.63 | 6667.59 | 0.24 | 17.28 | 45.31 | 119.57 | 365169.38 | ▇▁▁▁▁ |
percRepVotes | 1 | 1.00 | 63.30 | 15.65 | 4.09 | 54.48 | 66.35 | 74.92 | 96.03 | ▁▂▅▇▃ |
GovernmentResponseIndexMedian | 1 | 1.00 | 47.62 | 7.79 | 26.94 | 44.30 | 47.22 | 50.66 | 70.28 | ▁▂▇▂▁ |
PercentSeniors | 36 | 0.99 | 24.85 | 5.52 | 5.80 | 21.30 | 24.45 | 27.80 | 64.20 | ▁▇▂▁▁ |
icuBedsPer10000Seniors | 36 | 0.99 | 5.49 | 9.43 | 0.00 | 0.00 | 0.00 | 8.71 | 248.16 | ▇▁▁▁▁ |
povertyPercent | 0 | 1.00 | 15.18 | 6.11 | 2.60 | 10.90 | 14.15 | 18.30 | 54.00 | ▆▇▂▁▁ |
multiClassDF %>% plot_boxplot(by = 'cluster_group', ncol = 2L,
ggtheme = theme_bw(),
geom_boxplot_args = list('outlier.shape' = 1))
multiClassDF$cluster_group %<>% as.factor() # convert to a factor
# impute without using cluster_group, key_google_mobility and fips
multiClassImputed = VIM::kNN(multiClassDF, imp_var = FALSE,
dist_var = colnames(multiClassDF)[3:10])
saveRDS(multiClassImputed, '../Data/multiClassImputed.rds') # saving the data
# Creating a df (which will be used for analysis)
df = multiClassImputed # setting df to equal to the multiclass object
df$clustReLeveled = relevel(df$cluster_group, ref = maxCat(df$cluster_group) ) # setting the ref level
df = df %>% select(-c(cluster_group, # removed since it is now redundant with the clustReLeveled variable
key_google_mobility, # removed since they are identifier variables
icuBedsPer10000Seniors, percRepVotes)) # did not sig. improve predictions
saveRDS(df, '../Data/df.rds') # saving the data
df = readRDS('../Data/df.rds') # loading the data
df %<>% select(-fips) # removed since it was only used in the spatial model
df$popDensity = log(df$popDensity) # since it is highly skewed and we are using a linear model
finalModel = quiet(multinom(clustReLeveled ~ ., data = df)) # building the multinomial model
# Printing the model in the markdown
stargazer(finalModel, type = 'html', p.auto = FALSE, out="../Data/multi.html", single.row = FALSE)
Dependent variable:

| | C1 (1) | C3 (2) |
|---|---|---|
| countyTypeRural/Underserved | 0.644*** (0.146) | 0.341** (0.160) |
| popDensity | -0.269*** (0.056) | -0.969*** (0.068) |
| GovernmentResponseIndexMedian | 0.032*** (0.010) | 0.061*** (0.011) |
| PercentSeniors | -0.018 (0.013) | -0.001 (0.013) |
| regionsB | 0.310 (0.386) | 0.108 (0.355) |
| regionsC | -1.263** (0.496) | -0.369 (0.362) |
| regionsD | 1.245*** (0.371) | -1.103*** (0.398) |
| regionsE | 3.546*** (0.353) | -0.485 (0.416) |
| regionsF | 1.199*** (0.388) | 0.501 (0.361) |
| regionsG | 5.401*** (0.505) | 3.681*** (0.501) |
| regionsH | 4.436*** (0.495) | 1.605*** (0.504) |
| regionsI | -0.577 (0.565) | -0.800* (0.421) |
| regionsJ | 1.649*** (0.411) | -0.484 (0.435) |
| povertyPercent | -0.048*** (0.011) | -0.019* (0.012) |
| Constant | -1.712** (0.843) | -0.407 (0.902) |
| Akaike Inf. Crit. | 4,154.940 | 4,154.940 |

Note: *p<0.1; **p<0.05; ***p<0.01 (standard errors in parentheses)
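Because the coefficients above are log relative-risk ratios with respect to the reference cluster (C2, the most common cluster), exponentiating them aids interpretation. The line below is illustrative and not part of our pipeline.
exp(coef(finalModel)) %>% round(2) # relative-risk ratios vs. C2; e.g., exp(0.644) is approximately 1.90 for countyTypeRural/Underserved in C1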
# examining how well the model performed on our entire dataset
# Recall that we are fitting an explanatory model and not a predictive model
predictedClass = predict(finalModel, df)
saveRDS(finalModel, '../Data/finalModel.rds') # saving it
# Computing the Confusion Metrics and By Class Metrics
confMatrix = confusionMatrix(predictedClass, df$clustReLeveled)
saveRDS(confMatrix, '../Data/confMatrix.rds') # saving the data
# Printing the resulting tables nicely
pander(confMatrix$table)
(Rows: predicted cluster; columns: reference cluster.)

| | C2 | C1 | C3 |
|---|---|---|---|
| C2 | 1191 | 224 | 202 |
| C1 | 130 | 835 | 151 |
| C3 | 59 | 75 | 241 |
pander(confMatrix$byClass)
| | Sensitivity | Specificity | Pos Pred Value | Neg Pred Value |
|---|---|---|---|---|
| Class: C2 | 0.863 | 0.7535 | 0.7365 | 0.8732 |
| Class: C1 | 0.7363 | 0.8576 | 0.7482 | 0.8499 |
| Class: C3 | 0.4057 | 0.9467 | 0.6427 | 0.8708 |

| | Precision | Recall | F1 | Prevalence | Detection Rate |
|---|---|---|---|---|---|
| Class: C2 | 0.7365 | 0.863 | 0.7948 | 0.444 | 0.3832 |
| Class: C1 | 0.7482 | 0.7363 | 0.7422 | 0.3649 | 0.2687 |
| Class: C3 | 0.6427 | 0.4057 | 0.4974 | 0.1911 | 0.07754 |

| | Detection Prevalence | Balanced Accuracy |
|---|---|---|
| Class: C2 | 0.5203 | 0.8083 |
| Class: C1 | 0.3591 | 0.797 |
| Class: C3 | 0.1207 | 0.6762 |
pander(confMatrix$overall)
Accuracy | Kappa | AccuracyLower | AccuracyUpper | AccuracyNull |
---|---|---|---|---|
0.7294 | 0.56 | 0.7134 | 0.745 | 0.444 |
AccuracyPValue | McnemarPValue |
---|---|
4.759e-228 | 9.491e-28 |
predictedProbs = fitted(finalModel) # computing predicted probabilities for each of the outcome levels
mapResults = cbind(multiClassDF, predictedProbs) # col binding predProbs for each cluster with multiClassDF
# Finding indices to subset the data
numberOfClusters = unique(mapResults$cluster_group) %>% as.character() %>% length()
startCol = ncol(mapResults) - numberOfClusters + 1
endCol = ncol(mapResults)
# Finding whether the predicted and actual clusters matched for each county
mapResults$LargestProbCluster = colnames(mapResults[, startCol:endCol])[apply(mapResults[, startCol:endCol], 1, which.max)]
mapResults$match = ifelse(mapResults$cluster_group == mapResults$LargestProbCluster, 'Yes', 'No') %>% as.factor()
# Retrieving the U.S. county composite map as a simplefeature (since it has been overwritten)
cty_sf = counties_sf("aeqd") %>% filter(!state %in% c('Alaska', 'Hawaii')) # from albersusa
cty_sf %<>% geo_join(mapResults, by_sp= 'fips', by_df= 'fips')
# Creating a static visual for use in the paper
tiff(filename = '../Figures/clusterMatchMap.tiff', width = 1366, height =768, pointsize = 16)
tm_shape(cty_sf) + tm_polygons('match', title = 'Cluster Match', style = 'cont', palette = "div") +
tm_layout(aes.palette = list(div = list("Yes" = "#CAB2D6", "No" = "#6A3D9A"))) +
tm_credits(paste0('Based on Data from March 01, 2020 - ', endDatePrintV), position=c("right", "bottom"))
invisible( dev.off() ) # to suppress the unwanted output from dev.off
# Creating an interactive visual Using the Leaflet Package
#### Creating a longlat projection (required by leaflet)
leaflet_sf = counties_sf("longlat") %>% filter(!state %in% c('Alaska', 'Hawaii')) # from albersusa
leaflet_sf %<>% geo_join(mapResults, by_sp= 'fips', by_df= 'fips')
#### Setting the Color Scheme
leafletPal = colorFactor(palette = c("#CAB2D6", "#6A3D9A"), levels = c('Yes', 'No'), na.color = "white")
#### The visual
leaflet(height=500) %>% # initializing the leaflet map
setView(lng = -96, lat = 37.8, zoom = 3.8) %>% # setting the view on Continental US
addTiles() %>% # adding the default tiles
addPolygons(data = leaflet_sf, stroke = FALSE, fillColor = ~leafletPal(leaflet_sf$match), # adding the data
weight = 2, opacity = 1, color = "white", dashArray = "3", fillOpacity = 0.7, # adding color specs
popup = paste("County:", leaflet_sf$name, '<br>',
"Cluster #:", leaflet_sf$cluster_group, '<br>',
"Cluster Predicted:", leaflet_sf$LargestProbCluster, '<br>',
"Cluster Match:", leaflet_sf$match, '<br>')) %>% #pop-up Menu
addLegend(position = "bottomleft", pal = leafletPal, values = leaflet_sf$match,
title = "Cluster Match", opacity = 1) # legend formatting
In the code chunk below, we capitalize on the MuMIn package to generate a model selection table of models built from combinations (subsets) of the fixed-effect terms in the global model. More specifically, the code below identifies the combination of predictors (and, hence, the number of model parameters) that minimizes the AIC.
dfNoNAs <- na.omit(df) # since the MuMIn package expects no NAs in the data frame
fm <- quiet(multinom(clustReLeveled ~ ., data = dfNoNAs, na.action = na.fail)) # create model
ms <- quiet(dredge(fm, rank = 'AIC')) # generating the model selection table
# Printing the models with the lowest AIC first
datatable(ms %>% as.data.frame() %>% select(-c(delta, weight)),
extensions = c('FixedColumns', 'Buttons'), options = list(
dom = 'Bfrtip',
scrollX = TRUE,
buttons = c('copy', 'csv', 'excel', 'pdf', 'print'),
fixedColumns = list(leftColumns = 0, rightColumns = 3)),
rownames = FALSE) %>%
formatRound(columns= c('logLik', 'AIC'), digits=0)
With the use of an alternative model selection criterion and this evaluation, we confirm that the optimal multinomial regression model:
In this R Markdown document, we have shown that our proposed two-stage framework for modeling the smoothed and scaled time series of new daily cases can provide insights into the shape of the outbreak’s time series and some of its associated factors. Specifically, we have shown that:
In the appendix, we print all the R packages used in our analysis and their versions to assist with reproducing our results/analysis.
pander(sessionInfo(), compact = TRUE) # printing the session info for reproducibility
R version 4.0.3 (2020-10-10)
Platform: x86_64-w64-mingw32/x64 (64-bit)
locale: LC_COLLATE=English_United States.1252, LC_CTYPE=English_United States.1252, LC_MONETARY=English_United States.1252, LC_NUMERIC=C and LC_TIME=English_United States.1252
attached base packages: grid, stats, graphics, grDevices, utils, datasets, methods and base
other attached packages: conflicted(v.1.0.4), MuMIn(v.1.43.17), caret(v.6.0-86), lattice(v.0.20-41), nnet(v.7.3-15), VIM(v.6.1.0), colorspace(v.2.0-0), NbClust(v.3.0), expsmooth(v.2.3), fma(v.2.4), forecast(v.8.13), fpp2(v.2.4), zoo(v.1.8-8), tmap(v.3.3), leaflet(v.2.0.4.1), tigris(v.1.0), plotly(v.4.9.3), tiff(v.0.1-6), DataExplorer(v.0.8.2), RColorBrewer(v.1.1-2), scales(v.1.1.1), knitr(v.1.31), stargazer(v.5.2.2), pander(v.0.6.3), DT(v.0.17), readxl(v.1.3.1), rvest(v.0.3.6), xml2(v.1.3.2), COVID19(v.2.3.2), skimr(v.2.1.2), dataPreparation(v.1.0.1), progress(v.1.2.2), Matrix(v.1.3-2), lubridate(v.1.7.9.2), janitor(v.2.1.0), magrittr(v.2.0.1), forcats(v.0.5.1), stringr(v.1.4.0), dplyr(v.1.0.3), purrr(v.0.3.4), readr(v.1.4.0), tidyr(v.1.1.2), tibble(v.3.0.5), tidyverse(v.1.3.0), albersusa(v.0.4.1), devtools(v.2.3.2), usethis(v.2.0.0), pacman(v.0.5.1) and ggplot2(v.3.3.3)
loaded via a namespace (and not attached): tidyselect(v.1.1.0), htmlwidgets(v.1.5.3), ranger(v.0.12.1), maptools(v.1.0-2), pROC(v.1.17.0.1), munsell(v.0.5.0), codetools(v.0.2-18), units(v.0.6-7), withr(v.2.4.1), highr(v.0.8), uuid(v.0.1-4), rstudioapi(v.0.13), stats4(v.4.0.3), robustbase(v.0.93-7), vcd(v.1.4-8), TTR(v.0.24.2), labeling(v.0.4.2), repr(v.1.1.3), farver(v.2.0.3), rprojroot(v.2.0.2), vctrs(v.0.3.6), generics(v.0.1.0), ipred(v.0.9-9), xfun(v.0.20), R6(v.2.5.0), cachem(v.1.0.1), assertthat(v.0.2.1), networkD3(v.0.4), rgeos(v.0.5-5), gtable(v.0.3.0), lwgeom(v.0.2-5), processx(v.3.4.5), timeDate(v.3043.102), rlang(v.0.4.10), splines(v.4.0.3), rgdal(v.1.5-23), lazyeval(v.0.2.2), ModelMetrics(v.1.2.2.2), dichromat(v.2.0-0), broom(v.0.7.4), reshape2(v.1.4.4), yaml(v.2.2.1), abind(v.1.4-5), modelr(v.0.1.8), crosstalk(v.1.1.1), backports(v.1.2.1), quantmod(v.0.4.18), lava(v.1.6.8.1), tools(v.4.0.3), ellipsis(v.0.3.1), raster(v.3.4-5), sessioninfo(v.1.1.1), Rcpp(v.1.0.6), plyr(v.1.8.6), base64enc(v.0.1-3), classInt(v.0.4-3), ps(v.1.5.0), prettyunits(v.1.1.1), rpart(v.4.1-15), fracdiff(v.1.5-1), tmaptools(v.3.1-1), haven(v.2.3.1), fs(v.1.5.0), leafem(v.0.1.3), data.table(v.1.13.6), openxlsx(v.4.2.3), lmtest(v.0.9-38), reprex(v.1.0.0), pkgload(v.1.1.0), hms(v.1.0.0), evaluate(v.0.14), XML(v.3.99-0.5), rio(v.0.5.16), gridExtra(v.2.3), testthat(v.3.0.1), compiler(v.4.0.3), KernSmooth(v.2.23-18), crayon(v.1.4.0), htmltools(v.0.5.1.1), DBI(v.1.1.1), dbplyr(v.2.0.0), MASS(v.7.3-53), rappdirs(v.0.3.3), sf(v.0.9-7), boot(v.1.3-26), car(v.3.0-10), cli(v.2.3.0), quadprog(v.1.5-8), gower(v.0.2.2), parallel(v.4.0.3), igraph(v.1.2.6), pkgconfig(v.2.0.3), foreign(v.0.8-81), laeken(v.0.5.1), sp(v.1.4-5), recipes(v.0.1.15), foreach(v.1.5.1), prodlim(v.2019.11.13), snakecase(v.0.11.0), callr(v.3.5.1), digest(v.0.6.27), rmarkdown(v.2.6), cellranger(v.1.1.0), leafsync(v.0.1.0), curl(v.4.3), urca(v.1.3-0), lifecycle(v.0.2.0), nlme(v.3.1-151), jsonlite(v.1.7.2), tseries(v.0.10-48), carData(v.3.0-4), desc(v.1.2.0), viridisLite(v.0.3.0), pillar(v.1.4.7), survival(v.3.2-7), fastmap(v.1.1.0), httr(v.1.4.2), DEoptimR(v.1.0-8), pkgbuild(v.1.2.0), glue(v.1.4.2), xts(v.0.12.1), remotes(v.2.2.0), zip(v.2.1.1), png(v.0.1-7), iterators(v.1.0.13), leaflet.providers(v.1.9.0), class(v.7.3-18), stringi(v.1.5.3), stars(v.0.5-1), memoise(v.2.0.0) and e1071(v.1.7-4)
Email: fmegahed@miamioh.edu | Phone: +1-513-529-4185 | Website: Miami University Official
Email: farmerl2@miamioh.edu | Phone: +1-513-529-4823 | Website: Miami University Official
Email: longwen.zhao@slu.edu | Website: LinkedIn
Email: steve.rigdon@slu.edu | Website: Saint Louis University Official