Bibliographic Record Display
-
Title:Modern statistics for the social and behavioral sciences : a practical introduction / Rand Wilcox.
-
Author/Creator:Wilcox, Rand R.
-
Published/Created:Boca Raton, FL : CRC Press, ©2012.
-
Holdings
-
Location:KOERNER LIBRARY stacks (Floor 1)
-
Call Number: HA29 .W51367 2012
-
Number of Items:1
-
Status:c.1 On loan - Due on 04-10-2024
-
Library of Congress Subjects:Social sciences--Statistical methods.
Psychology--Statistical methods.
-
Description:xx, 840 p. : ill. ; 25 cm.
-
Notes:Includes bibliographical references and index.
-
ISBN:9781439834565 (hbk. : alk. paper)
1439834563 (hbk. : alk. paper)
-
Contents:Machine generated contents note: 1. Introduction
1.1. Samples versus Populations
1.2. Software
1.3. R Basics
1.3.1. Entering Data
1.3.2. R Functions and Packages
1.3.3. Data Sets
1.3.4. Arithmetic Operations
2. Numerical And Graphical Summaries Of Data
2.1. Basic Summation Notation
2.2. Measures of Location
2.2.1. Sample Mean
2.2.2. R Function Mean
2.2.3. Sample Median
2.2.4. R Function for the Median
2.2.5. Criticism of the Median: It Might Trim Too Many Values
2.2.6. R Function for the Trimmed Mean
2.2.7. Winsorized Mean
2.2.8. R Function winmean
2.2.9. What Is a Measure of Location?
2.3. Measures of Variation or Scale
2.3.1. Sample Variance and Standard Deviation
2.3.2. R Functions for the Variance and Standard Deviation
2.3.3. Interquartile Range
2.3.4. R Function idealf
2.3.5. Winsorized Variance
2.3.6. R Function winvar
2.3.7. Median Absolute Deviation
2.3.8. R Function mad
2.3.9. Average Absolute Distance from the Median
2.3.10. Other Robust Measures of Variation
2.3.11. R Functions bivar, pbvar, tauvar, and tbs
2.4. Detecting Outliers
2.4.1. Method Based on the Mean and Variance
2.4.2. Better Outlier Detection Rule: The MAD-Median Rule
2.4.3. R Function out
2.4.4. Boxplot
2.4.5. R Function boxplot
2.4.6. Modifications of the Boxplot Rule for Detecting Outliers
2.4.7. R Function outbox
2.4.8. Other Measures of Location
2.4.9. R Functions mom and onestep
2.5. Histograms
2.5.1. R Functions hist and splot
2.6. Kernel Density Estimators
2.6.1. R Functions kdplot and akerd
2.7. Stem-and-Leaf Displays
2.7.1. R Function stem
2.8. Skewness
2.8.1. Transforming Data
2.9. Choosing a Measure of Location
2.10. Covariance and Pearson's Correlation
2.11. Exercises
3. Probability And Related Concepts
3.1. Basic Probability
3.2. Expected Values
3.3. Conditional Probability and Independence
3.4. Population Variance
3.5. Binomial Probability Function
3.6. Continuous Variables and the Normal Curve
3.6.1. Computing Probabilities Associated with Normal Distributions
3.6.2. R Function pnorm
3.7. Understanding the Effects of Non-Normality
3.7.1. Skewness
3.8. Pearson's Correlation and the Population Covariance
3.8.1. Computing the Population Covariance and Pearson's Correlation
3.9. Some Rules about Expected Values
3.10. Chi-Squared Distributions
3.11. Exercises
4. Sampling Distributions And Confidence Intervals
4.1. Random Sampling
4.2. Sampling Distributions
4.2.1. Sampling Distribution of the Sample Mean
4.2.2. Computing Probabilities Associated with the Sample Mean
4.3. Confidence Interval for the Population Mean
4.3.1. Known Variance
4.3.2. Confidence Intervals When σ Is Not Known
4.3.3. R Functions pt and qt
4.3.4. Confidence Interval for the Population Mean Using Student's T
4.3.5. R Function t.test
4.4. Judging Location Estimators Based on Their Sampling Distribution
4.4.1. Trimming and Accuracy: Another Perspective
4.5. Approach to Non-Normality: The Central Limit Theorem
4.6. Student's T and Non-Normality
4.7. Confidence Intervals for the Trimmed Mean
4.7.1. Estimating the Standard Error of a Trimmed Mean
4.7.2. R Function trimse
4.8. Confidence Interval for the Population Trimmed Mean
4.8.1. R Function trimci
4.9. Transforming Data
4.10. Confidence Interval for the Population Median
4.10.1. R Function sint
4.10.2. Estimating the Standard Error of the Sample Median
4.10.3. R Function msmedse
4.10.4. More Concerns about Tied Values
4.11. Remark About MOM and M-Estimators
4.12. Confidence Intervals for the Probability of Success
4.12.1. R Functions binomci and acbinomci
4.13. Exercises
5. Hypothesis Testing
5.1. Basics of Hypothesis Testing
5.1.1. P-Value or Significance Level
5.1.2. R Function t.test
5.1.3. Criticisms of Two-Sided Hypothesis Testing and P-Values
5.1.4. Summary and Generalization
5.2. Power and Type II Errors
5.2.1. Understanding How n, α, and σ Are Related to Power
5.3. Testing Hypotheses about the Mean When σ Is Not Known
5.4. Controlling Power and Determining n
5.4.1. Choosing n Prior to Collecting Data
5.4.2. R Function power.t.test
5.4.3. Stein's Method: Judging the Sample Size When Data Are Available
5.4.4. R Functions stein1 and stein2
5.5. Practical Problems with Student's T Test
5.6. Hypothesis Testing Based on a Trimmed Mean
5.6.1. R Function trimci
5.6.2. R Functions stein1.tr and stein2.tr
5.7. Testing Hypotheses about the Population Median
5.7.1. R Function sintv2
5.8. Making Decisions about Which Measure of Location to Use
5.9. Exercises
6. Regression And Correlation
6.1. Least Squares Principle
6.2. Confidence Intervals and Hypothesis Testing
6.2.1. Classic Inferential Techniques
6.2.2. Multiple Regression
6.2.3. R Functions ols, lm, and olsplot
6.3. Standardized Regression
6.4. Practical Concerns about Least Squares Regression and How They Might Be Addressed
6.4.1. Effect of Outliers on Least Squares Regression
6.4.2. Beware of Bad Leverage Points
6.4.3. Beware of Discarding Outliers among the Y Values
6.4.4. Do Not Assume Homoscedasticity or That the Regression Line Is Straight
6.4.5. Violating Assumptions When Testing Hypotheses
6.4.6. Dealing with Heteroscedasticity: The HC4 Method
6.4.7. R Functions olshc4 and hc4test
6.5. Pearson's Correlation and the Coefficient of Determination
6.5.1. Closer Look at Interpreting r
6.6. Testing H0: ρ = 0
6.6.1. R Functions cor.test and pwr.t.test
6.6.2. R Function pwr.r.test
6.6.3. Testing H0: ρ = 0 When There is Heteroscedasticity
6.6.4. R Function pcorhc4
6.6.5. When Is It Safe to Conclude That Two Variables Are Independent?
6.7. Regression Method for Estimating the Median of Y and Other Quantiles
6.7.1. R Function rqfit
6.8. Detecting Heteroscedasticity
6.8.1. R Function khomreg
6.9. Concluding Remarks
6.10. Exercises
7. Bootstrap Methods
7.1. Bootstrap-t Method
7.1.1. Symmetric Confidence Intervals
7.1.2. Exact Nonparametric Confidence Intervals for Means Are Impossible
7.2. Percentile Bootstrap Method
7.3. Inferences about Robust Measures of Location
7.3.1. Using the Percentile Method
7.3.2. R Functions onesampb, momci, and trimpb
7.3.3. Bootstrap-t Method Based on Trimmed Means
7.3.4. R Function trimcibt
7.4. Estimating Power When Testing Hypotheses about a Trimmed Mean
7.4.1. R Functions powt1est and powt1an
7.5. Bootstrap Estimate of Standard Errors
7.5.1. R Function bootse
7.6. Inferences about Pearson's Correlation: Dealing with Heteroscedasticity
7.6.1. R Function pcorb
7.7. Bootstrap Methods for Least Squares Regression
7.7.1. R Functions hc4wtest, olswbtest, and lsfitci
7.8. Detecting Associations Even When There Is Curvature
7.8.1. R Functions indt and medind
7.9. Quantile Regression
7.9.1. R Functions qregci and rqtest
7.9.2. Test for Homoscedasticity Using a Quantile Regression Approach
7.9.3. R Function qhomt
7.10. Regression: Which Predictors Are Best?
7.10.1. R Function regpre
7.10.2. Least Angle Regression
7.10.3. R Function larsR
7.11. Comparing Correlations
7.11.1. R Functions TWOpov and TWOpNOV
7.12. Empirical Likelihood
7.13. Exercises
8. Comparing Two Independent Groups
8.1. Student's T Test
8.1.1. Choosing the Sample Sizes
8.1.2. R Function power.t.test
8.2. Relative Merits of Student's T
8.3. Welch's Heteroscedastic Method for Means
8.3.1. R Function t.test
8.3.2. Tukey's Three-Decision Rule
8.3.3. Non-Normality and Welch's Method
8.3.4. Three Modern Insights Regarding Methods for Comparing Means
8.4. Methods for Comparing Medians and Trimmed Means
8.4.1. Yuen's Method for Trimmed Means
8.4.2. R Functions yuen and fac2list
8.4.3. Comparing Medians
8.4.4. R Function msmed
8.5. Percentile Bootstrap Methods for Comparing Measures of Location
8.5.1. Using Other Measures of Location
8.5.2. Comparing Medians
8.5.3. R Function medpb2
8.5.4. Some Guidelines on When to Use the Percentile Bootstrap Method
8.5.5. R Functions trimpb2 and pb2gen
8.6. Bootstrap-t Methods for Comparing Measures of Location
8.6.1. Comparing Means
8.6.2. Bootstrap-t Method When Comparing Trimmed Means
8.6.3. R Functions yuenbt and yhbt
8.6.4. Estimating Power and Judging the Sample Sizes
8.6.5. R Functions powest and pow2an
8.7. Permutation Tests
8.7.1. R Function permg
8.8. Rank-Based and Nonparametric Methods
8.8.1. Wilcoxon-Mann-Whitney Test
8.8.2. R Functions wmw and wilcox.test
8.8.3. Handling Tied Values and Heteroscedasticity
8.8.4. Cliff's Method
8.8.5. R Functions cid and cidv2
8.8.6. Brunner-Munzel Method
8.8.7. R Function bmp
8.8.8. Kolmogorov-Smirnov Test
8.8.9. R Function ks
8.8.10. Comparing All Quantiles Simultaneously: An Extension of the Kolmogorov-Smirnov Test
8.8.11. R Function sband
8.9. Graphical Methods for Comparing Groups
8.9.1. Error Bars
8.9.2. R Function ebarplot
8.9.3. Plotting the Shift Function
8.9.4. Plotting the Distributions
8.9.5. R Function sumplot2g
8.9.6. Other Approaches
8.10. Comparing Measures of Scale
8.11. Methods for Comparing Measures of Variation
8.11.1. R Function comvar2
8.11.2. Brown-Forsythe Method
8.11.3. Comparing Robust Measures of Variation
8.12. Measuring Effect Size
8.12.1. R Functions yuenv2 and akp.effect
8.13. Comparing Correlations and Regression Slopes
8.13.1. R Functions twopcor, twolsreg, and tworegwb
8.14. Comparing Two Binomials
8.14.1. Storer-Kim Method
8.14.2. Beal's Method
8.14.3. R Functions twobinom, twobici, and power.prop.test
8.15. Making Decisions about Which Method to Use
8.16. Exercises
9. Comparing Two Dependent Groups
9.1. Paired T Test
9.1.1. When Does the Paired T Test Perform Well?
9.1.2. R Function t.test
9.2. Comparing Robust Measures of Location
9.2.1. R Functions yuend, ydbt, and dmedpb
9.2.2. Comparing Marginal M-Estimators
9.2.3. R Function rmmest
9.3. Handling Missing Values
9.3.1. R Functions rm2miss and rmmismcp
9.4. Different Perspective When Using Robust Measures of Location
9.4.1. R Functions loc2dif and l2drmci
9.5. Sign Test
9.5.1. R Function signt
9.6. Wilcoxon Signed Rank Test
9.6.1. R Function wilcox.test
9.7. Comparing Variances
9.8. Comparing Robust Measures of Scale
9.8.1. R Function rmrvar
9.9. Comparing All Quantiles
9.9.1. R Function lband
9.10. Plots for Dependent Groups
9.10.1. R Function g2plotdifxy
9.11. Exercises
10. One-Way Anova
10.1. Analysis of Variance for Independent Groups
10.1.1. Conceptual Overview
10.1.2. ANOVA via Least Squares Regression and Dummy Coding
10.1.3. R Functions anova, anova1, aov, and fac2list
10.1.4. Controlling Power and Choosing the Sample Sizes
10.1.5. R Functions power.anova.test and anova.power
10.2. Dealing with Unequal Variances
10.2.1. Welch's Test
10.3. Judging Sample Sizes and Controlling Power When Data Are Available
10.3.1. R Functions bdanova1 and bdanova2
10.4. Trimmed Means
10.4.1. R Functions t1way, t1wayv2, and t1wayF
10.4.2. Comparing Groups Based on Medians
10.4.3. R Function med1way
10.5. Bootstrap Methods
10.5.1. Bootstrap-t Method
10.5.2. R Function t1waybt
10.5.3. Two Percentile Bootstrap Methods
10.5.4. R Functions b1way and pbadepth
10.5.5. Choosing a Method
10.6. Random Effects Model
10.6.1. Measure of Effect Size
10.6.2. Heteroscedastic Method
10.6.3. Method Based on Trimmed Means
10.6.4. R Function rananova
10.7. Rank-Based Methods
10.7.1. Kruskal-Wallis Test
10.7.2. R Function kruskal.test
10.7.3. Method BDM
10.7.4. R Function bdm
10.8. Exercises
11. Two-Way And Three-Way Designs
11.1. Basics of a Two-Way ANOVA Design
11.1.1. Interactions
11.1.2. R Functions interaction.plot and interplot
11.1.3. Interactions When There Are More than Two Levels
11.2. Testing Hypotheses about Main Effects and Interactions
11.2.1. R Function anova
11.2.2. Inferences about Disordinal Interactions
11.2.3. Two-Way ANOVA Model
11.3. Heteroscedastic Methods for Trimmed Means, Including Means
11.3.1. R Function t2way
11.4. Bootstrap Methods
11.4.1. R Functions pbad2way and t2waybt
11.5. Testing Hypotheses Based on Medians
11.5.1. R Function m2way
11.6. Rank-Based Method for a Two-Way Design
11.6.1. R Function bdm2way
11.6.2. Patel-Hoel Approach to Interactions
11.7. Three-Way ANOVA
11.7.1. R Functions anova and t3way
11.8. Exercises
12. Comparing More Than Two Dependent Groups
12.1. Comparing Means in a One-Way Design
12.1.1. R Function aov
12.2. Comparing Trimmed Means When Dealing with a One-Way Design
12.2.1. R Functions rmanova and rmdat2mat
12.2.2. Bootstrap-t Method for Trimmed Means
12.2.3. R Function rmanovab
12.3. Percentile Bootstrap Methods for a One-Way Design
12.3.1. Method Based on Marginal Measures of Location
12.3.2. R Function bd1way
12.3.3. Inferences Based on Difference Scores
12.3.4. R Function rmdzero
12.4. Rank-Based Methods for a One-Way Design
12.4.1. Friedman's Test
12.4.2. R Function friedman.test
12.4.3. Method BPRM
12.4.4. R Function bprm
12.5. Comments on Which Method to Use
12.6. Between-by-Within Designs
12.6.1. Method for Trimmed Means
12.6.2. R Functions bwtrim and bw2list
12.6.3. Bootstrap-t Method
12.6.4. R Function tsplitbt
12.6.5. Inferences Based on M-estimators and Other Robust Measures of Location
12.6.6. R Functions sppba, sppbb, and sppbi
12.6.7. Rank-Based Test
12.6.8. R Function bwrank
12.7. Within-by-Within Design
12.7.1. R Function wwtrim
12.8. Three-Way Designs
12.8.1. R Functions bbwtrim, bwwtrim, and wwwtrim
12.8.2. Data Management: R Functions bw2list and bbw2list
12.9. Exercises
13. Multiple Comparisons
13.1. One-Way ANOVA, Independent Groups
13.1.1. Fisher's Least Significant Difference Method
13.1.2. Tukey-Kramer Method
13.1.3. R Function TukeyHSD
13.1.4. Tukey-Kramer and the ANOVA F Test
13.1.5. Step-Down Method
13.1.6. Dunnett's T3
13.1.7. Games-Howell Method
13.1.8. Comparing Trimmed Means
13.1.9. R Function lincon
13.1.10. Alternative Methods for Controlling FWE
13.1.11. Percentile Bootstrap Methods for Comparing Trimmed Means, Medians, and M-estimators
13.1.12. R Functions medpb, tmcppb, pbmcp, and mcppb20
13.1.13. Bootstrap-t Method
13.1.14. R Function linconb
13.1.15. Rank-Based Methods
13.1.16. R Functions cidmul, cidmulv2, and bmpmul
13.2. Two-Way, between-by-between Design
13.2.1. Scheffe's Homoscedastic Method
13.2.2. Heteroscedastic Methods
13.2.3. Extension of Welch-Sidak and Kaiser-Bowden Methods to Trimmed Means
13.2.4. R Function kbcon
13.2.5. R Function con2way
13.2.6. Linear Contrasts Based on Medians
13.2.7. R Functions msmed and mcp2med
13.2.8. Bootstrap Methods
13.2.9. R Functions linconb, mcp2a, and bbmcppb
13.2.10. Patel-Hoel Rank-Based Interaction Method
13.2.11. R Function rimul
13.3. Judging Sample Sizes
13.3.1. Tamhane's Procedure
13.3.2. R Function tamhane
13.3.3. Hochberg's Procedure
13.3.4. R Function hochberg
13.4. Methods for Dependent Groups
13.4.1. Linear Contrasts Based on Trimmed Means
13.4.2. R Function rmmcp
13.4.3. Comparing M-estimators
13.4.4. R Functions rmmcppb, dmedpb, and dtrimpb
13.4.5. Bootstrap-t Method
13.4.6. R Function bptd
13.5. Between-by-within Designs
13.5.1. R Functions bwmcp, bwamcp, bwbmcp, bwimcp, spmcpa, spmcpb, spmcpi, and bwmcppb
13.6. Within-by-within Designs
13.6.1. Three-Way Designs
13.6.2. R Functions con3way, mcp3atm, and rm3mcp
13.6.3. Bootstrap Methods for Three-Way Designs
13.6.4. R Functions bbwmcp, bwwmcp, bbbmcppb, bbwmcppb, bwwmcppb, and wwwmcppb
13.7. Exercises
14. Some Multivariate Methods
14.1. Location, Scatter, and Detecting Outliers
14.1.1. Detecting Outliers via Robust Measures of Location and Scatter
14.1.2. R Functions cov.mve and cov.mcd
14.1.3. More Measures of Location and Covariance
14.1.4. R Functions rmba, tbs, and ogk
14.1.5. R Function out
14.1.6. Projection-Type Outlier Detection Method
14.1.7. R Functions outpro, outproMC, outproad, outproadMC, and out3d
14.1.8. Skipped Estimators of Location
14.1.9. R Function smean
14.2. One-Sample Hypothesis Testing
14.2.1. Comparing Dependent Groups
14.2.2. R Functions smeancrv2, hotel1, and rmdzeroOP
14.3. Two-Sample Case
14.3.1. R Functions smean2, mat2grp, and matsplit
14.4. MANOVA
14.4.1. R Function manova
14.4.2. Robust MANOVA Based on Trimmed Means
14.4.3. R Functions MULtr.anova and MULAOVp
14.4.4. Multivariate Extension of the Wilcoxon-Mann-Whitney Test
14.4.5. Explanatory Measure of Effect Size: A Projection-Type Generalization
14.4.6. R Function mulwmwv2
14.5. Rank-Based Multivariate Methods
14.5.1. Munzel-Brunner Method
14.5.2. R Function mulrank
14.5.3. Choi-Marden Multivariate Rank Test
14.5.4. R Function cmanova
14.6. Multivariate Regression
14.6.1. Multivariate Regression Using R
14.6.2. Robust Multivariate Regression
14.6.3. R Functions mlrreg and mopreg
14.7. Principal Components
14.7.1. R Functions prcomp and regpca
14.7.2. Robust Principal Components
14.7.3. R Functions outpca, robpca, robpcaS, Ppca, and Ppca.summary
14.8. Exercises
15. Robust Regression And Measures Of Association
15.1. Robust Regression Estimators
15.1.1. Theil-Sen Estimator
15.1.2. R Functions tsreg and regplot
15.1.3. Least Median of Squares
15.1.4. Least Trimmed Squares and Least Trimmed Absolute Value Estimators
15.1.5. R Functions lmsreg, ltsreg, and ltareg
15.1.6. M-Estimators
15.1.7. R Function chreg
15.1.8. Deepest Regression Line
15.1.9. R Function mdepreg
15.1.10. Skipped Estimators
15.1.11. R Functions opreg and opregMC
15.1.12. S-estimators and an E-Type Estimator
15.1.13. R Function tsts
15.2. Comments on Choosing a Regression Estimator
15.3. Testing Hypotheses When Using Robust Regression Estimators
15.3.1. R Functions regtest, regtestMC, regci, and regciMC
15.3.2. Comparing Measures of Location via Dummy Coding
15.4. Dealing with Curvature: Smoothers
15.4.1. Cleveland's Smoother
15.4.2. R Functions lowess and lplot
15.4.3. Smoothers Based on Robust Measures of Location
15.4.4. R Functions rplot and rplotsm
15.4.5. More Smoothers
15.4.6. R Functions kerreg, runpd, and qsmcobs
15.4.7. Prediction When X Is Discrete: The R Function rundis
15.4.8. Seeing Curvature with More than Two Predictors
15.4.9. R Function prplot
15.4.10. Some Alternative Methods
15.5. Some Robust Correlations and Tests of Independence
15.5.1. Kendall's tau
15.5.2. Spearman's rho
15.5.3. Winsorized Correlation
15.5.4. R Function wincor
15.5.5. OP Correlation
15.5.6. R Function scor
15.5.7. Inferences about Robust Correlations: Dealing with Heteroscedasticity
15.5.8. R Function corb
15.6. Measuring the Strength of an Association Based on a Robust Fit
15.7. Comparing the Slopes of Two Independent Groups
15.7.1. R Functions reg2ci, runmean2g, and l2plot
15.8. Tests for Linearity
15.8.1. R Functions lintest, lintestMC, and linchk
15.9. Identifying the Best Predictors
15.9.1. R Functions regpord, ts2str, and sm2strv7
15.10. Detecting Interactions and Moderator Analysis
15.10.1. R Function adtest
15.10.2. Graphical Methods for Assessing Interactions
15.10.3. R Functions kercon, runsm2g, regi, ols.plot.inter, and reg.plot.inter
15.11. ANCOVA
15.11.1. Classic ANCOVA
15.11.2. Some Modern ANCOVA Methods
15.11.3. R Functions ancsm, Qancsm, ancova, ancpb, ancbbpb, and ancboot
15.12. Exercises
16. Basic Methods For Analyzing Categorical Data
16.1. Goodness of Fit
16.1.1. R Functions chisq.test and pwr.chisq.test
16.2. Test of Independence
16.2.1. R Function chi.test.ind
16.3. Detecting Differences in the Marginal Probabilities
16.3.1. R Functions contab and mcnemar.test
16.4. Measures of Association
16.4.1. Proportion of Agreement
16.4.2. Kappa
16.4.3. Weighted Kappa
16.4.4. R Function Ckappa
16.5. Logistic Regression
16.5.1. R Functions glm and logreg
16.5.2. Confidence Interval for the Odds Ratio
16.5.3. R Function ODDSR.CI
16.5.4. Smoothers for Logistic Regression
16.5.5. R Functions logrsm, rplot.bin, and logSM
16.6. Exercises.