Statistics with JMP
| Main Authors: | , |
|---|---|
| Medium: | E-book |
| Language: | English |
| Published: | New York : John Wiley & Sons, Incorporated, 2016 |
| Edition: | 1 |
| Topics: | |
| ISBN: | 9781119097150, 1119097150 |
| Online Access: | Get full text |
Contents:
- 15.1 Equal Costs for All Observations -- 15.2 Unequal Costs for the Observations -- 16 Testing Equivalence -- 16.1 Shortcomings of Classical Hypothesis Tests -- 16.2 The Principles of Equivalence Tests -- 16.3 An Equivalence Test for Two Population Means -- 17 The Estimation and Testing of Correlation and Association -- 17.1 The Pearson Correlation Coefficient -- 17.2 Spearman's Rank Correlation Coefficient -- 17.3 A Test for the Independence of Two Qualitative Variables -- 18 An Introduction to Regression Modeling -- 18.1 From a Theory to a Model -- 18.2 A Statistical Model -- 18.3 Causality -- 18.4 Linear and Nonlinear Regression Models -- 19 Simple Linear Regression -- 19.1 The Simple Linear Regression Model -- 19.2 Estimation of the Model -- 19.3 The Properties of Least Squares Estimators -- 19.4 The Estimation of σ² -- 19.5 Statistical Inference for β₀ and β₁ -- 19.6 The Quality of the Simple Linear Regression Model -- 19.7 Predictions -- 19.8 Regression Diagnostics -- Notes -- Appendix A The Binomial Distribution -- Appendix B The Standard Normal Distribution -- Appendix C The χ²-Distribution -- Appendix D Student's t-Distribution -- Appendix E The Wilcoxon Signed-Rank Test -- Appendix F The Shapiro-Wilk Test -- Appendix G Fisher's F-Distribution -- Appendix H The Wilcoxon Rank-Sum Test -- Appendix I The Studentized Range or Q-Distribution -- Appendix J The Two-Tailed Dunnett Test -- Appendix K The One-Tailed Dunnett Test -- Appendix L The Kruskal-Wallis Test -- Appendix M The Rank Correlation Test -- Index -- EULA
- 8.1 Tests for Two Population Means for Independent Samples -- 8.2 A Hypothesis Test for Two Population Proportions -- 8.3 A Hypothesis Test for Two Population Variances -- 8.4 Hypothesis Tests for Two Independent Samples in JMP -- Notes -- 9 A Nonparametric Hypothesis Test for the Medians of Two Independent Samples -- 9.1 The Hypotheses Tested -- 9.2 Exact p-Values in the Absence of Ties -- 9.3 Exact p-Values in the Presence of Ties -- 9.4 Approximate p-Values -- Notes -- 10 Hypothesis Tests for the Means of Two Paired Samples -- 10.1 The Hypotheses Tested -- 10.2 The Procedure -- 10.3 Examples -- 10.4 The Technical Background -- 10.5 Generalized Hypothesis Tests -- 10.6 A Confidence Interval for a Difference of Two Population Means -- Notes -- 11 Two Nonparametric Hypothesis Tests for Paired Samples -- 11.1 The Sign Test -- 11.2 The Wilcoxon Signed-Rank Test -- 11.3 Contradictory Results -- Notes -- Part Four More Than Two Populations -- 12 Hypothesis Tests for More Than Two Population Means: One-Way Analysis of Variance -- 12.1 One-Way Analysis of Variance -- 12.2 The Test -- 12.3 One-Way Analysis of Variance in JMP -- 12.4 Pairwise Comparisons -- 12.5 The Relation Between a One-Way Analysis of Variance and a t-Test for Two Population Means -- 12.6 Power -- 12.7 Analysis of Variance for Nonnormal Distributions and Unequal Variances -- Notes -- 13 Nonparametric Alternatives to an Analysis of Variance -- 13.1 The Kruskal-Wallis Test -- 13.2 The van der Waerden Test -- 13.3 The Median Test -- 13.4 JMP -- Notes -- 14 Hypothesis Tests for More Than Two Population Variances -- 14.1 Bartlett's Test -- 14.2 Levene's Test -- 14.3 The Brown-Forsythe Test -- 14.4 O'Brien's Test -- 14.5 JMP -- 14.6 The Welch Test -- Notes -- Part Five Additional Useful Tests and Procedures -- 15 The Design of Experiments and Data Collection
- Intro -- Title page -- Copyright -- Dedication -- Preface -- Acknowledgments -- Part One Estimators and Tests -- 1 Estimating Population Parameters -- 1.1 Introduction: Estimators Versus Estimates -- 1.2 Estimating a Mean Value -- 1.3 Criteria for Estimators -- 1.4 Methods for the Calculation of Estimators -- 1.5 The Sample Mean -- 1.6 The Sample Proportion -- 1.7 The Sample Variance -- 1.8 The Sample Standard Deviation -- 1.9 Applications -- Notes -- 2 Interval Estimators -- 2.1 Point and Interval Estimators -- 2.2 Confidence Intervals for a Population Mean with Known Variance -- 2.3 Confidence Intervals for a Population Mean with Unknown Variance -- 2.4 Confidence Intervals for a Population Proportion -- 2.5 Confidence Intervals for a Population Variance -- 2.6 More Confidence Intervals in JMP -- 2.7 Determining the Sample Size -- Notes -- 3 Hypothesis Tests -- 3.1 Key Concepts -- 3.2 Testing Hypotheses About a Population Mean -- 3.3 The Probability of a Type II Error and the Power -- 3.4 Determination of the Sample Size -- 3.5 JMP -- 3.6 Some Important Notes Concerning Hypothesis Testing -- Notes -- Part Two One Population -- 4 Hypothesis Tests for a Population Mean, Proportion, or Variance -- 4.1 Hypothesis Tests for One Population Mean -- 4.2 Hypothesis Tests for a Population Proportion -- 4.3 Hypothesis Tests for a Population Variance -- 4.4 The Probability of a Type II Error and the Power -- Notes -- 5 Two Hypothesis Tests for the Median of a Population -- 5.1 The Sign Test -- 5.2 The Wilcoxon Signed-Rank Test -- Notes -- 6 Hypothesis Tests for the Distribution of a Population -- 6.1 Testing Probability Distributions -- 6.2 Testing Probability Densities -- 6.3 Discussion -- Notes -- Part Three Two Populations -- 7 Independent Versus Paired Samples -- 8 Hypothesis Tests for the Means, Proportions, or Variances of Two Independent Samples
- 9.2.3 The Two-Tailed Test -- 9.3 Exact p-Values in the Presence of Ties -- 9.4 Approximate p-Values -- 9.4.1 The Right-Tailed Test -- 9.4.2 The Left-Tailed Test -- 9.4.3 The Two-Tailed Test -- 10 Hypothesis Tests for the Means of Two Paired Samples -- 10.1 The Hypotheses Tested -- 10.2 The Procedure -- 10.2.1 The Starting Point -- 10.2.2 Known σ_D -- 10.2.3 Unknown σ_D -- 10.3 Examples -- 10.4 The Technical Background -- 10.5 Generalized Hypothesis Tests -- 10.6 A Confidence Interval for a Difference of Two Population Means -- 10.6.1 Known σ_D -- 10.6.2 Unknown σ_D -- 11 Two Nonparametric Hypothesis Tests for Paired Samples -- 11.1 The Sign Test -- 11.1.1 The Hypotheses Tested -- 11.1.2 Practical Implementation -- 11.1.3 JMP -- 11.2 The Wilcoxon Signed-Rank Test -- 11.2.1 The Hypotheses Tested -- 11.2.2 Practical Implementation -- 11.2.3 Approximate p-Values -- 11.2.4 JMP -- 11.3 Contradictory Results -- Part Four More Than Two Populations -- 12 Hypothesis Tests for More Than Two Population Means: One-Way Analysis of Variance -- 12.1 One-Way Analysis of Variance -- 12.2 The Test -- 12.2.1 Variance Within and Between Groups -- 12.2.2 The Test Statistic -- 12.2.3 The Decision Rule and the p-Value -- 12.2.4 The ANOVA Table -- 12.3 One-Way Analysis of Variance in JMP -- 12.4 Pairwise Comparisons -- 12.4.1 The Bonferroni Method -- 12.4.2 Tukey's Method -- 12.4.3 Dunnett's Method -- 12.5 The Relation Between a One-Way Analysis of Variance and a t-Test for Two Population Means -- 12.6 Power -- 12.6.1 The Noncentral F-Distribution -- 12.6.2 The Noncentral F-Distribution and Analysis of Variance -- 12.6.3 The Power and the Probability of a Type II Error -- 12.6.4 Determining the Sample Size and Power in JMP -- 12.7 Analysis of Variance for Nonnormal Distributions and Unequal Variances -- 13 Nonparametric Alternatives to an Analysis of Variance -- 13.1 The Kruskal-Wallis Test
- 19.1.1 Examples
- Intro -- Statistics with JMP: Hypothesis Tests, ANOVA and Regression -- Contents -- Preface -- Software -- Data Files -- Acknowledgments -- Part One Estimators and Tests -- 1 Estimating Population Parameters -- 1.1 Introduction: Estimators Versus Estimates -- 1.2 Estimating a Mean Value -- 1.2.1 The Mean of a Normally Distributed Population -- 1.2.2 The Mean of an Exponentially Distributed Population -- 1.3 Criteria for Estimators -- 1.3.1 Unbiased Estimators -- 1.3.2 The Efficiency of an Estimator -- 1.4 Methods for the Calculation of Estimators -- 1.5 The Sample Mean -- 1.5.1 The Expected Value and the Variance -- 1.5.2 The Probability Density of the Sample Mean for a Normally Distributed Population -- 1.5.3 The Probability Density of the Sample Mean for a Nonnormally Distributed Population -- 1.5.4 An Illustration of the Central Limit Theorem -- 1.6 The Sample Proportion -- 1.7 The Sample Variance -- 1.7.1 The Expected Value -- 1.7.2 The χ²-Distribution -- 1.7.3 The Relation Between the Standard Normal and the χ²-Distribution -- 1.7.4 The Probability Density of the Sample Variance -- 1.8 The Sample Standard Deviation -- 1.9 Applications -- 2 Interval Estimators -- 2.1 Point and Interval Estimators -- 2.2 Confidence Intervals for a Population Mean with Known Variance -- 2.2.1 The Percentiles of the Standard Normal Density -- 2.2.2 Computing a Confidence Interval -- 2.2.3 The Width of a Confidence Interval -- 2.2.4 The Margin of Error -- 2.3 Confidence Intervals for a Population Mean with Unknown Variance -- 2.3.1 The Student t-Distribution -- 2.3.2 The Application of the t-Distribution to Construct Confidence Intervals -- 2.4 Confidence Intervals for a Population Proportion -- 2.4.1 A First Interval Estimator Based on the Normal Distribution -- 2.4.2 A Second Interval Estimator Based on the Normal Distribution
- 5.2.2 The Starting Point of the Signed-Rank Test -- 5.2.3 Exact p-Values -- 5.2.4 Exact p-Values for Ties -- 5.2.5 Approximate p-Values Based on the Normal Distribution -- 5.2.6 Approximate p-Values Based on the t-Distribution -- 6 Hypothesis Tests for the Distribution of a Population -- 6.1 Testing Probability Distributions -- 6.1.1 Known Parameters -- 6.1.2 Unknown Parameters -- 6.1.3 χ²-Tests for Qualitative Variables -- 6.2 Testing Probability Densities -- 6.2.1 The Normal Probability Density -- 6.2.2 Other Continuous Densities -- 6.3 Discussion -- Part Three Two Populations -- 7 Independent Versus Paired Samples -- 8 Hypothesis Tests for the Means, Proportions, or Variances of Two Independent Samples -- 8.1 Tests for Two Population Means for Independent Samples -- 8.1.1 The Starting Point -- 8.1.2 Known Variances σ₁² and σ₂² -- 8.1.3 Unknown Variances σ₁² and σ₂² -- 8.1.4 Confidence Intervals for a Difference in Population Means -- 8.2 A Hypothesis Test for Two Population Proportions -- 8.2.1 The Starting Point -- 8.2.2 The Right-Tailed Test -- 8.2.3 The Left-Tailed Test -- 8.2.4 The Two-Tailed Test -- 8.2.5 Generalized Hypothesis Tests -- 8.2.6 The Confidence Interval for a Difference in Population Proportions -- 8.3 A Hypothesis Test for Two Population Variances -- 8.3.1 Fisher's F-Distribution -- 8.3.2 The F-Test for the Comparison of Two Population Variances -- 8.3.3 The Confidence Interval for a Quotient of Two Population Variances -- 8.4 Hypothesis Tests for Two Independent Samples in JMP -- 8.4.1 Two Population Means -- 8.4.2 Two Population Proportions -- 8.4.3 Two Population Variances -- 9 A Nonparametric Hypothesis Test for the Medians of Two Independent Samples -- 9.1 The Hypotheses Tested -- 9.1.1 The Procedure -- 9.1.2 The Starting Point -- 9.2 Exact p-Values in the Absence of Ties -- 9.2.1 The Right-Tailed Test -- 9.2.2 The Left-Tailed Test
- 13.1.1 Computing the Test Statistic -- 13.1.2 The Behavior of the Test Statistic -- 13.1.3 Exact p-Values -- 13.1.4 Approximate p-Values -- 13.2 The van der Waerden Test -- 13.3 The Median Test -- 13.4 JMP -- 14 Hypothesis Tests for More Than Two Population Variances -- 14.1 Bartlett's Test -- 14.1.1 The Test Statistic -- 14.1.2 The Technical Background -- 14.1.3 The p-Value -- 14.2 Levene's Test -- 14.3 The Brown-Forsythe Test -- 14.4 O'Brien's Test -- 14.5 JMP -- 14.6 The Welch Test -- Part Five Additional Useful Tests and Procedures -- 15 The Design of Experiments and Data Collection -- 15.1 Equal Costs for All Observations -- 15.1.1 Equal Variances -- 15.1.2 Unequal Variances -- 15.2 Unequal Costs for the Observations -- 16 Testing Equivalence -- 16.1 Shortcomings of Classical Hypothesis Tests -- 16.2 The Principles of Equivalence Tests -- 16.2.1 The Use of Two One-Sided Tests -- 16.2.2 The Use of a Confidence Interval -- 16.3 An Equivalence Test for Two Population Means -- 16.3.1 Independent Samples -- 16.3.2 Paired Samples -- 17 The Estimation and Testing of Correlation and Association -- 17.1 The Pearson Correlation Coefficient -- 17.1.1 A Test for ρ = 0 -- 17.1.2 A Test for ρ = ρ₀ ≠ 0 -- 17.1.3 The Confidence Interval -- 17.2 Spearman's Rank Correlation Coefficient -- 17.2.1 The Approximate Test for ρ_S = 0 -- 17.2.2 The Exact Test for ρ_S = 0 -- 17.2.3 The Approximate Test for ρ_S = ρ_S,0 ≠ 0 -- 17.2.4 The Confidence Interval -- 17.3 A Test for the Independence of Two Qualitative Variables -- 17.3.1 The Contingency Table -- 17.3.2 The Functioning of the Test -- 17.3.3 The Homogeneity Test -- 18 An Introduction to Regression Modeling -- 18.1 From a Theory to a Model -- 18.2 A Statistical Model -- 18.3 Causality -- 18.4 Linear and Nonlinear Regression Models -- 19 Simple Linear Regression -- 19.1 The Simple Linear Regression Model
- 2.4.3 An Interval Estimator Based on the Binomial Distribution -- 2.5 Confidence Intervals for a Population Variance -- 2.6 More Confidence Intervals in JMP -- 2.7 Determining the Sample Size -- 2.7.1 The Population Mean -- 2.7.2 The Population Proportion -- 3 Hypothesis Tests -- 3.1 Key Concepts -- 3.2 Testing Hypotheses About a Population Mean -- 3.2.1 The Right-Tailed Test -- 3.2.2 The Left-Tailed Test -- 3.2.3 The Two-Tailed Test -- 3.3 The Probability of a Type II Error and the Power -- 3.4 Determination of the Sample Size -- 3.5 JMP -- 3.6 Some Important Notes Concerning Hypothesis Testing -- 3.6.1 Fixing the Significance Level -- 3.6.2 A Note on the "Acceptance" of the Null Hypothesis -- 3.6.3 Statistical and Practical Significance -- Part Two One Population -- 4 Hypothesis Tests for a Population Mean, Proportion, or Variance -- 4.1 Hypothesis Tests for One Population Mean -- 4.1.1 The Right-Tailed Test -- 4.1.2 The Left-Tailed Test -- 4.1.3 The Two-Tailed Test -- 4.1.4 Nonnormal Data -- 4.1.5 The Use of JMP -- 4.2 Hypothesis Tests for a Population Proportion -- 4.2.1 Tests Based on the Normal Distribution -- 4.2.2 Tests Based on the Binomial Distribution -- 4.2.3 Testing Proportions in JMP -- 4.3 Hypothesis Tests for a Population Variance -- 4.3.1 The Right-Tailed Test -- 4.3.2 The Left-Tailed Test -- 4.3.3 The Two-Tailed Test -- 4.3.4 The Use of JMP -- 4.4 The Probability of a Type II Error and the Power -- 4.4.1 Tests for a Population Mean -- 4.4.2 Tests for a Population Proportion -- 4.4.3 Tests for a Population Variance and Standard Deviation -- 5 Two Hypothesis Tests for the Median of a Population -- 5.1 The Sign Test -- 5.1.1 The Starting Point of the Sign Test -- 5.1.2 Exact p-Values -- 5.1.3 Approximate p-Values Based on the Normal Distribution -- 5.2 The Wilcoxon Signed-Rank Test -- 5.2.1 The Use of Ranks

