Multivariate Statistics and Machine Learning: An Introduction to Applied Data Science Using R and Python
Author(s): Denis, Daniel J.
ISBN No.: 9781032454283
Pages: 560
Year: 2025
Format: Trade Paper
Price: $100.09
Dispatch delay: Dispatched within 7 to 15 days
Status: Available (Forthcoming)

Preface
Acknowledgements

PART I - Preliminaries and Foundations

Chapter 0 - Introduction, Motivation, Pedagogy and Ideas About Learning
0.1. The Paradigm Shift (What Has Changed)
0.1.1. A Wide Divide
0.2. A Unified Vision - The Bridge
0.3. The Data Science and Machine Learning Invasion (Questions and Answers)
0.4. Who Should Read this Book?
0.4.1. Textbook Limbo
0.4.2. Theoretical vs. Applied vs. Software Books vs. "Cookbooks"
0.4.2.1. Watered Down Statistics
0.4.3. Prerequisites to Reading this Book
0.5. Pedagogical Approach and the Trade-Offs of Top-Down, Bottom-Up Learning
0.5.1. Top-Down, Bottom-Up Learning
0.5.2. Ways of Writing a Book: Making it Pedagogical Instead of Cryptic
0.5.3. Standing on the Shoulders of Giants (A Companion to Advanced Texts)
0.5.4. Making Equations "Speak"
0.5.5. The Power of Problems
0.5.6. Computing Languages
0.5.7. Notation Used in the Book
0.6. Nobody Learns a Million Things (The Importance of Foundations and Learning How to Learn)
0.6.1. Essential Philosophy of Science and History
0.6.2. Beyond the Jargon, Beyond the Hype
0.7. The Power and Dangers of Analogy and Metaphor (Ways of Understanding)
0.7.1. The Infinite Regress of Knowledge - A Venture into What it Means to "Understand" Something and Why Epistemology is Important
0.7.1.2. Epistemological Maturity
0.8. Format and Organization of Chapters

Chapter 1 - First Principles and Philosophical Foundations
1.1. Science, Statistics, Machine Learning, Artificial Intelligence
1.1.1. Mathematics, Statistics, Computation
1.1.2. Mathematical Systems as a Narrative to Understanding
1.2. The Scope of Data Analysis and Data Science (Expertise in Everything!)
1.2.1. Theoretical vs. Applied Statistics & Specialization
1.3. The Role of Computers
1.3.1. The Nature of Algorithms
1.3.1.2. Algorithmic Stability
1.4. The Importance of Design, Experimental or Otherwise
1.5. Inductive, Deductive, and Other Logics
1.5.1. Consistency and Gödel's Incompleteness Theorems
1.5.1.2. What is the Relevance of Gödel?
1.6. Supervised vs. Unsupervised Learning
1.6.1. Fuzzy Distinctions
1.7. Theoretical vs. Empirical Justification
1.7.1. Airplanes and Oceanic Submersibles
1.7.2. Will the Bridge Stay Up if the Mathematics Fail?
1.8. Level of Analysis Problem
1.9. Base Rates, Common Denominators and Degrees
1.9.1. Base Rates and Splenic Masses
1.9.2. Probability Neglect
1.9.3. The "Zero Group"
1.10. Statistical Regularities and Perceptions of Risk
1.10.1. Beck Depression Inventory: How Depressed Are You?
1.11. Decision, Risk Analysis and Optimization
1.11.1. The Risk of Making a Wrong Decision
1.11.2. Statistical Lives and Optimization
1.11.3. Medical Decision-Making and Dominating Criteria
1.12. All Knowledge, Scientific and Other, is Tentative
1.13. Occam's Razor
1.13.1. Parsimony vs. Complexity Trade-Off
1.14. Overfitting vs. Underfitting
1.14.1. Solutions to Overfitting
1.14.2. The Idea of Regularization
1.15. The Measurement Problem
1.15.1. What is Data?
1.15.2. The Philosophy and Scales of Measurement
1.15.3. Reliability
1.15.3.1. Coefficient Alpha
1.15.3.2. Test-Retest Reliability
1.15.4. Validity
1.15.5. Scales of Measurement
1.15.6. Likert Scales
1.15.6.1. Statistical Models for Likert Data
1.15.6.2. Models for Ordinal and Monotonically Increasing/Decreasing Data
Overview of Statistical and Machine Learning Concepts
1.16. Probably Approximately Correct
1.17. No Free Lunch Theorem
1.18. V-C Dimension and Complexity
1.19. Parametric vs. Nonparametric Learning Methods
1.19.1. Flexibility and Number of Parameters
1.19.1.1. Concept of Degrees of Freedom
1.19.2. Instance or Memory-Based Learning
1.19.3. Revisiting Classical Nonparametric Tests
1.20. Dimension Reduction, Distance, and Error Functions: Commonalities in Modeling
1.20.1. Dimension Reduction: What's the Big Idea?
1.20.2. The Curse of Dimensionality
1.21. Distance
1.22. Error Minimization
1.23. Training vs. Test Error
1.24. Cross-Validation and Model Selection
1.25. Monte Carlo Methods
1.26. Missing Data
1.27. Quantitative Approaches to Data Analysis
1.28. Chapter Review Exercises

Chapter 2 - Mathematical and Statistical Foundations
2.1. Mathematical "Previews" vs. the "Appendix" Approach (Why Previews are Better)
2.1.2. About Proofs
2.2. Elementary Probability and Fundamental Statistics
2.3. Interpretations of Probability
2.4. Mathematical Probability
2.4.1. Unions and Intersections of Events
2.5. Conditional Probability
2.5.1. Unconditional vs. Conditional Statistical Models
2.6. Probabilistic Independence
2.6.1. Everything is About Independence vs. Dependence!
2.7. Marginal vs. Conditional Distributions
2.8. Independence Implies Covariance of Zero, But Covariance of Zero Does Not (Necessarily) Imply Independence
2.9. Sensitivity and Specificity: More Conditional Probabilities
2.10. Bayes' Theorem and Conditional Probabilities
2.10.1. Bayes' Factor
2.10.2. Bayesian Model Selection
2.10.3. Bayes' Theorem as Rational Belief or Theorizing
2.11. Law of Large Numbers
2.11.1. Law of Large Numbers and the Idea of Committee Machines
2.12. Random Variables and Probability Density Functions
2.13. Convergence of Random Variables
2.14. Probability Density Functions
2.15. Normal (Gaussian) Distributions
2.15.1. Univariate Gaussian
2.15.2. Mixtures of Gaussians
2.15.3. Evaluating Univariate Normality
2.15.4. Multivariate Gaussian
2.15.5. Evaluating Multivariate Normality
2.16. Binomial Distributions
2.16.1. Approximation to the Normal Distribution
2.17. Multinomial Distributions
2.18. Poisson Distribution
2.19. Chi-Square Distributions
2.20. Expectation and Expected Value
2.21. Measures of Central Tendency
2.21.1. The Arithmetic Mean (Average)
2.21.1.1. Averaging Over Cases (Why Thinking in Terms of Averages Can Be Dangerous)
2.21.2. The Median
2.22. Measures of Variability
2.22.1. Variance and Standard Deviation
2.22.2. Mean Absolute Deviation
2.23. Skewness and Kurtosis
2.24. Coefficient of Variation
2.25. Statistical Estimation
2.26. Bias-Variance Trade-Off
2.26.1. Is Irreducible Error Really Irreducible?
2.27. Maximum Likelihood Estimation
2.27.1. Why ML is so Popular and Alternatives
2.27.2. Estimation and Confidence Intervals
2.28. The Bootstrap (A Way of Estimating Nonparametrically)
2.28.1. Simple Examples of the Bootstrap
2.28.2. Why not Bootstrap Everything?
2.28.3. Variations and Extensions of the Bootstrap
2.29. Elements of Classic Null Hypothesis Significance Testing
2.29.1. One-Tailed vs. Two-Tailed Tests
2.29.2. Effect Size
2.29.3. Cohen's d (Measure of Effect Size)
2.29.4. Are p-values that Evil?
2.29.5. Absolute vs. Relative Size of Effect (Context Matters)
2.29.6. Comparability of Effect Sizes Across Studies
2.29.7. Operationalizing Predictors
2.30. Central Limit Theorem
2.31. Covariance and Correlation
2.31.1. Why Does rxy Have Limits -1 to +1?
2.31.2. Covariance and Correlation in R and Python
2.31.3. Correlating Linear Combinations
2.31.4. Covariance and Correlation Matrices
2.32. Z-Scores and Z-Tests
2.32.1. Z-tests and T-tests for the Mean
2.33. Unequal Variances: Welch-Satterthwaite Approximation
2.34. Paired Data
2.35. Review Exercises
2.36. Linear Algebra and Matrices
2.36.1. Vectors
2.36.1.2. Vector Spaces and Fields
2.36.1.3. Zero, Unit Vectors, and One-Hot Vectors
2.36.1.4. Transpose of a Vector
2.36.1.5. Vector Addition and Length
2.36.1.6. Eigen Analysis and Decomposition
2.36.1.7. Points vs. Vectors
2.37. Matrices
2.37.1. Identity Matrix
2.37.2. Transpose of a Matrix
2.37.3. Symmetric Matrices
2.37.4. Matrix Addition and Multiplication
2.37.5. Meaning of Matrices (Matrices as Data and Transformations)
2.37.6. Kernel (Null Space)
2.37.7. Trace of a Matrix
2.38. Linear Combinations
2.39. Determinants
2.40. Means and Variances of Matrices
2.41. Determinant as a Generalized Variance
2.42. Matrix Inverse
2.42.1. Nonexistence of an Inverse and Singularity
2.43. Quadratic Forms
2.44. Positive Definite Matrices
2.45. Inner Products
2.46. Linear Independence
2.47. Rank of a Matrix
2.48. Orthogonal Matrices
2.49. Kernels, the Kernel Trick, and Dual Representations
2.49.1. When are Kernel Methods Useful?
2.50. Systems of Equations
2.51. Distance
2.52. Projections and Basis
2.53. The Meaning of Linearity
2.54. Basis and Dimension
2.54.1. Orthogonal Basis
2.55. Review Exercises
2.56. Calculus and Optimization
2.57. Functions, Approximation and Continuity
2.57.1. Definition of Continuity
2.58. The Derivative
2.58.1. Local Behavior and Approximation
2.58.2. Composite Functions and Basis Expansions
2.59. The Partial Derivative
2.60. Optimization and Gradients
2.60.1. What Does "Optimal" Mean?
2.60.2. Minima and Maxima via Calculus
2.60.3. Convex vs. Non-Convex Functions and ...

