
1
**Uni-/bivariate problems**

Check the assumptions: independence (runs/phase-iteration test), normal distribution (KS test / chi-square test), outliers (Dixon / Chebyshev's theorem). Assumptions met? Yes: parametric methods. No: non-parametric methods.

- Distribution test: KS test / chi-square test
- Comparison of means with the population parameter: one-sample t-test (parametric) / chi-square test (non-parametric)
- Comparison of 2 independent samples: two-sample t-test with F-test / Levene test (parametric) / U-test (non-parametric)
- Comparison of 2 paired samples: t-test for paired samples (parametric) / Wilcoxon test (non-parametric)
- Comparison of k independent samples: analysis of variance (ANOVA) (parametric) / H-test (non-parametric)
- Multiple comparisons: post-hoc tests; Bonferroni correction, Šidák-Bonferroni correction
- Association analysis: Pearson correlation / regression analysis (parametric) / Spearman rank correlation (non-parametric)

2
**Categorization of multivariate methods**

Data analysis / data mining methods fall into three groups:

- Reduction: Principal Component Analysis, Factor Analysis, Correspondence Analysis, Homogeneity Analysis, Multidimensional Scaling, Non-linear PCA, Procrustes Analysis
- Classification: Discriminant Analysis, Hierarchical Cluster Analysis, K-Means, Artificial Neural Networks (ANN), Support Vector Machines (SVM)
- Data relationships: Multiple Regression, Principal Component Regression, Linear Mixture Analysis, Partial Least Squares 1 and 2, Canonical Analysis, ANN, SVM

3
**Procedure for statistical testing:**

1. State the H0/H1 hypotheses
2. One- or two-sided question
3. Choice of the test procedure
4. Set the significance level (Type I and Type II errors)
5. Testing
6. Interpretation

4
**Decision based on the sample**

Type I and Type II errors:

| Decision based on the sample | In the population H0 holds | In the population H1 holds |
| --- | --- | --- |
| H0 | correct, with 1-α | β error, P(H0¦H1) = β |
| H1 | α error, P(H1¦H0) = α | correct, with 1-β |

5
**Determining error probabilities**

Let the sample be normally distributed (by the central limit theorem) and of unknown origin. H0: the sample comes from the Eifel. H1: the sample comes from the Hunsrück.

6
**Test: one-sample Gauss test**

The p-value becomes smaller with a larger difference, a smaller standard deviation, and a larger n. The observed value cuts off 0.62% of the normal distribution (p-value = probability of error). With α = 5% (z ≈ 1.65), H0 must be rejected!
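As a sketch of the one-sample Gauss (z) test described above: the sample numbers below are made-up illustrations, not the slide's Eifel/Hunsrück data, but they are chosen so that z = 2.50, which cuts off the 0.62% quoted on the slide.

```python
import math

from scipy.stats import norm

# One-sample Gauss (z) test: the population standard deviation sigma is
# assumed known. All inputs below are illustrative assumptions.
def one_sample_z(xbar, mu0, sigma, n, two_sided=False):
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    p = norm.sf(abs(z))  # upper-tail area cut off by |z|
    return z, (2 * p if two_sided else p)

# z = 2.50 cuts off about 0.62% of the standard normal distribution.
z, p = one_sample_z(xbar=10.5, mu0=10.0, sigma=1.0, n=25)
print(z, p)  # p ~ 0.0062 < 0.05, so H0 is rejected at alpha = 5%
```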

7
**Question: which value must be exceeded to just reject H0?**

The critical value (z ≈ 1.645) cuts off exactly 5% from the right side of the standard normal distribution.

8
**Two-sided test: cutting off exactly 2.5% on each side of the SNV**

Two-sided test: the critical value cuts off exactly 2.5% on each side of the standard normal distribution, so H0 is rejected only more narrowly! The decision between a one- and a two-sided test must be made in advance!
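The one- and two-sided critical values at α = 5% can be computed directly from the standard normal quantile function:

```python
from scipy.stats import norm

# Critical values of the standard normal distribution (SNV) at alpha = 5%.
alpha = 0.05
z_one_sided = norm.ppf(1 - alpha)      # cuts off 5% on the right: ~1.645
z_two_sided = norm.ppf(1 - alpha / 2)  # cuts off 2.5% on each side: ~1.960
print(z_one_sided, z_two_sided)
```

The two-sided bound is larger, which is why a two-sided test rejects H0 "more narrowly" for the same observed z.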

9
The β error can only be determined for a specific H1! We test whether the sample is compatible with the parameter of the Eifel samples. The value cuts off 10.6% on the left side of the standard normal distribution. If one decides in favor of H0 on the basis of this event, one commits a β error with a probability of 10.6%, i.e. one rejects H1 ("the sample comes from the Eifel") although it is true.

10
**Determining the test power**

The β error probability gives the probability with which H1 is rejected although a difference exists; 1-β gives the probability of deciding in favor of H1 when H1 is true. We have found the value from which the test just becomes significant ("the sample comes from the Eifel").

11
**Determining the test power**

β probability and test power 1-β: the probability that, given the chosen significance level (α = 5%), we correctly decide in favor of H1 is 98.21%. Determinants of the test power:

- As the difference µ0-µ1 becomes smaller, 1-β decreases.
- As n grows, 1-β increases.
- As the spread of the variable grows, 1-β decreases.
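The three determinants of the power can be made visible with a small sketch of the power of a one-sided, one-sample Gauss test. The numbers are assumptions for illustration, not the slide's data:

```python
import math

from scipy.stats import norm

# Power of a one-sided one-sample Gauss test (H1: mu1 > mu0), sigma known.
def power_one_sided_z(mu0, mu1, sigma, n, alpha=0.05):
    z_crit = norm.ppf(1 - alpha)                # critical z under H0
    shift = (mu1 - mu0) * math.sqrt(n) / sigma  # standardized effect
    return norm.sf(z_crit - shift)              # P(reject H0 | H1 true)

p1 = power_one_sided_z(0.0, 0.5, 1.0, 25)  # baseline
p2 = power_one_sided_z(0.0, 0.3, 1.0, 25)  # smaller difference -> lower power
p3 = power_one_sided_z(0.0, 0.5, 1.0, 50)  # larger n -> higher power
p4 = power_one_sided_z(0.0, 0.5, 2.0, 25)  # larger spread -> lower power
```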

12
**Why multivariate statistics?**

Remember: fancy statistics do not make up for poor planning. Design is more important than analysis.

13
**Categorization of multivariate methods**

Prediction methods: use some variables to predict unknown or future values of other variables. Description methods: find human-interpretable patterns that describe the data. From [Fayyad et al.], Advances in Knowledge Discovery and Data Mining, 1996.

14
**Multiple Linear Regression Analysis**

The General Linear Model. [Figure: response surface E(y) over the predictors (x1, x2).] A general linear model can be:

- a straight-line model,
- a quadratic model (second-order model),
- a model with more than one independent variable.

15
**Multiple Linear Regression Analysis**

E.g. a second-order model in two variables: y = β0 + β1x1 + β2x2 + β3x1x2 + β4x1² + β5x2²

16
**Multiple Linear Regression Analysis**

Parameter estimation. The goal of an estimator is to provide an estimate of a particular statistic based on the data. There are several ways to characterize estimators:

- Bias: an unbiased estimator converges to the true value with a large enough sample size; each parameter is neither consistently over- nor underestimated.
- Likelihood: the maximum likelihood (ML) estimator is the one that makes the observed data most likely. ML estimators are not always unbiased for small N.
- Efficiency: an estimator with lower variance is more efficient, in the sense that it is likely to be closer to the true value over samples; the "best" estimator is the one with minimum variance among all estimators.

17
**Multiple Linear Regression Analysis**

A linear model can be written as y = Xβ + ε, where y is an N-dimensional column vector of observations, β is a (k+1)-dimensional column vector of unknown parameters, and ε is an N-dimensional random column vector of unobserved errors. The first column of the N×(k+1) matrix X is the vector of 1s, so that the first coefficient is the intercept. The unknown coefficient vector β is estimated by minimizing the residual sum of squares (y - Xβ)ᵀ(y - Xβ).
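A minimal sketch of this least-squares fit, on made-up data with known coefficients (intercept 2, slopes 3 and -1):

```python
import numpy as np

# OLS sketch: beta_hat = argmin ||y - X b||^2. np.linalg.lstsq solves the
# least-squares problem without forming (X'X)^-1 explicitly.
rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])  # column of 1s -> intercept
y = 2.0 + 3.0 * x1 - 1.0 * x2 + rng.normal(scale=0.1, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat
rss = residuals @ residuals  # residual sum of squares
print(beta_hat)
```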

18
**Multiple Linear Regression Analysis**

Model assumptions. The OLS estimator can be considered the best linear unbiased estimator (BLUE) of β provided some basic assumptions regarding the error term are satisfied:

- The mean of the errors is zero: E(εi) = 0.
- The errors have a constant variance: Var(εi) = σ².
- Errors from different observations are independent of each other: Cov(εi, εj) = 0 for i ≠ j.
- The errors follow a normal distribution.
- The errors are uncorrelated with the explanatory variables: Cov(xi, εi) = 0.

19
**Multiple Linear Regression Analysis**

Interpreting a multiple regression model. For a multiple regression model, β1 should be interpreted as the change in y when a unit change is observed in x1 and x2 is kept constant. This statement is not very clear when x1 and x2 are not independent. Misunderstanding: βi always measures the effect of xi on E(y), independent of other x variables. Misunderstanding: a statistically significant value establishes a cause-and-effect relationship between x and y.

20
**Multiple Linear Regression Analysis**

Explanatory power. If the model is useful, at least one estimated β must differ from 0. But wait: what is the chance of having one estimate appear significant if I have 2 random x variables? For each coefficient, prob(b ≠ 0 | β = 0) = 0.05, so the chance that at least one happens to appear significant is prob(b1 ≠ 0 or b2 ≠ 0) = 1 - prob(b1 = 0 and b2 = 0) = 1 - (0.95)² = 0.0975. Implication?

21
**Multiple Linear Regression Analysis**

R² (multiple correlation squared): the variation in Y accounted for by the set of predictors. Adjusted R²: the adjustment takes into account the size of the sample and the number of predictors to make the value a better estimate of the population value:

Adjusted R² = 1 - (1 - R²)(n - 1) / (n - k - 1)

where n = number of observations and k = number of independent variables. Accordingly: a smaller n decreases the adjusted value, a larger n increases it; a smaller k increases it, a larger k decreases it. The F-test in the ANOVA table judges whether the explanatory variables in the model adequately describe the outcome variable. The t-test of each partial regression coefficient: a significant t indicates that the variable in question influences the Y response while controlling for the other explanatory variables.
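The effect of n and k on the adjusted value can be checked with a one-line function (the R² = 0.80 inputs are arbitrary illustrations):

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 with n observations and k explanatory variables
    (the intercept is not counted in k)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# For a fixed R^2, more predictors lower the adjusted value and more
# observations raise it:
a = adjusted_r2(0.80, n=30, k=2)    # ~0.785
b = adjusted_r2(0.80, n=30, k=10)   # ~0.695 (larger k)
c = adjusted_r2(0.80, n=300, k=2)   # ~0.799 (larger n)
```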

22
**Multiple Linear Regression Analysis**

ANOVA table:

| Source of variance | SS | df | MS |
| --- | --- | --- | --- |
| Regression | SSR | p-1 | MSR = SSR/(p-1) |
| Error | SSE | n-p | MSE = SSE/(n-p) |
| Total | SST | n-1 | |

where J is an n×n matrix of 1s.

23
**Multiple Linear Regression Analysis**

The R² statistic measures the overall contribution of the X variables. We then test the hypothesis H0: β1 = … = βk = 0 against H1: at least one parameter is nonzero. Since there is no probability distribution for R² itself, an F statistic is used instead.

24
**Multiple Linear Regression Analysis**

F statistic: F = (R²/k) / ((1 - R²)/(n - k - 1)), which under H0 follows an F distribution with k and n - k - 1 degrees of freedom.

25
**Multiple Linear Regression Analysis**

How many variables should be included in the model? Basic strategies:

- sequential forward selection,
- sequential backward elimination,
- forced entry of all variables.

The first two strategies determine a suitable number of explanatory variables using the semi-partial correlation as criterion and a partial F statistic calculated from the error terms of the restricted (RSS1) and unrestricted (RSS) models, where k and k1 denote the number of explanatory variables of the unrestricted and restricted model, and N is the number of observations.
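A sketch of the partial F statistic for comparing the restricted and unrestricted models; the degrees-of-freedom convention (N - k - 1 for the full model) is an assumption, and the RSS values are made up:

```python
def partial_f(rss_restricted, rss_full, k_full, k_restricted, n):
    """Partial F-statistic comparing a restricted model (k_restricted
    explanatory variables) against the full model (k_full variables).
    Error df of the full model is taken as n - k_full - 1."""
    num = (rss_restricted - rss_full) / (k_full - k_restricted)
    den = rss_full / (n - k_full - 1)
    return num / den

# Dropping one variable raises RSS from 100 to 120 with n = 50:
F = partial_f(rss_restricted=120.0, rss_full=100.0,
              k_full=3, k_restricted=2, n=50)
# ((120 - 100) / 1) / (100 / 46) = 9.2
```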

26
**Multiple Linear Regression Analysis**

The semi-partial correlation measures the relationship between a predictor and the outcome, controlling for the relationship between that predictor and any others already in the model. It measures the unique contribution of a predictor to explaining the variance of the outcome.

27
**Multiple Linear Regression Analysis**

Testing the regression coefficients. An unbiased estimator for the error variance is s² = RSS/(N - k - 1). The regression coefficients are tested for significance under the null hypothesis using a standard t-test, t = β̂i / SE(β̂i) with SE(β̂i) = s·√(cii), where cii denotes the ith diagonal element of the matrix (XᵀX)⁻¹. SE(β̂i) is also referred to as the standard error of a regression coefficient.

28
**Multiple Linear Regression Analysis**

Which X is contributing the most to the prediction of Y? The relative sizes of the b coefficients cannot be interpreted, because each is relative to its variable's scale, but the βs (Betas; standardized bs) can be interpreted. The intercept a is the mean of Y, which is zero when Y is standardized.

29
**Multiple Linear Regression Analysis**

Can the regression equation be generalized to other data? This can be evaluated by randomly separating a data set into two halves: estimate the regression equation with one half, apply it to the other half, and see how well it predicts (cross-validation).

30
**Multiple Linear Regression Analysis**

Residual analysis

31
**Multiple Linear Regression Analysis**

The revised Levene's test: divide the residuals into two (or more) groups based on the level of x. The variances and the means of the two groups are supposed to be equal; a standard t-test can be used to test the difference in means. A large t indicates non-constant variance.

32
**Multiple Linear Regression Analysis**

Detecting outliers and influential observations. Influential points are those whose exclusion causes a major change in the fitted line ("leave-one-out" cross-validation). If ei > 4s, the observation is considered an outlier. True outliers should not be removed, but should be explained.

33
**Multiple Linear Regression Analysis**

Generalized least squares: an example of a generalized least-squares model that can be used instead of OLS regression in the case of autocorrelated error terms (e.g. in distributed-lag models).

34
**Multiple Linear Regression Analysis**

SPSS example

35
**Multiple Linear Regression Analysis**

SPSS example

36
**Multiple Linear Regression Analysis**

SPSS example: model evaluation

37
**Multiple Linear Regression Analysis**

SPSS example: model evaluation. Studying the residuals helps to detect whether:

- the model is nonlinear in its functional form,
- an x variable is missing,
- one or more assumptions about the error term are violated,
- outliers are present.

38
**ANOVA (ONE-WAY) ANOVA (TWO-WAY) MANOVA**

ANalysis Of VAriance: one-way ANOVA, two-way ANOVA, MANOVA. Start with an example!

39
**Comparing more than two groups**

ANOVA deals with situations with one observation per object and three or more groups of objects. The most important question is, as usual: do the numbers in the groups come from the same population, or from different populations?

40
**One-way ANOVA: Example**

Assume "treatment results" from 13 soil plots from three different regions:

- Region A: 24, 26, 31, 27
- Region B: 29, 31, 30, 36, 33
- Region C: 29, 27, 34, 26

H0: the treatment results are from the same population of results. H1: they are from different populations.

41
**ANOVA Comparing the groups Averages within groups: Total average:**

Averages within groups: Region A: 27, Region B: 31.8, Region C: 29. Total average: 383/13 ≈ 29.5. The variance around the means matters for the comparison: we must compare the variance within the groups to the variance between the group means.

42
**Variance within and between groups**

Sum of squares within groups: SSW = Σi Σj (xij - x̄i)². Sum of squares between groups: SSG = Σi ni(x̄i - x̄)². The number of observations and the sizes of the groups have to be taken into account!

43
**Adjusting for group sizes**

Adjusting for group sizes: MSW = SSW/(n - K) and MSG = SSG/(K - 1) are both estimates of the population variance of the error under H0, where n is the number of observations and K the number of groups. If the populations are normal with the same variance, then under the null hypothesis F = MSG/MSW follows an F distribution with K - 1 and n - K degrees of freedom. Reject at significance level α if F > F(K-1, n-K, α).

44
Continuing the example: MSG ≈ 52.4/2 ≈ 26.2 and MSW = 94.8/10 = 9.48, so F ≈ 2.77 < F(2, 10, 0.05) ≈ 4.10 → H0 cannot be rejected.
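The worked example can be verified numerically with scipy, using the soil-plot data from the slides above:

```python
from scipy.stats import f, f_oneway

# Treatment results for the three regions.
region_a = [24, 26, 31, 27]
region_b = [29, 31, 30, 36, 33]
region_c = [29, 27, 34, 26]

F_stat, p_value = f_oneway(region_a, region_b, region_c)
F_crit = f.ppf(0.95, dfn=2, dfd=10)  # K-1 = 2, n-K = 10
print(F_stat, p_value, F_crit)
# F ~ 2.77 < F_crit ~ 4.10 and p > 0.05: H0 cannot be rejected.
```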

45
**ANOVA table Source of variation Sum of squares Deg. of freedom**

| Source of variation | Sum of squares | Deg. of freedom | Mean squares | F ratio |
| --- | --- | --- | --- | --- |
| Between groups | SSG | K-1 | MSG = SSG/(K-1) | MSG/MSW |
| Within groups | SSW | n-K | MSW = SSW/(n-K) | |
| Total | SST | n-1 | | |

NOTE: SST = SSG + SSW.

46
**When to use which method**

In situations where we have one observation per object and want to compare two or more groups:

- Use non-parametric tests if you have enough data: for two groups the Mann-Whitney U-test (Wilcoxon rank sum), for three or more groups the Kruskal-Wallis test.
- If data analysis indicates that the assumption of normally distributed independent errors is OK: for two groups use the t-test (equal or unequal variances assumed), for three or more groups use ANOVA.

47
**Two-way ANOVA (without interaction)**

In two-way ANOVA, data fall into categories in two different ways: each observation can be placed in a table. Example: both the type of fertilization and the crop type should influence soil properties. Sometimes we are interested in studying both categories; sometimes the second category is used only to reduce unexplained variance, and is then called a blocking variable.

48
**Sums of squares for two-way ANOVA**

Assume K categories and H blocks, and assume one observation xij for each category i and each block j, so we have n = KH observations. Mean for category i: x̄i· = (1/H) Σj xij. Mean for block j: x̄·j = (1/K) Σi xij. Overall mean: x̄ = (1/n) Σi Σj xij. Illustrate in table!

49
**Sums of squares for two-way ANOVA**


50
**ANOVA table for two-way data**

| Source of variation | Sums of squares | Deg. of freedom | Mean squares | F ratio |
| --- | --- | --- | --- | --- |
| Between groups | SSG | K-1 | MSG = SSG/(K-1) | MSG/MSE |
| Between blocks | SSB | H-1 | MSB = SSB/(H-1) | MSB/MSE |
| Error | SSE | (K-1)(H-1) | MSE = SSE/((K-1)(H-1)) | |
| Total | SST | n-1 | | |

Test for the between-groups effect: compare MSG/MSE to F(K-1, (K-1)(H-1), α). Test for the between-blocks effect: compare MSB/MSE to F(H-1, (K-1)(H-1), α).
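The decomposition for two-way ANOVA without interaction can be computed by hand on a small made-up K×H table (one observation per cell; the numbers are illustrative):

```python
import numpy as np

# Rows = categories (K = 2), columns = blocks (H = 3).
x = np.array([[10.0, 13.0, 13.0],
              [16.0, 17.0, 21.0]])
K, H = x.shape
n = K * H
gm = x.mean()  # overall mean

ssg = H * ((x.mean(axis=1) - gm) ** 2).sum()  # between groups
ssb = K * ((x.mean(axis=0) - gm) ** 2).sum()  # between blocks
sst = ((x - gm) ** 2).sum()
sse = sst - ssg - ssb                         # error, df = (K-1)(H-1)

msg = ssg / (K - 1)
msb = ssb / (H - 1)
mse = sse / ((K - 1) * (H - 1))
f_group, f_block = msg / mse, msb / mse
print(f_group, f_block)
```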

51
**Two-way ANOVA (with interaction)**

The setup above assumes that the blocking variable influences outcomes in the same way in all categories (and vice versa). Interaction between the blocking variable and the categories can be checked by extending the model with an interaction term.

52
**Sums of squares for two-way ANOVA (with interaction)**

Assume K categories and H blocks, and assume L observations xij1, xij2, …, xijL for each category i and each block j, so we have n = KHL observations, with a mean for each category i, each block j, and each cell ij, plus the overall mean. Illustrate in table!

53
**Sums of squares for two-way ANOVA (with interaction)**


54
**ANOVA table for two-way data (with interaction)**

| Source of variation | Sums of squares | Deg. of freedom | Mean squares | F ratio |
| --- | --- | --- | --- | --- |
| Between groups | SSG | K-1 | MSG = SSG/(K-1) | MSG/MSE |
| Between blocks | SSB | H-1 | MSB = SSB/(H-1) | MSB/MSE |
| Interaction | SSI | (K-1)(H-1) | MSI = SSI/((K-1)(H-1)) | MSI/MSE |
| Error | SSE | KH(L-1) | MSE = SSE/(KH(L-1)) | |
| Total | SST | n-1 | | |

Test for interaction: compare MSI/MSE with F((K-1)(H-1), KH(L-1), α). Test for the block effect: compare MSB/MSE with F(H-1, KH(L-1), α). Test for the group effect: compare MSG/MSE with F(K-1, KH(L-1), α).

55
Notes on ANOVA: all analysis of variance (ANOVA) methods are based on the assumptions of normally distributed and independent errors. The same problems can be described using the regression framework; we get exactly the same tests and results! There are many extensions beyond those mentioned.

56
**MANOVA Uses Multiple DVs**

| | Predictors (IVs) | Criterion (DV(s)) |
| --- | --- | --- |
| ANOVA | Multiple, discrete | Single, continuous |
| MANOVA | Multiple, discrete | Multiple, continuous |

E.g. various measures of soil properties (Corg, Cmik, N, pH, …) as outcome measures following different types of categories (fertilization, point in time, crop type, …).

57
**MANOVA Multiple DVs could be analysed using multiple ANOVAs, but:**

Multiple DVs could be analysed using multiple ANOVAs, but:

- the familywise (FW) error rate increases with each ANOVA;
- scores on the DVs are likely correlated (non-independent, taken from the same subjects);
- results are hard to interpret if multiple ANOVAs are significant.

MANOVA solves this by conducting only one overall test: it creates a 'composite' DV and tests the significance of the composite DV.

58
**MANOVA The Composite DV is a linear combination of the DVs**

The composite DV is a linear combination of the DVs, i.e. a discriminant function, or root. The weights maximally separate the groups on the composite DV:

C = W1Y1 + W2Y2 + W3Y3 + … + WnYn

where C is a subject's score on the composite DV, the Yi are scores on each of the DVs, and the Wi are the weights, one for each DV. A composite DV is required for each main effect and interaction.

59
**MANOVA Considering the DVs together can enhance power**

Considering the DVs together can enhance power: frequency distributions show considerable overlap between groups on the individual DVs, while the ellipses that reflect the DVs in combination show less overlap. Small differences on each DV combine to make a larger multivariate difference.

60
In ANOVA, the sums of squared deviations are partitioned: SST = SSA + SSB + SSAxB + SSS/AB. In MANOVA, the sums of squares and cross-products are partitioned: ST = SD + STr + SDxTr + SS(DTr). The SSCP matrices (S) are analogous to the SS: an SSCP matrix contains squared deviations and cross-products, and thus also reflects the correlations among the DVs.

61
**Scores and Means in MANOVA are Vectors**

Scores and means in MANOVA are vectors: Y holds the scores for each subject, T and D the row and column marginals, GM the grand mean, and DTr the average scores of subjects within cells.

62
MANOVA

63
**MANOVA The deviation score for the first subject is:**

The squared deviation is obtained by multiplying the deviation vector by its transpose. The SS are on the diagonal: (25.89)² ≈ 670 and (20.78)² ≈ 431. The cross-products are on the off-diagonals: (25.89)(20.78) ≈ 538.
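The SSCP contribution of one subject is just the outer product of the deviation vector with itself, using the deviation scores quoted above:

```python
import numpy as np

# Deviation scores for the first subject (from the slide).
d = np.array([25.89, 20.78])
sscp = np.outer(d, d)  # d d': squared deviations on the diagonal,
                       # cross-products off the diagonal
print(sscp)
# diagonal ~ 670 and ~432, off-diagonal ~538 (the slide's 670/431/538
# up to rounding of the deviation scores)
```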

64
The squaring of a matrix is carried out by multiplying it by its transpose. The transpose is obtained by flipping the matrix about its diagonal. To multiply, the ijth element of the resulting matrix is obtained as the sum of products of the ith row of A and the jth column of A'. For a vector, the transpose is a row vector.

65
**MANOVA Main Effects in ANOVA vs. MANOVA: The Interaction:**

The Error Term:

66
In ANOVA, variance estimates (MS) are obtained from the SS for significance testing with the F statistic. In MANOVA, variance estimates (determinants) are obtained from the SSCP matrices for significance testing, e.g. using Wilks' Lambda (Λ). The analogy: SS ~ SSCP, MS ~ |SSCP|, F ~ Λ. Note that F and Λ are inverse to one another.

67
**MANOVA The determinant of a 2x2 matrix is given by:**

The determinant of a 2×2 matrix is |A| = a11·a22 - a12·a21. The determinants required to test the interaction are those of the error SSCP matrix and of the error-plus-interaction SSCP matrix; Wilks' Lambda for the interaction is obtained as Λ = |S_error| / |S_error + S_interaction|.

68
**MANOVA: If the effect is small, Λ approaches 1.0**

Here S_DT was small, and Λ was 0.91. Eta squared for MANOVA is η² = 1 - Λ = 1 - 0.91 = 0.09: the interaction accounts for only 9% of the variance in the group means on the composite DV.

69
**MANOVA Must have more cases/cell than number of DVs**

Must have more cases per cell than the number of DVs: this avoids singularity and enhances power. Further assumptions:

- linear relations among all DVs and between DVs and COVs;
- multivariate normality: the sampling distribution of the means of all DVs and of linear combinations of the DVs is normal;
- homogeneity of the variance-covariance matrices, to justify pooling the error estimate.

MANOVA can be extended to within-subjects and mixed designs; repeated measures are treated as new DVs.

70
MANOVA SPSS example

71
MANOVA SPSS example

72
MANOVA

73
MANOVA

74
MANOVA

75
**Discriminant Analysis**

Discriminant analysis is used to predict group memberships from a set of continuous predictors. Analogy to MANOVA: in MANOVA, linearly combined DVs are created to answer the question of whether groups can be separated. The same "DVs" can be used to predict group membership!

76
**Discriminant Analysis**

What is the goal of discriminant analysis? To perform dimensionality reduction "while preserving as much of the class discriminatory information as possible": it seeks the directions along which the classes are best separated, taking into consideration the scatter within classes as well as the scatter between classes.

77
**Discriminant Analysis**

MANOVA and discriminant analysis (DA) are mathematically identical but differ in emphasis: DA is usually concerned with the grouping of objects (classification) and with testing how well objects were classified (one grouping variable, one or more predictor variables); the discriminant functions are identical to canonical correlations between the groups on one side and the predictors on the other. MANOVA is applied to test whether groups differ significantly from each other (one or more grouping variables, one or more predictor variables).

78
**Discriminant Analysis**


79
**Discriminant Analysis**

Assumptions: a small number of samples can lead to overfitting. If there are more DVs than objects in any cell, the cell becomes singular and cannot be inverted. If there are only a few more cases than DVs, equality of the covariance matrices is likely to be rejected, and with a small objects/DV ratio the power is likely to be very small. Multivariate normality: the means of the various DVs in each cell, and all linear combinations of them, are normally distributed. Absence of outliers: significance assessment is very sensitive to outlying cases. Homogeneity of the covariance matrices: DA is relatively robust to violations of this assumption if inference is the focus of the analysis, but not in classification.

80
**Discriminant Analysis**

Assumptions (continued): for classification purposes DA is highly influenced by violations of the last assumption, since subjects will tend to be classified into the groups with the largest variance. Homogeneity of the class variances can be assessed by pairwise plots of the discriminant function scores for the first discriminant functions. LDA assumes linear relationships between all predictors within each group; violations tend to reduce power rather than increase alpha. Absence of multicollinearity/singularity in each cell of the design: avoid redundant predictors.

81
**Discriminant Analysis**

Interpreting a two-group discriminant function. In the two-group case, discriminant function analysis is analogous to multiple regression; two-group discriminant analysis is also called Fisher linear discriminant analysis. In general, in the two-group case we fit a linear equation of the type c = a + d1·x1 + d2·x2 + … + dm·xm, where a is a constant, d1 through dm are regression coefficients, and c is the predicted class. The interpretation of the results of a two-group problem closely follows the logic of multiple regression: the variables with the largest (standardized) regression coefficients are the ones that contribute most to the prediction of group membership.

82
**Discriminant Analysis**

Discriminant functions for multiple groups. When there are more than two groups, we can estimate more than one discriminant function. For instance, with three groups there is one function for discriminating between group 1 and groups 2 and 3 combined, and another for discriminating between group 2 and group 3. Canonical analysis: in a multiple-group discriminant analysis, the first function provides the most overall discrimination between groups, the second the second most, and so on; all functions are independent (orthogonal). Computationally, a canonical correlation analysis is performed that determines the successive functions and canonical roots. The number of functions that can be calculated is min[number of groups - 1; number of variables].

83
**Discriminant Analysis**

Eigenvalues can be interpreted as the proportion of variance accounted for by the correlation between the respective canonical variates; successive eigenvalues are of smaller and smaller size. First, compute the weights that maximize the correlation of the two sum scores. After this first root has been extracted, find the weights that produce the second largest correlation between sum scores, subject to the constraint that the next set of sum scores does not correlate with the previous one, and so on. Canonical correlations: if the square roots of the eigenvalues are taken, the resulting numbers can be interpreted as correlation coefficients; because they pertain to the canonical variates, they are called canonical correlations.

84
**Discriminant Analysis**

Suppose there are C classes. Let µi be the mean vector of class i, i = 1, 2, …, C, let Ni be the number of samples in class i, and let N = Σi Ni be the total number of samples. Within-class scatter matrix: Sw = Σi Σ_{x∈class i} (x - µi)(x - µi)ᵀ. Between-class scatter matrix: Sb = Σi Ni(µi - µ)(µi - µ)ᵀ, where µ is the mean of the entire data set.

85
**Discriminant Analysis**

Methodology: LDA computes a projection matrix W that maximizes the between-class scatter while minimizing the within-class scatter of the projected data y = Wᵀx, e.g. via the criterion J(W) = |WᵀSbW| / |WᵀSwW| (the determinants of the scatter matrices of the projected data act as products of eigenvalues).

86
**Discriminant Analysis**

Linear transformation implied by LDA: the LDA solution is given by the eigenvectors of the generalized eigenvector problem Sb·u = λ·Sw·u. The linear transformation is given by a matrix U whose columns are the eigenvectors of this problem. Important: since Sb has at most rank C-1, the maximum number of eigenvectors with non-zero eigenvalues is C-1 (i.e., the maximum dimensionality of the subspace is C-1).
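A sketch of the scatter matrices and the eigenproblem on a made-up two-class data set (C = 2, so at most C-1 = 1 non-zero eigenvalue); the class means and sample sizes are illustrative assumptions:

```python
import numpy as np

# Two Gaussian classes in 2D, 50 points each.
rng = np.random.default_rng(1)
classes = [rng.normal(loc=m, scale=1.0, size=(50, 2))
           for m in ([0.0, 0.0], [3.0, 3.0])]

mu = np.vstack(classes).mean(axis=0)  # mean of the entire data set

# Within-class and between-class scatter matrices.
Sw = sum((c - c.mean(axis=0)).T @ (c - c.mean(axis=0)) for c in classes)
Sb = sum(len(c) * np.outer(c.mean(axis=0) - mu, c.mean(axis=0) - mu)
         for c in classes)

# Sw is non-singular here, so Sb u = lambda Sw u reduces to a
# conventional eigenvalue problem for Sw^-1 Sb.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
eigvals = np.sort(eigvals.real)[::-1]
# Sb has rank at most C-1 = 1, so only one eigenvalue is non-zero.
print(eigvals)
```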

87
**Discriminant Analysis**

Does Sw⁻¹ always exist? If Sw is non-singular, we can obtain a conventional eigenvalue problem by writing Sw⁻¹Sb·u = λ·u. In practice, Sw is often singular when more variables than cases are involved in the analysis.
