- Print the notecards
- Fold each page in half along the solid vertical line
- Cut out the notecards by cutting along each horizontal dotted line
- Optional: Glue, tape or staple the ends of each notecard together

front 1 mixed methods research | back 1 a combination of quantitative and qualitative approaches to data collection and analysis |

front 2 quantitative data collection examples | back 2 primary data collection such as surveys with numerical scales |

front 3 qualitative data collection examples | back 3 interviews, focus groups, observations, document review |

front 4 explanatory sequential design | back 4 quantitative data, then qualitative data, then interpret results |

front 5 exploratory sequential design | back 5 qualitative data, then quantitative data, then interpret results |

front 6 scout | back 6 gain direct insight into a setting's dynamics through observing, reading, and interacting with people on site; can use focused interviews |

front 7 reality check | back 7 assess the validity of quantitative measures |

front 8 interpreter | back 8 understand the meaning of the patterns uncovered through quantitative methods can help understand why an intervention has certain outcomes |

front 9 incubator of theory | back 9 recast basic assumptions, reframe definitions, conceptualize the evaluation problem in a new way, refine intervention concepts, generate theoretical ideas |

front 10 primary qualitative methods | back 10 review of relevant documents, direct observation, focused interviewing |

front 11 purposive sampling | back 11 individuals are selected who meet specific criteria |

front 12 quota sampling | back 12 respondents with particular qualities are targeted |

front 13 chunk sampling | back 13 individuals are selected based on their availability |

front 14 snowball sampling | back 14 individuals meeting specific criteria are identified, who in turn identify other potential respondents meeting similar criteria |

front 15 structured interview with closed, fixed responses | back 15 all interviewees are asked the same questions and choose answers from among the same set of alternatives |

front 16 structured with open ended responses | back 16 same open ended questions are asked to all |

front 17 semi-structured interview | back 17 same general areas of info are collected but allows for adaptability |

front 18 unstructured | back 18 informal and conversation like, no predetermined questions |

front 19 population | back 19 the broader group of people who you would like to make generalizations about |

front 20 sample | back 20 the group of subjects that actually participate in your study |

front 21 non-random sample selection | back 21 potential threat to external validity; oversampling of minorities (can be good or bad); convenience sampling (ie all college students); may need to compare the sample to the population |

front 22 categorical/discrete variables | back 22 sex, occupation, race; some categorical variables are ordinal when the possible values are ordered, ie education: less than high school, high school, some college, etc |

front 23 continuous variables | back 23 age, height, weight; can be divided into groups, ie BMI: underweight, normal, overweight, obese; intake of fat: tertiles, quartiles, quintiles |

front 24 categorical variables are described using... | back 24 frequency distributions |

front 25 continuous variables are described using | back 25 probability distribution: the range of values your variable can take |

front 26 skew of the distribution is based on the.... | back 26 tail: a longer tail on the left = a left skew = negative skew; a longer tail on the right = a right skew = positive skew |

front 27 descriptive stats: categorical variables | back 27 % of the total population in each category |

front 28 descriptive stats: continuous variables | back 28 measures of central tendency; normal distribution = mean, non-normal distribution = median/mode |

front 29 continuous variables-measures of variability | back 29 normal distribution: use standard deviation; non-normal: use range or interquartile range |

front 30 range | back 30 high score minus low score |

front 31 interquartile range | back 31 difference between the 1st and 3rd quartiles (the 25th and 75th percentiles) |

front 32 standard deviation | back 32 numerical indicator of the spread of the data values within a sample; in a normal distribution, 1 SD = 68%, 2 SD = 95%, 3 SD = 99.7% of values |
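
The spread measures on the cards above can be sketched in Python using only the standard library; the data values here are hypothetical:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical sample

mean = statistics.mean(data)   # central tendency
sd = statistics.pstdev(data)   # population standard deviation
rng = max(data) - min(data)    # range: high score minus low score
print(mean, sd, rng)
```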

front 33 standard error of the mean | back 33 numerical indicator of the expected difference between the sample and the population mean |

front 34 what is the difference between SD and SEM | back 34 SD = how scattered a sample is; SE = how precise your estimate is compared to the population/"true" value |

front 35 SE formula | back 35 SE= SD/square root of the sample size |
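
The SE formula on this card translates directly to code; the SD and sample size below are hypothetical values:

```python
import math

sd = 2.0  # sample standard deviation (hypothetical)
n = 16    # sample size (hypothetical)

se = sd / math.sqrt(n)  # standard error of the mean
print(se)  # 0.5
```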

front 36 when SD is small... when the sample is larger... | back 36 when SD is small, it is easier for your mean estimate to get close to the population value; when the sample is larger, your estimate of the mean gets closer to the population value |

front 37 when your goal is to describe the sample, report... | back 37 standard deviation |

front 38 when your goal is to indicate how precise your measurement is in relation to the population, use... | back 38 standard error |

front 39 standard error is most useful when we calculate.... | back 39 confidence interval |

front 40 confidence interval | back 40 calculated from sample data and gives an estimated range of values which is likely to include an unknown population parameter |

front 41 what does a confidence interval mean | back 41 if the same population is sampled numerous times and interval estimates are made on each occasion, the resulting intervals would bracket the true population parameter 95% of the time |

front 42 confidence interval formula | back 42 mean +- 1.96 * standard error |
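
A minimal sketch of the 95% confidence interval formula on this card (mean and SE are hypothetical values):

```python
mean = 5.0  # sample mean (hypothetical)
se = 0.5    # standard error of the mean (hypothetical)

lower = mean - 1.96 * se
upper = mean + 1.96 * se
print(lower, upper)  # the 95% CI: roughly (4.02, 5.98)
```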

front 43 3 steps of hypothesis testing | back 43 1. make an assumption 2. collect data 3. reject or don't reject the initial assumption based on the data |

front 44 null hypothesis | back 44 there is no difference among groups or correlation between variables |

front 45 alternative hypothesis | back 45 there is a difference among groups |

front 46 when you reject the null hypothesis | back 46 you are never 100% sure, there is always a chance that we made an error |

front 47 type 1 error=false positive | back 47 the null hypothesis is rejected when it is true (alpha); ie telling a male he is pregnant |

front 48 type 2 error=false negative | back 48 the null hypothesis is not rejected when it is false (beta); ie telling a 9-month pregnant woman she is not pregnant |

front 49 alpha is | back 49 the level of significance, ie the threshold the p-value is compared against (commonly 0.05) |

front 50 power is | back 50 the sensitivity of your analysis to correctly detect if there is an association |

front 51 p-value is | back 51 the probability of finding the observed or more extreme results when the null hypothesis of the study question is true |

front 52 steps of hypothesis testing using p-value | back 52 1. specify the null and alternative hypotheses 2. use sample data to calculate the value of the test statistic 3. use the known distribution of the test statistic to calculate the p-value 4. compare the p-value with the pre-set significance level: if the p-value is less than alpha (0.05), reject the null in favor of the alternative hypothesis; if the p-value is greater than alpha, do not reject the null |

front 53 p-value interpretation | back 53 p-value less than 0.05 = reject the null, there is an association; p-value greater than 0.05 = do not reject the null, there is no evidence of an association |
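
A minimal sketch of the standard decision rule (reject the null only when the p-value falls below alpha):

```python
def decide(p_value, alpha=0.05):
    # reject the null hypothesis only when p < alpha
    return "reject null" if p_value < alpha else "do not reject null"

print(decide(0.01))  # reject null
print(decide(0.20))  # do not reject null
```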

front 54 what you need to calculate an acceptable sample size | back 54 acceptable power, acceptable probability of type 1 error, expected effect size, variability of the measurement |

front 55 describe relationships-qualitative | back 55 is there a relationship? positive or negative? linear/non-linear? outliers? |

front 56 describe relationships-quantitative | back 56 1 unit change in x is associated with how many units of change in y? |

front 57 scatter plot | back 57 graphic representation of 2 variables in which the independent variable is on the x-axis and the dependent variable is on the y-axis |

front 58 cluster | back 58 smaller groupings of data; may show an effect modifier in those subjects |

front 59 correlation | back 59 if entities are correlated they should change together in a predictable fashion |

front 60 positive correlation | back 60 when variables increase or decrease together (the fitted line slopes up, like /) |

front 61 negative correlation | back 61 when one variable increases while the other decreases (the fitted line slopes down, like \) |

front 62 pearson correlation assumptions | back 62 both variables are continuous, there is a linear relationship, normal distribution, no outliers; use spearman if assumptions are violated |

front 63 interpreting pearson | back 63 r ranges from -1 to +1; negative r = negative association; the larger the absolute value, the stronger the correlation, ie the plot looks less scattered |

front 64 rule of thumb for interpreting the size of a correlation | back 64 .9-1.0 very high positive; .7-.9 high positive; .5-.7 moderate positive; .3-.5 low positive; 0.0-0.3 negligible correlation |
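
Pearson's r can be computed from first principles; a minimal sketch with hypothetical data (no external libraries assumed):

```python
def pearson_r(xs, ys):
    # r = covariance term / product of the square roots of the sums of squares
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

print(pearson_r([1, 2, 3], [2, 4, 6]))  # perfectly linear, increasing: r = 1.0
print(pearson_r([1, 2, 3], [6, 4, 2]))  # perfectly linear, decreasing: r = -1.0
```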

front 65 r has nothing to do with... | back 65 slope |

front 66 pearson r degrees of freedom equals... | back 66 n-2 |

front 67 coefficient of determination r2 | back 67 the amount of common variance shared by the two variables; what percent of the variability of one variable is explained by the variability of the other variable |

front 68 weak r does not mean no relationship | back 68 it means there is little LINEAR relationship; a strong non-linear relationship may still exist |

front 69 r and r2 can be greatly affected by.... | back 69 OUTLIERS |

front 70 regression used to... | back 70 quantify the relationship and predict how much change we expect to see in one variable when other variables change |

front 71 simple linear regression | back 71 used to predict one dependent variable using one explanatory variable and a constant |

front 72 SLR formula | back 72 Y = B0 + B1*X, where B0 = intercept and B1 = slope |
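
The least-squares estimates of B0 and B1 can be sketched directly from their closed-form formulas; the x/y data below are hypothetical:

```python
def slr_fit(xs, ys):
    # least-squares estimates: b1 = Sxy / Sxx, b0 = mean(y) - b1 * mean(x)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b0 = my - b1 * mx
    return b0, b1

b0, b1 = slr_fit([0, 1, 2], [1, 3, 5])  # points on the line y = 1 + 2x
print(b0, b1)
```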

front 73 SLR makes a... | back 73 line of best fit; the least squares method is the most commonly used method |

front 74 least squares regression line is a line that.... | back 74 makes the sum of the squares of the vertical distances of the data points from the line as small as possible |

front 75 residuals | back 75 the vertical distance between the actual and predicted values of y |

front 76 when to use SLR-assumptions | back 76 two continuous variables; linearity; statistical independence of errors; constant variance of errors; normality of the error distribution |

front 77 interpreting SLR | back 77 usually pay attention to the slope (B1) |

front 78 hypothesis testing for SLR | back 78 want to determine if B1 is significantly different from 0 |

front 79 p-value with SLR | back 79 calculated using a t-test, is it smaller than your preset alpha? |

front 80 confidence intervals in SLR | back 80 calculated using the estimated B1 and the standard error of the estimated B1; does the interval include 0? if the confidence interval does not include zero, then you can reject the null |

front 81 multiple regression | back 81 involves one dependent variable and two or more independent variables |

front 82 multiple regression formula | back 82 Y = B0 + B1*X1 + B2*X2 + B3*X3 + ... etc; a one unit change in X1 predicts B1 units of change in Y when all other Xs are held constant |

front 83 why use multiple regression? | back 83 better prediction; explore the relationship between Y and multiple Xs simultaneously; control for confounding; explore interaction |

front 84 t-test is used... | back 84 to study the difference in 2 groups |

front 85 students t-test | back 85 study sample vs population; ie study BMI among local students and want to know if they are more/less obese than typical US children |

front 86 two independent study samples (groups) t-test | back 86 two independent study samples; ie assign subjects to 2 exercise programs and want to know if their fasting glucose levels are different after completing the programs; here you compare the groups to each other |

front 87 two dependent study samples t-test | back 87 samples are related in some manner; ie you want to know whether exercise has an impact on fasting glucose levels and you compare glucose levels before and after exercise in 1 group of children; here you compare one group measured at two time points |

front 88 t-test formula | back 88 t = (sample mean - population mean) / standard error of the mean, where SE = standard deviation / square root of n |
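
The one-sample (student's) t formula on this card can be sketched as follows; the sample and population mean are hypothetical:

```python
import math
import statistics

def one_sample_t(sample, pop_mean):
    # t = (sample mean - population mean) / (SD / sqrt(n)); df = n - 1
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    return (statistics.mean(sample) - pop_mean) / se

t = one_sample_t([5, 7, 9], pop_mean=5)  # hypothetical data
print(t)
```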

front 89 degrees of freedom for the students t-test | back 89 n-1 |

front 90 one tail t-test | back 90 when you expect the difference to go in only one direction; ie a trial drug is cheaper, so all you care about is whether it is worse, not whether it is better, than an already existing drug |

front 91 two tail t-test | back 91 when you expect the difference to go in both directions |

front 92 interpret independent t-test | back 92 t score must be larger than the critical value from the chart in order to reject your null |

front 93 independent t-test (groups) | back 93 assign subjects to 2 different exercise programs and want to know if their fasting glucose levels differ after completing the program |

front 94 what influences independent t-test stat? | back 94 sample means of the two groups: larger difference = larger t-value; standard deviation of the two groups: smaller SD = larger t-value; sample size of the two groups: larger sample size = larger t-value |

front 95 effect size-cohen's D | back 95 how large the effect of the intervention is; D stands for distance; 0.2 or less = small difference, 0.2-0.8 = moderate difference, 0.8 or more = large effect size |
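
One common way to compute Cohen's d is the difference in group means divided by the pooled SD; a sketch under that assumption, with hypothetical data:

```python
import statistics

def cohens_d(a, b):
    # pooled SD = sqrt(((n1-1)*s1^2 + (n2-1)*s2^2) / (n1 + n2 - 2))
    n1, n2 = len(a), len(b)
    s1, s2 = statistics.variance(a), statistics.variance(b)
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return abs(statistics.mean(a) - statistics.mean(b)) / pooled_sd

d = cohens_d([2, 4, 6], [5, 7, 9])  # hypothetical groups
print(d)  # 1.5 -> a large effect by the rule of thumb above
```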

front 96 assumptions of independent t-test | back 96 random sampling, samples are independent, normal distribution, equal variance of the two samples |

front 97 degrees of freedom for independent t-test | back 97 N1+N2-2 |

front 98 dependent t-test | back 98 when 2 samples are related, ie repeated measures before and after treatment; ie want to study whether exercise affects blood glucose before and after an exercise program |

front 99 degrees of freedom for dependent t-test | back 99 n-1, n= the number of pairs!!! |
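The dependent (paired) t-test reduces to a one-sample t-test on the pairwise differences; a minimal sketch with hypothetical before/after data:

```python
import math
import statistics

def paired_t(before, after):
    # one-sample t on the differences; df = number of pairs - 1
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)  # n = the number of pairs
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

t = paired_t([1, 2, 3], [2, 4, 6])  # hypothetical repeated measures
print(t)
```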

front 100 simple anova example | back 100 example-studying the effects of 3+ exercise programs on body fat |

front 101 factorial anova example | back 101 studying 2+ different components of health programs (ie diet and physical activity) and want to know how each of them affects body weight as well as their interaction effects |

front 102 anova for repeated measures example | back 102 subjects in multiple exercise programs have their body fat measured before and after each program; which program is the best? |

front 103 anova assumptions | back 103 independence, normality, equal variance |

front 104 how does anova work | back 104 compares between group variance with within group variance |

front 105 steps for anova | back 105 1. calculate the between-group variance and divide it by its degrees of freedom (number of groups - 1) = MSB 2. calculate the within-group variance and divide it by its degrees of freedom (sample n - number of groups) = MSW 3. compute the f-statistic = MSB/MSW; if the between-group variance sufficiently exceeds the within-group variance we reject the null |
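
The three ANOVA steps above can be sketched from scratch; the two groups below are hypothetical:

```python
def one_way_f(groups):
    # step 1: MSB = between-group sum of squares / (k - 1)
    # step 2: MSW = within-group sum of squares / (n - k)
    # step 3: F = MSB / MSW
    k = len(groups)
    values = [v for g in groups for v in g]
    n = len(values)
    grand = sum(values) / n
    means = [sum(g) / len(g) for g in groups]
    msb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means)) / (k - 1)
    msw = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g) / (n - k)
    return msb / msw

f = one_way_f([[1, 2, 3], [4, 5, 6]])  # hypothetical groups
print(f)
```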

front 106 using the critical f values | back 106 top row: degrees of freedom for the numerator (between-group = number of groups minus 1); far left column: degrees of freedom for the denominator (sample size minus the number of groups) |

front 107 factorial anova is used when... | back 107 you are interested in more than one independent variable, ie duration and intensity, or MVPA and sedentary behaviour |

front 108 interaction effect | back 108 does the effect of one independent variable on the dependent variable differ by the level of the other independent variable? |

front 109 post hoc test-anova only tests... | back 109 whether there is a difference among groups, it does not tell you which groups are different |

front 110 post hoc tests are used.... | back 110 to follow up a significant anova; they take care of the multiple comparison problem and retain the original alpha level; there are multiple tests, choose appropriately |

front 111 analysis of covariance (ANCOVA) | back 111 used to adjust for covariates; when you suspect the groups differ in certain characteristics that may influence your results, you use ancova to control for external factors, ie confounders |

front 112 operationalization | back 112 the process of specifying how a concept or phenomenon will be defined/measured; easy for quantitative concepts, harder for qualitative ones |

front 113 measurement validity | back 113 the extent to which a test instrument measures what it is supposed to measure; a test is not universally valid, it depends on the goals of the testing and the subjects being tested |

front 114 validity is not.... | back 114 dichotomous; we ask how valid a test is, not whether it is or isn't |

front 115 types of validity | back 115 logical (face), content, criterion-based (concurrent or predictive), construct |

front 116 logical validity | back 116 the extent to which a measurement method appears on its face to measure the construct of interest, ie using a scale to measure weight; the weakest evidence of validity; based on human intuition; usually assessed informally with no external standard |

front 117 content validity | back 117 the degree to which a test serves as a representative sampling of content often used in education (how good a standard test is) |

front 118 criterion based validity | back 118 the degree to which the measurement of one test correlates with another test, ie a gold standard |

front 119 concurrent validity | back 119 how well one measurement compares to the criterion standard |

front 120 predictive validity | back 120 how well one measurement method predicts future events, ie a functional test to predict falls in the elderly; a weight scale is not predictive |

front 121 criterion validity with multiple measurements | back 121 you can use multiple regression to evaluate the collective validity of multiple measurements and to assess which measurements have the highest predictive power, ie skinfold measurements from different areas of the body |

front 122 construct validity | back 122 used when you attempt to measure a theoretical construct that is not directly measurable, ie intelligence, anxiety, trust; usually incorporates other validity measures with construct validity |

front 123 reliability | back 123 how consistent or repeatable measurements are |

front 124 reliability (shot group analogy) | back 124 high reliability is like a tight shot grouping, whether it is on the bullseye or out on the 1 ring |

front 125 validity (shot grouping analogy) | back 125 validity is like hitting the bullseye: you can put all shots in a tight group in the 7 ring (high reliability) and still have low validity |

front 126 measurements of reliability | back 126 test-retest; equivalence or alternate forms; internal consistency; intertester reliability (objectivity) |

front 127 test-retest | back 127 redo the test exactly the same on different days; used to assess temporal stability; the assumption is that the quantity being measured does not change over time; not suitable for volatile variables like mood |

front 128 alternate forms reliability | back 128 refers to how similarly different versions of a test or questionnaire perform in measuring the same entity; important for standardized tests that exist in multiple versions |

front 129 internal consistency | back 129 how well the items within a test that are supposed to measure the same construct correlate with each other; uses a same-day test-retest method |

front 130 intertester reliability (objectivity) | back 130 the degree to which two independent testers can provide the same scores on the same subject how well the tester evaluates the subject |

front 131 comparing measures | back 131 sometimes you need to compare measurements from 2 different tests and direct comparison may not be appropriate; if results are normally distributed, use standard z-scores |
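
Comparing scores across two tests via z-scores can be sketched as follows; the scores, means, and SDs are hypothetical:

```python
def z_score(x, mean, sd):
    # number of standard deviations x lies from the mean
    return (x - mean) / sd

# hypothetical: 85 on test A (mean 80, SD 5) vs 60 on test B (mean 50, SD 10)
za = z_score(85, 80, 5)
zb = z_score(60, 50, 10)
print(za, zb)  # both 1.0 -> equally far above average on each test
```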

front 132 what variables require nonparametric methods? | back 132 categorical variables, count/rank data, skewed distributions; examples: BMI groups, breast cancer incidence, pain scale, vigorous PA in elderly adults (right skew) |

front 133 chi square test | back 133 applied to frequency data (categorical variables); evaluates whether the number of subjects in each category is different from what would be expected; one-way and two-way forms |
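
The chi-square statistic compares observed to expected counts per category; a minimal sketch with hypothetical counts:

```python
def chi_square_stat(observed, expected):
    # sum over categories of (O - E)^2 / E
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# hypothetical one-way example: 60 subjects expected evenly across 3 categories
stat = chi_square_stat([10, 20, 30], [20, 20, 20])
print(stat)  # compare against the critical value at df = categories - 1
```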

front 134 one way chi-square | back 134 when you are interested in the frequency distribution of only one variable |

front 135 two way chi square | back 135 when you are interested in the joint frequency distribution of two variables |

front 136 chi square df | back 136 number of categories minus 1 (one-way); (rows - 1) times (columns - 1) for a two-way table |

front 137 what to know about nonparametric tests | back 137 no assumptions about the distribution of the data; usually use ranks instead of the actual value of the variable (robust to outliers); often less powerful than parametric tests (results in larger p-values and is more conservative) |

front 138 things to consider when selecting stat tests | back 138 how many variables? types of variables (continuous or categorical)? distribution of variables? purpose of the study? |

front 139 one variable analysis | back 139 categorical variable: frequency table; continuous variable: measures of central tendency (normal distribution: mean; non-normal: median/mode) and measures of variability (normal: SD; non-normal: range/interquartile range) |

front 140 2 variable analysis if both are continuous... | back 140 normal distribution: pearson; non-normal: spearman |

front 141 2 variable analysis if both are categorical... | back 141 chi-square |

front 142 2 variable analysis if one is continuous and one is categorical (2 groups) | back 142 normal distribution: t-tests; non-normal: mann-whitney or wilcoxon |

front 143 2 variable analysis if one is continuous and the other is categorical (more than two groups) | back 143 normal distribution: anova, repeated measures anova; non-normal: kruskal-wallis test, friedman test |

front 144 more than 2 variable analysis when one dependent variable and multiple independent variables | back 144 to control for confounding: ANCOVA or multiple regression; to evaluate effect modification: factorial ancova or multiple regression |

front 145 when you have multiple dependent variables use.. | back 145 MANOVA, MANCOVA, multivariate regression |