Psych 103 final
Between-groups design is used to assess the effects of different levels of an independent variable by administering each level to a different group of subjects and then comparing the status or performance of the groups on the dependent variable.
(independent groups) = each group gets one level of the IV
- Equivalency of groups at each level
- random assignment for between-subjects
- Within-subjects (repeated measures) = each subject gets all levels of the IV
- A “repeated measures” design
- Dependent measures taken multiple times
- Data are dependent
Graphic Rating scale
Make a mark along a line
Semantic differential scale
Respondents rate any concept (persons, objects, ideas) on a series of bipolar 7-point scales.
the happy-sad faced rating scales
Labeling response alternatives
alpha level
The probability of incorrectly rejecting the null hypothesis that is used by a researcher to decide whether an outcome of a study is statistically significant (most commonly, researchers use a probability of .05).
alternate forms reliability
Assessment of reliability by administering two different forms of the same measure to the same individuals at two points in time.
Information that is obtained from stored records including written, video, audio, and digital sources.
archival research
The use of existing sources of information for research. Sources include statistical records, survey archives, and written records.
attrition
The loss of subjects who decide to leave an experiment. See mortality.
baseline
In a single case design, the subject's behavior during a control period before introduction of the experimental manipulation.
autonomy (Belmont Report)
Principle that individuals in research investigations are capable of making a decision of whether to participate
beneficence (Belmont Report)
Principle that research should have beneficial effects while minimizing any harmful effects.
between-subjects design
An experiment in which different subjects are assigned to each group. Also called independent groups design.
carryover effect
A problem that may occur in repeated measures designs if the effects of one treatment are still present when the next treatment is given.
case study
A descriptive account of the behavior, past history, and other relevant factors concerning a specific individual.
ceiling effect
Failure of a measure to detect a difference because it was too easy (also see floor effect).
central tendency
A single number or value that describes the typical or central score among a set of scores.
cluster sampling
A probability sampling method in which existing groups or geographic areas, called clusters, are identified. Clusters are randomly sampled and then everyone in the selected clusters participates in the study.
conceptual replication
A type of replication of research using different procedures for manipulating or measuring the variables.
concurrent validity
The construct validity of a measure is assessed by examining whether groups of people differ on the measure in expected ways.
confidence interval
An interval of values within which there is a given level of confidence (e.g., 95%) that the population value lies.
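As a quick numeric sketch (the scores are hypothetical; the 1.96 multiplier gives an approximate 95% interval for a sample mean):

```python
from math import sqrt
from statistics import mean, stdev

scores = [72, 75, 78, 80, 83, 85, 88, 90]       # hypothetical exam scores
m = mean(scores)
se = stdev(scores) / sqrt(len(scores))          # standard error of the mean
ci_low, ci_high = m - 1.96 * se, m + 1.96 * se  # approximate 95% confidence interval
```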
confounding variable
A variable that is not controlled in a research investigation. In an experiment, the experimental groups differ on both the independent variable and the confounding variable.
construct validity
The degree to which a measurement device accurately measures the theoretical construct it is designed to measure.
content analysis
Systematic analysis of recorded communications.
content validity
An indicator of construct validity of a measure in which the content of the measure is compared to the universe of content that defines the construct.
control series design
An extension of the interrupted time series quasi-experimental design in which there is a comparison or control group.
convergent validity
The construct validity of a measure is assessed by examining the extent to which scores on the measure are related to scores on other measures of the same construct or similar constructs.
correlation coefficient
An index of how strongly two variables are related to each other.
covariation of cause and effect
Part of causal inference; observing that a change in one variable is accompanied by a change in a second variable.
criterion variable
The variable (score) that is predicted based upon an individual's score on another variable (the predictor variable). Conceptually similar to a dependent variable.
Cronbach's alpha
An indicator of internal consistency reliability assessed by examining the average correlation of each item (question) in a measure with every other question.
cross-sectional method
A developmental research method in which persons of different ages are studied at only one point in time; conceptually similar to an independent groups design.
curvilinear relationship
A relationship in which changes in the values of the first variable are accompanied by both increases and decreases in the values of another variable.
deception
Misinformation that a participant receives during a research investigation.
degrees of freedom ( df )
A concept used in tests of statistical significance; the number of observations that are free to vary to produce a known outcome.
dependent variable
The variable that is the subject's response to, and dependent on, the level of the manipulated independent variable.
descriptive statistics
Statistical measures that describe the results of a study; descriptive statistics include measures of central tendency (e.g., mean), variability (e.g., standard deviation), and correlation (e.g., Pearson r).
discriminant validity
The construct validity of a measure is assessed by examining the extent to which scores on the measure are not related to scores on conceptually unrelated measures.
double-blind procedure
A procedure wherein both the experimenter and the participant are unaware of whether the participant is in the experimental (treatment) or the control condition.
effect size
The extent to which two variables are associated. In experimental research, the magnitude of the impact of the independent variable on the dependent variable.
empiricism
Use of objective observations to answer a question about the nature of behavior.
error variance
Random variability in a set of scores that is not the result of the independent variable. Statistically, the variability of each score from its group mean.
exact replication
A type of replication of research using the same procedures for manipulating and measuring the variables that were used in the original research.
experimental control
Eliminating the influence of an extraneous variable on the outcome of an experiment by keeping the variable constant in the experiment.
experimental method
A method of determining whether variables are related, in which the researcher manipulates the independent variable and controls all other variables either by randomization or by direct experimental control.
experimenter bias (expectancy eff ects)
Any intentional or unintentional influence that the experimenter exerts on subjects to confirm the hypothesis under investigation.
external validity
The degree to which the results of an experiment may be generalized.
F test or ANOVA (analysis of variance)
A statistical significance test for determining whether two or more means are significantly different. F is the ratio of systematic variance to error variance.
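The F ratio can be computed directly from this definition; the three groups and their scores below are hypothetical:

```python
from statistics import mean

# Hypothetical test scores for three study-time groups (low, medium, high).
groups = [[60, 65, 62], [70, 72, 75], [80, 83, 85]]
all_scores = [s for g in groups for s in g]
grand_mean = mean(all_scores)
k, n_total = len(groups), len(all_scores)

# Systematic (between-groups) variance: group means vs. the grand mean.
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)

# Error (within-groups) variance: each score vs. its own group mean.
ss_within = sum((s - mean(g)) ** 2 for g in groups for s in g)
ms_within = ss_within / (n_total - k)

f_ratio = ms_between / ms_within  # F well above 1 suggests the group means differ
```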
face validity
The degree to which a measurement device appears to accurately measure a variable.
factorial design
A design in which all levels of each independent variable are combined with all levels of the other independent variables. A factorial design allows investigation of the separate main effects and interactions of two or more independent variables.
falsifiability
The principle that a good scientific idea or theory should be capable of being shown to be false when tested using scientific methods.
fatigue effect
Deterioration in participant performance with repeated testing.
field experiment
An experiment that is conducted in a natural setting rather than in a laboratory setting.
floor effect
Failure of a measure to detect a difference because it was too difficult (also see ceiling effect).
frequency distribution
An arrangement of a set of scores from lowest to highest that indicates the number of times each score was obtained.
frequency polygon
A graphic display of a frequency distribution in which the frequency of each score is plotted on the vertical axis, with the plotted points connected by straight lines.
haphazard (convenience) sampling
Selecting subjects in a haphazard manner, usually on the basis of availability, and not with regard to having a representative sample of the population; a type of nonprobability sampling.
histogram
Graphic representation of a frequency distribution using bars to represent each score or group of scores.
history effect
As a threat to the internal validity of an experiment, refers to any outside event that is not part of the manipulation that could be responsible for the results.
hypothesis
An assertion about what is true in a particular situation; often, a statement asserting that two or more variables are related to one another.
independent groups design
An experiment in which different subjects are assigned to each group. Also called between-subjects design.
independent variable
The variable that is manipulated to observe its effect on the dependent variable.
inferential statistics
Statistics designed to determine whether results based on sample data are generalizable to a population.
interaction
Situation in which the effect of one independent variable on the dependent variable changes, depending on the level of another independent variable.
internal consistency reliability
Reliability assessed with data collected at one point in time with multiple measures of a psychological construct. A measure is reliable when the multiple measures provide similar results.
internal validity
The certainty with which results of an experiment can be attributed to the manipulation of the independent variable rather than to some other, confounding variable.
interrater reliability
An indicator of reliability that examines the agreement of observations made by two or more raters (judges).
interrupted time series design
A design in which the effectiveness of a treatment is determined by examining a series of measurements made over an extended time period both before and after the treatment is introduced. The treatment is not introduced at a random point in time.
interval scale
A scale of measurement in which the intervals between numbers on the scale are all equal in size.
interviewer bias
Intentional or unintentional influence exerted by an interviewer in such a way that the actual or interpreted behavior of respondents is consistent with the interviewer's expectations.
item-total correlation
The correlation between scores on individual items with the total score on all items of a measure.
naturalistic observation
Descriptive method in which observations are made in a natural social setting. Also called field observation.
justice (Belmont Report)
Principle that all individuals and groups should have fair and equal access to the benefits of research participation as well as potential risks of research participation.
Latin square
A technique to control for order effects without having all possible orders.
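One common scheme of this kind is a Latin square, in which each condition appears exactly once in each ordinal position across subjects. A minimal sketch with hypothetical condition labels:

```python
# Hypothetical conditions; each row is one subject's order of treatments.
conditions = ["A", "B", "C", "D"]
n = len(conditions)

# Rotation-based Latin square: every condition appears exactly once in each
# ordinal position across the set of orders.
orders = [[conditions[(start + i) % n] for i in range(n)] for start in range(n)]
```

Only four orders are needed here, rather than all 24 possible orders of four conditions.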
longitudinal method
A developmental research method in which the same persons are observed repeatedly as they grow older; conceptually similar to a repeated measures design.
main effect
The direct effect of an independent variable on a dependent variable.
multiple correlation
A correlation between one variable and a combined set of predictor variables.
maturation
As a threat to internal validity, the possibility that any naturally occurring change within the individual is responsible for the results.
mean
A measure of central tendency, obtained by summing scores and then dividing the sum by the number of scores.
median
A measure of central tendency; the middle score in a distribution of scores that divides the distribution in half.
meta-analysis
A set of statistical procedures for combining the results of a number of studies in order to provide a general assessment of the relationship between variables.
mode
A measure of central tendency; the most frequent score in a distribution of scores.
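The three measures of central tendency can be compared on a small hypothetical distribution; note how the single high score pulls the mean above the median:

```python
from statistics import mean, median, mode

scores = [2, 3, 3, 4, 5, 7, 11]  # hypothetical, positively skewed scores
m, md, mo = mean(scores), median(scores), mode(scores)
# The outlier 11 pulls the mean (5.0) above the median (4); the mode stays at 3.
```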
negative linear relationship
A relationship in which increases in the values of the fi rst variable are accompanied by decreases in the values of the second variable.
nominal scale
A scale of measurement with two or more categories that have no numerical (less than, greater than) properties.
Outcome of research in which two variables are not related; changes in the first variable are not associated with changes in the second variable.
nonexperimental method
Use of measurement of variables to determine whether variables are related to one another. Also called correlational method.
nonprobability sampling
Type of sampling procedure in which one cannot specify the probability that any member of the population will be included in the sample.
null hypothesis
The hypothesis, used for statistical purposes, that the variables under investigation are not related in the population, that any observed effect based on sample results is due to random error.
operational definition
Definition of a concept that specifies the method used to measure or manipulate the concept.
order effect
In a repeated measures design, the effect that the order of introducing treatment has on the dependent variable.
ordinal scale
A scale of measurement in which the measurement categories form a rank order along a continuum.
partial correlation
The correlation between two variables with the influence of a third variable statistically controlled for.
Pearson product-moment correlation coefficient
A type of correlation coefficient used with interval and ratio scale data. In addition to providing information on the strength of relationship between two variables, it indicates the direction (positive or negative) of the relationship.
population
The defined group of individuals from which a sample is drawn.
positive linear relationship
A relationship in which increases in the values of the first variable are accompanied by increases in the values of the second variable.
practice effect
Improvement in participant performance with repeated testing.
prediction
An assertion concerning what will occur in a particular research investigation.
predictive validity
The construct validity of a measure is assessed by examining the ability of the measure to predict a future behavior.
predictor variable
A variable that is used to make a prediction of an individual's score on another variable (the criterion variable). Conceptually similar to an independent variable.
probability sampling
Type of sampling procedure in which one is able to specify the probability that any member of the population will be included in the sample.
pseudoscience
Claims that are made on the basis of evidence that, despite appearances, is not based on the principles of the scientific method.
purposive sampling
A type of haphazard sample conducted to obtain predetermined types of individuals for the sample.
quota sampling
A sampling procedure in which the sample is chosen to reflect the numerical composition of various subgroups in the population. A haphazard sampling technique is used to obtain the sample.
random assignment
Use of a random "chance" procedure (such as a random number generator or coin toss) to determine the condition in which an individual will participate.
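Random assignment can be sketched with the standard library; the participant IDs and group sizes are hypothetical:

```python
import random

# Twelve hypothetical participant IDs, split into two conditions by chance.
participants = [f"P{i:02d}" for i in range(1, 13)]
random.shuffle(participants)  # chance alone decides the order
treatment, control = participants[:6], participants[6:]
```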
range
A measure of variability. The difference between the highest score and the lowest score.
ratio scale
A scale of measurement in which there is an absolute zero point, indicating an absence of the variable being measured. An implication is that ratios of numbers on the scale can be formed (generally, these are physical measures such as weight or timed measures such as duration or reaction time).
reactivity
A problem of measurement in which the measure changes the behavior being observed.
regression equation
A mathematical equation that allows prediction of one behavior when the score on another variable is known.
reliability
The degree to which a measure is consistent.
repeated measures design
An experiment in which the same subjects are assigned to each group. Also called within-subjects design.
response rate
The percentage of people selected for a sample who actually completed a survey.
response set
A pattern of response to questions on a self-report measure that is not related to the content of the questions.
scatterplot
Graphic representation of each individual's scores on two variables. The score on the first variable is found on the horizontal axis and the score on the second variable is found on the vertical axis.
sequential method
A combination of the cross-sectional and longitudinal designs to study developmental research questions.
simple random sampling
A sampling procedure in which each member of the population has an equal probability of being included in the sample.
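A minimal sketch with a hypothetical population of 100 members, using the standard library's sampling-without-replacement routine:

```python
import random

population = list(range(1, 101))        # a hypothetical population of 100 members
sample = random.sample(population, 10)  # every member equally likely to be drawn
```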
single case experiment
An experiment in which the effect of the independent variable is assessed using data from a single participant.
Solomon four-group design
Experimental design in which the experimental and control groups are studied with and without a pretest.
split-half reliability
A reliability coefficient determined by the correlation between scores on half of the items on a measure with scores on the other half of a measure.
standard deviation
The average deviation of scores from the mean (the square root of the variance).
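The relationship between variance and standard deviation, shown on hypothetical data with the standard library's population formulas:

```python
from math import sqrt
from statistics import pstdev, pvariance

scores = [4, 8, 6, 5, 3, 7]  # hypothetical scores; mean is 5.5
var = pvariance(scores)      # mean of the squared deviations from the mean
sd = pstdev(scores)          # square root of the variance
```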
statistical significance
Rejection of the null hypothesis when an outcome has a low probability of occurrence (usually .05 or less) if, in fact, the null hypothesis is correct.
stratified random sampling
A probability sampling method in which a population is divided into subpopulation groups called strata; individuals are then randomly sampled from each of the strata.
systematic observation
Observations of one or more specific variables, usually made in a precisely defined setting.
testing effect
A threat to internal validity in which taking a pretest changes behavior without any effect on the independent variable.
test-retest reliability
A reliability coefficient determined by the correlation between scores on a measure given at one time with scores on the same measure given at a later time.
t test
A statistical significance test used to compare differences between means.
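A pooled-variance independent-samples t statistic computed from its formula, with hypothetical GPA data for two groups:

```python
from math import sqrt
from statistics import mean, variance

gpa_a = [3.1, 2.8, 3.4, 3.0, 2.9]  # hypothetical GPAs, group A
gpa_b = [3.5, 3.2, 3.6, 3.3, 3.4]  # hypothetical GPAs, group B
n1, n2 = len(gpa_a), len(gpa_b)

# Pooled-variance t statistic for two independent groups
pooled = ((n1 - 1) * variance(gpa_a) + (n2 - 1) * variance(gpa_b)) / (n1 + n2 - 2)
t = (mean(gpa_b) - mean(gpa_a)) / sqrt(pooled * (1 / n1 + 1 / n2))
# Compare |t| with the critical value at df = n1 + n2 - 2 and alpha = .05
```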
Type I error
An incorrect decision to reject the null hypothesis when it is true.
Type II error
An incorrect decision to accept the null hypothesis when it is false.
variable
Any event, situation, behavior, or individual characteristic that varies (that is, has at least two values).
variance
A measure of the variability of scores about a mean; the mean of the sum of squared deviations of scores from the group mean.
within-subjects design
An experiment in which the same subjects are assigned to each group. Also called repeated measures design.
IV | DV | Statistical test
Nominal: male-female | Nominal: vegetarian (yes/no) | Chi-square
Nominal (2 groups): male-female | Interval/ratio: grade point average | t test
Nominal (3 groups): study time (low, medium, high) | Interval/ratio: test score | One-way analysis of variance
Interval/ratio: optimism score | Interval/ratio: sick days last year | Pearson correlation
Nominal (2 or more variables) | Interval/ratio | Analysis of variance (factorial design)
Interval/ratio (2 or more variables) | Interval/ratio | Multiple regression