Test #2 Research Methods Flashcards


created 7 months ago by Luna_Bean
updated 7 months ago by Luna_Bean

1

Explain the difference between open and forced-choice formats for survey questions.

  • open questions: allow respondents to answer any way they want
    • drawback: responses must be coded and categorized
  • forced-choice formats: people give their opinion by picking the best of two or more options
    • used in political polls, personality measures, and yes/no questions

2

Open questions

  • allow respondents to answer any way they want
    • drawback: responses must be coded and categorized

3

Forced-choice formats

  • people give their opinion by picking the best of two or more options
    • used in political polls, personality measures, and yes/no questions

4

Define the three ethical principles from the Belmont report and how each is applied.

  • Principle of Respect for Persons
  • Principle of Beneficence
  • Principle of Justice

5

Principle of Respect for Persons

  • Individuals have autonomy and should be free to make up their own minds about whether they would like to participate
  • Applied → Every person is entitled to the precaution of informed consent (the right of research participants to learn about a research project, know its risks and benefits, and decide whether to participate).

6

Principle of Beneficence

  • assess potential harm to participants & potential benefits to society
  • do not collect identifying information
    • applied → keep participants’ information either anonymous or confidential
  • researchers must take precautions to protect participants from harm and ensure their well-being
  • applied → researchers must carefully assess the risks and benefits of the study they plan to conduct
    • including how the community might benefit or be harmed

7

Principle of Justice

  • asks who bears the burden of research participation
  • there should be a fair balance between the kinds of people who participate in research and the kinds of people who benefit from the research
  • applied → researchers consider the extent to which the participants involved in a study are representative of the kinds of people who would also benefit from the results

8

Define the 5 APA ethical principles and how each is applied.

  • A. Beneficence and nonmaleficence
  • B. Fidelity & responsibility
  • C. Integrity
  • D. Justice
  • E. Respect for rights & dignity

9

A. Beneficence and nonmaleficence

  • treat people in ways that benefit them; do not cause suffering; conduct research that will benefit society

10

B. Fidelity & responsibility

  • establish relationships of trust; accept responsibility for professional behavior (in research, teaching and clinical practice)

11

C. Integrity

  • strive to be accurate, truthful and honest in one’s role as researcher, teacher or practitioner

12

D. Justice

  • strive to treat all groups of people fairly
  • sample research participants from the same populations that will benefit from the research
  • be aware of biases

13

E. Respect for rights & dignity

  • Recognize that people are autonomous agents. Protect people’s rights, including the right to privacy, the right to give consent for treatment or research, and the right to have participation treated confidentially.
  • Understand that some populations may be less able to give autonomous consent, and take precautions against coercing such people.

14

What are the three major ethics violations from the Tuskegee Syphilis Study?

  • The participants were not treated respectfully
    • researchers lied to them about the nature of their participation and withheld information
    • as a result, the men were not given a chance to make a fully informed decision about participating in the study
  • The participants were harmed
    • participants were subjected to painful and dangerous tests
    • participants and their families were not told about, or given, the treatment that could have cured them until years later
  • The researchers targeted a disadvantaged social group
    • syphilis affects people from all social backgrounds and ethnicities, yet all the men in the study were poor and African American

15

What are some of the ethical questions associated with the Milgram study?

  • To what extent was it ethical to put unsuspecting volunteers through such a stressful experience?
  • if ethical → it requires balancing the potential risks to participants against the value of the knowledge gained

16

What does it mean to say we are balancing risks to participants with risks to society? What other needs are balanced when conducting an ethical study with human participants?

  • means we are weighing potential risks to participants against the knowledge gained
    • when a study puts participants in a harmful or stressful situation, we must ask how the benefits weigh out → in Milgram’s study, the knowledge gained was argued to outweigh the risks
  • other needs to balance: benefits to society vs. risks to participants, avoiding coercion or excessive rewards for participation, debriefing participants after the study, and upholding the 5 APA ethical principles

17

Explain the problems with poorly worded survey questions, such as leading, double-barreled, or negatively worded questions. Give examples of each.

  • question wording matters because you need to know how to ask questions without biasing participants’ responses
    • different versions of the same question can lead to different results
  • leading questions: problematic b/c the wording encourages one response more than others
    • weakens construct validity
  • double-barreled questions:
    • ask 2 questions in 1
      • weakens construct validity b/c participants may answer only the 1st or the 2nd half of the question
    • the two halves may have different answers
    • ex. Do you enjoy swimming & wearing sunscreen?
  • negatively worded questions:
    • the question becomes incomprehensible and confusing
    • contains negatively phrased statements and weakens construct validity
    • ex. “People who do not drive with a suspended license should never be punished”
  • question order: the order in which questions are asked can also influence responses

18

leading questions

  • can be problematic b/c wording encourages one response more than others
    • weakens construct validity

19

double-barreled

  • asks 2 questions in 1
    • weakens construct validity b/c participants may answer only the 1st or the 2nd half of the question
  • may have different answers
  • ex. Do you enjoy swimming & wearing sunscreen?

20

negatively worded questions

  • question is incomprehensible and confusing
  • contains negatively phrased statements and weakens construct validity
  • ex. “People who do not drive with a suspended license should never be punished”

21

What are some of the ways that participants use shortcuts when answering survey questions, such as response sets or fence-sitting? Explain and give examples.

  • Participants can use shortcuts such as response sets and fence-sitting
    • response sets (nondifferentiation): shortcuts respondents may use to answer items in a long survey, rather than responding to the content of each item
      • weakens construct validity b/c respondents are not saying what they really think
      • ex. answering every item in a long survey with “strongly agree”
    • fence-sitting: playing it safe by answering in the middle of the scale for every question in a survey or interview
      • ex. people may answer in the middle (or say “I don’t know”) when the question is confusing or unclear
      • weakens construct validity b/c it suggests that some respondents don’t have an opinion when they actually do

22

What are some techniques that researchers can use to avoid these shortcuts?

  • To avoid these, researchers can
    • response sets: include reverse-worded items (items re-phrased to mean the opposite)
      • helps slow people down so they answer each item more carefully
      • improves construct validity b/c high or low averages would reflect true happiness or unhappiness rather than reluctant answering (reverse-worded items are re-coded before scoring; see the sketch below)
    • fence-sitting:
      • researchers can take away the neutral option
        • drawback → when people really don’t have an opinion, choosing a side is an invalid representation of their true neutral stance
      • use forced-choice questions where participants must pick one of two answers
        • drawback → can frustrate people who feel their opinion is between the 2 answers
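A minimal sketch of the re-coding step for reverse-worded items, assuming a 1–5 Likert scale and hypothetical item names; this is an illustration, not a prescribed procedure.

# Re-code reverse-worded Likert items (1-5 scale assumed) so that high
# scores always mean "more of the construct"; item names are hypothetical.
import pandas as pd

data = pd.DataFrame({
    "item1": [5, 4, 2],
    "item2": [1, 2, 4],   # reverse-worded
    "item3": [5, 5, 1],
    "item4": [2, 1, 5],   # reverse-worded
})

SCALE_MIN, SCALE_MAX = 1, 5
for item in ["item2", "item4"]:
    # A response of 1 becomes 5, 2 becomes 4, and so on.
    data[item] = (SCALE_MIN + SCALE_MAX) - data[item]

# Average across items; a respondent who straight-lined "agree" now gets a
# middling score instead of an artificially extreme one.
data["scale_score"] = data.mean(axis=1)
print(data)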

23

In addition to shortcuts, what are three other problems that can occur on surveys, and how can they be avoided?

  • trying to look good
    • socially desirable responding (faking good): giving answers on a survey that make one look better than one really is
    • faking bad: less common, but the opposite
    • can be avoided → assure participants that their responses are anonymous and remind them of this before sensitive questions, ask people’s friends to rate them, use computerized measures to evaluate people’s opinions
  • self-reporting “more than they can know”
    • can be inaccurate b/c when people are asked to describe why they are thinking, feeling, and behaving the way they do, they often give inaccurate responses
  • self-reporting memories of events
    • memories for significant life experiences can be accurate
      • ex. adverse childhood experiences (ACEs)
    • people’s certainty about their memories might not match their accuracy
    • vividness and confidence are unrelated to how accurate memories are

24

response sets (nondifferentiation)

  • shortcuts respondents may use to answer items in a long survey, rather than responding to the content of each item
    • weakens construct validity b/c respondents are not saying what they really think
    • ex. answering every item in a long survey with “strongly agree”

25

fence-sitting

  • playing it safe by answering in the middle of the scale for every question in a survey or interview
    • ex. people may answer in the middle (or say “I don’t know”) when the question is confusing or unclear
    • weakens construct validity b/c it suggests that some respondents don’t have an opinion when they actually do

26

trying to look good

  • Socially desirable responding (faking good): giving answers on a survey that make one look better than one really is
  • faking bad: less common, but the opposite
  • can be avoided → assure participants that their responses are anonymous and remind them of this before sensitive questions, ask people’s friends to rate them, use computerized measures to evaluate people’s opinions

27

self-reporting “more than they can know”

can be inaccurate b/c when people are asked to describe why they are thinking, feeling, and behaving the way they do, they often give inaccurate responses

28

Self-reporting memories of events

  • memories for significant life experiences can be accurate
    • ex. adverse childhood experiences (ACEs)
  • people’s certainty about their memories might not match their accuracy
  • vividness and confidence are unrelated to how accurate memories are

29

What kind of claim is best made with observational data? Why? When and how can observations be better than self-reports? When are they worse?

  • Frequency claims are best made with observational data, b/c they rest on a single measured variable that can be operationalized through direct observation. Observations can also be used to operationalize variables in association claims and causal claims. In each case, the claim depends on whether the observational measures have good construct validity.
    • ex. watching families eat dinner
  • Observations can be better than self-reports because they can tell a more accurate story.
    • ex. if researchers asked participants to estimate how many words they spoke in a day, that would be difficult to do and wouldn’t give an accurate measure.
  • Observations can be worse when construct validity is threatened, that is, when observer bias, observer effects, or reactivity are present.

30

Explain some of the pitfalls (e.g., observer bias, observer effects, reactivity) when making observations.

  • Observer bias: when observers see what they expect to see (a form of confirmation bias)
    • occurs when observer expectations influence the interpretation of participant behaviors or the outcomes of the study
  • Observer effects: when participants confirm observer expectations
    • a change in the behavior of participants in the direction of observer expectations (expectancy effect)
    • ex. Clever Hans, a horse that appeared to do math but was actually responding to subtle nonverbal cues from his trainer
  • Reactivity
    • a change in behavior of study participants (such as acting less spontaneously) because they are aware they are being watched

31

What can be done to remedy these?

  • Can prevent observer effects and bias by
    • training observers well, using clear rating instructions (codebooks)
    • using multiple observers → allows researchers to assess the interrater reliability of their measures (see the sketch below)
  • Can prevent reactivity by
    • blending in or making unobtrusive observations
    • waiting it out (observing until participants become accustomed to the observer’s presence)
    • measuring the behavior’s results: measuring traces a particular behavior leaves behind
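As a rough illustration of the interrater-reliability check mentioned above, one common option for categorical codings is Cohen’s kappa; the observers, categories, and data below are entirely hypothetical.

# Agreement between two observers' categorical codings, summarized with
# Cohen's kappa; all labels and data are made up for illustration.
from sklearn.metrics import cohen_kappa_score

observer_a = ["play", "aggress", "play", "idle", "play", "aggress"]
observer_b = ["play", "aggress", "idle", "idle", "play", "play"]

kappa = cohen_kappa_score(observer_a, observer_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values near 1 indicate strong agreement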

32

Explain why external validity matters for frequency claims.

  • When interrogating external validity, we ask whether the results of a particular study can be generalized to some larger population of interest
  • Important because external validity concerns both samples and settings
    • samples: whether the results generalize to other populations
    • settings: whether the results generalize to other settings

33

What is the difference between a population, a sample, and a census?

  • population: a larger group from which a sample is drawn; the group to which the study’s conclusions are intended to be applied (the entire set of people or products of interest)
    • ex. when eating from a bag of chips, the whole bag/box is the population being drawn from
  • sample: a group of people, animals, or cases used in a study; a subset of the population of interest (a smaller set taken from the population)
    • ex. a single chip (or bite) taken from the bag is the sample
  • census: a set of observations that contains all members of the population of interest
    • ex. tasting every chip in the bag (the entire population)

34

population

  • a larger group from which a sample is drawn; the group to which the study’s conclusions are intended to be applied (the entire set of people or products of interest)
    • ex. when eating from a bag of chips, the whole bag/box is the population being drawn from

35

sample

  • a group of people, animals, or cases used in a study; a subset of the population of interest (a smaller set taken from the population)
    • ex. a single chip (or bite) taken from the bag is the sample

36

census

  • a set of observations that contains all members of the population of interest
    • ex. tasting every chip in the bag (the entire population)

37

Explain several probability sampling techniques and give examples.

  • Simple random sampling
  • systematic sampling
  • cluster sampling
  • multi-stage sampling
  • stratified random sampling
  • oversampling
  • random assignment
  • (a brief code sketch contrasting simple random, systematic, and stratified sampling appears below)
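A minimal sketch contrasting three of these techniques on a hypothetical roster of 1,000 people; the group labels, sizes, and proportions are assumptions made up for illustration.

# Contrast simple random, systematic, and stratified random sampling on a
# fabricated population; nothing here reflects a real study.
import random

population = [f"person_{i}" for i in range(1000)]
strata = {p: ("groupA" if i < 300 else "groupB") for i, p in enumerate(population)}

# Simple random sampling: every member has an equal chance of selection.
simple_random = random.sample(population, k=100)

# Systematic sampling: random starting point, then every Nth member.
n = 10
start = random.randrange(n)
systematic = population[start::n]

# Stratified random sampling: sample at random within each demographic
# category in proportion to its share of the population (30% / 70% here).
group_a = [p for p in population if strata[p] == "groupA"]
group_b = [p for p in population if strata[p] == "groupB"]
stratified = random.sample(group_a, 30) + random.sample(group_b, 70)

print(len(simple_random), len(systematic), len(stratified))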

38

sampling those who volunteer

  • self-selection: a form of sampling bias that occurs when a sample contains only people who volunteer to participate
  • ex. online ratings of a product come only from customers who choose to rate it, so it is hard to know whether they represent everyone who bought it

39

sampling only those who are easy to contact

  • convenience sampling: choosing a sample based on those who are easiest to access and readily available; a biased sampling technique
  • ex. psychology studies conducted by psychology professors often use college students as participants; this is an easy-to-reach sample that may not represent populations who are less educated, older, or younger

40

Explain several ways of obtaining a sample that might result in a biased sample.

  • sampling those who are easy to contact
  • sampling those who volunteer

41

Be sure to know the difference between random sampling and random assignment, and when each is necessary.

  • random sampling: when researchers create a sample using some random method so that each member of the population has an equal chance of being in the sample
    • Necessary when generalizing to a population: enhances external validity
  • random assignment: when a random method is used to put participants into separate groups
    • Necessary for causal claims: enhances internal validity

42

Explain why a representative sample is not always necessary.

not necessary when external validity is not vital to the study’s claim

43

Simple random sampling

  • the most basic form of probability sampling, in which the sample is chosen completely at random from the population of interest
    • ex. when pollsters need a random sample, they program computers to randomly select phone numbers or home addresses from a database of eligible people

44

systematic sampling

  • in which a researcher uses a randomly chosen number N, and counts off every Nth member of a population to achieve a sample
    • ex. If the population of interest is a room full of students, the researcher would start with the fourth person in the room and then count off, choosing every 7th person until the sample is the desired size

45

cluster sampling

  • probability sampling technique where clusters of participants within the population of interest are selected at random
  • followed by data collection from all individuals in each cluster

46

oversampling

  • a variation of stratified random sampling in which the researcher intentionally overrepresents one or more groups
    • ex. drawing a larger-than-proportional sample from a group that makes up only a small percentage of the population, so that the subgroup is large enough to analyze

47

random assignment

  • assigning participants into different experimental groups, only used in experimental designs
    • ex. in an experiment testing how exercise affects well-being, random assignment would make it likely that the people in the treatment and comparison groups are about equally happy at the start.

48

stratified random sampling:

  • the researcher identifies particular demographic categories, or strata, and then randomly selects individuals within each category
    • ex. selecting a sample of 1,000 Canadians in which people of South Asian descent make up the same proportion as they do in the Canadian population, with the members of each category chosen at random

49

multi-stage sampling:

  • (similar to cluster sampling) involves at least 2 stages: a random sample of clusters, followed by a random sample of people within the selected clusters
    • ex. a researcher would start with a list of high schools in the state, select a random sample of schools, and then select a random sample of students from each of the selected schools

50

random sampling

  • when researchers create a sample using some random method so that each member of the population has an equal chance of being in the sample
    • Necessary when generalizing to a population: enhances external validity

51

random assignment

  • when a random method is used to put participants into separate groups
    • Necessary for causal claims: enhances internal validity

52

Explain what types of studies support association claims

  • Bivariate correlation: an association that involves exactly 2 variables
    • 3 types → positive, negative, and zero
  • Use studies where you measure the first variable and the second variable in the same group of people
  • you then use graphs and statistics to describe the type of relationship they (variables) have with each other

53

Explain construct validity of an association claim

  • Ask about the construct validity of each variable
  • How well was each of the two variables measured?
  • You would need to ask questions about researchers’ operationalizations of the variables
  • questions one would ask after knowing the kinds of measurements
    • Does the measure have good reliability? Is it measuring what it’s intended to measure? What is the evidence for its face validity, its concurrent validity, its discriminant and convergent validity?

54

Explain each of the six questions that need to be answered when checking statistical validity for an association claim.

  • 1) How strong is the relationship?
    • all associations are not equal; some are stronger than others
  • 2) How precise is the estimate?
    • a study’s correlation coefficient is the point estimate of the true correlation in the population; the 95% CI conveys how precise that estimate is
  • 3) Has it been replicated?
    • the study can be conducted again (replication) to obtain multiple estimates
  • 4) Could outliers be affecting the association?
    • outliers matter the most when a sample is small (a small sample also has a wider CI)
  • 5) Is there a restriction of range?
    • restriction of range: in a bivariate correlation, the absence of a full range of possible scores on one of the variables, so the relationship in the sample underestimates the true correlation
  • 6) Is the association curvilinear?
    • an association between 2 variables that is not a straight line; as one variable increases, the level of the other variable increases and then decreases
  • (a short code sketch illustrating the strength, outlier, and restriction-of-range checks follows below)
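A short illustration of three of these checks using fabricated data (numpy assumed): estimating r, seeing how one outlier can change it in a small-ish sample, and seeing how a restricted range tends to shrink it.

# Strength, outlier, and restriction-of-range checks on made-up data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 0.4 * x + rng.normal(scale=0.9, size=50)   # a moderate positive association

r = np.corrcoef(x, y)[0, 1]
print(f"r without outlier: {r:.2f}")

# Add one extreme case; with a small sample, a single outlier can inflate
# (or mask) the correlation.
x_out = np.append(x, 6.0)
y_out = np.append(y, 6.0)
print(f"r with outlier: {np.corrcoef(x_out, y_out)[0, 1]:.2f}")

# Restriction of range: correlating only the cases with above-average x
# typically underestimates the true association.
mask = x > 0
print(f"r with restricted range: {np.corrcoef(x[mask], y[mask])[0, 1]:.2f}")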

55

Explain why internal validity is not possible with association claims.

  • internal validity requires that there be no plausible alternative explanations for the relationship between the 2 variables; because an association study only measures its variables, such alternatives (third variables) cannot be ruled out
  • a potential third variable can be a problem because of
    • spurious association: a bivariate association that is attributable only to systematic mean differences in subgroups within the sample; the original association is not present within the subgroups
  • proposing a third variable is not necessarily an internal validity problem
  • when interrogating a simple association claim, it is not necessary to focus on internal validity as long as the claim remains a simple association claim

56

Explain external validity for association claims, including what a moderating variable is/does.

  • when interrogating external validity, recall that the size of the sample does not matter as much as the way the sample was selected from the population of interest
  • importance of external validity → an association generalizes when the pattern of results is the same in both groups, even if the exact numbers are not identical
  • moderating variables
    • moderator: a variable that, depending on its level, changes the relationship between 2 other variables
    • moderators can inform external validity

57

what a moderating variable is/does

  • moderating variables
    • moderator: a variable that, depending on its level, changes the relationship between 2 other variables
    • moderators can inform external validity

58

How do longitudinal studies help establish causation? There are three types of correlations that can be tested with longitudinal studies. Explain and give examples of each.

  • Longitudinal studies can help establish causation by providing evidence for temporal precedence, and they can be adapted to test causal claims
  • Cross-sectional correlations
    • a correlation between 2 variables that are measured at the same time
    • shows covariance
  • Autocorrelations
    • the correlation of one variable with itself, measured at two different times
    • shows the stability of a variable over time
  • Cross-lag correlations
    • a correlation between an earlier measure of one variable and a later measure of another variable
    • addresses directionality and helps establish temporal precedence
  • (a short code sketch computing all three from hypothetical two-wave data follows below)
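A sketch of the three correlations using fabricated two-wave data for two hypothetical variables (TV watching and aggression); numpy assumed, and the numbers mean nothing beyond illustration.

# Cross-sectional, auto-, and cross-lag correlations on made-up data.
import numpy as np

rng = np.random.default_rng(1)
n = 200
tv_t1 = rng.normal(size=n)                      # TV watching at time 1
aggr_t1 = 0.3 * tv_t1 + rng.normal(size=n)      # aggression at time 1
tv_t2 = 0.6 * tv_t1 + rng.normal(size=n)        # same variables at time 2
aggr_t2 = 0.4 * tv_t1 + 0.5 * aggr_t1 + rng.normal(size=n)

def r(a, b):
    return np.corrcoef(a, b)[0, 1]

print("cross-sectional (time 1):", round(r(tv_t1, aggr_t1), 2))
print("autocorrelation (TV t1 vs t2):", round(r(tv_t1, tv_t2), 2))
print("cross-lag (TV t1 vs aggression t2):", round(r(tv_t1, aggr_t2), 2))
print("cross-lag (aggression t1 vs TV t2):", round(r(aggr_t1, tv_t2), 2))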

59

Cross-sectional correlations

  • a correlation between 2 variables that are measured at the same time
  • Shows covariance

60

Autocorrelations

  • the correlation of one variable with itself, measured at two different times
  • shows the stability of a variable over time

61

Cross-lag Correlations

  • a correlation between an earlier measure of one variable and a later measure of another variable
  • addresses directionality and helps establish temporal precedence

62

Explain why we cannot always do experiments to establish causation in social science research.

  • in many cases, participants cannot be randomly assigned to levels of a causal variable
    • people cannot be assigned to preferences
    • people cannot be assigned to parenting styles
  • it may also be unethical to assign participants to certain conditions, so researchers should not do so
    • ex. it would be unethical to assign some people (such as children) to harmful conditions

63

How do multiple-regression analyses help address the question of internal validity?

  • multiple regression
    • helps rule out some third variables, but is not a foolproof way of doing so
    • computes the relationship between a predictor variable and a criterion variable, controlling for other predictor variables (see the sketch below)
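A minimal sketch of this kind of analysis, assuming the statsmodels library and entirely hypothetical variables (exercise, age, well-being); it illustrates the idea of statistically controlling for a predictor, not any specific published analysis.

# Regress well-being on exercise while controlling for age (fabricated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
age = rng.normal(size=n)                        # potential third variable
exercise = 0.5 * age + rng.normal(size=n)       # predictor of interest
wellbeing = 0.3 * exercise + 0.4 * age + rng.normal(size=n)  # criterion

X = sm.add_constant(np.column_stack([exercise, age]))
model = sm.OLS(wellbeing, X).fit()

# The coefficient on exercise estimates its relationship with well-being
# when age is held constant.
print(model.params)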

64

Explain what a mediating variable is/does, and compare it to a moderator variable and the third variable problem.

  • Mediating variable: asks why/how a relationship exists; the variable that explains why the two variables are related (a meaningful part of the explanation)
    • internal to the causal relationship (not problematic)
  • Moderator variable: asks when and for whom the relationship exists
  • 3rd variable: external and not part of the explanation
    • external to the 2 variables in the bivariate correlation (problematic)

65

mediating variable

  • asks why/how a relationship exists; the variable that explains why the two variables are related (a meaningful part of the explanation)
    • internal to the causal relationship (not problematic)

66

moderator variable

  • when and for whom the relationship exists
  • can change the relationship between the other two variables (making it more intense or less intense).

67

third variable

  • external and not part of the explanation
  • external to the 2 variables in the bivariate correlation (problematic)

68

Explain how we might use statistics to control for third variables. Why would we want to do this?

  • evaluate whether a relationship between two variables still holds when a third variable is statistically controlled for; we do this to help rule out third-variable explanations for the association
    • “control for” → holding a 3rd variable at a constant level when investigating the association between 2 other variables
    • testing a third variable with multiple regression can be thought of as identifying subgroups: checking whether the association holds at each level of the third variable

69

Explain what “beta” is in regression analyses.

  • there is one beta for each predictor variable
  • beta is similar to r, but unlike r it takes the other predictors in the model into account
  • like r, beta can be positive or negative
    • positive → indicates a positive relationship between that predictor variable and the criterion variable, when the other predictor variables are statistically controlled for
    • negative → indicates a negative relationship between the two variables (when the other predictors are controlled for)
  • when beta is zero, there is no relationship (controlling for the other predictors)
  • the farther beta is from zero, the stronger the relationship; betas closer to zero indicate weaker relationships
  • (a small code sketch for obtaining standardized betas follows below)
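One common way to obtain standardized betas is to z-score every variable before fitting the regression; the sketch below assumes statsmodels and uses fabricated data and predictor names.

# Standardized betas via z-scored predictors and criterion (made-up data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
pred1 = rng.normal(size=n)
pred2 = rng.normal(size=n)
criterion = 0.6 * pred1 - 0.2 * pred2 + rng.normal(size=n)

def z(v):
    # Convert a variable to z-scores (mean 0, standard deviation 1).
    return (v - v.mean()) / v.std()

X = sm.add_constant(np.column_stack([z(pred1), z(pred2)]))
betas = sm.OLS(z(criterion), X).fit().params

# betas[1] and betas[2] are the standardized betas: sign gives the direction
# of the relationship controlling for the other predictor, and values farther
# from zero indicate stronger relationships.
print(betas)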