Purpose of Controls

The purpose of design controls is to provide structure to the experiment or study so that results can be interpreted in a more valid and reliable manner. How do you know that the results you get are not due to chance or to a variety of other conditions separate from what you believe is the major influence? Controls enable the researcher to screen, balance, or otherwise take into account factors that may interfere with the variables in the study. By controlling them in the design, the primary effects of the variables under study can be more clearly determined.

There are eight extraneous variables that, if not controlled, can adversely affect the internal validity of an experiment.
In much research in management and in applied settings, there is little opportunity to use strict controls. Controls are often limited by inconvenience to workers, by the difficulty of keeping information from being shared between workers in experimental and control groups, and by the unwillingness of businesses to withhold a potentially beneficial treatment from some groups. As a result, many thesis/FAP projects are descriptive studies or one-shot comparative samples.
Descriptive studies simply gather data regarding some sample or population and then organize and present it so that it characterizes that group. Such a study usually involves reporting raw frequencies, percentages, means, and standard deviations, and may be presented in tabular or cross-tabular format. One-shot sample comparisons are often performed, in which the results of one group are compared to those of another, typically by correlation or t-test. Nonetheless, the design considerations discussed in this brief paper are very important and should be taken into account in planning your study. You may need to acknowledge that certain aspects of validity or reliability cannot be assured, and that your conclusions are only tentative.
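To make the two approaches concrete, the sketch below computes the descriptive statistics mentioned above (frequencies, mean, standard deviation) and a one-shot two-group comparison via Welch's t statistic. The rating data and group names are invented for illustration only.

```python
import math
import statistics as stats

# Hypothetical satisfaction ratings (1-7 scale) for two work groups;
# these numbers are invented purely to illustrate the computations.
group_a = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]
group_b = [3, 4, 5, 3, 4, 2, 4, 3, 5, 4]

def describe(scores):
    """Raw frequencies, mean, and sample standard deviation for one group."""
    freq = {v: scores.count(v) for v in sorted(set(scores))}
    return freq, stats.mean(scores), stats.stdev(scores)

def welch_t(x, y):
    """Welch's t statistic for two independent samples (unequal variances)."""
    mx, my = stats.mean(x), stats.mean(y)
    vx, vy = stats.variance(x), stats.variance(y)
    return (mx - my) / math.sqrt(vx / len(x) + vy / len(y))

freq_a, mean_a, sd_a = describe(group_a)
t = welch_t(group_a, group_b)
print(f"Group A: freq={freq_a}, mean={mean_a:.2f}, sd={sd_a:.2f}")
print(f"Welch t (A vs. B) = {t:.2f}")
```

Note that the t statistic alone cannot rule out the validity threats discussed below; it only quantifies the difference between the two groups as observed.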
Sources of unexpected variance and influence can be internal or external, and the researcher should attempt to control both in the design. Internal validity is the minimum essential ingredient for data to be interpretable: did the intervention/treatment really make a difference in this experiment? External validity determines whether the conclusions can be generalized to other populations and settings.
The first eight factors below can jeopardize internal validity; the remaining four (items 9 through 12) can jeopardize external validity, or generalizability:
1. History: The events that occur between the first and second measurement in addition to the treatment by the experimental (independent) variable (IV). How do you know whether the outcome (dependent variable--DV) is due to the IV or other uncontrolled events that occurred during the same period as the IV?
2. Maturation: Processes within the subjects that operate with the passage of time, such as aging, getting hungrier, more fatigued, etc. How do you know that the outcome is due to the IV or to natural maturational changes in the subjects?
3. Testing: The effects of taking a test the first time on taking it the second time. If the test is particularly difficult, offensive, or otherwise has an effect on the subjects, it can influence how they respond when taking it again at post-test. How do you know that the outcome is due to the IV and not affected by reaction to the testing?
4. Instrumentation: This occurs when changes in calibration of the measuring instrument or in observers or raters may produce changes in the final measurements. Different forms of an instrument or changes and refinements in rater observations are examples. How do you know that the outcome is due to the IV and not due to instrumental changes?
5. Statistical Regression: For any group that has been selected on the basis of their extreme scores, there is a tendency on retesting for the scores to gravitate toward the mean. How do you know that your outcome is related to the IV and not to regression of scores in your sample?
6. Selection Bias: Bias may occur in the selection of subjects such that the IV outcomes cannot be distinguished from those produced by biased selection. Using a sample of convenience, such as friends, may introduce selection bias. How do you know the outcome is due to IV rather than selection bias?
7. Experimental Mortality: This can occur when a larger number of subjects is lost (e.g., drop out, absent, etc.) from one comparison group than from another, so that the groups are no longer comparable. How do you know if the outcome is due to the IV or to changes in group size or composition?
8. Selection-Maturation Interaction: This can occur in certain multiple-group designs when an interaction between selection and maturation is mistaken for the effect of the experimental variable. How do you know the outcome is due to the IV and not to such an interaction built into the design?
9. Reactive or Interaction Effect of Testing: This occurs when a pretest might increase or decrease a subject's sensitivity or responsiveness to the experimental variable. This would make the results for the pretested group different from those of an unpretested group, thereby limiting generalization. How do you know you can generalize your results when the differences may be due to reactive effects?
10. Interaction Effects of Selection Bias and the Experimental Variable: How do you know that the outcome can be generalized to a larger population, when it might be due to an interaction of the IV and biased subject selection?
11. Reactive Effects of Experimental Arrangements: This can occur when the nature of the experimental situation is different from the circumstances that would naturally occur for the variable under study. How do you know that the outcome can be generalized when the results may be a reaction to the experimental situation itself?
12. Multiple Treatment Interference: This can occur when several treatments are applied to the same respondents, because the effects of prior treatments cannot be erased; there may be a cumulative or interaction effect. How do you know your results are not due to multiple experimental effects?
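Statistical regression (item 5 above) is easy to demonstrate by simulation. The sketch below models test scores as true ability plus independent measurement noise, selects subjects with extreme first-test scores, and retests them; the selected group's retest mean falls back toward the population mean even though nothing about the subjects changed. All parameters (population mean 100, noise level, top 5% cutoff) are illustrative assumptions.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Assume true ability ~ N(100, 10), and each test adds independent
# measurement noise ~ N(0, 10). These values are illustrative only.
N = 10_000
ability = [random.gauss(100, 10) for _ in range(N)]
test1 = [a + random.gauss(0, 10) for a in ability]
test2 = [a + random.gauss(0, 10) for a in ability]

# Select the "extreme" group: the top 5% of scores on the first test.
cutoff = sorted(test1)[int(0.95 * N)]
extreme = [i for i in range(N) if test1[i] >= cutoff]

m1 = sum(test1[i] for i in extreme) / len(extreme)
m2 = sum(test2[i] for i in extreme) / len(extreme)
print(f"Extreme group: first-test mean = {m1:.1f}, retest mean = {m2:.1f}")
```

On retest the extreme group's mean drops partway back toward 100, purely because the noise that pushed their first scores up does not recur. A real treatment effect in such a group would be confounded with exactly this drift.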
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Boston, MA: Houghton Mifflin.