Validity and Reliability

Validity

There are several types of validity that contribute to the overall validity of a study. The two main dimensions are Internal and External validity, and further sub-types can be added under these headings. Most research textbooks explain these in detail (for example, Burns & Grove (2001), or see http://trochim.human.cornell.edu/).

Internal Validity
Internal validity is concerned with the degree of certainty that the observed effects in an experiment are actually the result of the experimental treatment or condition (the cause), rather than of intervening, extraneous or confounding variables. Internal validity is enhanced by increasing control over these other variables.

External Validity
External validity is concerned with the degree to which research findings can be applied to the real world, beyond the controlled setting of the research; this is the issue of generalisability. Attempts to increase internal validity are likely to reduce external validity, because the study is conducted in a manner that is increasingly unlike the real world.

Reliability

There are many forms of reliability, all of which will have an effect on the overall reliability of the instrument and therefore of the data collected. Reliability is an essential prerequisite for validity: it is possible to have a reliable measure that is not valid, but a valid measure must also be reliable.

Below are some of the forms of reliability that the researcher will need to address. These and others are explained in more detail in the research textbooks mentioned previously, or visit http://trochim.human.cornell.edu.

Inter-Rater or Inter-Observer Reliability
Used to assess the degree to which different raters/observers agree when measuring the same phenomenon simultaneously.
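As a minimal illustrative sketch (the ratings below are invented), agreement between two raters assigning categorical codes can be summarised as raw percentage agreement and as Cohen's kappa, which corrects for agreement expected by chance; scikit-learn's cohen_kappa_score is used here.

    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical categorical ratings of the same ten cases by two observers
    rater_a = np.array(["pass", "fail", "pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"])
    rater_b = np.array(["pass", "fail", "pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"])

    percent_agreement = np.mean(rater_a == rater_b)   # raw proportion of identical ratings
    kappa = cohen_kappa_score(rater_a, rater_b)       # agreement corrected for chance

    print(f"Percent agreement: {percent_agreement:.2f}, Cohen's kappa: {kappa:.2f}")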

Test-Retest Reliability
Compares results from an initial test with repeated measures taken later, the assumption being that if the instrument is reliable there will be close agreement over repeated tests, provided the variables being measured remain unchanged.
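A minimal sketch with invented scores: test-retest reliability is commonly summarised as the correlation between the two administrations, here using scipy's pearsonr (an intraclass correlation is a more stringent alternative, but the simple correlation matches the description above).

    from scipy.stats import pearsonr

    # Hypothetical scores for the same eight participants at time 1 and time 2
    time1 = [12, 15, 11, 18, 14, 16, 13, 17]
    time2 = [13, 14, 11, 19, 15, 16, 12, 18]

    r, p_value = pearsonr(time1, time2)  # correlation between the two administrations
    print(f"Test-retest correlation: r = {r:.2f} (p = {p_value:.3f})")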

Parallel-Forms or Alternate-Forms Reliability
Used to assess the consistency of the results of two equivalent forms of a test used to measure the same variable at the same time.
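The calculation mirrors the test-retest case: scores from the two forms are simply correlated. A brief sketch with invented scores:

    import numpy as np

    # Hypothetical scores for the same participants on Form A and Form B
    form_a = [22, 30, 25, 28, 19, 33, 27, 24]
    form_b = [24, 29, 26, 27, 20, 31, 28, 23]

    parallel_forms_r = np.corrcoef(form_a, form_b)[0, 1]  # correlation between the two forms
    print(f"Parallel-forms reliability: r = {parallel_forms_r:.2f}")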

Tests for Homogeneity or Internal Consistency
Individual items in an instrument measuring a single construct should give highly correlated results, reflecting the homogeneity of the items. This can be tested using the split-half method, whereby the items are divided into two halves, the scores on the two halves are correlated, and the correlation is adjusted using the Spearman-Brown formula. A more sophisticated approach is Cronbach's alpha, which in effect draws on all possible split halves.
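A minimal sketch of both approaches, using an invented matrix of ten respondents by six items: the split-half correlation is stepped up with the Spearman-Brown formula, and Cronbach's alpha is computed directly from the item and total-score variances.

    import numpy as np

    # Hypothetical responses: rows = respondents, columns = items of one scale
    items = np.array([
        [4, 5, 4, 4, 5, 4],
        [2, 2, 3, 2, 2, 3],
        [5, 4, 5, 5, 4, 5],
        [3, 3, 3, 4, 3, 3],
        [1, 2, 1, 2, 2, 1],
        [4, 4, 5, 4, 4, 4],
        [2, 3, 2, 2, 3, 2],
        [5, 5, 4, 5, 5, 5],
        [3, 2, 3, 3, 3, 3],
        [4, 4, 4, 3, 4, 4],
    ], dtype=float)

    # Split-half: odd-numbered items vs even-numbered items, then Spearman-Brown correction
    half1 = items[:, ::2].sum(axis=1)
    half2 = items[:, 1::2].sum(axis=1)
    r_half = np.corrcoef(half1, half2)[0, 1]
    spearman_brown = 2 * r_half / (1 + r_half)   # estimated reliability of the full-length test

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    print(f"Split-half (Spearman-Brown): {spearman_brown:.2f}, Cronbach's alpha: {alpha:.2f}")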

Another approach is to examine item-to-total correlations, correlating each item with each other item and with the total score. Items with weaker correlations can be removed to leave an instrument with a high degree of homogeneity.
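A brief sketch of corrected item-total correlations on invented data: each item is correlated with the total of the remaining items, and items falling below a chosen cut-off (0.30 here, purely for illustration) would be candidates for removal.

    import numpy as np

    # Hypothetical responses: rows = respondents, columns = items
    items = np.array([
        [4, 5, 4, 2],
        [2, 2, 3, 5],
        [5, 4, 5, 1],
        [3, 3, 3, 4],
        [1, 2, 1, 3],
        [4, 4, 5, 2],
    ], dtype=float)

    for j in range(items.shape[1]):
        rest_total = np.delete(items, j, axis=1).sum(axis=1)   # total score excluding item j
        r_item_total = np.corrcoef(items[:, j], rest_total)[0, 1]
        flag = "consider removing" if r_item_total < 0.30 else "retain"
        print(f"Item {j + 1}: corrected item-total r = {r_item_total:.2f} ({flag})")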