Entries by Nathan Thompson, PhD

Responses in Common (RIC)

This collusion detection (test cheating) index simply counts the number of responses in common between a given pair of examinees.  For example, both answered ‘B’ to a certain item, regardless of whether ‘B’ was correct or incorrect.  There is no probabilistic evaluation that can be used to flag examinees.  However, it could be of good […]
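As a minimal sketch of the counting described above (function name and data layout are my own, not from the original), RIC for a pair of response vectors is just the number of positions where the two selections agree:

```python
def responses_in_common(resp_a, resp_b):
    """Count items where two examinees selected the same option,
    correct or not."""
    return sum(1 for a, b in zip(resp_a, resp_b) if a == b)

# Five items with options A-D; the pair matches on items 1, 2, and 4
print(responses_in_common(list("ABCDB"), list("ABDDA")))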

Errors in Common (EIC) exam cheating index

This exam cheating index (collusion detection) simply counts the number of errors in common between a given pair of examinees.  For example, if two examinees each scored 80/100 (20 errors) and answered all of the same questions incorrectly, the EIC would be 20. If they both scored 80/100 but had only 10 wrong questions […]
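A sketch of this count, assuming a scoring key is available (names are illustrative): EIC is the number of items that both examinees answered incorrectly, regardless of which wrong option each chose.

```python
def errors_in_common(resp_a, resp_b, key):
    """Count items that BOTH examinees answered incorrectly
    (they need not have chosen the same wrong option)."""
    return sum(1 for a, b, k in zip(resp_a, resp_b, key)
               if a != k and b != k)

key = list("AAAAA")
# Items 2 and 3 are wrong for both examinees -> EIC = 2
print(errors_in_common(list("ABCAA"), list("ADCAA"), key))
```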

Harpp, Hogan, & Jennings: Response Similarity

Harpp, Hogan, and Jennings (1996) revised their Response Similarity Index somewhat from Harpp and Hogan (1993). This produced a new equation for a statistic to detect collusion and other forms of exam cheating: EEIC / D. Here EEIC denotes the number of exact errors in common, or identically wrong responses, and D is the number […]

Harpp and Hogan (1993) Response Similarity Index

Harpp and Hogan (1993) suggested a response similarity index defined as EEIC / EIC, where EEIC denotes the number of exact errors in common, or identically wrong responses, and EIC is the number of errors in common. This is calculated for all pairs of examinees that the researcher wishes to compare.  One advantage of this approach […]
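A sketch of the EEIC / EIC ratio from the two counts defined above (function name and the zero-EIC guard are my own choices, not from the original):

```python
def hh93_index(resp_a, resp_b, key):
    """Harpp & Hogan (1993) style ratio: exact errors in common
    divided by errors in common."""
    # EIC: both examinees wrong, any wrong option
    eic = sum(1 for a, b, k in zip(resp_a, resp_b, key)
              if a != k and b != k)
    # EEIC: both wrong AND chose the identical wrong option
    eeic = sum(1 for a, b, k in zip(resp_a, resp_b, key)
               if a != k and a == b)
    return eeic / eic if eic else 0.0

# Three common errors, two of them identical -> 2/3
print(hh93_index(list("BBCA"), list("BDCA"), list("AAAA")))
```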

Bellezza & Bellezza (1989): Error Similarity Analysis

This index performs error similarity analysis (ESA), estimating the probability that a given pair of examinees would have the same exact errors in common (EEIC), given the total number of errors they have in common (EIC) and the aggregated probability P of selecting the same distractor.  Bellezza and Bellezza utilize the notation of k=EEIC […]
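One plausible reading of this setup (a sketch under my own assumptions, since the exact formula is in the elided text): treat each of the n = EIC common errors as a Bernoulli trial with match probability p = P, and compute the upper-tail binomial probability of observing k = EEIC or more identical wrong answers.

```python
from math import comb

def esa_probability(k, n, p):
    """P(at least k identical wrong answers out of n common errors),
    modeling each common error as a Bernoulli trial with match
    probability p. Whether the original uses a point probability or
    an upper tail is an assumption here."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(k, n + 1))

# 8 identical wrong answers out of 10 common errors, p = 0.3:
# a very small probability, suggesting the match is not chance
print(esa_probability(8, 10, 0.3))
```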

Frary, Tideman, and Watts (1977): g2 collusion index

The Frary, Tideman, and Watts (1977) g2 index is a collusion (cheating) detection index. It is a standardization that evaluates the number of common responses between two examinees in the typical standardized format: observed common responses minus the expected common responses, divided by the expected standard deviation of common responses.  It compares all pairs […]
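The standardization described above can be sketched as follows. How the per-item match probabilities are estimated is the substance of the elided derivation; this sketch simply takes them as given (names are illustrative):

```python
from math import sqrt

def g2_style_index(observed_common, match_probs):
    """Standardized common-response count: (observed - expected) / sd,
    where match_probs[i] is the estimated probability that the pair
    gives the same response on item i. Treats items as independent
    Bernoulli trials, so expected = sum(p) and var = sum(p*(1-p))."""
    expected = sum(match_probs)
    sd = sqrt(sum(p * (1 - p) for p in match_probs))
    return (observed_common - expected) / sd

# 18 matches observed on 20 items where 10 were expected by chance
print(g2_style_index(18, [0.5] * 20))
```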

Wollack 1997 Omega Collusion Index

Wollack (1997) adapted the standardized collusion index of Frary, Tideman, and Watts (1977), g2, to item response theory (IRT) and produced the Wollack Omega (ω) index.  It is clear that the graphics in the original article by Frary et al. (1977) were crude classical approximations of an item response function, so Wollack replaced the probability […]
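As a simplified sketch of the IRT substitution (Wollack actually used the nominal response model; the 2PL-plus-distractor-shares approximation below, and all names, are my own assumptions), the match probability on each item comes from the model rather than classical statistics, and the standardization is then the same as for g2:

```python
from math import exp, sqrt

def p_correct_2pl(theta, a, b):
    """2PL probability of a correct response -- a simplified stand-in
    for the nominal response model used in the actual omega index."""
    return 1.0 / (1.0 + exp(-a * (theta - b)))

def omega_sketch(theta_copier, items, source_resps, copier_resps):
    """items: list of (a, b, key, distractor_shares), where
    distractor_shares maps each wrong option to its assumed share of
    incorrect selections. Returns (observed - expected) / sd."""
    match_probs = []
    for (a, b, key, shares), u in zip(items, source_resps):
        p = p_correct_2pl(theta_copier, a, b)
        # Probability the copier matches the source's observed response u
        match_probs.append(p if u == key else (1 - p) * shares[u])
    observed = sum(1 for u, v in zip(source_resps, copier_resps) if u == v)
    expected = sum(match_probs)
    sd = sqrt(sum(p * (1 - p) for p in match_probs))
    return (observed - expected) / sd

items = [(1.0, 0.0, "A", {"B": 0.5, "C": 0.5})] * 4
print(omega_sketch(0.0, items, ["A"] * 4, ["A"] * 4))
```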

Wesolowsky (2000) Zjk collusion detection index

Wesolowsky’s (2000) index is a collusion detection index, designed to look for exam cheating by finding similar response vectors amongst examinees. It is in the same family as g2 and Wollack’s ω.  Like those, it creates a standardized statistic by evaluating the difference between observed and expected common responses and dividing by a standard error.  […]

Response Time Effort

Wise and Kong (2005) defined an index to flag examinees who are not putting forth minimal effort, based on their response times.  It is called the response time effort (RTE) index. Let K be the number of items in the test. The RTE for each examinee j is RTE_j = (Σ_i TC_ji) / K, where TC_ji is 1 if the response time […]
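A sketch of this proportion, assuming (since the condition on TC_ji is elided) that a response time at or above the item's threshold indicates solution behavior and scores TC_ji = 1:

```python
def response_time_effort(times, thresholds):
    """RTE for one examinee: the proportion of items on which the
    response time meets the item's threshold (TC = 1), averaged over
    the K items. Threshold direction is an assumption here."""
    tc = [1 if t >= th else 0 for t, th in zip(times, thresholds)]
    return sum(tc) / len(tc)

# Four items, a 5-second threshold on each: effort shown on 2 of 4
print(response_time_effort([12.0, 3.0, 25.0, 1.5], [5.0] * 4))
```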