Are 8-11% of Australian Students Really Cheating?

07 September 2021

Academic Integrity

Local media have picked up on a piece by Guy Curtis in The Conversation which, in typical click-bait style, is titled "1 in 10 uni students submit assignments written by someone else — and most are getting away with it". This makes the pretty strong claim that 10% of students are cheating, so clearly we should all join the Australian universities running around attempting to prevent this plague facing our country (no, not COVID-19 - cheating). But maybe, perhaps, we might take a closer look at these claims.

The Conversation piece is a summary of a paper recently published in the well-regarded journal Studies in Higher Education by a large team of Australian researchers based in universities across the country. In the paper they present a study of 4,098 students and suggest that, rather than the roughly 2% of students cheating that other studies have found, in fact 7.9% of students have sourced assignments from contract cheating sites and 11.4% from other sources. So, case closed - right?

Well, possibly not.

I have some issues with the way the study's findings are being represented, and very strong concerns about how they'll be used by folk who only read the abstract or the Conversation piece and don't critically engage with the full paper. To be clear up front, I have absolutely no concerns about how well the research has been done, and like all good research it begs to be followed up with work that replicates the study and triangulates the results with alternative approaches.

The first point is to realise that the 1 in 10 statistic refers to students having ever engaged in the cheating behaviours studied. There is nothing in the study to indicate how often they have done so. A typical student completes multiple courses (units in Australian parlance) which all typically have multiple assessments. In the absence of any evidence to the contrary, it seems likely that the level of cheating in any one course is much lower. This suggests that a more accurate summary might be that this type of cheating occurs in less than 1% of work submitted by students. Interestingly, this is much closer to the level reported in another recent study that did not depend on self-reports:

"More recently, enhanced detection methods enabled academic integrity investigators at the University of New South Wales (UNSW) to substantiate an increase in the number of cases of contract cheating by nearly 500% in one year (Fellner 2020). Still, their figure of 168 substantiated cases of contract cheating in 2019 represents just 0.27% of UNSW students, or about one-tenth of the rate at which students admit to contract cheating in self-report surveys." (Curtis et al., 2021, pg. 2)

Interestingly, if we reframe the focus to be on the likely level of cheating within individual courses, as opposed to at any point in a degree, the associated estimate also drops dramatically, meaning that even if the inflated value is "true" the resulting number will still be much less than the flagship message of 1 in 10.
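To make that arithmetic concrete, here is a back-of-envelope sketch. The course and assessment counts are illustrative assumptions of mine, not figures from the study:

```python
# Back-of-envelope only: the degree structure below is an
# illustrative assumption, not data from the Curtis et al. paper.
ever_cheated = 0.10            # headline "1 in 10 have ever done it"
courses_per_degree = 24        # e.g. a three-year degree, 8 units a year
assessments_per_course = 3
total_assessments = courses_per_degree * assessments_per_course  # 72

# If each such student outsourced just one piece of work, the share
# of all submitted work that was cheated would be:
per_assignment_rate = ever_cheated / total_assessments
print(f"{per_assignment_rate:.2%}")  # about 0.14% of submitted work
```

Even if each of those students cheated several times, the per-assignment rate stays well under 1%, in line with the UNSW figure quoted above.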

The second point is that while the paper suggests the rates may be as high as 1 in 10, the data it presents directly supports only a much lower level (between 0.2% and 4.2%, depending on the type and combination of responses). They arrive at the much larger number by using a scaling technique, the "Bayesian Truth Serum" method, which its creators claim provides a much better estimate of the "real" level of cheating than that admitted to in an anonymous survey.

The Bayesian Truth Serum was developed by Drazen Prelec (Prelec, 2004). This technique has been widely adopted by those surveying people about morally dubious behaviour (such as questionable research practices - John, Loewenstein and Prelec, 2012). Simplistically, the method combines incentives to encourage people to tell the truth (in the Curtis paper, a donation to charity) with a Bayesian manipulation of the responses, using respondents' assessments of how likely it is that others do similar things, to generate an inflated "true" measure of the prevalence of the behaviour:

"BTS relies on the Bayesian assumption that people maintain a mental model of the world that is biased by their personal experiences, which leads to a belief that personally held opinions are disproportionately present amongst peers" (Frank, Cebrian, Pickard and Rahwan, 2017, pg. 10)

The problem is that they can't prove that the result they present reflects what's actually going on. At best they are providing a sense of what the upper bound of prevalence might be, without any information about the underlying distribution. Demonstrating that this technique is reliable is still very much a focus of ongoing work. It certainly confirms people's assumptions and preconceptions, but the history of psychological research is filled with studies that have had to be abandoned over time as they failed to be reproduced, or when better explanations for the observations were found to have much stronger evidence in their support.

My third problem is a more significant one and relates to the assumption quoted above. The technique depends on people's estimates of other people's behaviours as influenced by their personal experiences. This is fine when there is little evidence of something acting to systematically affect that estimate (such as coin tosses or dice rolls - Frank, Cebrian, Pickard and Rahwan, 2017).

But this is not true for the current study. Australia has been subjected to a moral panic over cheating that has seen many reports in the media, as well as a particular focus from the Australian Tertiary Education Quality and Standards Agency (TEQSA). The narrative that contract cheating is rife is so visible that any student must be forgiven for believing it is routinely occurring in their courses. This must lead folk to amplify the perceived level of cheating well beyond anything that might normally reflect their own personal experience. The result of this research is self-fulfilling: its (to my mind overstated) findings will see folk believe that even more cheating must be occurring, and any attempt to replicate the study will see a further amplified result, and so on. The narrative that other students are cheating is a known influence on students contemplating cheating themselves, and this hyped-up result is hardly a message dissuading those tempted by circumstance and opportunity.

My real problem with this work, beyond all of these concerns, is the way I expect it will be misused: its focus on cheating rather than on supporting student learning, and the damage that focus does to the relationships between staff and students in the university. Contract cheating did not happen as a result of deteriorating moral fibre in the student body. The causes are complex and come from many drivers, including the relentlessly performative culture of the modern university.

Students are under enormous financial and social pressure to gain qualifications, and in many universities they experience a culture of accountability and audit that has driven many to adopt assessment strategies driven by assurance rather than by formative experiences. Far too many courses now overload students with assessments that contribute nothing to learning but simply comfort the university and its regulators that something has been done to ensure nobody cheats and that all qualifications are awarded on the basis of extensive measurement.

Somewhere in here we forgot that university learning is meant to be exciting; it's meant to open our eyes to the astonishing possibilities of our diverse disciplines and the wonders of human knowledge and achievement. Our focus as assessors should be on helping students see where they can improve, and on acknowledging and stimulating their development. Rather than stressing about contract cheating, I'd far rather we spent our energies worrying about why our assessments and systems are harming students and leading small numbers of them to acts that ultimately harm their development, and that first we fix ourselves.

References

Curtis, G. (2021). 1 in 10 uni students submit assignments written by someone else — and most are getting away with it. The Conversation, August 31, 2021. https://theconversation.com/1-in-10-uni-students-submit-assignments-written-by-someone-else-and-most-are-getting-away-with-it-166410

Curtis, G. J., McNeill, M., Slade, C., Tremayne, K., Harper, R., Rundle, K., & Greenaway, R. (2021). Moving beyond self-reports to estimate the prevalence of commercial contract cheating: an Australian study. Studies in Higher Education, 1-13. https://www.tandfonline.com/doi/full/10.1080/03075079.2021.1972093?journalCode=cshe20

Frank, M.R., Cebrian, M., Pickard, G., and Rahwan, I. (2017). Validating Bayesian truth serum in large-scale online human experiments. PLOS One. https://doi.org/10.1371/journal.pone.0177385

John, L.K., Loewenstein, G., and Prelec, D. (2012). Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychological Science 23(5):524-532. doi:10.1177/0956797611430953 https://journals.sagepub.com/doi/10.1177/0956797611430953

Prelec, D. (2004). A Bayesian Truth Serum for Subjective Data. Science 306(5695): 462-466. https://www.science.org/doi/full/10.1126/science.1102081