Research & Evaluation
Assessment of Risks and Needs
For the past 10 years, my research team has studied how to assess risks and needs (mental health and substance use) in a criminal justice context, as well as response to intervention (called “responsivity” in the criminal justice literature).
Below are the dissertations my students completed evaluating risk, needs, and response to intervention. Curtis (2018) and Meeks (2020) both studied risk; Gerber (2022) studied needs (specifically substance use); Bell (2020) studied response to intervention (“responsivity”); and Cochran (2018) studied how to conduct cost-benefit studies in criminal justice settings. Chroback (2022) is completing her dissertation on violence risk and substance use, which she will defend in early August.
In addition to those dissertations, I will summarize other unpublished research I have led in criminal justice risk, needs, and responsivity, and what I see as current best practices and future directions.
Since my dissertation in 1998 I have studied the assessment of criminal risk, with a particular focus on risk of violence. In 1996 I was the Principal Investigator (PI) of a series of validation studies of the Texas Risk Assessment System (TRAS), and in 2021 the Texas Department of Criminal Justice (TDCJ) asked me to study sexual violence specifically: whether to continue using the existing assessment of sexual violence risk or to create a new assessment using modern statistical modeling techniques (sometimes called data mining, predictive modeling, or machine learning). [I prefer the more general term “statistical modeling,” but because “machine learning” has gained the most acceptance, I will use that term to refer to statistical techniques used to predict a specific outcome—in this case, criminal behavior.]
Because I have studied assessment of risk the most, I will focus this first blog post on what we know about assessing risk of felony arrest, violence, and sexual violence:
Using actuarial assessment tools such as the TRAS, the Level of Service Inventory–Revised (LSI-R), or any of a handful of other similar instruments is more accurate than using structured or unstructured human judgment to assess criminal risk.
The most accurate and least biased risk assessment tools are created by applying machine learning algorithms to large, representative samples. These algorithms both identify the important predictors/items and weight them more accurately in order to assess risk. See Curtis (2018) and Meeks (2020) below for detailed, technical explanations.
For general recidivism (such as arrest or conviction for a felony), actuarial assessment tools predict with roughly 60% accuracy, while our research shows that machine-learning-based assessments predict with over 80% accuracy. See the reports from the TRAS Validation Studies for more information.
For risk of violence, the discrepancy in accuracy between actuarial assessments and machine-learning-based assessments is larger, because actuarial assessments are less accurate at predicting violence (due to the lower base rate of violence relative to felony arrest). See the TRAS reports, Curtis (2018), and Meeks (2020).
For risk of sexual violence, the discrepancy is even wider (again because the base rate of sexual violence is lower than that of “general” violence). For example, our latest research shows that actuarial tools used specifically to identify sexual violence have a 20% accuracy rate, while our machine-learning-based assessments are approximately 85% accurate. See the TDCJ Sex Offender Study for more details.
In addition to substantial accuracy benefits, machine-learning-based assessments have the added advantage of being less biased with respect to both race and gender across all types of criminal risk (i.e., felony arrest, violence, sexual violence), and because of their operational efficiencies, machine-learning-based risk assessments can be conducted quickly and cheaply (and thus more frequently).
In summary, current best practice for assessing criminal risk of felony arrest, violence, and sexual violence is to create a machine-learning-based risk assessment. These approaches are more accurate, less biased, faster to complete, and cheaper than traditional actuarial risk assessments such as the TRAS or LSI-R.
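To make the weighting point concrete, here is a minimal sketch, in plain Python with entirely hypothetical items and synthetic data (not the TRAS or any model from the studies above), of how a learning algorithm derives item weights from data rather than relying on fixed actuarial point values:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Plain batch-gradient-descent logistic regression: learns one weight
    per predictor/item plus an intercept, so the item weights come from
    the data instead of from preset actuarial points."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * n_features
        gb = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi)))
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            for j in range(n_features):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

# Synthetic data: three hypothetical items; only the first two actually
# relate to the outcome, so the third should receive a near-zero weight.
random.seed(0)
X, y = [], []
for _ in range(400):
    x = [random.random(), random.random(), random.random()]
    p_true = sigmoid(3.0 * x[0] + 1.5 * x[1] - 2.0)
    X.append(x)
    y.append(1 if random.random() < p_true else 0)

w, b = fit_logistic(X, y)
print([round(wj, 2) for wj in w])  # learned item weights
```

With this synthetic data, the learned weights for the two informative items should come out clearly larger than the weight for the irrelevant third item. Production approaches (e.g., gradient-boosted trees) apply the same principle at far greater scale and can capture interactions among items as well.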
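The base-rate argument above can also be made concrete with Bayes' rule. The sketch below uses hypothetical sensitivity and specificity figures (not taken from the studies cited) to show how the positive predictive value of the very same tool collapses as the outcome becomes rarer:

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """Bayes' rule: P(outcome | flagged), given a tool's sensitivity,
    specificity, and the outcome's base rate in the population."""
    true_pos = sensitivity * base_rate
    false_pos = (1.0 - specificity) * (1.0 - base_rate)
    return true_pos / (true_pos + false_pos)

# The same hypothetical tool (80% sensitivity, 80% specificity) applied
# to outcomes with progressively lower base rates:
for base_rate in (0.30, 0.10, 0.02):
    ppv = positive_predictive_value(0.80, 0.80, base_rate)
    print(f"base rate {base_rate:.0%}: PPV {ppv:.0%}")
```

At a 30% base rate the tool's flags are right most of the time, but at a 2% base rate the large majority of flagged cases are false positives, even though the tool itself has not changed. This is why tools calibrated on common outcomes (felony arrest) degrade when pointed at rarer ones (violence, and especially sexual violence).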
— Dr. Eugene Wang
“Numerous studies have been conducted over the last twenty years investigating the effectiveness of cognitive intervention programs on reducing reconviction rates among offenders. Whereas cognitive intervention programs have demonstrated promising results, there is a gap in the literature surrounding who would be most responsive. Knowing what type of offender would most benefit from attending this program could potentially reduce recidivism by providing the proper rehabilitation programming to offenders before they are released.”
Bell, B. K. (2020). Responsivity in rehabilitation programming for offenders [Doctoral dissertation, Texas Tech University]. TTU DSpace. https://hdl.handle.net/2346/86565
Wang, E. W., Owens, R. M., *Long, S. A., Diamond, P. M., & *Smith, J. L. (2000). The effectiveness of rehabilitation for persistently violent male prisoners. International Journal of Offender Therapy and Comparative Criminology, 44(4), 505-514.
Wang, E. W., & Diamond, P. M. (1999). Empirically identifying factors related to violence risk in corrections. Behavioral Sciences and the Law, 17, 377-389.
Rogers, R., Sewell, K. W., Cruise, K. R., Wang, E. W., & Ustad, K. L. (1998). The PAI and feigning: A cautionary note on its use in forensic-correctional settings. Assessment, 5, 399-405. doi:10.1177/107319119800500409
Wang, E. W., Rogers, R., Giles, C. L., Diamond, P. M., Herrington-Wang, L. E., & Taylor, E. R. (1997). A pilot study of the Personality Assessment Inventory (PAI) in corrections: Assessment of malingering, suicide risk, and aggression in male inmates. Behavioral Sciences & the Law, 15, 1-14.
“We looked at whether or not the offender got employed, how long it took, whether the individual was employed one year after gaining initial employment, and whether or not the individual was rearrested or reincarcerated within three (3) years post-release.”
“The bulk of this report evaluates WSD academic, career and technical education (CTE), and life skills programs against these outcomes: academic achievement gains, HSED and industry certifications, and reentry outcomes.”
“S.B. 213 Sec. 19.0041, titled “Program Data Collection and Biennial Evaluation and Report”, requires Windham School District (WSD) to compile and analyze information to determine whether its programs are meeting its goals, to make changes to the programs as necessary, and to submit a report to the Board, the Legislature, and the Governor’s office.”