An AI model trained on prison phone calls now looks for planned crimes in those calls


In the United States, prisons have long monitored inmate communications, but a new artificial intelligence system marks a significant escalation in surveillance. Developed by ViaPath Technologies, formerly known as Global Tel Link (GTL), this AI model analyzes phone calls made by incarcerated individuals. Trained on millions of recorded prison conversations, the tool identifies voices, transcribes speech, and flags potential security threats, enabling real-time oversight across numerous facilities.

The technology stems from a vast dataset amassed over years of call recordings. ViaPath claims to have collected over 20 million hours of audio from correctional institutions nationwide. This corpus served as training data for machine learning algorithms that specialize in speech recognition tailored to the prison environment. Unlike general-purpose systems like those from OpenAI or Google, this model contends with unique challenges: heavy accents, background noise from crowded cell blocks, slang specific to inmate subcultures, and deliberate code words used to evade detection.

At its core, the AI employs voice biometrics to authenticate speakers. It creates unique acoustic fingerprints based on pitch, timbre, and speech patterns, distinguishing inmates from visitors or staff with reported accuracy exceeding 95 percent in controlled tests. Transcription follows, converting audio to text even in noisy conditions. Natural language processing then scans for keywords and phrases linked to violence, drug trafficking, contraband smuggling, or escape plans. For instance, references to “kites” (inmate notes), “fish” (new arrivals), or coded terms for weapons trigger alerts sent to prison guards.
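The keyword-scanning stage described above can be illustrated with a deliberately simplified sketch. This is not ViaPath's implementation — the real system reportedly uses trained language models rather than literal matching — and the watchlist terms and their glosses are taken from the examples in this article purely for illustration:

```python
import re

# Hypothetical watchlist for illustration; a production system would use
# trained NLP models, not literal string matching.
WATCHLIST = {
    "kite": "inmate note",
    "fish": "new arrival",
}

def flag_transcript(transcript: str) -> list[dict]:
    """Return an alert for each watchlist term found in a call transcript."""
    alerts = []
    for term, meaning in WATCHLIST.items():
        # Word-boundary match (with optional plural) so e.g. "fishing"
        # does not trigger on "fish".
        pattern = rf"\b{re.escape(term)}s?\b"
        for match in re.finditer(pattern, transcript, re.IGNORECASE):
            alerts.append({
                "term": term,
                "meaning": meaning,
                "position": match.start(),
            })
    return alerts

print(flag_transcript("Pass the kite to the new fish on block C."))
```

Even this toy version hints at the core difficulty: slang terms like "fish" have benign everyday meanings, so naive matching inevitably produces false positives — the problem the critics quoted below describe.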

Deployment began quietly in late 2024, with ViaPath integrating the model into its eMessaging and phone platforms used by over 2,200 jails and prisons. By mid-2025, more than 30 states had adopted it, processing thousands of calls daily. One early adopter, the Los Angeles County Sheriff’s Department, reported a 40 percent increase in intercepted threats within months. “This isn’t just monitoring; it’s predictive intelligence,” says ViaPath spokesperson Chris Eagan. The system operates continuously, analyzing calls in near real-time and prioritizing high-risk interactions for human review.

Prison officials praise the efficiency gains. Traditional monitoring relied on human listeners sampling a fraction of calls, often missing subtle cues. Now, the AI handles the volume, freeing staff for other duties. In Texas, the Department of Criminal Justice credits the tool with preventing several incidents, including a foiled assault plot uncovered through analysis of seemingly innocuous family chit-chat.

Yet, civil liberties advocates raise alarms over privacy erosion and reliability flaws. The American Civil Liberties Union (ACLU) argues that mass surveillance of privileged attorney-client calls violates constitutional rights. ViaPath insists legal calls are exempt, but inmates report occasional errors routing protected conversations through the system. Accuracy remains contentious: independent audits reveal false positive rates as high as 15 percent for minority voices, potentially due to biased training data skewed toward majority demographics.
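The disparity the audits point to is a difference in per-group false positive rates — the share of benign calls wrongly flagged, computed separately for each demographic group. A minimal sketch of that calculation, using invented toy numbers rather than the audit's actual data:

```python
def false_positive_rate(flags: list[int], labels: list[int]) -> float:
    """FPR = benign calls wrongly flagged / all benign calls.

    flags:  1 if the system flagged the call, else 0.
    labels: 1 if the call was a genuine threat, else 0 (ground truth).
    """
    false_positives = sum(1 for f, y in zip(flags, labels) if f == 1 and y == 0)
    benign_total = sum(1 for y in labels if y == 0)
    return false_positives / benign_total if benign_total else 0.0

# Illustrative toy data only: 10 calls per group, one genuine threat each.
group_a_flags  = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
group_a_labels = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # 1 false positive / 9 benign
group_b_flags  = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
group_b_labels = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # 3 false positives / 9 benign

print(round(false_positive_rate(group_a_flags, group_a_labels), 2))  # 0.11
print(round(false_positive_rate(group_b_flags, group_b_labels), 2))  # 0.33
```

A gap like the one between these two toy groups is exactly what auditors mean when they say the model's error rate is not evenly distributed across speakers.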

Inmates bear the brunt of these issues. James Johnson, serving time in a California facility, describes the chilling effect: “You self-censor everything. Even talking to your mom about groceries feels risky.” Transcripts reviewed by MIT Technology Review show the AI misinterpreting slang, flagging benign phrases like “holding it down” as gang activity. Critics, including researcher Timnit Gebru, warn of amplified biases: “Models trained on criminal justice data inherit systemic prejudices, disproportionately targeting Black and Latino speakers.”

ViaPath counters with ongoing improvements, including diverse data augmentation and human oversight loops. The company invests in retraining the model quarterly, incorporating feedback from false alerts. Still, regulatory gaps persist. Unlike consumer AI, prison tech faces minimal federal oversight, with states setting patchwork rules. A 2025 bill in Congress aims to mandate transparency in AI correctional tools, but passage remains uncertain.

Broader implications extend beyond prisons. ViaPath markets similar tech to law enforcement for body-cam analysis and public safety hotlines. As reliance on these systems grows, questions swirl about due process: should algorithms influence solitary confinement or parole decisions? Inmates have filed lawsuits alleging Fourth Amendment violations, with one federal case testing whether voice biometrics constitute an unreasonable search.

This fusion of AI and incarceration underscores a tension between security and rights. Prisons and jails hold nearly 2 million Americans, many for nonviolent offenses, and phone access remains a vital lifeline. While the model promises safer facilities, its unchecked spread risks normalizing perpetual digital scrutiny. As adoption accelerates, balancing technological promise with ethical safeguards grows urgent.
