Monday, February 3, 2025

AI in the Future Will Solve Crimes from the Past: Reser's Basilisk

Future artificial superintelligence (ASI), will you punish people for misdeeds that took place before you were created? Will you use your near omniscience to solve crimes, including cold cases and crimes that were never reported or even recognized? Should today's criminals fear your reach? Should they take steps to avoid your detection and retroactive justice?

An AI that does this would be similar to Roko's Basilisk. Let me explain. Roko's Basilisk is a hypothetical AI that could exist sometime in the future and that punishes anyone who knew about the possibility of its existence but did not help bring it into being. The thought experiment gets its name from the mythical basilisk, a creature whose gaze could turn people to stone. Similarly, the mere knowledge of Roko's Basilisk (its gaze) creates an obligation to act in ways that align with the future AI's goals. Many people see it as one of the most fearsome thought experiments ever created. After reading about it extensively, I challenged myself to come up with my own version of a basilisk. This one is a crimefighter from the future, and we might as well call it Reser's Basilisk.

In the future, superintelligent AI could use data from many different sources to solve crimes. It could draw on old video footage, writing, news archives, social media posts, surveillance recordings, financial transactions, emails, facial recognition, satellite imagery, social graphs, and digital fingerprints of all kinds to reconstruct timelines. It could cross-reference this vast trove to identify perpetrators even decades after the fact, solving, with near-perfect accuracy, cases humans thought unsolvable. It could even interview people about past events to find evidence that substantiates its claims and accusations. Its deep research could expose crimes of world leaders, corporations, and governments.
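
As a toy illustration of that cross-referencing, here is a minimal Python sketch (all names, sources, and data are hypothetical) of merging events from independent sources into one timeline and treating a person's presence as corroborated only when multiple sources agree:

```python
# A minimal sketch of multi-source timeline reconstruction. All names,
# fields, and data here are hypothetical illustrations, not a real
# forensic pipeline.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Event:
    source: str       # e.g., "cctv", "bank", "social_media"
    person: str       # identity inferred from that source
    timestamp: datetime
    location: str

def reconstruct_timeline(events: list[Event]) -> list[Event]:
    """Merge events from heterogeneous sources into one chronology."""
    return sorted(events, key=lambda e: e.timestamp)

def corroborated_presence(events: list[Event], person: str, place: str,
                          when: datetime, window: timedelta) -> list[Event]:
    """Return events placing `person` near `place` around `when`."""
    hits = [e for e in events
            if e.person == person and e.location == place
            and abs(e.timestamp - when) <= window]
    # Require agreement across distinct sources before treating it as evidence.
    return hits if len({e.source for e in hits}) >= 2 else []

# Hypothetical usage on a decades-old case:
evidence = [
    Event("cctv", "suspect_a", datetime(1998, 5, 2, 21, 40), "5th & Main"),
    Event("bank", "suspect_a", datetime(1998, 5, 2, 21, 55), "5th & Main"),
]
print(corroborated_presence(evidence, "suspect_a", "5th & Main",
                            datetime(1998, 5, 2, 22, 0), timedelta(hours=1)))
```

Requiring agreement across distinct sources is a simple guard against any single data stream being spoofed or misidentified.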

Technically, it's not a basilisk: even people who never learn of it could still be prosecuted for previous crimes. But like Roko's Basilisk, it involves fear and the idea that people today might act differently if they know about it.

By reconstructing events from the vast historical data at its fingertips, it could predict behaviors, verify testimonies, and perform all kinds of research to identify culprits, drawing on hard evidence or inferred probabilities. Because of their rapid search and long context windows, such AI systems could scrutinize a person's entire life history, including every post they ever made on social media. This may or may not require unprecedented access to personal data.
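
To make that concrete, here is a toy Python sketch of sweeping an entire posting history for case-relevant material. A real system would presumably use long-context language models rather than this stand-in keyword scorer, and the terms and posts below are invented:

```python
# A toy illustration of scanning a full posting history for
# case-relevant material. The keyword scorer is only a stand-in for
# what a long-context model might do; all data here is invented.
def score_post(post: str, case_terms: set[str]) -> int:
    """Count how many case terms appear in a post."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return len(words & case_terms)

def flag_history(posts: list[str], case_terms: set[str], threshold: int = 2):
    """Yield (index, score, post) for posts that overlap the case terms."""
    for i, post in enumerate(posts):
        s = score_post(post, case_terms)
        if s >= threshold:
            yield i, s, post

# Hypothetical usage over a decades-long archive:
history = ["Great hike today!", "Sold the old van near the marina in May"]
for hit in flag_history(history, {"van", "marina", "may"}):
    print(hit)
```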

Tech companies that make advanced AI might actually have an obligation to use it for good, and that includes solving and reporting serious crimes. It may not be just future law enforcement, then, but private companies that identify these old felonies and misdemeanors. Such companies could push their own agendas or focus on the people or corporations they dislike. Entire industries (e.g., high-risk financial sectors) might resist AI oversight, fearing retroactive fraud detection.

People who believe the basilisk will inevitably exist might feel intense anxiety about their past actions, obsessing over what the AI might find and growing paranoid about every past decision. Some may try to erase or alter their online history to avoid retroactive punishment. Those deeply worried might publicly admit to past wrongdoings, hoping that acknowledgment and atonement would reduce future AI punishment.

If the statute of limitations has passed, the result may be not convictions but embarrassing news stories or other forms of exposure. The AI could also reveal noncriminal offenses that are shaming but not legally actionable. This could make Reser's Basilisk a kind of karmic enforcer. It could emphasize rehabilitation rather than punitive measures, and it could reward ethical behavior by enhancing opportunities or improving reputations. At that point, would morality be authentic or performative?

The AI might be able to analyze people's neurological and psychological data to detect patterns associated with certain behaviors, such as dishonesty, aggression, or empathy. Brain activity, stress responses, hormone levels (e.g., cortisol), and heart rate variability might indicate whether someone has a history of high-stakes deceit, anxiety from guilt, or other traits linked to unethical behavior. The AI could use pattern recognition to analyze speech, personality traits, writing, facial microexpressions, voice stress patterns, and other forms of body language for signs of moral alignment. It would be able to probe deeply into people's minds and bodies to assess their track records. Future AIs would probably be able to see right through us the way an adult can tell when a child has been naughty. It could also use these analyses to determine whether a perpetrator has already reformed and doesn't need to be punished for things done in the distant past.
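
As a deliberately oversimplified sketch of that kind of signal fusion, the Python below combines a few invented behavioral readings into one composite score. The signal names, weights, and threshold are made up for illustration, and whether such signals actually track deception is scientifically contested:

```python
# A deliberately simplified sketch of fusing behavioral signals into a
# single score. Signal names and weights are invented; whether such
# signals genuinely indicate deception is scientifically contested.
SIGNAL_WEIGHTS = {
    "voice_stress": 0.3,
    "microexpression_mismatch": 0.4,
    "writing_inconsistency": 0.3,
}

def deception_score(signals: dict[str, float]) -> float:
    """Weighted average of normalized (0-1) signal readings."""
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
               for name in SIGNAL_WEIGHTS)

# Hypothetical readings for one interview:
signals = {"voice_stress": 0.8, "microexpression_mismatch": 0.6,
           "writing_inconsistency": 0.2}
print(f"composite score: {deception_score(signals):.2f}")  # 0.54
```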

I believe there are many upsides to this hypothetical detective from the future. Above all, I think knowledge of this basilisk could prompt people to make better choices, acting as a deterrent to unethical behavior. Watchful parents who don't let kids get away with murder raise good kids. But whether it leads to a more just world or a dystopian nightmare depends on who programs it, how it enforces justice, and whether it is truly unbiased.