When Pixels Promise Justice: The Kerala AI Revolution and India’s Looming Identity Crisis 

A 70-year-old grandmother’s 19-year wait for justice ended with an algorithm. But in a nation where faces echo across generations and names repeat like prayers, are we celebrating a breakthrough or courting disaster?
The Miracle on Social Media
In February 2006, when Ranjini and her 17-day-old twin daughters were brutally murdered in Kerala, the only clues were grainy photographs—useless pixels that couldn’t speak for the dead. Fast forward to 2024: those same pixels, transformed by artificial intelligence, became the key that unlocked a cold case vault.
Kerala Police didn’t just solve a murder; they performed digital necromancy. AI algorithms took a suspect’s 2006 photograph and predicted, “This is how you’d look today,” adding wrinkles, greying hair, and adjusting for nearly two decades of life lived in hiding. When that algorithmic prophecy matched a wedding photograph on social media with 90% accuracy, justice finally knocked on a door that had remained locked for 19 years.
The success story spread like wildfire across WhatsApp groups and LinkedIn feeds. “AI helps 70-year-old woman get justice!” Headlines celebrated technology’s triumph over time. Kerala Police became poster children for innovation in law enforcement, and rightly so. But beneath the celebration lies a question that should make every Indian pause: in a country where 1.46 billion faces tell stories of shared ancestors, common features, and family resemblances that span continents, are we ready for the consequences of digital certainty?

The Practitioner’s Prophecy

When faces become evidence and algorithms become witnesses, the stakes couldn’t be higher. Because while a computer celebrates a 90% match, a human being faces 100% of the consequences.
In my fifteen years of criminal practice across Maharashtra, I have learned that in a country as vast and diverse as India, misidentification is not a remote possibility but a daily risk. AI-driven policing collides with the reality of lookalike faces across communities: absconding accused with near-identical doubles, and sometimes their ditto lookalikes within the police and other law enforcement agencies themselves. Funny and dangerous at the same time, isn’t it? Criminals have long exploited this. They use prosthetics, celebrity masks, helmets, or even simple tricks like applying charcoal, gulal, or chalk powder to blur their features while operating in groups. The infamous Chaddi-Baniyan gang perfected this method, disguising themselves with foul-smelling lotions and powders that left only fragments of their faces visible, exactly the kind of fragments algorithms might mistake for identity. Kerala’s police, I am sure, have dealt with any number of crimes like this too.
This risk is no longer theoretical. In Pune, AI-based cameras mounted on patrol vehicles are already being tested to flag not just traffic violations but also fugitives. The system can link number plates with facial matches, yet it cannot tell whether the driver is the car’s registered owner, a friend who borrowed it for a school pickup, or a stranger in a prosthetic mask using it for something illegal. Imagine sitting at home watching TV while your car, lent out for a short errand, is used in a serious crime: the AI reads the licence plate, captures a doctored face, and its “certainty” points at you.
Once such a “match” completes a chain of circumstantial evidence, investigators may default to an arrest-first approach, leaving courts to sort out the mistake while an innocent person bears the stigma, stress, and loss of liberty.
Picture an exaggerated South Indian film plot: a group of thieves commits a robbery using prosthetics and make-up that turn them into lookalikes of a real police officer recently stationed in the area, just to divert the investigation, leaving the victim wondering why the officer who has come to inquire is the very cop who robbed him a couple of hours ago. Humorous but horrifying, isn’t it? Well, it’s possible!

Even solid alibis or contradictory CCTV footage can be overshadowed once an algorithm is treated as authoritative. What once belonged to crime thrillers and story fiction is now a looming operational reality. Unless urgent safeguards are built into law and procedure, we risk creating a system where technology delivers not justice but errors at scale, errors that could cost someone their liberty, and even their life, once and for all.
The risk is sharpest for vulnerable groups such as transgender persons, whose historic documents may not reflect their present lawful identity. Tracing a suspect whose earlier records carry a different gender identity raises confusing but entirely plausible questions of discovery that must be handled with care. What we need is less reliance on AI imagination and more investigation grounded in a strict factual matrix. AI must remain a servant of justice, not its substitute.

The Mathematics of Misfortune

The global evidence is stark and sobering:
The NIST Reality Check: American research revealed that Asian and African American faces were 10–100 times more likely to be misidentified by AI systems than white faces. In India’s context, this isn’t just a statistic; it’s a prediction.
The Wrongful Arrest Ledger: At least seven confirmed cases in the US involved innocent people arrested due to facial recognition errors. Six were Black individuals, highlighting how AI bias doesn’t just compute; it discriminates.

The Demographic Danger

Studies in the US and other European countries consistently show higher error rates for women, children, elderly individuals, and non-white populations, patterns whose erroneous impact could easily be replicated in India’s population.
How do these global patterns play out in India, where:
A. Regional communities share distinct ancestral features.
B. Family resemblances can span entire districts.
C. Names repeat across generations like hereditary echoes.
D. Photo quality in government databases varies from high-definition to barely recognisable.
The result? A perfect storm of misidentification risk.
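The base-rate arithmetic behind that storm is worth spelling out. Here is a minimal sketch in Python, using invented numbers purely for illustration (none of these figures come from any real deployed system): even a very accurate matcher, searched one-to-many against a large gallery, produces false matches at scale.

```python
# Illustrative base-rate arithmetic for one-to-many face search.
# All numbers below are assumptions for illustration, not measurements
# of any deployed system.

def expected_false_matches(database_size: int, false_match_rate: float) -> float:
    """Expected number of innocent people flagged when one probe face
    is compared against every entry in a database."""
    return database_size * false_match_rate

# Assume a gallery of 100 million enrolled faces and a per-comparison
# false-match rate of 0.1% (i.e. 99.9% specificity).
gallery = 100_000_000
fmr = 0.001

print(f"Expected false matches per search: {expected_false_matches(gallery, fmr):,.0f}")

# Even a far better system (1-in-a-million false-match rate) still
# flags roughly 100 innocent people for every single search at this scale.
print(f"At FMR 1e-6: {expected_false_matches(gallery, 1e-6):,.0f}")
```

The point is not the exact figures but the shape of the problem: in one-to-many search, the number of innocent “hits” scales with the size of the database, so a percentage that sounds reassuring in a demo becomes thousands of suspects in a national gallery.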

When Law Meets Algorithm: The Supreme Court’s Wisdom in the AI Age

India’s highest court has been battling mistaken identity since long before AI entered the picture. Its wisdom feels prophetic in today’s digital age:

1. K. Shikha Barman vs. State (2025): The Supreme Court acquitted a woman convicted under the Narcotic Drugs and Psychotropic Substances Act for possessing ganja, ruling it a clear instance of mistaken identity. The prosecution failed to prove beyond reasonable doubt that the appellant was the same person named in the FIR and arrest records. No identification parade was conducted under Section 54 of the Bharatiya Nagarik Suraksha Sanhita (BNSS), 2023, and there was no oral evidence or matching signatures to link her to the crime. The Court emphasised that convictions cannot stand on assumptions of identity, as this deprives the accused of a fair trial and risks punishing the innocent for another’s crimes.
2. Dana Yadav vs. State of Bihar (2002): The Supreme Court held that identification evidence in court cannot be relied upon on its own unless corroborated by a prior test identification parade (TIP) under Section 54-A of CrPC or by other evidence, as per Section 9 of the Indian Evidence Act, 1872, which governs facts necessary to explain or introduce relevant facts, including identification. In this case, involving a murder under Section 302 of the Indian Penal Code, 1860, the Court cautioned against the risks of mistaken identity, noting that without proper safeguards like TIPs, such evidence is unreliable and could lead to miscarriages of justice.
3. Malkhan Singh vs. State of M.P. (2003): The Court clarified that TIPs under Section 54A of CrPC are not substantive evidence but aid investigations by confirming witness reliability. Failure to hold a TIP does not automatically invalidate court identification, but courts must weigh the totality of circumstances to avoid misidentification, as supported by Section 9 of IEA. This judgment stresses the need for caution in diverse populations where facial similarities could lead to errors.
4. Rajesh @ Sarkari & Anr. vs. State of Haryana (2020): The Supreme Court reversed a conviction in a murder case under Section 302 of IPC, highlighting flaws in the TIP process under Section 54A of CrPC and the importance of consistent forensic and eyewitness evidence under Sections 9 and 45 of IEA. It reiterated that improper or absent TIPs can raise doubts about identification accuracy, potentially leading to wrongful convictions due to mistaken identity.
5. Mukesh Singh vs. State (NCT of Delhi) (2023): The Court upheld the mandatory nature of TIPs under Section 54A of CrPC, especially when the accused is unknown to witnesses. It addressed risks of misidentification by mandating supervised parades for vulnerable witnesses and emphasised that refusal to participate could lead to adverse inferences, but only if the process ensures fairness to prevent errors, aligning with Section 9 of IEA for corroborative evidence.
These cases collectively whisper a timeless truth: whether the witness is human or machine, identity evidence demands corroboration before liberty is lost. The Supreme Court’s caution about human identification becomes even more critical with AI identification. If human witnesses can be wrong, what makes us think silicon witnesses are infallible?

The Algorithm’s Shadow: What Could Go Wrong?
Picture this scenario, entirely plausible in today’s India:
Rajesh Kumar (a fictitious name), a 35-year-old shopkeeper in Mumbai, gets a knock on his door at dawn. Police officers show him a photograph, his face, aged by AI, matching a suspect in a 15-year-old kidnapping case under Section 137 of BNS (previously Section 361 of IPC) from Kerala. The algorithm is 85% certain. Rajesh has never been to Kerala, but his face—perhaps his jawline, moustache, or eye colour—tells a different story. His alibi from 2009 is buried in time. His only crime? Looking like someone who committed one. This isn’t science fiction or an extraordinary South Indian film plot; it’s a statistical inevitability in a country where faces echo across bloodlines and algorithms struggle to distinguish a shared beard style from a shared identity.

The Kerala Paradox: Success Story or Cautionary Tale?

The Kerala case deserves every accolade it has received. A family found closure. Justice prevailed. Technology served humanity. But it also represents something more complex—a proof of concept that could reshape Indian policing forever.
The question isn’t whether AI should be used in law enforcement. In a country with thousands of cold cases and limited investigative resources, AI isn’t just helpful; it’s necessary. The question is: how do we harness AI’s power without becoming victims of its precision?
Consider the Kerala success through a different lens:
What if the AI had been wrong?
What if the 90% match had led to the wrong person?
What if an innocent individual had spent months in custody while the real perpetrator remained free?
What if the victim’s family had celebrated justice while another family mourned injustice?
These aren’t hypotheticals; they’re inevitable outcomes if AI deployment lacks proper safeguards.

Building Safeguards in Silicon

The path forward requires treating AI like what it truly is: a powerful investigative tool, not an infallible judge. Just as Section 54A of BNSS and Section 7 of BSA govern identification parades, we need statutory frameworks for AI identification:
The Corroboration Mandate
No arrest based solely on AI matches. Period. Traditional investigation, witness testimony, forensic evidence, and other corroborative proof under Section 7 of BSA must support AI-generated leads before anyone loses their freedom, as emphasised in cases like Dana Yadav and K. Shikha Barman.

The Transparency Protocol

Law enforcement agencies must disclose:
AI system accuracy rates across different demographic groups;
Training data sources and potential biases;
Confidence thresholds used for matches;
Appeal mechanisms for questioned identifications.

The Bias Audit Requirement

Annual testing of AI systems using diverse Indian datasets, with particular attention to regional variations, age groups, and gender-specific accuracy rates.
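As a sketch of what such an audit might actually compute, assuming a labelled test set tagged with demographic group (the group names, record format, and numbers below are hypothetical, chosen only to show the calculation):

```python
from collections import defaultdict

# Hypothetical audit records: (demographic_group, ground_truth_match, system_said_match).
# These records are invented for illustration only.
results = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", False, True),
    ("group_b", True, True), ("group_b", False, False), ("group_b", False, False),
]

def false_match_rate_by_group(records):
    """False-match rate per group: the share of true non-matches that the
    system wrongly declared to be matches."""
    nonmatches = defaultdict(int)
    false_matches = defaultdict(int)
    for group, truth, predicted in records:
        if not truth:
            nonmatches[group] += 1
            if predicted:
                false_matches[group] += 1
    return {g: false_matches[g] / nonmatches[g] for g in nonmatches}

for group, rate in sorted(false_match_rate_by_group(results).items()):
    print(f"{group}: FMR = {rate:.0%}")
```

An annual audit in this spirit, run on diverse Indian datasets, would make any gap between groups visible as a number rather than an anecdote, which is exactly what disclosure and challenge mechanisms need.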
The Human Override System
Trained officers must be able to question, challenge, and override AI recommendations based on ground realities and contextual factors—like similar hairstyles, facial features, or ambiguous images—that algorithms cannot distinguish.
The Legal Evidence Standard
Courts need specific guidelines for admitting AI-generated identification evidence, similar to existing standards for DNA evidence, fingerprints, and ballistics, drawing from the principles in Mukesh Singh and Section 7 of Bharatiya Sakshya Adhiniyam (previously Section 9 of Indian Evidence Act, 1872).

The Victim Protection Framework

When AI misidentification occurs, there must be swift remedial action, compensation mechanisms, and record correction procedures to restore innocent individuals’ reputations and livelihoods.
The Innovation Imperative vs. The Innocence Imperative
India stands at a crossroads. On one path lies the promise of AI-powered justice: cold cases solved, closure for families, and enhanced law enforcement capabilities that could transform public safety. On the other path lies the spectre of algorithmic injustice: innocent individuals trapped by digital certainty, families destroyed by computational error, and public trust eroded by technological overreach.
The beauty of the Kerala case is that it shows both paths are possible. The same technology that brought justice to one family could bring injustice to another. The difference lies not in the algorithm, but in how we choose to deploy it.

A Letter to India’s Digital Future

Dear Policymakers, Police Commissioners, and Technology Leaders,
The Kerala AI success is not just a case study; it’s a test. Are we mature enough as a democracy to celebrate innovation while acknowledging its risks? Can we build systems that are both effective and ethical? Will we learn from global mistakes or repeat them at an Indian scale?
When a criminal law practitioner like me warns the investigating machinery about lookalikes, where even police officers could be mistaken for criminals due to shared facial features, it isn’t pessimism; it’s realism.
In a country where surnames reveal ancestry and caste, regional features tell stories of migration, and family resemblances can span centuries, AI facial recognition isn’t just technology adoption; it’s demographic destiny.

We have a choice

Deploy AI responsibly with robust safeguards, or deploy it recklessly and wait for the inevitable wrongful arrests, wrongful convictions, and erosion of public trust.
The Kerala grandmother who finally received justice deserves our celebration. But the unknown individual who might have lost their freedom to a misidentification, perhaps because their beard or jawline matches a suspect’s, deserves our protection. Both are possible. Both are necessary. Both define who we are as a society embracing technological change.

When Silicon Meets Sanity

The Kerala Police’s AI triumph isn’t just about solving a 19-year-old case; it’s about glimpsing India’s technological future in law enforcement. But as we step into that future, we carry the wisdom of our legal past: identity is sacred, proof is paramount, and liberty cannot be sacrificed at the altar of computational convenience, as the Supreme Court has repeatedly affirmed in cases like K. Shikha Barman and Mukesh Singh.
In a country of 1.46 billion faces, the difference between justice and injustice might be just one algorithm away. Our challenge isn’t to perfect the algorithm; it’s to perfect the safeguards that protect human dignity when algorithms mistake a moustache or eye colour for guilt. The Kerala case gave us a glimpse of AI’s promise. Now it’s time to build systems worthy of that promise—systems that serve justice without sacrificing the innocent, systems that embrace innovation without abandoning wisdom, systems that celebrate technology while cherishing humanity.
Because in the end, justice isn’t about matching faces or computing probabilities. It’s about ensuring that every citizen, regardless of how common their face, beard, or hairstyle, can sleep peacefully knowing that their liberty depends on truth, not just on pixels and percentages.
The algorithm brought justice to Kerala. Now we must ensure Kerala’s justice brings wisdom to our algorithms.
The conversation about AI in Indian policing has just begun. As we write its next chapters, let’s make sure they’re worth reading, by victims seeking justice and citizens seeking protection alike.

Written by Adv. Mangesh Dhumal.
# IndiaLegalSolutions.
