Posts Tagged ‘artificial intelligence’

“When the states legalize the deliberate ending of certain lives… it will eventually broaden the categories of those who can be put to death with impunity.”—Nat Hentoff, The Washington Post, 1992

Bodily autonomy—the right to privacy and integrity over our own bodies—is rapidly vanishing.

The debate now extends beyond forced vaccinations or invasive searches to include biometric surveillance, wearable tracking, and predictive health profiling.

We are entering a new age of algorithmic, authoritarian control, where our thoughts, moods, and biology are monitored and judged by the state.

This is the dark promise behind the newest campaign by Robert F. Kennedy Jr., President Trump’s Secretary of Health and Human Services, to push for a future in which all Americans wear biometric health-tracking devices.

Under the guise of public health and personal empowerment, this initiative is nothing less than the normalization of 24/7 bodily surveillance—ushering in a world where every step, heartbeat, and biological fluctuation is monitored not only by private companies but also by the government.

In this emerging surveillance-industrial complex, health data becomes currency. Tech firms profit from hardware and app subscriptions, insurers profit from risk scoring, and government agencies profit from increased compliance and behavioral insight.

This convergence of health, technology, and surveillance is not a new strategy—it’s just the next step in a long, familiar pattern of control.

Surveillance has always arrived dressed as progress.

Every new wave of surveillance technology—GPS trackers, red light cameras, facial recognition, Ring doorbells, Alexa smart speakers—has been sold to us as a tool of convenience, safety, or connection. But in time, each became a mechanism for tracking, monitoring, or controlling the public.

What began as voluntary has become inescapable and mandatory.

The moment we accepted the premise that privacy must be traded for convenience, we laid the groundwork for a society in which nowhere is beyond the government’s reach—not our homes, not our cars, not even our bodies.

RFK Jr.’s wearable plan is just the latest iteration of this bait-and-switch: marketed as freedom, built as a cage.

According to Kennedy’s plan, which has been promoted as part of a national campaign to “Make America Healthy Again,” wearable devices would track glucose levels, heart rate, activity, sleep, and more for every American.

Participation may not be officially mandatory at the outset, but the implications are clear: get on board, or risk becoming a second-class citizen in a society driven by data compliance.

What began as optional self-monitoring tools marketed by Big Tech is poised to become the newest tool in the surveillance arsenal of the police state.

Devices like Fitbits, Apple Watches, glucose trackers, and smart rings collect astonishing amounts of intimate data—from stress and depression to heart irregularities and early signs of illness. When this data is shared across government databases, insurers, and health platforms, it becomes a potent tool not only for health analysis—but for control.

Once symbols of personal wellness, these wearables are becoming digital cattle tags—badges of compliance tracked in real time and regulated by algorithm.

And it won’t stop there.

The body is fast becoming a battleground in the government’s expanding war on the inner realms.

The infrastructure is already in place to profile and detain individuals based on perceived psychological “risks.” Now imagine a future in which your wearable data triggers a mental health flag. Elevated stress levels. Erratic sleep. A skipped appointment. A sudden drop in heart rate variability.

In the eyes of the surveillance state, these could be red flags—justification for intervention, inquiry, or worse.
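
To make the concern concrete, here is a minimal, purely hypothetical sketch—not the logic of any actual device, vendor, or agency—of how a few hard-coded thresholds applied to wearable metrics could convert ordinary biological fluctuations into a "flag." Every field name and cutoff below is invented for illustration.

```python
# Hypothetical illustration only: crude threshold rules that turn ordinary
# wearable readings into a "risk flag." All names and cutoffs are invented.
from dataclasses import dataclass

@dataclass
class DaySample:
    stress_score: float        # 0-100 vendor-defined "stress" metric
    sleep_hours: float         # total sleep recorded overnight
    hrv_ms: float              # heart rate variability, in milliseconds
    missed_appointment: bool   # pulled from a linked calendar or health record

def flag_day(s: DaySample) -> list[str]:
    """Return the reasons (if any) this day would be flagged under the toy rules."""
    reasons = []
    if s.stress_score > 80:
        reasons.append("elevated stress")
    if s.sleep_hours < 5:
        reasons.append("erratic sleep")
    if s.hrv_ms < 20:
        reasons.append("sudden drop in heart rate variability")
    if s.missed_appointment:
        reasons.append("skipped appointment")
    return reasons

# One stressful, sleepless day is enough to trip three of the four rules.
print(flag_day(DaySample(stress_score=85, sleep_hours=4.5,
                         hrv_ms=18, missed_appointment=False)))
```

The point is not the invented numbers but the structure: context-free data judged against thresholds the wearer never sees and cannot contest.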

RFK Jr.’s embrace of wearable tech is not a neutral innovation. It is an invitation to expand the government’s war on thought crimes, health noncompliance, and individual deviation.

It shifts the presumption of innocence to a presumption of diagnosis. You are not well until the algorithm says you are.

The government has already weaponized surveillance tools to silence dissent, flag political critics, and track behavior in real time. Now, with wearables, they gain a new weapon: access to the human body as a site of suspicion, deviance, and control.

While government agencies pave the way for biometric control, it will be corporations—insurance companies, tech giants, employers—who act as enforcers for the surveillance state.

Wearables don’t just collect data. They sort it, interpret it, and feed it into systems that make high-stakes decisions about your life: whether you get insurance coverage, whether your rates go up, whether you qualify for employment or financial aid.

As reported by ABC News, a JAMA article warns that wearables could easily be used by insurers to deny coverage or hike premiums based on personal health metrics like calorie intake, weight fluctuations, and blood pressure.

It’s not a stretch to imagine this bleeding into workplace assessments, credit scores, or even social media rankings.

Employers already offer discounts for “voluntary” wellness tracking—and penalize nonparticipants. Insurers give incentives for healthy behavior—until they decide unhealthy behavior warrants punishment. Apps track not just steps, but mood, substance use, fertility, and sexual activity—feeding the ever-hungry data economy.

This dystopian trajectory has been long foreseen and forewarned.

In Brave New World by Aldous Huxley (1932), compliance is maintained not through violence but by way of pleasure, stimulation, and chemical sedation. The populace is conditioned to accept surveillance in exchange for ease, comfort, and distraction.

In THX 1138 (1971), George Lucas envisions a corporate-state regime where biometric monitoring, mood-regulating drugs, and psychological manipulation reduce people to emotionless, compliant biological units.

Gattaca (1997) imagines a world in which genetic and biometric profiling predetermines one’s fate, eliminating privacy and free will in the name of public health and societal efficiency.

In The Matrix (1999), written and directed by the Wachowskis, human beings are harvested as energy sources while trapped inside a simulated reality—an unsettling parallel to our increasing entrapment in systems that monitor, monetize, and manipulate our physical selves.

Minority Report (2002), directed by Steven Spielberg, depicts a pre-crime surveillance regime driven by biometric data. Citizens are tracked via retinal scans in public spaces and targeted with personalized ads—turning the body itself into a surveillance passport.

The anthology series Black Mirror, inspired by The Twilight Zone, brings these warnings into the digital age, dramatizing how constant monitoring of behavior, emotion, and identity breeds conformity, judgment, and fear.

Taken collectively, these cultural touchstones deliver a stark message: dystopia doesn’t arrive overnight.

As Margaret Atwood warned in The Handmaid’s Tale, “Nothing changes instantaneously: in a gradually heating bathtub, you’d be boiled to death before you knew it.” Though Atwood’s novel focuses on reproductive control, its larger warning is deeply relevant: when the state presumes authority over the body—whether through pregnancy registries or biometric monitors—bodily autonomy becomes conditional, fragile, and easily revoked.

The tools may differ, but the logic of domination is the same.

What Atwood portrayed as reproductive control, we now face in a broader, digitized form: the quiet erosion of autonomy through the normalization of constant monitoring.

When both government and corporations gain access to our inner lives, what’s left of the individual?

We must ask: when surveillance becomes a condition of participation in modern life—employment, education, health care—are we still free? Or have we become, as in every great dystopian warning, conditioned not to resist, but to comply?

That’s the hidden cost of these technological conveniences: today’s wellness tracker is tomorrow’s corporate leash.

In a society where bodily data is harvested and analyzed, the body itself becomes government and corporate property. Your body becomes a form of testimony, and your biometric outputs are treated as evidence. The list of bodily intrusions we’ve documented—forced colonoscopies, blood draws, DNA swabs, cavity searches, breathalyzer tests—is growing.

To this list we now add a subtler, but more insidious, form of intrusion: forced biometric consent.

Once health tracking becomes a de facto requirement for employment, insurance, or social participation, it will be impossible to “opt out” without penalty. Those who resist may be painted as irresponsible, unhealthy, or even dangerous.

We’ve already seen chilling previews of where this could lead. In states with abortion restrictions, digital surveillance has been weaponized to track and prosecute individuals for seeking abortions—using period-tracking apps, search histories, and geolocation data.

When bodily autonomy becomes criminalized, the data trails we leave behind become evidence in a case the state has already decided to make.

This is not merely the expansion of health care. It is the transformation of health into a mechanism of control—a Trojan horse for the surveillance state to claim ownership over the last private frontier: the human body.

Because ultimately, this isn’t just about surveillance—it’s about who gets to live.

Too often, these debates are falsely framed as having only two possible outcomes: safety vs. freedom, health vs. privacy, compliance vs. chaos. But these are illusions. A truly free and just society can protect public health without sacrificing bodily autonomy or human dignity.

We must resist the narrative that demands our total surrender in exchange for security.

Once biometric data becomes currency in a health-driven surveillance economy, it’s only a matter of time before that data is used to determine whose lives are worth investing in—and whose are not.

We’ve seen this dystopia before.

In the 1973 film Soylent Green, the elderly become expendable when resources grow scarce. My good friend Nat Hentoff—an early and principled voice warning against the devaluation of human life—sounded this alarm decades ago. Once pro-choice, Hentoff came to believe that the erosion of medical ethics—particularly the growing acceptance of abortion, euthanasia, and selective care—was laying the groundwork for institutionalized dehumanization.

As Hentoff warned, once the government sanctions the deliberate ending of certain lives, it can become a slippery slope: broader swaths of the population would eventually be deemed expendable.

Hentoff referred to this as “naked utilitarianism—the greatest good for the greatest number. And individuals who are in the way—in this case, the elderly poor—have to be gotten out of the way. Not murdered, heaven forbid. Just made comfortable until they die with all deliberate speed.”

That concern is no longer theoretical.

In 1996, writing about the Supreme Court’s consideration of physician-assisted suicide, Hentoff warned that once a state decides who shall die “for their own good,” there are “no absolute limits.” He cited medical leaders and disability advocates who feared that the poor, elderly, disabled, and chronically ill would become targets of a system that valued efficiency over longevity.

Today, data collected through wearables—heart rate, mood, mobility, compliance—can shape decisions about insurance, treatment, and life expectancy. How long before an algorithm quietly decides whose suffering is too expensive, whose needs are too inconvenient, or whose body no longer qualifies as worth saving?

This isn’t a left or right issue.

Dehumanization—the process of stripping individuals or groups of their dignity, autonomy, or moral worth—cuts across the political spectrum.

Today, dehumanizing language and policies aren’t confined to one ideology—they’re weaponized across the political divide. Prominent figures have begun referring to political opponents, immigrants, and other marginalized groups as “unhuman”—a disturbing echo of the labels that have justified atrocities throughout history.

As reported by Mother Jones, J.D. Vance endorsed a book by influencer Jack Posobiec and Joshua Lisec that advocates crushing “unhumans” like vermin.

This kind of rhetoric isn’t abstract—it matters.

How can any party credibly claim to be “pro‑life” when it devalues the humanity of entire groups, stripping them of the moral worth that should be fundamental to civil society?

When the state and its corporate allies treat people as data, as compliance issues, or as “unworthy,” they dismantle the very notion of equal human dignity.

In such a world, rights—including the right to bodily autonomy, health care, or even life itself—become privileges doled out only to the “worthy.”

This is why our struggle must be both political and moral. We can’t defend bodily sovereignty without defending every human being’s equal humanity.

The dehumanization of the vulnerable crosses political lines. It manifests differently—through budget cuts here, through mandates and metrics there—but the outcome is the same: a society that no longer sees human beings, only data points.

The conquest of physical space—our homes, cars, public squares—is nearly complete.

What remains is the conquest of inner space: our biology, our genetics, our psychology, our emotions. As predictive algorithms grow more sophisticated, the government and its corporate partners will use them to assess risk, flag threats, and enforce compliance in real time.

The goal is no longer simply to monitor behavior but to reshape it—to preempt dissent, deviance, or disease before it arises. This is the same logic that drives Minority Report-style policing, pre-crime mental health interventions, and AI-based threat assessments.

If this is the future of “health freedom,” then freedom has already been redefined as obedience to the algorithm.

We must resist the surveillance of our inner and outer selves.

We must reject the idea that safety requires total transparency, or that health requires constant monitoring. We must reclaim the sanctity of the human body as a space of freedom—not as a data point.

The push for mass adoption of wearables is not about health. It is about habituation.

The goal is to train us—subtly, systematically—to accept government and corporate ownership of our bodies.

We must not forget that our nation was founded on the radical idea that all human beings are created equal, “endowed by their Creator with certain unalienable Rights,” among them life, liberty, and the pursuit of happiness.

These rights are not granted by the government, the algorithm, or the market. They are inherent. They are indivisible. And they apply to all of us—or they will soon apply to none of us.

The Founders got this part right: their affirmation of our shared humanity is more vital than ever before.

As I make clear in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, the task before us is whether we will defend that humanity—or surrender it, one wearable at a time. Now is the time to draw the line—before the body becomes just another piece of state property.

Source: https://tinyurl.com/mr24w458

ABOUT JOHN W. WHITEHEAD

Constitutional attorney and author John W. Whitehead is founder and president of The Rutherford Institute. His most recent books are the best-selling Battlefield America: The War on the American People, the award-winning A Government of Wolves: The Emerging American Police State, and a debut dystopian fiction novel, The Erik Blair Diaries. Whitehead can be contacted at staff@rutherford.org. Nisha Whitehead is the Executive Director of The Rutherford Institute. Information about The Rutherford Institute is available at www.rutherford.org.

Publication Guidelines / Reprint Permission

John W. Whitehead’s weekly commentaries are available for publication to newspapers and web publications at no charge. 

“We are fast approaching the stage of the ultimate inversion: the stage where the government is free to do anything it pleases, while the citizens may act only by permission.” — Ayn Rand

Call it what it is: a panopticon presidency.

President Trump’s plan to fuse government power with private surveillance tech to build a centralized, national citizen database is the final step in transforming America from a constitutional republic into a digital dictatorship armed with algorithms and powered by unaccountable, all-seeing artificial intelligence.

This isn’t about national security. It’s about control.

According to news reports, the Trump administration is quietly collaborating with Palantir Technologies—the data-mining behemoth co-founded by billionaire Peter Thiel—to construct a centralized, government-wide surveillance system that would consolidate biometric, behavioral, and geolocation data into a single, weaponized database of Americans’ private information.

This isn’t about protecting freedom. It’s about rendering freedom obsolete.

What we’re witnessing is the transformation of America into a digital prison—one where the inmates are told they’re free while every move, every word, every thought is monitored, recorded, and used to assign a “threat score” that determines their place in the new hierarchy of obedience.

This puts us one more step down the road to China’s dystopian system of social credit scores and Big Brother surveillance.

The tools enabling this all-seeing surveillance regime are not new, but under Trump’s direction, they are being fused together in unprecedented ways—with Palantir at the center of this digital dragnet.

Palantir, long criticized for its role in powering ICE (Immigration and Customs Enforcement) raids and predictive policing, is now poised to become the brain of Trump’s surveillance regime.

Under the guise of “data integration” and “public safety,” this public-private partnership would deploy AI-enhanced systems to comb through everything from facial recognition feeds and license plate readers to social media posts and cellphone metadata—cross-referencing it all to assess a person’s risk to the state.

Palantir’s software has already been used to assist ICE in locating, arresting, and deporting undocumented immigrants, often relying on vast surveillance data sets aggregated from multiple sources. In New Orleans, the company secretly partnered with local police to run a predictive policing program without public knowledge or oversight, targeting individuals flagged as likely to commit crimes based on social networks and past behaviors—not actual wrongdoing.

This isn’t speculative. It’s already happening.

Palantir’s Gotham platform, used by law enforcement and military agencies, has long been the backbone of real-time tracking and predictive analysis. Now, with Trump’s backing, it threatens to become the central nervous system of a digitally enforced authoritarianism.

As Palantir itself admits, its mission is to “augment human decision-making.” In practice, that means replacing probable cause with probability scores, courtrooms with code, and due process with data pipelines.

In this new regime, your innocence will be irrelevant. The algorithm will decide who you are.

To understand the full danger of this moment, we must trace the long arc of government surveillance—from secret intelligence programs like COINTELPRO to today’s AI-driven digital dragnet embodied by data fusion centers.

The threat posed by today’s surveillance state did not emerge overnight. The groundwork was laid decades ago through covert government programs such as COINTELPRO (Counter Intelligence Program), launched by the FBI in 1956 and continuing into the early 1970s. Its explicit mission was to “disrupt, misdirect, discredit, or otherwise neutralize” political dissidents, including civil rights leaders, Vietnam War protesters, and Black liberation groups.

Under COINTELPRO, federal agents infiltrated lawful organizations, spread misinformation, blackmailed targets, and conducted warrantless surveillance.

Though exposed and publicly condemned by Congress, the spirit of COINTELPRO never died—it merely went underground and digital.

Post-9/11 legislation like the USA PATRIOT Act provided legal cover for mass surveillance, allowing intelligence agencies to collect phone records, monitor internet activity, and build profiles on American citizens without meaningful oversight. Fusion centers, initially conceived to coordinate counterterrorism efforts, became clearinghouses for domestic spying, facilitating data-sharing between federal agencies and local police.

Today, this infrastructure has merged with the tools of Big Tech.

With Palantir and similar firms at the helm, the government can now watch more people, more closely, for more arbitrary reasons than ever before. Dissent is once again being criminalized. Free expression is being categorized as extremism. And citizens—without ever committing a crime—can be flagged, tracked, and punished by an invisible digital bureaucracy that operates with impunity.

Building on this foundation of historical abuse, the government has evolved its tactics, replacing human informants with algorithms and wiretaps with metadata, ushering in an age where pre-crime prediction is treated as prosecution.

In the age of AI, your digital footprint is enough to convict you—not in a court of law, but in the court of preemptive suspicion.

Every smartphone ping, GPS coordinate, facial scan, online purchase, and social media like becomes part of your “digital exhaust”—a breadcrumb trail of metadata that the government now uses to build behavioral profiles. The FBI calls it “open-source intelligence.” But make no mistake: this is dragnet surveillance, and it is fundamentally unconstitutional.

Already, government agencies are mining this data to generate “pattern of life” analyses, flag “radicalized” individuals, and preemptively investigate those who merely share anti-government views. Whistleblowers have revealed that the FBI has flagged individuals as potential threats based on their internet search history, social media posts, religious beliefs, or associations with activist groups.

In a growing number of cases, individuals have found themselves visited by agents simply for attending a protest, making a political post, or appearing on the “wrong” side of a digital algorithm.

This is not law enforcement. This is thought-policing by machine.

The FBI has developed detailed dossiers on individuals based not on criminal activity, but on constitutionally protected expression—flagging citizens for visiting alternative media websites, criticizing government policies, or supporting causes deemed “extreme.”

According to leaked memos and internal documents, terms like “liberty,” “sovereignty,” and even the Gadsden flag have been cited as potential indicators of domestic extremism. In one case, a peaceful protester was interrogated for merely using encrypted messaging apps. In another, churchgoers were surveilled because their religious leader spoke critically of the government.

These cases are the logical outcome of a system that criminalizes dissent and deputizes algorithms to do the targeting.

Nor is this entirely new.

For decades, the federal government has reportedly maintained a highly classified database known as Main Core, designed to collect and store information on Americans deemed potential threats to national security.

Investigative journalists have revealed that Main Core may contain data on millions of individuals—compiled without warrants or due process—for potential use during a national emergency. As Tim Shorrock reported for Salon, “One former intelligence official described Main Core as ‘an emergency internal security database system’ designed for use by the military in the event of a national catastrophe, a suspension of the Constitution or the imposition of martial law.”

Trump’s embrace of Palantir, and its unparalleled ability to fuse surveillance feeds, social media metadata, public records, and AI-driven predictions, marks a dangerous evolution: a modern-day resurrection of Main Core, digitized, centralized, and fully automated.

What was once covert contingency planning is now becoming active policy.

What has emerged is a surveillance model more vast than anything dreamed up by past regimes—a digital panopticon in which every citizen becomes both observed and self-regulating.

Imagine a society in which every citizen is watched constantly, and every move is logged in a government database.

Imagine a state where facial recognition cameras scan your face at protests and concerts, where your car’s location is tracked by automatic license plate readers, where your biometric data is captured by drones, and where AI programs assign you a “threat assessment” score based on your behavior, opinions, associations, and even your purchases.

This is not science fiction. This is America—now.

This is the panopticon brought to life: a circular prison designed so that inmates never know when they are being watched, and thus must behave as if they always are. Jeremy Bentham’s original vision has become the model of modern-day governance: total visibility, zero accountability.

Our every move is being monitored, our every word recorded, our every action judged and categorized—not by humans, but by machines without conscience, without compassion, and without constitutional limits.

And in this surveillance state, the people have become inventory. Lives reduced to data points. Choices reduced to algorithms. Freedom reduced to a permission slip. You are no longer the customer. You are the product.

In this new reality, we are not only watched—we are measured, categorized, and sold back to the very systems that enslave us.

We are no longer free citizens.

We are data points in a digital control grid—commodified, categorized, and exploited.

In this new digital economy, our lives have become profit centers for corporations that track, trade, and monetize our every move.

The surveillance state is powered not only by authoritarian government impulses but by a corporate ecosystem that sees no distinction between the marketplace and the public square.

We are being bought and sold, not as citizens with rights, but as consumers to be studied and shaped.

Our autonomy is being eroded by design, not by accident.

This modern surveillance state knows everything about you—where you go, what you buy, what you read, who you associate with—and it uses that information to predict your behavior, shape your preferences, and ultimately control your actions.

Your phone is tracking you.

Your car is tracking you.

Your smart TV, internet searches, and digital assistant—all of it is being harvested to feed a growing network of AI-powered surveillance.

Even your refrigerator and your doorbell are reporting on you.

Every electronic device you use, every online transaction you make, every move you make through a smart city grid, adds another data point to your profile.

This is the machinery of oppression, and it is being refined daily.

The difference between past regimes and the one being constructed now is its subtlety. Today’s totalitarianism doesn’t come with jackboots and secret police. It comes with convenience. With apps. With “national security” justifications. With the illusion of safety.

As in the dystopian world of Soylent Green, where the individual is reduced to a consumable product of the system, today’s surveillance state treats Americans not as citizens but as data points to be harvested, scored, and fed back into the machine of control.

We are no longer governed—we are managed.

It is no less dangerous—just more efficient.

The tragedy, however, is that most Americans don’t see the bars being built around them, because the architecture of tyranny is disguised as convenience and cloaked in comfort.

Most Americans are still asleep to the danger. They live in a prison masquerading as paradise, where surveillance is sold as safety, compliance is branded as patriotism, and convenience has become the currency of captivity.

We have been conditioned to love our servitude, to decorate our cells with apps and smart devices, and to mistake technological dependency for freedom.

The prison walls are invisible, the bars digital, the guards automated.

We are inmates in a high-tech prison, lulled by convenience and pacified by illusion. We carry our tracking devices in our pockets. We whisper our secrets into microphones embedded in our own devices. We voluntarily surrender our privacy to digital overlords.

Meanwhile, those who dare question this system—journalists, whistleblowers, dissidents—are silenced, surveilled, and punished. All under color of law.

This is predictive policing turned preemptive prosecution. It is the very definition of a surveillance state.

As this technological tyranny expands, the foundational safeguards of the Constitution—those supposed bulwarks against arbitrary power—are quietly being nullified and their protections rendered meaningless.

What does the Fourth Amendment mean in a world where your entire life can be searched, sorted, and scored without a warrant? What does the First Amendment mean when expressing dissent gets you flagged as an extremist? What does the presumption of innocence mean when algorithms determine guilt?

The Constitution was written for humans—not for machine rule. It cannot compete with predictive analytics trained to bypass rights, sidestep accountability, and automate tyranny.

And that is the endgame: the automation of authoritarianism. An unblinking, AI-powered surveillance regime that renders due process obsolete and dissent fatal.

Still, it is not too late to resist—but doing so requires awareness, courage, and a willingness to confront the machinery of our own captivity.

Make no mistake: the government is not your friend in this. Neither are the corporations building this digital prison. They thrive on your data, your fear, and your silence.

To resist, we must first understand the weaponized AI tools being used against us.

We must demand transparency, enforce limits on data collection, ban predictive profiling, and dismantle the fusion centers feeding this machine.

We must treat AI surveillance with the same suspicion we once reserved for secret police. Because that is what AI-powered governance has become—secret police—only smarter, faster, and less accountable.

We must stop cooperating with our captors. Stop consenting to our own control. Stop feeding the surveillance machine with our data, our time, and our trust.

We don’t have much time.

Trump’s alliance with Palantir is a warning sign—not just of where we are, but of where we’re headed. A place where freedom is conditional, rights are revocable, and justice is decided by code.

The question is no longer whether we’re being watched—that is now a given—but whether we will meekly accept it. Will we dismantle this electronic concentration camp, or will we continue building the infrastructure of our own enslavement?

As I point out in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, if we trade liberty for convenience and privacy for security, we will find ourselves locked in a prison we helped build, and the bars won’t be made of steel. They will be made of data.

Source: https://tinyurl.com/4mxvwpz3

“If one company or small group of people manages to develop godlike digital superintelligence, they could take over the world. At least when there’s an evil dictator, that human is going to die. But for an AI, there would be no death. It would live forever. And then you’d have an immortal dictator from which we can never escape.”—Elon Musk

The Deep State is not going away. It’s just being replaced.

Replaced not by a charismatic autocrat or even a shadowy bureaucracy, but by artificial intelligence (AI)—unfeeling, unaccountable, and immortal.

As we stand on the brink of a new technological order, the machinery of power is quietly shifting into the hands of algorithms.

Under Donald Trump’s watch, that shift is being locked in for at least a generation.

Trump’s latest legislative initiative—a 10-year ban on AI regulation buried within the “One Big Beautiful Bill”—strips state and local governments of the ability to impose any guardrails on artificial intelligence until 2035.

Despite bipartisan warnings from 40 state attorneys general, the bill passed the House and awaits Senate approval. It is nothing less than a federal green light for AI to operate without oversight in every sphere of life, from law enforcement and employment to healthcare, education, and digital surveillance.

This is not innovation.

This is institutionalized automation of tyranny.

This is how, within a state of algorithmic governance, code quickly replaces constitutional law as the mechanism for control.

We are rapidly moving from a society ruled by laws and due process to one ruled by software.

Algorithmic governance refers to the use of machine learning and automated decision-making systems to carry out functions once reserved for human beings: policing, welfare eligibility, immigration vetting, job recruitment, credit scoring, and judicial risk assessments.

In this regime, the law is no longer interpreted. It is executed. Automatically. Mechanically. Without room for appeal, discretion, or human mercy.

These AI systems rely on historical data—data riddled with systemic bias and human error—to make predictions and trigger decisions. Predictive policing algorithms tell officers where to patrol and whom to stop. Facial recognition technology flags “suspects” based on photos scraped from social media. Risk assessment software assigns threat scores to citizens with no explanation, no oversight, and no redress.

These algorithms operate in black boxes, shielded by trade secrets and protected by national security exemptions. The public cannot inspect them. Courts cannot challenge them. Citizens cannot escape them.

The result? A population sorted, scored, and surveilled by machinery.
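
As a purely hypothetical illustration of that black-box scoring—again, not the workings of any actual vendor’s product—consider a toy threat-score function whose inputs, weights, and threshold are all hidden from the person being scored:

```python
# Hypothetical illustration of an opaque "threat score": the weights are
# invented, invisible to the person scored, and offer nothing to appeal.
HIDDEN_WEIGHTS = {                     # a real system's weights are trade secrets
    "prior_police_contacts": 2.5,
    "neighborhood_risk_index": 1.8,
    "flagged_social_connections": 3.0,
    "protest_attendance": 1.2,
}

def threat_score(person: dict) -> float:
    """Weighted sum over whatever fields the system happens to ingest."""
    return sum(weight * person.get(field, 0)
               for field, weight in HIDDEN_WEIGHTS.items())

person = {"prior_police_contacts": 1,
          "flagged_social_connections": 2,
          "protest_attendance": 3}
score = threat_score(person)
print(f"score={score:.1f}", "-> FLAGGED" if score > 10 else "-> not flagged")
# The subject never learns the inputs used, the weights, or the cutoff.
```

Nothing in such a pipeline asks whether the underlying data is accurate, lawful to collect, or even relevant—only whether the number clears a cutoff.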

This is the practical result of the Trump administration’s deregulation agenda: AI systems given carte blanche to surveil, categorize, and criminalize the public without transparency or recourse.

And these aren’t theoretical dangers—they’re already happening.

Examples of unchecked AI and predictive policing show that precrime is already here.

Once you are scored and flagged by a machine, the outcome can be life-altering—as it was for Michael Williams, a 65-year-old man who spent nearly a year in jail for a crime he didn’t commit. Williams was behind the wheel when someone in a passing car fired at his vehicle, killing his 25-year-old passenger, who had hitched a ride.

With no motive, no weapon, and no eyewitnesses, police charged Williams based on ShotSpotter, an AI-powered gunshot detection program. The system picked up a loud bang near the area and triangulated it to Williams’ vehicle. The charge was ultimately dropped for lack of evidence.

This is precrime in action. A prediction, not proof. An algorithm, not an eyewitness.

Programs like ShotSpotter are notorious for misclassifying noises like fireworks and construction as gunfire. Employees have even manually altered data to fit police narratives. And yet these systems are being combined with predictive policing software to generate risk maps, target individuals, and justify surveillance—all without transparency or accountability.

It doesn’t stop there.

AI is now flagging families for potential child neglect based on predictive models that pull data from Medicaid, mental health, jail, and housing records. These models disproportionately target poor and minority families. The algorithm assigns risk scores from 1 to 20. Families and their attorneys are never told what the scores are, or that they were used.

Imagine losing your child to the foster system because a secret algorithm said you might be a risk.

This is how AI redefines guilt.

The Trump administration’s approach to AI regulation reveals a deeper plan to deregulate democracy itself.

Rather than curbing these abuses, the Trump administration is accelerating them.

An executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” signed by President Trump in early 2025, revoked prior AI safeguards, eliminated bias audits, and instructed agencies to prioritize “innovation” over ethics. The order encourages every federal agency to adopt AI quickly, especially in areas like policing and surveillance.

Under the guise of “efficiency,” constitutional protections are being erased.

Trump’s 10-year moratorium on AI regulation is the logical next step. It dismantles the last line of defense—state-level resistance—and ensures a uniform national policy of algorithmic dominance.

The result is a system in which government no longer governs. It processes.

The federal government’s AI expansion is building a surveillance state that no human authority can restrain.

Welcome to Surveillance State 2.0, the Immortal Machine.

Over 1700 uses of AI have already been reported across federal agencies, with hundreds directly impacting safety and rights. Many agencies, including the Departments of Homeland Security, Veterans Affairs, and Health and Human Services, are deploying AI for decision-making without public input or oversight.

This is what the technocrats call an “algocracy”—rule by algorithm.

In an algocracy, unelected developers and corporate contractors hold more power over your life than elected officials.

Your health, freedom, mobility, and privacy are subject to automated scoring systems you can’t see and can’t appeal.

And unlike even the most entrenched human dictators, these systems do not die. They do not forget. They are not swayed by mercy or reason. They do not stand for re-election.

They persist.

When AI governs by prediction, due process disappears in a haze of machine logic.

The most chilling effect of this digital regime is the death of due process.

What court can you appeal to when an algorithm has labeled you a danger? What lawyer can cross-examine a predictive model? What jury can weigh the reasoning of a neural net trained on flawed data?

You are guilty because the machine says so. And the machine is never wrong.

When due process dissolves into data processing, the burden of proof flips. The presumption of innocence evaporates. Citizens are forced to prove they are not threats, not risks, not enemies.

And most of the time, they don’t even know they’ve been flagged.

This erosion of due process is not just a legal failure—it is a philosophical one, reducing individuals to data points in systems that no longer recognize their humanity.

Writer and visionary Rod Serling warned of this very outcome more than half a century ago: a world where technology, masquerading as progress under the guise of order and logic, becomes the instrument of tyranny.

That future is no longer fiction. What Serling imagined is now reality.

The time to resist is now, before freedom becomes obsolete.

To those who call the shots in the halls of government, “we the people” are merely the means to an end.

“We the people”—who think, who reason, who take a stand, who resist, who demand to be treated with dignity and care, who believe in freedom and justice for all—have become obsolete, undervalued citizens of a totalitarian state that, in the words of Serling, “has patterned itself after every dictator who has ever planted the ripping imprint of a boot on the pages of history since the beginning of time. It has refinements, technological advances, and a more sophisticated approach to the destruction of human freedom.”

In this sense, we are all Romney Wordsworth, the condemned man in Serling’s Twilight Zone episode “The Obsolete Man.”

“The Obsolete Man,” a story about the erasure of individual worth by a mechanized state, underscores the danger of rendering humans irrelevant in a system of cold automation and of a government that views people as expendable once they have outgrown their usefulness to the State. Yet—and here’s the kicker—this is where the government, through its monstrous inhumanity, also becomes obsolete.

As Serling noted in his original script for “The Obsolete Man,” “Any state, any entity, any ideology which fails to recognize the worth, the dignity, the rights of Man…that state is obsolete.”

Like Serling’s totalitarian state, our future will be defined by whether we conform to a dehumanizing machine order—or fight back before the immortal dictator becomes absolute.

We now face a fork in the road: resist the rise of the immortal dictator or submit to the reign of the machine.

This is not a battle against technology, but a battle against the unchecked, unregulated, and undemocratic use of technology to control people.

We must demand algorithmic transparency, data ownership rights, and legal recourse against automated decisions. We need a Digital Bill of Rights that guarantees:

  • The right to know how algorithms affect us.
  • The right to challenge and appeal automated decisions.
  • The right to privacy and data security.
  • The right to be free from automated surveillance and predictive policing.
  • The right to be forgotten.

Otherwise, AI becomes the ultimate enforcer of a surveillance state from which there is no escape.

As Eric Schmidt, former CEO of Google, warned: “We know where you are. We know where you’ve been. We can more or less know what you’re thinking about. Your digital identity will live forever… because there’s no delete button.”

An immortal dictator, indeed.

Let us be clear: the threat is not just to our privacy, but to democracy itself.

As I point out in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, the time to fight back is now—before the code becomes law, and freedom becomes a memory.

Source: https://tinyurl.com/pmj64bcb

“If one company or small group of people manages to develop godlike digital superintelligence, they could take over the world. At least when there’s an evil dictator, that human is going to die. But for an AI, there would be no death. It would live forever. And then you’d have an immortal dictator from which we can never escape.”—Elon Musk (2018)

The Deep State is about to go turbocharged.

While the news media fixates on the extent to which Project 2025 may be the Trump Administration’s playbook for locking down the nation, there is a more subversive power play taking place under cover of Trump’s unique brand of circus politics.

Take a closer look at what’s unfolding, and you will find that all appearances to the contrary, Trump isn’t planning to do away with the Deep State. Rather, he was hired by the Deep State to usher in the golden age of AI.

Get ready for Surveillance State 2.0.

To achieve this turbocharged surveillance state, the government is turning to its most powerful weapon yet: artificial intelligence. AI, with its ability to learn, adapt, and operate at speeds unimaginable to humans, is poised to become the engine of this new world order.

Over the course of 70 years, the technology has developed so rapidly that it has gone from early computers exhibiting a primitive form of artificial intelligence to machine learning (AI systems that learn from historical data) to deep learning (machine learning that mimics the human brain) to generative AI, which can create original content, i.e., it appears able to think for itself.

What we are approaching is the point of no return.

In tech speak, this point of no return is more aptly termed “singularity,” the point at which AI eclipses its human handlers and becomes all-powerful. Elon Musk has predicted that singularity could happen by 2026. AI scientist Ray Kurzweil imagines it happening closer to 2045.

While the scientific community has a lot to say about the world-altering impact of artificial intelligence on every aspect of our lives, little has been said about its growing role in government and its oppressive effect on our freedoms, especially “the core democratic principles of privacy, autonomy, equality, the political process, and the rule of law.”

According to a report from Accenture, generative AI has the potential to automate a significant portion of jobs across both the public and private sectors.

Here’s a thought: what if Trump’s pledge to cut the federal work force isn’t really about eliminating government bureaucracy but outsourcing it to the AI tech sector?

Certainly, Trump has made no secret of his plans to make AI a priority. Indeed, Trump signed the first-ever Executive Order on AI in 2019. More recently, Trump issued an executive order giving the technology sector a green light to develop and deploy AI without any guardrails in place to limit the risks it might pose to U.S. national security, the economy, public health or safety.

President Biden was no better, mind you. His executive order, which Trump repealed, merely instructed the tech sector to share the results of AI safety tests with the U.S. government.

Yet following much the same pattern that we saw with the rollout of drones, while the government has been quick to avail itself of AI technology, it has done little to nothing to ensure that the rights of the American people are protected.

Indeed, we are altogether lacking any guardrails for transparency, accountability and adherence to the rule of law when it comes to the government’s use of AI.

As Karl Manheim and Lyric Kaplan point out in a chilling article in the Yale Journal of Law & Technology about the risks to privacy and democracy posed by AI, “[a]rtificial intelligence is the most disruptive technology of the modern era… Its impact is likely to dwarf even the development of the internet as it enters every corner of our lives… Advances in AI herald not just a new age in computing, but also present new dangers to social values and constitutional rights. The threat to privacy from social media algorithms and the Internet of Things is well known. What is less appreciated is the even greater threat that AI poses to democracy itself.”

Cue the rise of “digital authoritarianism,” or “algocracy”—rule by algorithm.

In an algocracy, “Mark Zuckerberg and Sundar Pichai, CEOs of Facebook and Google, have more control over Americans’ lives and futures than do the representatives we elect.”

Digital authoritarianism, as the Center for Strategic and International Studies cautions, involves the use of information technology to surveil, repress, and manipulate the populace, endangering human rights and civil liberties, and co-opting and corrupting the foundational principles of democratic and open societies, “including freedom of movement, the right to speak freely and express political dissent, and the right to personal privacy, online and off.”

How do we protect our privacy against the growing menace of overreach and abuse by a technology sector working with the government?

The ability to do so may already be out of our hands.

In 2024, at least 37 federal government agencies ranging from the Departments of Homeland Security and Veterans Affairs to Health and Human Services reported more than 1700 uses of AI in carrying out their work, double the number reported the year before. That does not even begin to touch on agencies that did not report their usage, or usage at the state and local levels.

Of those 1700 cases at the federal level, 227 were labeled rights- or safety-impacting.

A particularly disturbing example of how AI is being used by government agencies in rights- and safety-impacting scenarios comes from an investigative report by The Washington Post on how law enforcement agencies across the nation are using “artificial intelligence tools in a way they were never intended to be used: as a shortcut to finding and arresting suspects without other evidence.”

This is what is referred to within tech circles as “automation bias,” a tendency to blindly trust decisions made by powerful software, ignorant of its risks and limitations. In one particular case, police used AI-powered facial recognition technology to arrest and jail a 29-year-old man accused of brutally assaulting a security guard. It would take Christopher Gatlin two years to clear his name.

Gatlin’s is one of at least eight known cases nationwide in which police reliance on AI facial recognition software has resulted in wrongful arrests arising from an utter disregard for basic police work—checking alibis, collecting evidence, corroborating DNA and fingerprint evidence, accounting for suspects’ physical characteristics—and for the need to meet constitutional standards of due process and probable cause. According to The Washington Post, “Asian and Black people were up to 100 times as likely to be misidentified by some software as White men.”

The number of cases in which AI has contributed to false arrests and questionable police work is likely much higher, given the extent to which police agencies across the country are adopting the technology, and it will only rise in the wake of the Trump Administration’s intent to shut down law enforcement oversight and policing reforms.

“How do I beat a machine?” asked one man who was wrongly arrested by police for assaulting a bus driver based on an incorrect AI match.

It is becoming all but impossible to beat the AI machine.

When used by agents of the police state, it leaves “we the people” even more vulnerable.

So where do we go from here?

For the Trump Administration, it appears to be full steam ahead, starting with Stargate, a $500 billion AI infrastructure venture aimed at building massive data centers. Initial reports suggest that the AI data centers could be tied to digital health records and used to develop a cancer vaccine. Of course, massive health data centers for use by AI will mean that one’s health records are fair game for any and all sorts of identification, tracking and flagging.

But that’s just the tip of the iceberg.

The surveillance state, combined with AI, is creating a world in which there’s nowhere to run and nowhere to hide. We’re all presumed guilty until proven innocent now.

Thanks to the 24/7 surveillance being carried out by the government’s sprawling spy network of fusion centers, we are all just sitting ducks, waiting to be tagged, flagged, targeted, monitored, manipulated, investigated, interrogated, heckled and generally harassed by agents of the American police state.

Without having ever knowingly committed a crime or been convicted of one, you and your fellow citizens have likely been assessed for behaviors the government might consider devious, dangerous or concerning; assigned a threat score based on your associations, activities and viewpoints; and catalogued in a government database according to how you should be approached by police and other government agencies based on your particular threat level.

Before long, every household in America will be flagged as a threat and assigned a threat score.

It’s just a matter of time before you find yourself wrongly accused, investigated and confronted by police based on a data-driven algorithm or risk assessment culled together by a computer program run by artificial intelligence.

It’s a setup ripe for abuse.

Writing for the Yale Journal, Manheim and Kaplan conclude that “[h]umans may not be at risk as a species, but we are surely at risk in terms of our democratic institutions and values.”

Privacy is especially at risk. As Manheim and Kaplan succinctly put it, “the right to make personal decisions for oneself, the right to keep one’s personal information confidential, and the right to be left alone are all ingredients of the fundamental right of privacy.”

Indeed, with every new AI surveillance technology that is adopted and deployed without any regard for privacy, Fourth Amendment rights and due process, the rights of the citizenry are being marginalized, undermined and eviscerated.

We teeter on the cusp of a cultural, technological and societal revolution the likes of which have never been seen before.

AI surveillance is already re-orienting our world into one in which freedom is almost unrecognizable by doing what the police state lacks the manpower and resources to do efficiently or effectively: be everywhere, watch everyone and everything, monitor, identify, catalogue, cross-check, cross-reference, and collude.

As Eric Schmidt, the former Google CEO, remarked, “We know where you are. We know where you’ve been. We can more or less know what you’re thinking about… Your digital identity will live forever… because there’s no delete button.”

The ramifications of any government wielding such unregulated, unaccountable power are chilling, as AI surveillance provides the ultimate means of repression and control for tyrants and benevolent dictators alike.

Indeed, China’s social credit system, where citizens are assigned scores based on their behavior and compliance, offers a glimpse into this dystopian future.

This is not a battle against technology itself, but against its misuse. It’s a fight to retain our humanity, our dignity, and our freedom in the face of unprecedented technological power. It’s a struggle to ensure that AI serves us, not the other way around.

Faced with this looming threat, the time to act is now, before the lines between citizen and subject, between freedom and control, become irrevocably blurred.

The future of freedom depends on it.

So demand transparency. Demand accountability.

Demand an Electronic Bill of Rights that protects “we the people” from the encroaching surveillance state.

We need safeguards in place to ensure:

  • The right to data ownership and control: the right to know what data is being collected about us, how it is being used, and who has access to it, along with the right to be “forgotten.”
  • The right to algorithmic transparency and due process accountability: the right to understand how the algorithms that affect us make decisions, particularly in areas like loan applications, job hiring, and criminal justice.
  • The right to privacy and data security, including restrictions on government and corporate use of AI-powered surveillance technologies, particularly facial recognition and predictive policing.
  • The right to digital self-determination: freedom from automated discrimination based on algorithmic profiling, and the ability to manage and control one’s online identity and reputation.
  • Effective mechanisms to seek redress for harms caused by AI systems.

AI deployed without any safeguards in place to protect against overreach and abuse, especially within government agencies, has the potential to become what Elon Musk described as an “immortal dictator,” one that lives forever and from which there is no escape.

Whatever you choose to call it—the police state, the Deep State, the surveillance state—this “immortal dictator” will be the future face of the government unless we rein it in now.

As I point out in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, next year could be too late.

Source: https://tinyurl.com/yy8etm6d

ABOUT JOHN W. WHITEHEAD

Constitutional attorney and author John W. Whitehead is founder and president of The Rutherford Institute. His most recent books are the best-selling Battlefield America: The War on the American People, the award-winning A Government of Wolves: The Emerging American Police State, and a debut dystopian fiction novel, The Erik Blair Diaries. Whitehead can be contacted at staff@rutherford.org. Nisha Whitehead is the Executive Director of The Rutherford Institute. Information about The Rutherford Institute is available at www.rutherford.org.

Publication Guidelines / Reprint Permission

John W. Whitehead’s weekly commentaries are available for publication to newspapers and web publications at no charge. Please contact staff@rutherford.org to obtain reprint permission.

“There are no private lives. This is a most important aspect of modern life. One of the biggest transformations we have seen in our society is the diminution of the sphere of the private. We must reasonably now all regard the fact that there are no secrets and nothing is private. Everything is public.” ― Philip K. Dick

Nothing is private.

We teeter on the cusp of a cultural, technological and societal revolution the likes of which have never been seen before.

While the political Left and Right continue to make abortion the face of the debate over the right to privacy in America, the government and its corporate partners, aided by rapidly advancing technology, are reshaping the world into one in which there is no privacy at all.

Nothing that was once private is protected.

We have not even begun to register the fallout from the tsunami bearing down upon us in the form of AI (artificial intelligence) surveillance, and yet it is already re-orienting our world into one in which freedom is almost unrecognizable.

AI surveillance harnesses the power of artificial intelligence and widespread surveillance technology to do what the police state lacks the manpower and resources to do efficiently or effectively: be everywhere, watch everyone and everything, monitor, identify, catalogue, cross-check, cross-reference, and collude.

Everything that was once private is now up for grabs to the right buyer.

Governments and corporations alike have heedlessly adopted AI surveillance technologies without any care or concern for their long-term impact on the rights of the citizenry.

As a special report by the Carnegie Endowment for International Peace warns, “A growing number of states are deploying advanced AI surveillance tools to monitor, track, and surveil citizens to accomplish a range of policy objectives—some lawful, others that violate human rights, and many of which fall into a murky middle ground.”

Indeed, with every new AI surveillance technology that is adopted and deployed without any regard for privacy, Fourth Amendment rights and due process, the rights of the citizenry are being marginalized, undermined and eviscerated.

Cue the rise of digital authoritarianism.

Digital authoritarianism, as the Center for Strategic and International Studies cautions, involves the use of information technology to surveil, repress, and manipulate the populace, endangering human rights and civil liberties, and co-opting and corrupting the foundational principles of democratic and open societies, “including freedom of movement, the right to speak freely and express political dissent, and the right to personal privacy, online and off.”

The seeds of digital authoritarianism were planted in the wake of the 9/11 attacks, with the passage of the USA Patriot Act. A massive 342-page wish list of expanded powers for the FBI and CIA, the Patriot Act justified broader domestic surveillance, the logic being that if government agents knew more about each American, they could distinguish the terrorists from law-abiding citizens.

It sounded the death knell for the freedoms enshrined in the Bill of Rights, especially the Fourth Amendment, and normalized the government’s mass surveillance powers.

Writing for the New York Times, Jeffrey Rosen observed that “before Sept. 11, the idea that Americans would voluntarily agree to live their lives under the gaze of a network of biometric surveillance cameras, peering at them in government buildings, shopping malls, subways and stadiums, would have seemed unthinkable, a dystopian fantasy of a society that had surrendered privacy and anonymity.”

Who could have predicted that 50 years after George Orwell typed the final words to his dystopian novel 1984, “He loved Big Brother,” we would come to love Big Brother?

Yet that is exactly what has come to pass.

After 9/11, Rosen found that “people were happy to give up privacy without experiencing a corresponding increase in security. More concerned about feeling safe than actually being safe, they demanded the construction of vast technological architectures of surveillance even though most empirical studies suggested that the proliferation of surveillance cameras had ‘no effect on violent crime’ or terrorism.”

In the decades following 9/11, a massive security-industrial complex arose that was fixated on militarization, surveillance, and repression.

Surveillance is the key.

We’re being watched everywhere we go. Speed cameras. Red light cameras. Police body cameras. Cameras on public transportation. Cameras in stores. Cameras on public utility poles. Cameras in cars. Cameras in hospitals and schools. Cameras in airports.

We’re being recorded at least 50 times a day.

It’s estimated that there are upwards of 85 million surveillance cameras in the U.S. alone, second only to China.

On any given day, the average American going about his daily business is monitored, surveilled, spied on and tracked in more than 20 different ways by both government and corporate eyes and ears.

Beware of what you say, what you read, what you write, where you go, and with whom you communicate, because it will all be recorded, stored and used against you eventually, at a time and place of the government’s choosing.

Yet it’s not just what we say, where we go and what we buy that is being tracked.

We’re being surveilled right down to our genes, thanks to a potent combination of hardware, software and data collection that scans our biometrics—our faces, irises, voices, genetics, microbiomes, scent, gait, heartbeat, breathing, behaviors—runs them through computer programs that can break the data down into unique “identifiers,” and then offers them up to the government and its corporate allies for their respective uses.

As one AI surveillance advocate proclaimed, “Surveillance is no longer only a watchful eye, but a predictive one as well.” For instance, Emotion AI, an emerging technology that is gaining in popularity, uses facial recognition technology “to analyze expressions based on a person’s faceprint to detect their internal emotions or feelings, motivations and attitudes.” China claims its AI surveillance can already read facial expressions and brain waves in order to determine the extent to which members of the public are grateful, obedient and willing to comply with the Communist Party.

This is the slippery slope that leads to the thought police.

The technology is already being used “by border guards to detect threats at border checkpoints, as an aid for detection and diagnosis of patients for mood disorders, to monitor classrooms for boredom or disruption, and to monitor human behavior during video calls.”

For all intents and purposes, we now have a fourth branch of government: the surveillance state.

This fourth branch came into being without any electoral mandate or constitutional referendum, and yet it possesses superpowers, above and beyond those of any other government agency save the military. It is all-knowing, all-seeing and all-powerful. It operates beyond the reach of the president, Congress and the courts, and it marches in lockstep with the corporate elite who really call the shots in Washington, DC.

The government’s “technotyranny” surveillance apparatus has become so entrenched and entangled with its police state apparatus that it’s hard to know anymore where law enforcement ends and surveillance begins.

The short answer: they have become one and the same entity. The police state has passed the baton to the surveillance state, which has shifted into high gear with the help of artificial intelligence technologies. The COVID-19 pandemic helped to further centralize digital power in the hands of the government at the expense of the citizenry’s privacy rights.

“From cameras that identify the faces of passersby to algorithms that keep tabs on public sentiment online, artificial intelligence (AI)-powered tools are opening new frontiers in state surveillance around the world.” So begins the Carnegie Endowment’s report on AI surveillance. “Law enforcement, national security, criminal justice, and border management organizations in every region are relying on these technologies—which use statistical pattern recognition, machine learning, and big data analytics—to monitor citizens.”

In the hands of tyrants and benevolent dictators alike, AI surveillance is the ultimate means of repression and control, especially through the use of smart city/safe city platforms, facial recognition systems, and predictive policing. These technologies are also being used by violent extremist groups, as well as sex, child, drug, and arms traffickers for their own nefarious purposes.

China, the role model for our dystopian future, has been a major force in deploying AI surveillance on its own citizens, especially by way of its social credit systems, which it employs to identify, track and segregate its “good” citizens from the “bad.”

Social media credit scores assigned to Chinese individuals and businesses categorize them according to whether or not they are worthy of being part of society. A real-name system—which requires people to use government-issued ID cards to buy mobile SIM cards, obtain social media accounts, take a train, board a plane, or even buy groceries—coupled with these scores ensures that those blacklisted as “unworthy” are banned from accessing financial markets, buying real estate or travelling by air or train. Among the activities that can get you labeled unworthy are taking reserved seats on trains or causing trouble in hospitals.

In much the same way that Chinese products have infiltrated almost every market worldwide and altered consumer dynamics, China is now exporting its “authoritarian tech” to governments around the globe in an effort to spread its brand of totalitarianism. In fact, both China and the United States have led the way in supplying the rest of the world with AI surveillance, sometimes at a subsidized rate.

This is how totalitarianism conquers the world.

While countries with authoritarian regimes have been eager to adopt AI surveillance, as the Carnegie Endowment’s research makes clear, liberal democracies are also “aggressively using AI tools to police borders, apprehend potential criminals, monitor citizens for bad behavior, and pull out suspected terrorists from crowds.”

Moreover, it’s easy to see how the China model for internet control has been integrated into the American police state’s efforts to flush out so-called anti-government, domestic extremists.

According to journalist Adrian Shahbaz’s in-depth report, there are nine elements to the Chinese model of digital authoritarianism when it comes to censoring speech and targeting activists: 1) dissidents suffer from persistent cyberattacks and phishing; 2) social media, websites, and messaging apps are blocked; 3) posts that criticize government officials are removed; 4) mobile and internet access are revoked as punishment for activism; 5) paid commentators drown out government criticism; 6) new laws tighten regulations on online media; 7) citizens’ behavior is monitored via AI and surveillance tools; 8) individuals are regularly arrested for posts critical of the government; and 9) online activists are made to disappear.

You don’t even have to be a critic of the government to get snared in the web of digital censorship and AI surveillance.

The danger posed by the surveillance state applies equally to all of us: lawbreaker and law-abider alike.

When the government sees all and knows all and has an abundance of laws to render even the most seemingly upstanding citizen a criminal and lawbreaker, then the old adage that you’ve got nothing to worry about if you’ve got nothing to hide no longer applies.

As Orwell wrote in 1984, “You had to live—did live, from habit that became instinct—in the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinized.”

In an age of too many laws, too many prisons, too many government spies, and too many corporations eager to make a fast buck at the expense of the American taxpayer, we are all guilty of some transgression or other.

No one is spared.

As Elise Thomas writes for Wired: “New surveillance tech means you’ll never be anonymous again.”

It won’t be long before we find ourselves looking back on the past with longing, back to an age where we could speak to whomever we wanted, buy whatever we wanted, think whatever we wanted, go wherever we wanted, feel whatever we wanted without those thoughts, words and activities being tracked, processed and stored by corporate giants, sold to government agencies, and used against us by militarized police with their army of futuristic technologies.

Tread cautiously: as I make clear in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, 1984 has become an operations manual for the omnipresent, modern-day AI surveillance state.

Without constitutional protections in place to guard against encroachments on our rights when power, AI technology and militaristic governance converge, it won’t be long before Philip K. Dick’s rules for survival become our governing reality: “If, as it seems, we are in the process of becoming a totalitarian society in which the state apparatus is all-powerful, the ethics most important for the survival of the true, free, human individual would be: cheat, lie, evade, fake it, be elsewhere, forge documents, build improved electronic gadgets in your garage that’ll outwit the gadgets used by the authorities.”

Source: https://bit.ly/3PGkWcK


“The government solution to a problem is usually as bad as the problem and very often makes the problem worse.”—Milton Friedman

You’ve been flagged as a threat.

Before long, every household in America will be similarly flagged and assigned a threat score.

Without having ever knowingly committed a crime or been convicted of one, you and your fellow citizens have likely been assessed for behaviors the government might consider devious, dangerous or concerning; assigned a threat score based on your associations, activities and viewpoints; and catalogued in a government database according to how you should be approached by police and other government agencies based on your particular threat level.

If you’re not unnerved over the ramifications of how such a program could be used and abused, keep reading.

It’s just a matter of time before you find yourself wrongly accused, investigated and confronted by police based on a data-driven algorithm or risk assessment compiled by a computer program run by artificial intelligence.

Consider the case of Michael Williams, who spent almost a year in jail for a crime he didn’t commit. Williams was behind the wheel when a passing car fired at his vehicle, killing his 25-year-old passenger Safarian Herring, who had hitched a ride.

Despite the fact that Williams had no motive, that there were no eyewitnesses to the shooting, that no gun was found in the car, and that Williams himself drove Herring to the hospital, police charged the 65-year-old man with first-degree murder based on ShotSpotter, a gunshot detection program that had picked up a loud bang on its network of surveillance microphones and triangulated the noise to correspond with a noiseless security video showing Williams’ car driving through an intersection. The case was eventually dismissed for lack of evidence.

Although gunshot detection programs like ShotSpotter are gaining popularity with law enforcement agencies, prosecutors and courts alike, they are riddled with flaws, mistaking “dumpsters, trucks, motorcycles, helicopters, fireworks, construction, trash pickup and church bells…for gunshots.”

As an Associated Press investigation found, “the system can miss live gunfire right under its microphones, or misclassify the sounds of fireworks or cars backfiring as gunshots.”

In one community, ShotSpotter worked less than 50% of the time.

Then there’s the human element of corruption which invariably gets added to the mix. In some cases, “employees have changed sounds detected by the system to say that they are gunshots.” Forensic reports prepared by ShotSpotter’s employees have also “been used in court to improperly claim that a defendant shot at police, or provide questionable counts of the number of shots allegedly fired by defendants.”

The same company that owns ShotSpotter also owns a predictive policing program that aims to use gunshot detection data to “predict” crime before it happens. Both Presidents Biden and Trump have pushed for greater use of these predictive programs to combat gun violence in communities, despite the fact that they have not been found to reduce gun violence or increase community safety.

The rationale behind this fusion of widespread surveillance, behavior prediction technologies, data mining, precognitive technology, and neighborhood and family snitch programs is purportedly to enable the government to take preemptive steps to combat crime (or whatever the government has chosen to outlaw at any given time).

This is precrime, straight out of the realm of dystopian science fiction movies such as Minority Report. It purports to prevent crimes before they happen, but in fact it’s just another means of getting the citizenry in the government’s crosshairs in order to lock down the nation.

Even Social Services is getting in on the action, with computer algorithms attempting to predict which households might be guilty of child abuse and neglect.

All it takes is an AI bot flagging a household for potential neglect for a family to be investigated, found guilty and the children placed in foster care.

Mind you, potential neglect can include everything from inadequate housing to poor hygiene, but is different from physical or sexual abuse.

According to an investigative report by the Associated Press, once incidents of potential neglect are reported to a child protection hotline, the reports are run through a screening process that pulls together “personal data collected from birth, Medicaid, substance abuse, mental health, jail and probation records, among other government data sets.” The algorithm then calculates the child’s potential risk and assigns a score of 1 to 20 to predict the risk that a child will be placed in foster care in the two years after they are investigated. “The higher the number, the greater the risk. Social workers then use their discretion to decide whether to investigate.”
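To make the mechanics concrete, here is a minimal, purely illustrative sketch of how such a threshold-based risk-scoring pipeline might operate. The field names, weights and cutoff below are hypothetical placeholders, not the actual screening tool the AP describes; the only details drawn from the reporting are the kinds of government data consulted, the 1-to-20 score, and the fact that a human screener then decides whether to investigate.

# Hypothetical sketch of a threshold-based risk-scoring flow (not the real tool).
# All field names, weights, and the cutoff are invented for illustration only.

from dataclasses import dataclass

@dataclass
class HouseholdRecord:
    prior_hotline_reports: int    # pulled from child-welfare records
    public_benefit_flags: int     # e.g., Medicaid / benefits data
    behavioral_health_flags: int  # mental health / substance abuse records
    justice_system_flags: int     # jail / probation records

def risk_score(record: HouseholdRecord) -> int:
    """Collapse several administrative-data signals into a single 1-20 score."""
    raw = (2 * record.prior_hotline_reports
           + record.public_benefit_flags
           + 3 * record.behavioral_health_flags
           + 3 * record.justice_system_flags)
    return max(1, min(20, raw))  # clamp to the 1-20 range described by the AP

def flag_for_review(record: HouseholdRecord, threshold: int = 14) -> bool:
    """Above the cutoff, the case is surfaced to a screener, who then uses discretion."""
    return risk_score(record) >= threshold

if __name__ == "__main__":
    household = HouseholdRecord(prior_hotline_reports=2,
                                public_benefit_flags=1,
                                behavioral_health_flags=1,
                                justice_system_flags=1)
    print(risk_score(household), flag_for_review(household))

Even this toy version makes the article’s point visible: the score is driven entirely by which government datasets a family happens to appear in, and the family never sees the number.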

Other predictive models being used across the country strive to “assess a child’s risk for death and severe injury, whether children should be placed in foster care and if so, where.”

Incredibly, there’s no way for a family to know if AI predictive technology was responsible for their being targeted, investigated and separated from their children. As the AP notes, “Families and their attorneys can never be sure of the algorithm’s role in their lives either because they aren’t allowed to know the scores.”

One thing we do know, however, is that the system disproportionately targets poor, black families for intervention, disruption and possibly displacement, because much of the data being used is gleaned from lower income and minority communities.

The technology is also far from infallible. In one county alone, a technical glitch presented social workers with the wrong scores, either underestimating or overestimating a child’s risk.

Yet fallible or not, AI predictive screening programs are being used widely across the country by government agencies to surveil and target families for investigation. The fallout of this over-surveillance, according to Aysha Schomburg, the associate commissioner of the U.S. Children’s Bureau, is “mass family separation.”

The impact of these kinds of AI predictive tools is being felt in almost every area of life.

Under the pretext of helping overwhelmed government agencies work more efficiently, AI predictive and surveillance technologies are being used to classify, segregate and flag the populace with little concern for privacy rights or due process.

All of this sorting, sifting and calculating is being done swiftly, secretly and incessantly with the help of AI technology and a surveillance state that monitors your every move.

Where this becomes particularly dangerous is when the government takes preemptive steps to combat crime or abuse, or whatever the government has chosen to outlaw at any given time.

In this way, government agents—with the help of automated eyes and ears, a growing arsenal of high-tech software, hardware and techniques, government propaganda urging Americans to turn into spies and snitches, as well as social media and behavior sensing software—are spinning a sticky spider-web of threat assessments, behavioral sensing warnings, flagged “words,” and “suspicious” activity reports aimed at snaring potential enemies of the state.

Are you a military veteran suffering from post-traumatic stress disorder? Have you expressed controversial, despondent or angry views on social media? Do you associate with people who have criminal records or subscribe to conspiracy theories? Were you seen looking angry at the grocery store? Is your appearance unkempt in public? Has your driving been erratic? Did the previous occupants of your home have any run-ins with police?

All of these details and more are being used by AI technology to create a profile of you that will impact your dealings with government.

It’s the American police state rolled up into one oppressive pre-crime and pre-thought crime package, and the end result is the death of due process.

In a nutshell, due process was intended as a bulwark against government abuses. Due process prohibits the government from depriving anyone of “Life, Liberty, and Property” without first ensuring that an individual’s rights have been recognized and respected and that they have been given the opportunity to know the charges against them and defend against those charges.

With the advent of government-funded AI predictive policing programs that surveil and flag someone as a potential threat to be investigated and treated as dangerous, there can be no assurance of due process: you have already been turned into a suspect.

To disentangle yourself from the fallout of such a threat assessment, the burden of proof rests on you to prove your innocence.

You see the problem?

It used to be that every person had the right to be assumed innocent until proven guilty, and the burden of proof rested with one’s accusers. That assumption of innocence has since been turned on its head by a surveillance state that renders us all suspects and overcriminalization which renders us all potentially guilty of some wrongdoing or other.

Combine predictive AI technology with surveillance and overcriminalization, then add militarized police crashing through doors in the middle of the night to serve a routine warrant, and you’ll be lucky to escape with your life.

Yet be warned: once you get snagged by a surveillance camera, flagged by an AI predictive screening program, and placed on a government watch list—whether it’s a watch list for child neglect, a mental health watch list, a dissident watch list, a terrorist watch list, or a red flag gun watch list—there’s no clear-cut way to get off, whether or not you should actually be on there.

You will be tracked wherever you go, flagged as a potential threat and dealt with accordingly.

If you’re not scared yet, you should be.

We’ve made it too easy for the government to identify, label, target, defuse and detain anyone it views as a potential threat for a variety of reasons that run the gamut from mental illness to having a military background to challenging its authority to just being on the government’s list of persona non grata.

As I make clear in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, you don’t even have to be a dissident to get flagged by the government for surveillance, censorship and detention.

All you really need to be is a citizen of the American police state.

Source: https://bit.ly/3N6L27u

“Abortion on demand is the ultimate State tyranny; the State simply declares that certain classes of human beings are not persons, and therefore not entitled to the protection of the law. The State protects the ‘right’ of some people to kill others, just as the courts protected the ‘property rights’ of slave masters in their slaves. Moreover, by this method the State achieves a goal common to all totalitarian regimes: it sets us against each other, so that our energies are spent in the struggle between State-created classes, rather than in freeing all individuals from the State. Unlike Nazi Germany, which forcibly sent millions to the gas chambers (as well as forcing abortion and sterilization upon many more), the new regime has enlisted the assistance of millions of people to act as its agents in carrying out a program of mass murder.”—Ron Paul

Who gets to decide when it comes to bodily autonomy?

Where does one draw the line over whose rights are worthy of protecting? And how do present-day legal debates over bodily autonomy, privacy, vaccine mandates, the death penalty and abortion play into future discussions about singularity, artificial intelligence, cloning, and the privacy rights of the individual in the face of increasingly invasive, intrusive and unavoidable government technologies?

Caught up in the heated debate over the legality of abortion, we’ve failed to think about what’s coming next. Get ready, because it could get scary, ugly and overwhelming really fast.

Thus far, abortion politics have largely revolved around who has the right to decide—the government or the individual—when it comes to bodily autonomy, the right to privacy in one’s body, sexual freedom, and the rights of the unborn.

In 1973, the U.S. Supreme Court ruled in Roe v. Wade that the Fourteenth Amendment’s Due Process Clause provides for a “right to privacy” that assures a woman’s right to abort her pregnancy within the first two trimesters.

Since that landmark ruling, abortion has been so politicized, polarized and propagandized as to render it a major frontline in the culture wars.

In Planned Parenthood v. Casey (1992), the Supreme Court reaffirmed its earlier ruling in Roe when it prohibited states from imposing an “undue burden” or “substantial obstacle in the path of a woman seeking an abortion before the fetus attains viability.”

Thirty years later, in the case of Dobbs v. Jackson Women’s Health Organization, the Supreme Court is poised to revisit whether the Constitution—namely, the Fourteenth Amendment—truly provides for the right to an abortion.

At a time when abortion is globally accessible (approximately 73 million abortions are carried out every year), a legally expedient form of birth control (it is used to end more than 60% of unplanned pregnancies), and considered a societal norm (according to the Pew Research Center, a majority of Americans continue to believe that abortion should be legal in all or most cases), it’s debatable whether it will ever be truly possible to criminalize abortion altogether.

No matter how the Supreme Court rules in Dobbs, it will not resolve the problem of a culture that values life based on a sliding scale. Nor will it help us navigate the moral, ethical and scientific minefields that await us as technology and humanity move ever closer to a point of singularity.

Here’s what I know.

Life is an inalienable right. By allowing the government to decide who or what is deserving of rights, we shift the entire discussion from one in which we are “endowed by our Creator with certain inalienable rights” (that of life, liberty, property and the pursuit of happiness) to one in which only those favored by the government get to enjoy such rights. The abortion debate—a tug-of-war over when an unborn child is considered a human being with rights—lays the groundwork for discussions about who else may or may not be deserving of rights: the disabled, the aged, the infirm, the immoral, the criminal, etc. The death penalty is just one aspect of this debate. As theologian Francis Schaeffer warned early on: “The acceptance of death of human life in babies born or unborn opens the door to the arbitrary taking of any human life. From then on, it’s purely arbitrary.”

If all people are created equal, then all lives should be equally worthy of protection. Yet both the Right and the Left, each according to its biases, embrace the idea that there is a hierarchy to life, with some lives worthier of protection than others. Out of that mindset are born the seeds of eugenics, genocide, slavery and war.

There is no hierarchy of freedoms. All freedoms hang together. Freedom cannot be a piece-meal venture. My good friend Nat Hentoff (1925-2017), a longtime champion of civil liberties and a staunch pro-lifer, often cited Cardinal Bernardin, who believed that a “consistent ethic of life” viewed all threats to life as immoral: “[N]uclear war threatens life on a previously unimaginable scale. Abortion takes life daily on a horrendous scale. Public executions are fast becoming weekly events in the most advanced technological society in history, and euthanasia is now openly discussed and even advocated. Each of these assaults on life has its own meaning and morality. They cannot be collapsed into one problem, but they must be confronted as pieces of a larger pattern.”

Beware slippery slopes. To suggest that the end justifies the means (for example, that abortion is justified in order to ensure a better quality of life for women and children) is to encourage a slippery-slope mindset that could just as reasonably justify ending a life for the greater good of preventing war, thwarting disease, defeating poverty, preserving national security, etc. Such arguments have been used in the past to justify such dubious propositions as subjecting segments of the population to secret scientific experiments, unleashing nuclear weapons on innocent civilians, and enslaving fellow humans.

Beware double standards. As the furor surrounding COVID-19 vaccine mandates makes clear, the debate over bodily autonomy and privacy goes beyond the singular right to abortion. Indeed, as vaccine mandates have been rolled out, long-held positions have been reversed: many of those who historically opposed the government usurping a woman’s right to bodily autonomy and privacy have no qualms about supporting vaccine mandates that trample upon those very same rights. Similarly, many of those who historically looked to the government to police what a woman does with her body believe the government should have no authority to dictate whether or not one opts to get vaccinated.

What’s next? Up until now, we have largely focused the privacy debate in the physical realm as it relates to abortion rights, physical searches of our persons and property, and our communications. Yet humanity is being propelled at warp speed into a whole new frontier when it comes to privacy, bodily autonomy, and what it means to be a human being.

We haven’t even begun to understand how to talk about these new realms, let alone establish safeguards to protect against abuses.

Humanity itself hangs in the balance.

Remaining singularly human and retaining your individuality and dominion over yourself—mind, body and soul—in the face of corporate and government technologies that aim to invade, intrude, monitor, manipulate and control us may be one of the greatest challenges before us.

These battles over COVID-19 vaccine mandates are merely the tipping point. The groundwork being laid with these mandates is a prologue to what will become the police state’s conquest of a new, relatively uncharted, frontier: inner space, specifically, the inner workings (genetic, biological, biometric, mental, emotional) of the human race.

If you were unnerved by the rapid deterioration of privacy under the Surveillance State, prepare to be terrified by the surveillance matrix that will be ushered in within the next few decades.

Everything we do is increasingly dependent on and, ultimately, controlled by technological devices. For example, in 2007, there were an estimated 10 million sensor devices connecting human-utilized electronic devices (cell phones, laptops, etc.) to the Internet. By 2013, that number had increased to 3.5 billion. By 2030, there will be an estimated 100 trillion sensor devices connecting us to the internet by way of a neural network that approximates a massive global brain.

The end goal? Population control and the creation of a new “human” species, so to speak, through singularity, a marriage of sorts between machine and human beings in which artificial intelligence and the human brain will merge to form a superhuman mind.

The plan is to develop a computer network that will exhibit intelligent behavior equivalent to or indistinguishable from that of human beings by 2029, and the goal is to have computers that will be “a billion times more powerful than all of the human brains on earth.” As former Google executive Mo Gawdat warns, “The reality is, we’re creating God.”

Neuralink, a brain-computer interface (BCI) chip, paves the way for AI control of the human brain, at which point the disconnect between humans and AI-controlled computers will become blurred and human minds and computers will essentially become one and the same. “In the most severe scenario, hacking a Neuralink-like device could turn ‘hosts’ into programmable drone armies capable of doing anything their ‘master’ wanted,” writes Jason Lau for Forbes.

Advances in neuroscience indicate that future behavior can be predicted based upon activity in certain portions of the brain, potentially creating a nightmare scenario in which government officials select certain segments of the population for more invasive surveillance or quarantine based solely upon their brain chemistry.

Clearly, we are rapidly moving into the “posthuman era,” one in which humans will become a new type of being. “Technological devices,” writes journalist Marcelo Gleiser, “will be implanted in our heads and bodies, or used peripherally, like Google Glass, extending our senses and cognitive abilities.”

Transhumanism—the fusing of machines and people—is here to stay and will continue to grow.

In fact, as science and technology continue to advance, the ability to control humans will only increase. In 2014, for example, it was revealed that scientists had discovered how to deactivate the part of the brain that controls whether we are conscious or not. Add to this the fact that humans will increasingly be implanted with microchips for such benign purposes as tracking children or as medical devices to assist with our health.

Such devices “point to an uber-surveillance society that is Big Brother on the inside looking out,” warns Dr. Katina Michael. “Governments or large corporations would have the ability to track people’s actions and movements, categorize them into different socio-economic, political, racial, or consumer groups and ultimately even control them.”

All of this indicates a new path forward for large corporations and government entities that want to achieve absolute social control.

It is slavery in another form.

Yet we must never stop working to protect life, preserve our freedoms and maintain some semblance of our humanity.

Abortion, vaccine mandates, transhumanism, etc.: these are all points along the continuum.

Even so, there will be others. For instance, analysts are speculating whether artificial intelligence, which will eventually dominate all emerging technologies, could come to rule the world and enslave humans. How will a world dominated by artificial intelligence redefine what it means to be human and exercise free will?

Scientists say the world’s first living robots can now reproduce. What rights are these “living” organisms entitled to? For that matter, what about clones? At the point that scientists are able to move beyond cloning organs and breeding hybrid animals to breeding full-bodied, living clones in order to harvest body parts, who is to say that clones do not also deserve to have their right to life protected?

These are ethical dilemmas without any clear-cut answers. Yet one thing is certain: as I make clear in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, putting the power to determine who gets to live or die in the hands of the government is a dangerous place to start.

Source: https://bit.ly/3G6OgEs
