AI Phobia: A Live Talk on Fear, Facts, and Freedom

Artificial intelligence is everywhere—at work, in schools, in our phones, and increasingly in politics and policing. On Saturday, January 3, 2026 at 12PM EST, Dearborn Blog hosts a live online discussion to separate legit concerns from sci-fi panic—and talk about what community control could actually look like.


The event: “AI PHOBIA” (Live Online)

Dearborn Blog’s Live Discussions series—hosted by Dr. Ali Ajami and Wissam Charafeddine—returns with a timely theme: AI PHOBIA.

When: Saturday, January 3, 2026 — 12PM EST
Where: Online livestream — follow @DEARBORNBLOG for the link and reminders
Guests:

  • Philena Farley — Legal Senior Service Specialist and Ohio Green Party Co-Chair
  • Mike Akanan — Controls Engineering Manager & Serial Entrepreneur

This conversation isn’t about “AI is magic” or “AI is evil.” It’s about the real stuff: jobs, privacy, bias, deepfakes, surveillance, corporate power, and what regular people can do besides doom-scroll.


Why “AI phobia” is real (and not just dramatic)

Humans have always been suspicious of powerful new tools. Fire burned villages. Electricity shocked people. The internet… well, it invented comment sections.

AI is different in one key way: it doesn’t just extend our muscles (like machines) or extend our reach (like the internet). It extends decision-making—sometimes invisibly, at scale, and with consequences. That’s why major institutions have started pushing formal “risk management” approaches: not to kill innovation, but to stop innovation from steamrolling basic rights. The NIST AI Risk Management Framework is one example—focused on identifying and managing AI risks to people and society, not just technical performance. [1]
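
To make "risk management" less abstract, here is a minimal sketch (ours, not an official NIST artifact) of how a community group might turn the framework's four core functions (Govern, Map, Measure, Manage) into a plain self-audit checklist. The four function names come from the AI RMF itself; every question below is an example we wrote for illustration.

```python
# Illustrative sketch only: a plain-Python self-audit loosely organized around
# the NIST AI RMF's four core functions (Govern, Map, Measure, Manage).
# The questions are our own examples, not official NIST language.

CHECKLIST = {
    "Govern": [
        "Who is accountable when this system makes a harmful decision?",
        "Is there a written rule for when a human must review the output?",
    ],
    "Map": [
        "What decisions does this system influence, and about whom?",
        "Which groups could be disproportionately affected?",
    ],
    "Measure": [
        "Has anyone tested error rates across different groups of people?",
        "Can affected people report mistakes, and does anyone read the reports?",
    ],
    "Manage": [
        "Can the system be paused or rolled back if harm is found?",
        "Who reviews incidents, and how often?",
    ],
}

def print_audit(checklist: dict[str, list[str]]) -> None:
    """Print the checklist so a meeting can walk through it question by question."""
    for function, questions in checklist.items():
        print(f"\n== {function} ==")
        for question in questions:
            print(f"  [ ] {question}")

if __name__ == "__main__":
    print_audit(CHECKLIST)
```

The point isn't the code. It's that "managing AI risk" can be as concrete as a list of questions a city council, school board, or union local walks through before a system goes live.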

UNESCO’s global AI ethics recommendation goes even broader, grounding AI governance in human rights, dignity, transparency, and human oversight—a fancy way of saying: machines shouldn’t quietly run our lives without accountability. [2]


The fears people actually have (aka: not killer robots… mostly)

When folks say they’re worried about AI, they usually mean things like:

  • “Will I lose my job, or get squeezed harder at work?”
  • “Is this system biased against me?”
  • “Who’s collecting my data—and what are they doing with it?”
  • “How do we even tell what’s real anymore?” (deepfakes, fake news, synthetic audio)
  • “Are governments and corporations using this to surveil or punish dissent?”

Those aren’t irrational fears. They’re political-economy fears—about power, incentives, and the fact that new technology tends to get used first by the people who already have the most leverage.


Jobs: the truth is messier than the hot takes

You’ll hear two loud stories:

  1. “AI will take all the jobs.”
  2. “AI won’t change anything.”

Reality, as always, chooses chaos.

The International Labour Organization (ILO) has emphasized that the biggest near-term effect of generative AI is often task transformation and augmentation, not immediate full automation—while still warning that exposure is uneven across occupations and can be highly gendered (with heavy impacts on clerical-style work). [3]

Meanwhile, the World Economic Forum’s Future of Jobs Report 2025 frames AI as a major driver of skill shifts and job churn through 2030—meaning some roles shrink, others grow, and many jobs get redesigned around new workflows. [4]

Translation: the danger isn’t only unemployment. It’s also who benefits from productivity gains—and whether workers get more dignity and time, or just more monitoring and speed-ups.



Worth knowing: The ILO finds generative AI is more likely to reshape tasks than instantly erase entire occupations—while the World Economic Forum predicts major workforce transitions as employers reorganize around AI and data-driven work. [3][4]

Rights, privacy, and the “who controls the machine” question

A lot of AI anxiety boils down to one issue: control.

Even when AI tools are “optional,” workplaces can quietly turn them into requirements. Even when AI systems are “assistive,” they can become decision-makers in practice—especially when managers, agencies, or contractors treat the model’s output like it’s gospel.

That’s why global and national frameworks keep repeating the same drumbeat: transparency, accountability, and human oversight. [1][2]

And it’s not theoretical. The UN General Assembly has adopted a global AI resolution emphasizing privacy and human rights risks—an early sign that governments know the stakes are real, even if they don’t always regulate like they mean it. [5]


The global context: AI + war + surveillance (yes, it matters here)

AI “phobia” spikes when people see the technology used in life-and-death settings.

The UN has repeatedly warned about lethal autonomous weapon systems, with the UN Secretary-General calling them morally unacceptable and pushing for prohibition. [6] Humanitarian and legal experts have also highlighted risks in AI-assisted targeting, arguing it can worsen civilian harm when treated as a shortcut to “certainty.” [7]

And in the real world, major reporting has examined how AI tools and cloud/analytics systems are being used in modern warfare—including in Israel’s war on Gaza—raising serious ethical concerns about accountability, errors, and civilian protection. [8]

Dearborn doesn’t live in a bubble. Our communities are deeply connected to global struggles—especially when tech built “over there” gets deployed “over here” for policing, immigration surveillance, workplace monitoring, and narrative control. Talking about AI ethics without talking about human rights is like talking about traffic safety while ignoring cars.


Why this matters in Dearborn

Dearborn sits at a crossroads of:

  • Manufacturing and engineering culture (automation isn’t new here—AI is the next wave)
  • Small business hustle (AI tools can help… or undercut livelihoods)
  • Immigrant and minority communities (often first to feel surveillance creep)
  • Activism and civic engagement (where disinformation and digital targeting are growing threats)

So this conversation isn’t abstract. It’s personal, economic, and political.

And it’s also an opportunity: communities can push for public-interest rules—worker protections, algorithmic transparency, bans or limits on abusive surveillance, and procurement standards that prevent government agencies from buying “black box” systems that no one can audit.

The goal isn’t to worship AI or fear it—
it’s to govern it.

That’s the core of “AI PHOBIA”: moving from vibes to strategy.


How to watch and participate

  • Follow @DEARBORNBLOG for the livestream link and reminders.
  • Join live and drop questions in the comments—especially practical ones:
    • What AI tools should workers refuse to be monitored by?
    • What should schools and universities disclose when using AI?
    • What policies match a people-first, Green, pro-human-rights approach?

Social media caption (copy/paste)

AI PHOBIA — LIVE discussion 🤖⚡️
This Saturday, Jan 3, 2026, at 12PM EST, we’re going live to talk about the real fears behind AI: jobs, privacy, bias, deepfakes, surveillance—and what regular people can actually do about it.

Hosted by Dr. Ali Ajami & Wissam Charafeddine
Guests: Philena Farley (Ohio Green Party Co-Chair) + Mike Akanan (Controls Engineering Manager/Entrepreneur)

Follow @DEARBORNBLOG for the livestream link.
Drop your questions in the comments now 👇

#Dearborn #ArtificialIntelligence #TechEthics #Privacy #Labor #GreenParty #CommunityPower #FreeSpeech #HumanRights


Disclaimer

Event details are based on the official promotional flyer and are subject to change by organizers or platform requirements. Dearborn Blog provides this information for community awareness and discussion, not as legal or professional advice. For corrections or comments you’d like added to this post, email info@dearbornblog.com.


Sources

  1. National Institute of Standards and Technology (NIST), Artificial Intelligence Risk Management Framework (AI RMF 1.0).
  2. UNESCO, Recommendation on the Ethics of Artificial Intelligence (global standard adopted by Member States).
  3. International Labour Organization (ILO), Generative AI and Jobs: A Global Analysis of Potential Effects on Job Quantity and Quality (Working Paper 96) and related updates.
  4. World Economic Forum, The Future of Jobs Report 2025 (2025–2030 outlook).
  5. Reuters, coverage of the UN General Assembly’s first global AI resolution emphasizing human rights and privacy risks.
  6. UN Office for Disarmament Affairs (UNODA), UN position on lethal autonomous weapons systems.
  7. International Committee of the Red Cross (ICRC), analysis of the risks posed by AI systems in military targeting support.
  8. Associated Press, investigation of AI tools used in Israel’s war operations and the ethical concerns raised.
