Information Integrity and Generative AI: How Law Tries (and Often Fails) to Protect Democracy from Deepfakes, Micro-targeting, and Algorithmic Lies – Lessons from the EU, Germany, and Turkey
[This PhD proposal was written by me, with occasional help from Grok, Claude, Gemini, ChatGPT, SciSpace, DeepSeek, Mistral and others for grammar, flow, and cutting repetitive bits. Every idea, every comparison, every frustration in this text is mine. I take full responsibility.]
Primary Research Question
How can law actually protect the information environment that democracy needs to survive the flood of generative AI – deepfakes, synthetic propaganda, behavioural manipulation – without killing free speech or innovation in the process?
Sub-questions
- What tools have the EU, Germany, and Turkey actually built to fight AI-generated disinformation and deepfakes?
- Do any of these tools work in real life, especially against micro-targeting and behavioural manipulation?
- Who ends up being held accountable – platforms, model makers, or nobody?
- How do different constitutional traditions handle the clash between “save democracy now” and “but we have rights”?
- What can we steal from each system to build something that actually works without turning into censorship?
Introduction
I’ve spent the last ten years bouncing between courtrooms, university lecture halls, and WhatsApp groups where journalists forward the latest blocked article at 2 a.m. In that time I’ve watched the ground shift under us.
In 2019 a deepfake porn video of an opposition politician spread like wildfire the night before elections. We filed complaints. Nothing happened for months. In 2023, during the general elections, hundreds of accounts pumped out AI-generated "news" that Erdoğan had died / had won 70% / had fled the country – all in the space of an hour. By the time anyone reacted, the damage was done.
These are not hypotheticals for me. They are Tuesday.
Generative AI has broken the old contract: we used to assume political speech had an author you could sue, shame, or vote out. Now the author is a server farm somewhere, the evidence can be manufactured in seconds, and the amplifier is an algorithm nobody fully understands.
Europe is trying with the DSA and the AI Act. Germany wraps everything in constitutional proportionality. Turkey… well, Turkey took similar-looking laws and turned them into blunt instruments. Watching these three systems side by side is like watching the same play performed by three very different theatres: same script, wildly different outcomes.
That’s why I’m doing this PhD. I want to understand why the same legal tools that (sort of) work in Berlin become weapons in Ankara, and whether Brussels’ grand co-regulatory vision can survive contact with reality. Most of all I want to figure out what actually protects citizens when truth becomes optional.
Background and Comparative Rationale
I picked the EU, Germany, and Turkey deliberately because they are so different yet weirdly connected.
- The EU is the optimistic supranational experiment: let’s make platforms do systemic risk assessments and give researchers data.
- Germany is the anxious perfectionist: every moderation decision must survive three levels of judicial review.
- Turkey is the warning: what happens when the judiciary is intimidated, civil society is exhausted, and the same transparency clauses are used to demand data localisation instead of accountability.
I’m not pretending these are perfectly sealed boxes – Turkish drafts copy-paste from Brussels, German judges cite EU law, Brussels quietly worries about Turkish-style enforcement creep. But the differences in democratic health are stark, and that health seems to matter more than clever legal design.
The Turkish Context
Let me be blunt: in Turkey we have some of the most advanced-sounding internet laws in the world, and one of the worst enforcement cultures.
Law 5651 started in 2007 as a gambling-and-obscenity blocker. Fifteen years later it's a full-blown content-control regime: mandatory local representatives, 24-hour takedown deadlines, advertising bans and bandwidth throttling if you refuse, criminal articles for "disinformation that disturbs public order" (guess who decides what disturbs public order).
I’ve sat in courtrooms where the judge openly says “I can’t rule against the BTK decision, my promotion depends on it.” I’ve written Constitutional Court applications that were decided correctly – two years after the election everyone was fighting about.
And now generative AI walks in. Suddenly the state doesn’t even need to fabricate evidence the old way – it can generate it. Or claim something is AI-generated when it isn’t. The 2022 Disinformation Law is tailor-made for this ambiguity.
That’s why Turkey isn’t just a “negative case”. It’s the future arriving early.
Researcher Positionality
I’m a weird hybrid and I’m done pretending otherwise.
By day I teach constitutional law and IP at university. By night (and weekends) I still take digital-rights cases – mostly pro bono because no one else will. I’m on the mailing lists of half the NGOs in Istanbul, I get phone calls from journalists when their accounts are throttled, and I’ve lost count of how many urgent Constitutional Court applications I’ve filed after midnight.
This isn’t “field access”. This is my life.
That position gives me things a pure academic rarely gets: off-the-record conversations with regulators who admit the system is broken, real-time data on which platforms fold first when threatened, and a gut sense of how fast principles collapse when political survival is on the line.
It also means I’m not neutral. I’m pissed off. But I’ve learned to channel that anger into systematic comparison instead of just yelling on Twitter.
Methodology
- Classic doctrinal work on laws and court decisions (nothing sexy).
- 12–14 long interviews with people who actually do this stuff: BTK officials (anonymously), platform policy people in Berlin and Brussels, Turkish judges who still believe in the rule of law, exhausted NGO lawyers.
- Analysis of transparency reports, blocked-URL lists, election-period takedown waves.
- A lot of triangulation because in Turkey official numbers and reality rarely meet.
I’m not promising generalisable laws of physics. I’m promising the most honest account I can write of how these systems actually behave when the stakes are high.
Expected Contributions
- A useful way to talk about “epistemic vulnerability” that lawyers can actually use in court.
- A clear explanation of why the same law produces accountability in Germany and control in Turkey.
- Hard evidence that institutional trust matters more than elegant legal design.
- Practical suggestions for the EU before it slides toward the Turkish version of “systemic risk mitigation”.
Conclusion
Four years from now generative AI will be ten times more powerful. Deepfakes will be indistinguishable, micro-targeting will be surgical.
I don’t know if law can keep up. But I do know this: if democratic institutions are healthy, even imperfect laws can be made to work. If they’re not, even the most beautiful regulation becomes another tool for control.
That’s what this PhD is about. Not saving democracy with clever clauses, but understanding the conditions under which law still stands a chance.
I’m ready to start.