Beyond Self-Regulation: AI, Media Power, and Public Accountability in Bangladesh
As Bangladesh moves toward a national AI policy, the debate over self-regulation in media reveals a deeper question: who will govern AI, by what norms, and in whose interest?
Bangladesh has entered a new phase in its AI debate. This is no longer a moment for broad wonder, fear, or technological spectacle. AI has moved from being discussed mainly as possibility to being handled as policy, newsroom workflow, public-service infrastructure, and a question of national direction. At a recent policy dialogue in Dhaka, several media leaders argued that AI in journalism should be governed primarily through self-regulation rather than law. Their concern was clear: in Bangladesh, legal regulation can easily become another instrument of control over media. That fear is real. But self-regulation alone is not enough for the kind of AI moment Bangladesh is now entering. (thedailystar.net)
This is precisely where the current conversation becomes important. Bangladesh’s official AI policy portal says the National Artificial Intelligence Policy 2026–2030 (Draft V2.0) has been released for public review, the consultation phase has concluded, and the committee is now reviewing feedback before final adoption. In other words, AI governance is no longer theoretical. It is being institutionalized. At the same time, the draft policy itself goes far beyond innovation talk. It frames AI as part of governance modernization, digital sovereignty, public service delivery, public trust, Bangla-language inclusion, risk management, and rights protection. (aipolicy.gov.bd)
This is why the current debate should not be reduced to a simple choice between state regulation and self-regulation. Both positions are too narrow. A state-only approach is risky in a country where legal instruments have often been used in ways that shrink rather than expand public freedom. But self-regulation, by itself, assumes that AI is mainly an internal professional matter for newsrooms, platforms, and private institutions. It is not. AI already affects truth, public trust, labor, language, visibility, disinformation, harassment, and the everyday relationship between institutions and citizens. Those are not merely private editorial questions. They are public questions. (thedailystar.net)
The recent Dhaka dialogue captured one side of the problem well. Shakhawat Liton warned that legal regulation could hand the government another tool to control the media. Shawkat Hossain argued that AI rules should align with each outlet’s editorial policy, and Talat Mamun suggested that standards set by major media houses could create a self-regulating norm across the sector. MRDI said it plans to develop a guideline for print, online, and television outlets. The discussion also revealed that AI is already being used in practice: Prothom Alo’s 60-word “Shorts” summaries are reportedly almost entirely AI-generated, while multiple speakers emphasized that journalists’ AI literacy remains weak and that localized codes are urgently needed. (thedailystar.net)
All of that is useful. None of it is sufficient.
Why? Because AI does not remain inside the newsroom once it is deployed. It shapes how information is summarized, translated, prioritized, visualized, recommended, and trusted. It affects how quickly errors spread, how deepfakes travel, how public fear is mobilized, how reputations are damaged, and how institutional authority is quietly reorganized under the language of efficiency. Once AI enters media, it also enters the public sphere more deeply. And once it enters public service, the stakes grow even further: classification, eligibility, redress, monitoring, and administrative judgment all begin to shift. Bangladesh’s draft policy clearly anticipates this broader transformation. It proposes a risk-based framework, rights-based safeguards for automated decision-making, public-service chatbots in Bangla, citizen redress mechanisms, AI labeling in digital media, and restrictions on harmful deepfake content.
This is where my earlier argument about the poetics and politics of AI must now be extended. In 2023, I argued that AI was not apolitical, that technology never arrives in Bangladesh as a neutral force, and that digital systems must be read through localization, inequality, political economy, and the public sphere (Chowdhury, 2023). That argument still stands. But today the issue has changed. The main question is no longer only whether AI is political. The question is: who gets to govern AI in Bangladesh, by what norms, and in whose interest?
Bangladesh’s draft policy gives us one answer. It imagines AI as a strategic instrument of national transformation. It links AI to economic growth, governance modernization, and regional leadership. It asserts digital sovereignty as a core principle, saying Bangladesh should retain control over critical data, digital infrastructure, and AI systems in order to protect national security, citizen rights, and data privacy. It also explicitly says that Bangla and other nationally relevant languages should be centered in model design and digital public services. This is important. It means the draft does not see AI only as imported software. It sees AI as part of a larger national project.
That national ambition deserves to be taken seriously. But it also needs to be questioned seriously.
Digital sovereignty sounds attractive, especially in a world where most advanced AI systems are produced elsewhere and where reliance on foreign platforms, chips, cloud infrastructure, and models can harden into new forms of dependency. Yet sovereignty is not a magic word. It does not automatically tell us how power will be distributed inside the country. A system can be sovereign from the outside and still opaque, exclusionary, or overly centralized from within. The real issue is not sovereignty alone, but democratic sovereignty: who can question decisions, who can appeal harms, who can inspect systems, and who can prevent AI from becoming another layer of unaccountable administrative power. Bangladesh’s draft policy moves in that direction by promising human review, explanation, citizen redress, and public reporting for public-sector systems. But those promises will matter only if they become institutional habits rather than policy language.
The media debate makes this tension especially visible. News organizations are right to fear that government-led regulation of AI can slide into political control. But self-regulation has its own weaknesses. Large media houses may create internal rules, but who represents the public in those rules? Who speaks for citizens harmed by AI-generated misinformation, manipulated visuals, reputational damage, automated summarization errors, or opaque recommendation systems? Who protects smaller outlets, freelancers, local-language platforms, and audiences who do not have institutional power? If AI governance is left only to outlets themselves, then professional self-interest can too easily masquerade as public ethics. (thedailystar.net)
What Bangladesh needs, then, is not only self-regulation and not only law. It needs a public ethics of AI.
A public ethics of AI would include at least five things.
First, it would protect editorial freedom from becoming subordinate to state control. Any legal framework touching journalism must be narrow, transparent, and insulated from partisan misuse. The Daily Star dialogue rightly warned that vague law can become another weapon. (thedailystar.net)
Second, it would require institutional transparency. Newsrooms, platforms, and public agencies using AI should disclose where AI is being used, for what purpose, with what level of human oversight, and with what procedures for correction. Bangladesh’s draft policy already moves toward this logic through AI labeling, public reporting, risk classification, and explanation rights.
Third, it would build independent oversight, not just internal codes. Self-regulation without outside scrutiny easily becomes self-protection. Bangladesh’s draft proposes oversight and audit mechanisms in public-sector deployment, but media and platform contexts also need some independent public-interest review architecture, even if it is not direct government command.
Fourth, it would center Bangla and social context. AI in Bangladesh cannot be governed only through imported templates or English-language assumptions. The draft policy is right to emphasize Bangla and cultural context. But that also raises deeper questions: whose Bangla will AI normalize? Which registers, accents, archives, and class-coded forms of expression will become privileged? A serious AI ethics for Bangladesh must include language politics, not as a side issue but as a central one.
Fifth, it would take public harm seriously. Bangladesh’s draft explicitly addresses disinformation, deepfakes, technology-facilitated gender-based violence, and harmful AI-generated media. It also recognizes exclusion risks, public distrust, and the possibility that automated systems can damage rights, livelihoods, and access to services. This is where the policy is strongest. It understands that AI is not only about productivity. It is also about injury.
Seen this way, the real problem is not whether AI should be governed. It is already being governed, formally or informally, by someone. The real problem is whether Bangladesh will allow AI norms to emerge through convenience, platform habit, newsroom improvisation, procurement shortcuts, and selective institutional power — or whether it will build democratic standards before those habits harden into structure. The draft policy, despite its limits, at least recognizes that AI touches rights, language, media, public administration, and national direction all at once. The media debate, despite its caution, still risks treating AI mainly as an internal sectoral issue. Neither perspective is enough on its own. (thedailystar.net)
My own view is simple: self-regulation is necessary, but it is not enough. Bangladesh needs a layered model — editorial codes inside media institutions, independent public-interest scrutiny beyond them, and narrowly framed rights-based legal safeguards that protect citizens without handing the state another blank instrument of control. The choice is not between freedom and accountability. The real task is to build forms of accountability that do not destroy freedom. (thedailystar.net)
Bangladesh’s AI debate has now moved from poetics to institutions. That is why the stakes are higher. AI is no longer just a story about future technology. It is becoming a story about how truth is mediated, how governance is organized, how language is standardized, how citizens are classified, and how public trust is negotiated. The question before us is not whether AI will enter Bangladeshi life more deeply. It already has. The question is whether Bangladesh will govern this transition as a democratic public issue, or leave it to a mixture of fear, convenience, and opaque power. (Chowdhury, 2023)
References
Chowdhury, M. Z. (2023, August 7). Poetics and politics of AI. New Age. https://www.newagebd.net/article/208732/poetics-and-politics-of-ai
Government of Bangladesh. (2026). National Artificial Intelligence Policy 2026–2030 (Draft V2.0). AI Policy Bangladesh. https://aipolicy.gov.bd/
Self-regulation needed amid shift in AI: Say media leaders at Star-MRDI policy dialogue. (2026, April 7). The Daily Star. https://www.thedailystar.net/news/bangladesh/news/self-regulation-needed-amid-shift-ai-4145336