For many, navigating information overload has become a daily struggle.
Australians are increasingly disengaging from conventional news sources, turning instead to social media platforms, influencers, and, more recently, generative artificial intelligence (AI) chatbots and summaries.
We find ourselves in a murky digital landscape where opaque algorithms dictate what information we consume, often with little regard for accuracy, quality, or the evidence-based journalism essential to a robust community.
Simultaneously, local journalism faces an alarming decline. Distrust in mainstream media is escalating, exacerbated by the emergence of “zero-click” AI-generated search results that present information without directing users to news sites.
This trend diminishes traffic to news sites, eroding audience engagement, subscription viability, and overall revenue. The unchecked proliferation of AI is pushing an already fragile news ecosystem to the brink.
A recent News Futures: Media Policy Roundtable convened 45 leaders spanning industry, government, non-profit sectors, digital platforms, and academia.
The consensus was clear: the opacity surrounding algorithms on social media, search engines, and AI platforms poses a significant threat to journalism and erodes audience trust.
The ensuing report, published today, advocates for a fundamental re-evaluation of how journalism is supported and defined in Australia.
Misinformation is Flourishing
Misinformation thrives when demand for information eclipses the supply of verified evidence. A vibrant and abundant flow of quality news can act as a counterbalance to this disarray.
Our research indicates a strong correlation between news consumption and the public’s capability to verify misinformation.
Regrettably, current laws and civic education initiatives have not kept pace with the burgeoning realm of AI-generated content, including deepfakes.
The absence of definitive standards for tracing online content origins or verifying its authenticity continues to exacerbate the issue. Given that many AI systems operate as black boxes, accountability remains elusive when errors or biases emerge.
Australians’ confidence in verifying misinformation is low. Only around 40% express confidence in their ability to assess the trustworthiness of websites or social media posts, and just 43% believe they can determine whether online information is true.
This dilemma is further compounded by the increasing prevalence of low-quality or fabricated AI-generated content, often referred to as ‘AI slop’ or ‘hallucinations.’ Notably, Australians are among the most apprehensive globally regarding online misinformation.
People Don’t Know Whom to Trust
Roundtable experts expressed concern about the low level of media and AI literacy among the public. Many Australians struggle to assess online information and are unsure which sources to trust.
Faced with overwhelming uncertainty, many Australians resort to disengagement, with 69% often avoiding news altogether.
The digital landscape itself proves to be an unreliable conduit for news. Through algorithms, digital platforms execute invisible and unaccountable decisions that reshape public access to information.
By curating content selectively, these intermediaries elevate certain narratives while demoting others, with scant regard for quality or accuracy.
Yet platforms have little incentive to explain how their algorithms work or to disclose changes, including how news is prioritized or how AI-generated content is produced.
There is an urgent need for transparency in algorithmic curation and for mandatory labeling of AI-generated material.
Where to From Here?
Roundtable participants identified five key priorities that could significantly improve our information ecosystem. Three focus primarily on AI.
1. Increased transparency from major tech platforms. Australians deserve clarity on how algorithms curate news across search engines, social media, and AI chatbots. Additionally, clear labeling of AI-generated content is needed to rebuild trust and empower users.
2. Equitable regulations for AI’s use of journalistic content. AI enterprises should not exploit journalism without remuneration. Industry-wide licensing agreements, copyright reforms, and robust competition laws could ensure that news organizations are compensated when their work contributes to training generative AI models.
3. Prioritizing education on media and AI literacy. Equipping citizens with knowledge on algorithmic functioning, along with strategies to identify bias and misinformation, represents one of the most effective interventions available. This education must extend beyond schools to provide ongoing opportunities for adult upskilling.
4. Funding journalism as a public good. One-off grants are insufficient. Proposals like tax offsets for journalists’ salaries present a sustainable solution that could directly benefit newsrooms, particularly smaller and regional outlets, ensuring accountability.
5. Journalism training for content creators and digital-first outlets. A standardized industry code is essential for maintaining quality across the news ecosystem, necessitating collaboration among stakeholders.
In a landscape where invisible AI influences our perception, society cannot afford an information environment devoid of integrity.

Without decisive measures, the public interest journalism that underpins democracy and societal cohesion is at risk of further decline.
Source link: Theconversation.com.