Transatlantic Collaboration Explores AI’s Impact on Wikipedia
Prominent experts in Artificial Intelligence are uniting for a groundbreaking transatlantic research initiative aimed at examining the intricate relationship between Wikipedia and generative AI technologies.
The project, titled Curating the Commons – AI, Wikipedia, and the Reconstruction of Notability, will delve into how this globally recognised repository of knowledge both influences and is influenced by the ascendancy of AI and large language models (LLMs).
Bringing together distinguished scholars from the University of Exeter and the University of North Carolina, this endeavour seeks to investigate how this evolving interplay may pose challenges to the integrity and credibility of Wikipedia and broader digital knowledge, with the potential to replicate and exacerbate existing biases and inaccuracies.
This research is supported by a £171,000 grant from the Arts and Humanities Research Council through its Bridging Responsible AI Divides (BRAID) programme, and is scheduled to continue for two years.
All BRAID projects aim to fortify collaborations between US and UK researchers while addressing the ethical, legal, and societal ramifications of AI technologies.
“Wikipedia occupies a unique position at the heart of global knowledge production, frequently serving as the first port of call for online information seekers,” asserts project lead Dr Patrick Gildersleve, a Lecturer in Communications and Artificial Intelligence as well as Co-Director of Exeter’s Critical AI Centre.
“Its articles often form the cornerstone for automated information summaries users encounter daily on major search engines and AI-generated tools, shaping the perceptions and beliefs of millions worldwide.”
“Simultaneously, AI-generated content is increasingly being embedded within Wikipedia itself, either directly through automation or indirectly via human editors employing AI-assisted tools,” notes Dr Francesca Tripodi, co-investigator and founder of the Search Prompt Integrity and Learning Lab at the University of North Carolina.
“The crux of this project is to comprehend how this continuous, reciprocal relationship between Wikipedia and AI models influences the quality and reliability of public knowledge.”
The researchers will scrutinise the training data and outputs of AI tools, including Perplexity AI, Google’s AI Overviews, and ChatGPT, seeking instances where AI outputs reference Wikipedia directly or indirectly.

Furthermore, they will analyse how these tools determine who or what is considered “important” or “notable”, highlighting how they might reinforce Wikipedia’s own biases concerning gender, race, and geography.
Throughout the project, the team will engage with Wikipedia editors, participating in and organising online and in-person ‘edit-a-thons’ to explore editors’ perceptions of AI-generated content and to observe how decisions about content inclusion are made.
To mitigate potential pressure on the live Wikipedia site, a smaller platform named WikipedAI will be developed to simulate the editing process.
Source link: Miragenews.com.