A few articles have recently been published that report research conducted with my colleagues as part of the Everyday Misinformation Project, funded by the Leverhulme Trust and led by Andrew Chadwick (Loughborough University). This mixed-methods project ran from 2021-24 and focused on people’s everyday experiences, social contexts, and media diets to investigate how potentially misleading information spreads online, particularly on personal messaging apps such as WhatsApp, Facebook Messenger, Snapchat, or Apple Messages.
The latest publication is a study titled “Credibility as a double-edged sword: The effects of deceptive source misattribution on disinformation discernment on personal messaging”, published in Journalism & Mass Communication Quarterly and coauthored with Andrew Chadwick, Natalie-Anne Hall, and Brendan Lawson. The abstract is below:
Disinformation often features reputable sources to boost false information’s credibility, but does this deceptive source misattribution shape its spread on personal messaging? In a preregistered between-subjects survey experiment on U.K. WhatsApp users (N = 2,580), we showed participants WhatsApp messages containing true or false news attributed to either British Broadcasting Corporation (BBC) News or no source. Attribution to BBC News significantly increased message credibility. Importantly, however, participants’ responses to false messages attributed to BBC News were statistically indistinguishable from their responses to true messages. On personal messaging, source credibility can boost the spread of accurate news but can also be used deceptively to propagate falsehoods.
Another recent addition is “Unpacking credibility evaluation on digital media: a case for interpretive qualitative approaches”, published in the Annals of the International Communication Association and coauthored with Pranav Malhotra, Natalie-Anne Hall, Yiping Xia, Louise Stahl, Andrew Chadwick, and Brendan Lawson. Here is the abstract:
We argue for more serious consideration of interpretive qualitative approaches in research on information credibility evaluation in digitally mediated contexts. Through reviewing existing literature on credibility and drawing on our own experiences of conducting research projects on credibility evaluation in diverse cultural contexts, we contend that interpretive qualitative approaches help researchers develop a much-needed communicative and relationally and culturally situated understanding of credibility, complicating dominant quantitative and psychologically-oriented accounts. We detail how these approaches add important nuance to how credibility is conceptualized and operationalized and reveal the complexity of credibility evaluation as a social process. We also outline how they aid researchers studying misinformation engagement, especially in popular bounded social media places like private groups and chats. The approach we develop here provides new insights that can inform ongoing global efforts by researchers, policy makers, and citizens to more fully understand the complexity of information verification online.
In 2025, two more articles were also published in a journal issue after being available online first for a while. First, “The trustworthiness of peers and public discourse: exploring how people navigate numerical dis/misinformation on personal messaging platforms”, which came out in Information, Communication & Society and was coauthored with Brendan Lawson, Andrew Chadwick, and Natalie-Anne Hall. Below is the abstract:
Numbers are essential to how citizens understand the world, but also have distinctive power to confuse or manipulate. Numerical claims permeate online dis/misinformation, yet relatively little is known about how people engage with them. We conducted in-depth interviews (W1 N = 102, W2 N = 80) to explore how people gauge the trustworthiness of numbers on personal messaging platforms – highly popular yet difficult-to-research online spaces. Adopting a relational approach to informational trustworthiness, we find that numbers were not perceived as objective facts but as biased, technical, and verifiable. This spurred participants to engage in three practices to establish trustworthiness: contextualising peers’ motivations with reference to public discourse, selectively trusting peers’ competence in light of public signals of salient expertise, and using public sources to assess what peers share. These practices, which we found endured over time, suggest that norms of verification and correction on messaging platforms involve a complex integration of information from interpersonal relationships and public discourse.
Finally, “Misinformation rules!? Could ‘group rules’ reduce misinformation in online personal messaging?”, published in New Media & Society with Andrew Chadwick and Natalie-Anne Hall. Here is the abstract:
Personal messaging platforms are hugely popular and often implicated in the spread of misinformation. We explore an unexamined practice on them: when users create “group rules” to prevent misinformation entering everyday interactions. Our data are a subset of in-depth interviews with 33 participants in a larger program of longitudinal qualitative fieldwork (N = 102) we conducted over 16 months. Participants could also donate examples of misinformation via our customized smartphone application. We find that some participants created group rules to mitigate what they saw as messaging’s harmful affordances. In the context of personalized trust relationships, these affordances were perceived as making it likely that misinformation would harm social ties. Rules reduce this vulnerability and can stimulate metacommunication that, over time, fosters norms of collective reflection and epistemic vigilance, although the impact differs subtly according to group size and membership. Subject to further exploration, group rulemaking could reduce the spread of online misinformation.
All articles are available open access. I hope you find them helpful!