The Care Calculator
Can technology treat being informed as something other than a transaction?
By Adam Ashby Gibbard
The debate around AI in journalism contains the usual fears about job replacement, but the real concerns run deeper. Can we trust models trained on stolen content to produce accurate information? Should we accept AI-generated summaries that bypass the people who wrote the original work, especially when those summaries regularly invent facts with full confidence? Google’s no-click AI responses and chatbots dispensing medical misinformation reveal a system where AI has been anointed with an authority it never earned, while the policies governing its use lag perpetually behind.
Even within that mess, there’s still a need to address an information landscape that has become too overwhelming and harmful to sustain the flow of information people need to make informed decisions, whether personal, social, or political. This essay examines whether AI, despite its threats, could help rebuild people’s relationship with being informed, if wielded with intention rather than extraction in mind.
AI in Journalism
We’re at a turning point where AI is upending traditional forms and processes of work and communication, not to mention how we construct truth and build consciousness, while also offering possibility, imagination, and hope, all within a very short period of time. In the field of AI and journalism, there’s a similar mix of optimistic explorations and harrowing warnings. There’s no separating the positive and negative aspects of AI, especially since its development and application have arrived at a moment when the digitization of the world has left journalism struggling with its identity.
The term AI has many popular interpretations, but it is best understood as a general umbrella term for computer programs that resemble human intelligence and creativity: algorithms, machine learning, neural networks, and natural language processing. The primary focus here is generative AI, like ChatGPT, which can create text, images, audio, and video from prompts.
Since the penny press, technological developments have played an important part in aiding journalists in their work while also causing transitional upheavals. Although ChatGPT is only about two years old, AI has been used in journalism since 2014, when automated journalism first saw limited use[1]. News organizations are now adopting AI technologies as part of automated journalism, better understood as a variety of AI-driven tools that produce machine-generated content[2]. The use of AI in journalism is aimed primarily at efficiencies in gathering, assessing, creating, and presenting news[3]. Major news outlets now use AI to fact-check, transcribe, summarize, cross-check, edit, and even generate entire articles[4].
While this is happening, general awareness of AI in journalism remains relatively low, with 49 percent of people having heard little to nothing about it[5]. When it comes to trust in AI content, only a minority currently feels comfortable using news made by humans with the help of AI (36 percent), and an even smaller proportion is comfortable using news made mostly by AI with human oversight[5].
AI in journalism is caught in the battle for truth, facing exploitation by those who spread disinformation. Nobel Prize-winning journalist Maria Ressa has spoken extensively on the battle for truth and the dangers of social media, first as a warning following the Cambridge Analytica scandal, and now even more urgently as AI expands the potential to spread misinformation and disinformation. She states plainly that “without facts, you can’t have truth. Without truth, you can’t have trust. Without trust, we have no shared reality, no democracy, and it becomes impossible to deal with our world’s existential problems: climate, coronavirus, the battle for truth”[6].
The debates surrounding AI are many, but given the scale of the issues we face, there is a lot of hidden potential in AI to get us where we need to be much faster. That potential is worth discussing, so long as we avoid the trap of proselytizing AI as the savior of humanity and the source of the cure for cancer.
A Different Framework
Research on human motivation identifies three core needs that drive engagement: autonomy, competence, and relatedness. When these three psychological needs are met, people are more likely to be motivated and engaged with a given activity, a finding that extends to conversational agents like Siri or Alexa[7]. In passive news consumption, these needs often go unmet, which can lead to disengagement.
This framework offers a minimum set of wellbeing requirements that can be applied to all technologies, regardless of context or activity[8]. From that standpoint, the creation of an alternate form of news consumption would aim to provide autonomy by giving the user control over content and how it is delivered, foster competence by providing a simple interface that helps people understand and contextualize information, and build relatedness through interaction that feels human and connected rather than one-way broadcasting.
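To make that concrete, here is a minimal, purely hypothetical sketch in Python of how the three needs might be used as a design checklist for such a tool. The need names come from the framework above; the `DesignChoice` structure and the example entries are invented for illustration and describe no existing system.

```python
from dataclasses import dataclass, field

# The three psychological needs from self-determination theory.
NEEDS = ("autonomy", "competence", "relatedness")

@dataclass
class DesignChoice:
    """One design decision and the needs it is meant to support (illustrative only)."""
    description: str
    supports: set = field(default_factory=set)  # subset of NEEDS

def unmet_needs(choices: list) -> set:
    """Return any core need that no design choice currently supports."""
    covered = set()
    for choice in choices:
        covered |= choice.supports & set(NEEDS)
    return set(NEEDS) - covered

if __name__ == "__main__":
    proposed_design = [
        DesignChoice("User chooses topics, sources, and delivery pace", {"autonomy"}),
        DesignChoice("Plain-language summaries with linked context", {"competence"}),
        DesignChoice("Dialogue that invites questions instead of broadcasting",
                     {"relatedness", "autonomy"}),
    ]
    gaps = unmet_needs(proposed_design)
    print("Unmet needs:", gaps or "none")  # prints "none" for this example
```

The point of the sketch is only that the framework can function as a minimum bar: if any of the three needs is left unsupported, the design is incomplete, regardless of how engaging it might otherwise be.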
The distinction between active and passive consumption matters here. Passive consumption—scrolling feeds, absorbing headlines, being delivered information—tends to be extrinsically motivated, driven by external rewards like social validation or fear of missing out. Active consumption—asking questions, seeking specific information, engaging in dialogue—tends to be intrinsically motivated, driven by genuine curiosity and a desire to understand. This shift from passive to active isn’t just semantic; it fundamentally changes how people relate to information and how that information impacts their wellbeing.
There’s also the concept of eudaimonic media, which focuses on meaning and fulfillment rather than just pleasure or entertainment. Where most news delivery optimizes for clicks and engagement, a eudaimonic approach would optimize for understanding, connection, and the capacity to act on what you’ve learned.
People are moving to platforms like ChatGPT in huge numbers precisely because they meet our needs for human-like interaction and eudaimonia. ChatGPT reached 800 million weekly active users by September 2025, processing over 1 billion queries daily[9][10]. The platform grew from 100 million to 800 million users in less than two years, with 92% of Fortune 500 companies now using it[10]. Among Americans, one-third have used an AI chatbot in the past three months, and 62% prefer engaging with chatbots rather than waiting for human agents[11][12]. This isn’t just hype—it’s a fundamental shift in how people seek and process information.
When the News Talks Back
AI chatbots mark the first time people can hold meaningful, dynamic, human-like conversations with non-human actors. News chatbots aren’t new, though. Major news organizations like the Wall Street Journal, CNN, ABC, and NBC implemented early news chatbots through Facebook Messenger, and while people were receptive to them, limitations in their programming made them slow, inaccurate, and generally inflexible[13]. Quartz Brief was the first to try to instill some personality into the chatbot format and found some popularity[14].
In testing existing news chatbots, one study found that their success relies on providing information relevant to the request, being up to date, offering diverse information, replying quickly, and responding in as human-like a way as possible[13]. A similar study published during the pandemic found that chatbots could be a very effective means of disseminating important and timely information, especially given the ability to personalize or localize how content is delivered[15].
One study examined whether a conversational news delivery mode affects intrinsic motivation and user engagement more positively than a linear one[16]. Compared with linear delivery, conversational news feeds were more comfortable, people remembered more information, and news in the form of a dialogue generated higher engagement[16].
A similar study tested whether people would be more open to differing points of view from a chatbot than from a news website, asking whether chatbots might be better at addressing polarization[17]. People were more likely to accept opposing news from a chatbot while also perceiving it to be more credible[17].
But this isn’t a techno-utopian argument. The same characteristics that make chatbots potentially powerful for constructive engagement also make them dangerous in the wrong hands.
The Risks We Can’t Ignore
As writer Nathan Robinson points out, AI’s development is happening within a capitalist framework where “the incentive structure pushes toward using AI to increase profits rather than human flourishing”[18]. The environmental costs alone are staggering—generating responses through AI uses approximately 16 ounces of water per 5-50 prompts, with data centers consuming massive amounts of energy to keep servers cool[19]. This is before considering the carbon footprint of training these models in the first place.
There’s also the very real fear of job displacement. If AI can generate articles, edit copy, and fact-check at scale, what happens to journalists? The concern isn’t hypothetical—newsrooms are already cutting positions while expanding AI capabilities.
Beyond economics and environment, there are psychological risks specific to conversational AI. The “uncanny valley” effect occurs when something appears almost, but not quite, human, creating discomfort and distrust. Chatbots that seem too empathetic or too understanding can cross into manipulation, especially when users form emotional dependencies on interactions that feel personal but aren’t. Meta has already integrated advertising into chatbot experiences, raising questions about whether these tools are designed to inform or to sell.
There’s also the risk that making news consumption more comfortable could reduce the productive friction that motivates action. If a chatbot can help you process difficult information without feeling overwhelmed, does that mean you’re more likely to act on that information—or less likely because the emotional urgency has been mediated away?
Care Over Clicks
What might journalism look like if success were measured not by clicks or time spent on page, but by whether people felt more informed, more capable, and more connected to their communities? This isn’t about abandoning accountability or watchdog journalism; it’s about recognizing that the current metrics incentivize harm. When the goal is to keep eyes on screens, negativity, outrage, and fear become features rather than bugs.
A care-centered approach would ask different questions: Did this help someone understand a complex issue? Did it provide context that made them feel less anxious and more capable? Did it connect them to others grappling with the same concerns? Did it offer pathways to action rather than just documenting problems? These aren’t soft questions—they’re fundamental to whether journalism serves its democratic function.
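Taking the essay’s title literally for a moment, the sketch below shows what a “care calculator” might compute. The four signals mirror the questions above; the survey-style inputs, equal weights, and example numbers are assumptions made purely for illustration, not a proposed measurement standard.

```python
from dataclasses import dataclass

@dataclass
class StoryFeedback:
    """Hypothetical reader feedback, e.g. from a short post-read survey (0.0 to 1.0 each)."""
    understood_issue: float      # Did this help someone understand a complex issue?
    felt_more_capable: float     # Did context leave them less anxious and more capable?
    felt_connected: float        # Did it connect them to others with the same concerns?
    found_path_to_action: float  # Did it offer pathways to action?

def click_score(page_views: int, seconds_on_page: float) -> float:
    """The familiar attention metric: eyes on screens, nothing more."""
    return page_views * seconds_on_page

def care_score(fb: StoryFeedback) -> float:
    """A care-centered alternative: the average of the four signals above.
    Equal weighting is an arbitrary choice made only for illustration."""
    signals = (fb.understood_issue, fb.felt_more_capable,
               fb.felt_connected, fb.found_path_to_action)
    return sum(signals) / len(signals)

if __name__ == "__main__":
    outrage_piece = StoryFeedback(0.3, 0.1, 0.2, 0.0)
    explainer = StoryFeedback(0.9, 0.7, 0.6, 0.8)
    # The outrage piece can "win" on attention while losing badly on care.
    print("clicks(outrage)   =", click_score(page_views=50_000, seconds_on_page=35.0))
    print("clicks(explainer) =", click_score(page_views=8_000, seconds_on_page=210.0))
    print(f"care(outrage)     = {care_score(outrage_piece):.2f}")  # 0.15
    print(f"care(explainer)   = {care_score(explainer):.2f}")      # 0.75
```

The contrast is the whole argument in miniature: the same two stories can rank in opposite orders depending on whether attention or care is the thing being counted.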
The shift from passive to active consumption, from broadcasting to conversation, from extraction to care—these aren’t just design choices. They’re ethical choices about what kind of relationship we want between people and information, and what kind of society that relationship will create.
As I’ve argued in a different essay, there’s a clear need to address the humanity lost in how we’re presently informed if we’re going to make collective progress.
A Question of Direction
Journalism serves a democratic function—it provides the information people need to self-govern[20]. If we’re designing AI tools for journalism, we must consider democratic implications, not just efficiency gains. An informed public isn’t just about access to facts; it’s about the capacity to process those facts, contextualize them, and act on them collectively.
The same AI that could help someone navigate information overload could also be weaponized to create filter bubbles, reinforce biases, or manipulate emotional responses for profit. The technology itself is neutral—its impact depends entirely on the incentives driving its development and deployment.
This is where the framework of autonomy, competence, and relatedness becomes critical. Does the tool give users control, or does it control users? Does it build understanding, or does it simplify to the point of distortion? Does it connect people to a shared reality, or does it splinter them into personalized realities that can’t speak to each other?
AI is the proverbial double-edged sword, but it’s better understood as a tool. The question isn’t whether the technology itself is good or bad—it’s who’s wielding it, for what purpose, and whether our economic system even allows us to benefit from what it could offer.
Regardless of the threats, LLMs are far better at processing and verifying information than humans drowning in overload. They are, at their core, information organizers that process billions of pieces of data, words, and ideas. Our minds do this too, but an AI doesn’t have to eat, sleep, work, parent, date, budget, cook, commute, socialize, or take care of itself. It’s a wildly complex information calculator that emulates human speech, and people should start treating it like that. When presented with a really hard math problem, you could work it out on paper, or you could use a calculator. That’s where we are with information right now, and getting the answer right is becoming harder and harder to do.
The choice ahead isn’t whether to use these tools. It’s whether we design them to care for people or extract from them. Can we build technology that treats being informed as something other than a transaction? That’s not a technical question—it’s an ethical one, and the answer will shape what kind of informed public we’re capable of becoming.
This essay is adapted from my MA thesis examining AI’s potential role in reimagining news consumption for mental health.
References
- Moravec, V., Hynek, N., Skare, M., Gavurova, B., & Kubak, M. (2024). Human or machine? The perception of artificial intelligence in journalism, its socio-economic conditions, and technological developments toward the digital future. Technological Forecasting and Social Change, 200, 123162.
- Lewis, S. C., Guzman, A. L., & Schmidt, T. R. (2019). Automation, Journalism, and Human–Machine Communication: Rethinking Roles and Relationships of Humans and Machines in News. Digital Journalism, 7(4), 409–427.
- Opdahl, A. L., Tessem, B., Dang-Nguyen, D.-T., Motta, E., Setty, V., Throndsen, E., Tverberg, A., & Trattner, C. (2023). Trustworthy journalism through AI. Data & Knowledge Engineering, 146, 102182.
- Editor’s Note. (2023, Winter). Artificial Intelligence and the Future of Journalism: Where Do We Go From Here? Newspaper Research Journal, 44(1), 3–5.
- Newman, N., Fletcher, R., Robertson, C. T., Arguedas, A. R., & Nielsen, R. K. (2024). Reuters Institute Digital News Report 2024. Reuters Institute for the Study of Journalism.
- Ressa, M. (2021). The Nobel Peace Prize 2021. NobelPrize.org. https://www.nobelprize.org/prizes/peace/2021/ressa/lecture
- Yang, X., & Aurisicchio, M. (2021, January 19). Designing Conversational Agents: A Self-Determination Theory Approach. Proceedings of the 2021 ACM Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3411764.3445445
- Peters, D. (2023). Wellbeing Supportive Design: Research-Based Guidelines for Supporting Psychological Wellbeing in User Experience. International Journal of Human–Computer Interaction, 39(14), 2965–2977.
- Nerdy Nav. (2025, October 4). Latest ChatGPT Statistics: 800M+ Users, Revenue (Oct 2025). Nerdy Nav.
- DesignRush. (2025, June 30). ChatGPT Usage Statistics and Trends in 2025. DesignRush.
- Master of Code. (2026, January 7). ChatGPT Statistics in Companies [January 2026]. Master of Code.
- Exploding Topics. (2024, November 24). 40+ Chatbot Statistics (2025). Exploding Topics.
- Zhang, Z., Zhang, X., & Chen, L. (2021). Informing the Design of a News Chatbot. Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents, 224–231.
- Willens, M. (2019, June 21). Quartz is shutting down its Quartz Brief mobile app July 1. Digiday. https://digiday.com/media/quartz-is-shutting-down-its-quartz-brief-mobile-app-july-1/
- Maniou, T. A., & Veglis, A. (2020). Employing Chatbots for News Dissemination during the Covid-19 Pandemic. Trends in Ict, 34.
- Köb, L., Schlögl, S., & Richter, E. (2022). Chatbots for News Delivery: Investigations into Intrinsic Motivation and User Engagement. In L. Uden, I.-H. Ting, & B. Feldmann (Eds.), Knowledge Management in Organisations (pp. 294–305). Springer International Publishing.
- Zarouali, B., Makhortykh, M., Bastian, M., & Araujo, T. (2021). Overcoming polarization with chatbot news? Investigating the impact of news content containing opposing views on agreement and credibility. European Journal of Communication, 36(1), 53–68.
- Robinson, N. J. (2023, May 24). The grim fantasy of capitalist AI. Current Affairs.
- Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models. arXiv preprint arXiv:2304.03271.
- Kovach, B., & Rosenstiel, T. (2014). The elements of journalism: What newspeople should know, and the public should expect (Revised and updated third edition). Three Rivers Press.
