AI Refusal in Libraries or AI Resistance and AI Reform?
On June 11th, Violet Fox published a post on the Association of College and Research Libraries (ACRL) blog entitled "AI Refusal in Libraries: A Starter Guide." Ever since then I've been troubled by it.
Fox’s arguments are solid and the supporting material good. I fully concur with many, if not all, of the concerns expressed. What has been troubling me is the word “refusal” and its implications.
For information professionals (and this post was clearly aimed at us), refusal sounds like abdication. It is an "opt-out" approach that reads as a disavowal of professional responsibility.
Two recent books by researchers whose work I have read widely and admire highlight why this is a concern:
Emily M. Bender and Alex Hanna. The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. HarperCollins, 2025.
R. David Lankes. Triptych: Death, AI, and Librarianship. 2025.
While Bender & Hanna do call for some "strategic refusal," a critical part of the book is about resistance and opportunities for AI reform, approaches very different from refusal. In doing so they underscore the importance of "trusted professionals" in overcoming the hype and focusing on responsible AI. Librarians are specifically identified as among those who can do this work.
In agreeing with Bender & Hanna about the role librarians can play, Lankes makes clear that to do this “librarians must understand AI at a deep level.”
And he continues that we must engage with AI:
“even if libraries see it as a fundamental betrayal of the values we hold dear, or a high-tech hallucination. Even if we never use it in writing, our collections already hold and will hold more AI written or AI enhanced materials. Even if we don’t use AI in our services to the community, the lives of our communities are already being shaped by AI.”
Librarians, collectively and individually, are doing this important work and learning about AI at a deep level. I note the excellent work at the AI development labs at the Library of Congress, Stanford, and the National Library of the Netherlands, as well as important work closer to home at the Ontario Council of University Libraries. A hat tip as well to ai4lam and AICOP on Discord for creating communities of professionals concerned about AI in libraries.
Understanding at a deep level can still mean working at a level of abstraction above the complex mathematics and statistical processing at the core of AI. But it does require study, attention, and the widespread and critical use of these tools.
Librarians and libraries are key partners in AI resistance and AI reform.
We cannot do this if we refuse.
…Mike
Postscript: OK, so it is fair to ask at this point, "What are YOU doing about it?"
My dissertation research led me to explainable AI (XAI): making AI more trustworthy and accountable through a more robust regime of AI explanations. Some explanations are aimed at programmers, others at everyday users (a variation called human-centered explainable AI, or HCXAI). All of this required the kind of deep dive into AI that Lankes calls for.
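For readers who want a concrete sense of what an "explanation" can look like, here is a minimal sketch (assuming Python with scikit-learn, and not drawn from my dissertation) of one common XAI technique, permutation importance, which reports which features most influence a model's predictions. The dataset and model are illustrative stand-ins.

```python
# A minimal sketch of one common XAI technique: permutation importance.
# Assumes Python with scikit-learn installed; the dataset and model are
# illustrative stand-ins, not a recommendation for any library project.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a simple classifier on a public dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# "Explain" the model by measuring how much accuracy drops when each
# feature's values are shuffled: the bigger the drop, the more the
# model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features.
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: importance {result.importances_mean[i]:.3f}")
```

Explanations aimed at everyday users, the HCXAI angle, would go further, translating numbers like these into language and context that non-specialists can act on.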
I’ve continued my research in HCXAI, written about it, and promoted it as critical to my academic colleagues. The next step is a bit different. And, I think, an example of how trusted professionals can engage.
In Canada, our new Prime Minister, Mark Carney, has included in his Cabinet a Minister of Artificial Intelligence (and Digital Innovation). During the election campaign the PM promoted AI as a component of a new, revitalized Canadian economy. Canada has been a world leader in AI research but less successful at developing Canadian-led companies and at achieving widespread AI adoption.
I applaud this ambitious strategy but know it comes with risks. Explainability is a risk-mitigation strategy. As a result, I will promote to the government (the Minister and my local MP, to start with) my understanding of explainability as a core element of trust, accountability, and adoption. Yes, a single voice. But a voice.


I wish I’d read this before I wrote The Hollowing: Looking for Human Spaces in AI Safety, AGI, and AI Resistance. I’ll update next week with a link to yours.
AI isn't the problem. Corporate control is. Get in the game, libraries!