16 July 2025

Investigating the “Feeling Rules” of Generative AI and Imagining Alternative Futures

In Brief

Since the public debut of ChatGPT in November 2022, the calls for librarians to adopt and promote generative AI (GenAI) technologies and to teach “AI literacy” have become part of everyday work life. For instruction librarians with reservations about encouraging widespread GenAI use, these calls have become harder to sidestep as GenAI technologies are rapidly integrated into search tools of all types, including those that libraries pay to access. In this article, I explore the dissonance between, on the one hand, instruction librarians’ pedagogical goals and professional values and, on the other, the capacities, limitations, and costs of GenAI tools. Examining discourse on GenAI and AI literacy, I pay particular attention to messages we hear about the appropriate ways to think and feel about GenAI. These “feeling rules” often stand in the way of honest and constructive dialogue and collective decision making. Ultimately, I consider work from within and outside librarianship that offers another view: that we can slow down, look honestly at GenAI capacities and harms, take seriously the choice some librarians may make to limit their GenAI use, and collectively explore the kinds of futures we want for our libraries, our students, fellow educators, and ourselves.

By Andrea Baer


At the April 2025 Association of College & Research Libraries Conference, academic library workers gathered in person and online to explore the theme “Democratizing Knowledge + Access + Opportunity.” Before sessions about how to integrate generative AI (GenAI) tools into essential public services like teaching and research support, sociologist and professor of African American Studies Ruha Benjamin offered the opening keynote. Articulating the resonance of the conference theme for her, Benjamin reflected, “One way to understand the stakes of this conference, … why it’s so vital that we work in earnest to democratize knowledge, access, and opportunity at a moment when powerful forces are working overtime to monopolize, control, and ration these social goods, is that this is a battle over who gets to own the future, which is also a battle over who gets to think their own thoughts, who gets to speak and express themselves freely, and ultimately who gets to create” (Benjamin, 2025). Noting that technologies are never neutral but rather reflect “the values or lack thereof of their creators,” Benjamin drew a connection between current attacks on libraries and higher education and a category of technology that was prominent throughout the conference program: artificial intelligence. “[I]t should give us pause,” she asserted, “that some of the same people hyping AI as the solution to all of our problems are often the ones causing those problems to begin with.” Applause followed.

Though Benjamin did not remark directly on the prominence of AI across the conference program, I was probably not the only person to notice the contrast between her critique of AI hype and the prevalence of sessions promoting AI technologies and AI literacy in libraries.

As Benjamin continued, she turned to the chilling words of JD Vance at the 2025 Paris AI Summit: “Our schools will teach students how to manage, how to supervise, and how to interact with AI-enabled tools as they become more and more a part of our everyday lives.” As I listened, I thought these words could easily be mistaken for part of a talk on AI literacy by an academic librarian or educator with better intentions. I wondered how many others were thinking the same thing. Benjamin then reminded the audience of Vance’s ideological perspective, observing that in November 2021 Vance gave a speech at the National Conservatism Conference entitled “The Universities are the Enemy,” in which he argued that universities must be aggressively attacked to accomplish his and his audience’s goals for the country (Vance, 2021).

It’s worth taking a brief step away from Benjamin’s keynote to point out that a couple of weeks after her talk, on April 23, President Donald Trump issued an executive order to promote AI literacy through a new White House Task Force on AI Education that will “establish public-private partnerships to provide resources for K-12 AI education, both to enhance AI-related education but also to better utilize AI tools in education generally.” The executive order’s Fact Sheet states that “AI is rapidly transforming the modern world, driving innovation, enhancing productivity, and reshaping how we live and work.” Thus, “[e]arly training in AI will demystify this technology and prepare America’s students to be confident participants in the AI-assisted workforce, propelling our nation to new heights of scientific and economic achievement” (The White House, 2025). This laudatory language about AI is perhaps unsurprising for an administration that established the Department of Government Efficiency (DOGE). DOGE purportedly aims to reduce “government waste, fraud, and abuse,” largely by eliminating government jobs and replacing workers with a combination of automation and tech workers who have been directed to violate digital privacy rights and regulations (Klein, 2025; Salvaggio, 2025).

What is perhaps more striking is the similarity between the White House’s rhetoric and that of many educators in universities and academic libraries. Can the difficulty of distinguishing between the dominant AI rhetoric in higher education and that from political leaders who have explicitly named universities as the enemy be a wake-up call for people in higher education and in libraries, a message that we need to give more weight to the ethical concerns surrounding GenAI technologies?[1]

Benjamin did not dwell long in her ACRL keynote on Vance’s vision for GenAI and AI literacy. Instead, she devoted most of her time to exploring imagination as a powerful means through which to envision the kinds of worlds we want to live in and to begin building. As she noted, imagination can envision dystopian futures, but it can also open more hopeful possibilities for the futures we want. “What if we took imagination seriously?” she asked. “Not as flights of fancy, but imagination as a resource, a capacity, a muscle? How might the powers of our collective imagination begin to transform the world around us?” (Benjamin, 2025). Here Benjamin articulated what I believe many in the academic library community have been thinking and feeling in the last few years, as pressure to integrate GenAI tools into library systems and library work has intensified, often accompanied by brief and perfunctory acknowledgements of GenAI’s present and potential harms that are then set aside.

Benjamin was inviting us to imagine alternatives to the narrative that GenAI technologies are the inevitable future of nearly all intellectual work. As I will explore, this process of imagining can include critically examining discourses about GenAI and AI literacy, as well as being curious about and attentive to our own affective experiences in response to GenAI technologies and discourses about them. If we accept this invitation to imagine, we might (re)discover what falls out of view when so much of our attention is focused on a particular vision of the future of GenAI proliferation. We might widen our ideas of what is possible and nurture some sense of collective agency to work for the kinds of futures we want.

Of course, our individual imaginings and feelings don’t always match what a majority (real or imagined) appear to share. My own conceptions of, approaches to, and feelings about GenAI and AI literacy usually seem out of sync with the dominant discourse in higher education and librarianship (though with time I learned I have some company). Like many others, I am deeply concerned about the real and present costs of GenAI technologies that are rapidly being integrated into search and library tools. I am also unsettled by widespread overconfidence in these technologies’ abilities to generate mostly reliable information and to support research and learning. Both as a librarian and more recently as a professor of practice, I have struggled with how to understand and respond to the enthusiastic calls in higher education and academic librarianship for teaching a version of AI literacy which requires that educators and students use these tools, while giving limited attention to ethical questions surrounding GenAI. So often calls to teach AI literacy contribute to AI hype by misrepresenting GenAI’s capacities, minimizing acknowledgement of its harms, and implying that critique of GenAI stands in the way of human progress. We frequently hear that GenAI technologies are the inevitable future of the world and of libraries and that the only viable option is to embrace them, and quickly, before we fall behind. This message of urgency fits into an older narrative that libraries must embrace technological change or otherwise become obsolete (Birdsall, 2001; Glassman, 2017; Espinel and Tewell, 2023).

Through this article, I hope to further encourage what Benjamin recommends: rather than rushing to adopt and promote new technologies whose ethical implications raise major questions, we might slow down and claim more time and space for considering the present and potential implications of GenAI adoption and use. That time and space is necessary for the more expansive collective imagining that Benjamin proposes, imagining that takes into consideration the power and social structures that currently exist and those that we want to exist.

Of course, what we imagine to be desirable or possible is heavily shaped by our environments, social relationships and interactions, and the ideas and messages we encounter every day. Making space for imagination therefore also means making space for individual and collective inquiry that includes establishing agreed-upon facts about GenAI, rhetorical analysis of GenAI discourses, and critical reflection on our own thoughts and feelings about GenAI technologies and discourses. Being inclusive and expansive in imagining the futures we want also requires investigating the social expectations and pressures that often influence what we do and do not say in various professional circles.

With these things in mind, in this article I consider the dissonances between what we know about the limitations and harms of GenAI technologies and the imperatives we hear to adopt them. Can we reconcile the tensions between, on the one hand, the harms of GenAI technologies and, on the other, professional values like those articulated in the ALA Core Values of Librarianship, which include equity, intellectual freedom and privacy, public good, and sustainability (American Library Association, 2024)? And if we could magically resolve many of those tensions through a radically transformed AI infrastructure that is environmentally sustainable and does not depend on the exploitation of human labor, what might we lose when we offload cognitive tasks like searching for, selecting, reading, or synthesizing sources to GenAI technologies? What do we value about education and information literacy practices, and what needs to be preserved with foresight and intention? Because my work as a librarian and as an educator centers on teaching and learning, I am especially interested in how we conceptualize and approach teaching what is often called AI literacy.

A necessary step in this process of imagining is investigating the messages embedded in much of academic and library discourse about GenAI technologies and the appropriate ways to think and feel about them (what sociologist Arlie Hochschild might call “feeling rules”). For instruction librarians, this process includes examining conceptions and framings of AI literacy and its role in information literacy education. A critical analysis of this discourse can help us open conversations about what we want information literacy instruction and library search tools to look like and do. This inquiry can also help us identify ways we have choice and agency in our own use of and teaching about GenAI tools. After an initial consideration of the feeling rules of GenAI and of dominant discourse on AI literacy, I turn to alternative ways to think about GenAI and to respond to calls for widespread adoption. Looking to work from within and outside librarianship, I consider another view: that we can slow down; take time to look honestly and critically at what we know, think, and feel about GenAI and its impacts; and consider ways to work toward the kinds of futures that align with our professional values. Part of this process is allowing space for more critical and skeptical perspectives on and feelings about GenAI, including nuanced arguments for AI refusal, a term I unpack in more detail later.

Feeling Rules

An impetus for my writing is that I see in much of our professional discourse and interactions a tendency to dismiss or minimize critiques of GenAI technologies, and sometimes even a shaming of those “Luddites” who do not embrace the technological changes of the day.[2] As I have argued elsewhere and further consider in this article, an especially powerful strategy for shutting down critique of GenAI is the construction and imposition of “feeling rules”: social expectations about the appropriate ways to feel and display emotion in a given context (Hochschild, 1979, 1983; Baer, 2025).

Feeling rules, as first described by sociologist Arlie Hochschild, are social norms that prescribe what feelings are and are not appropriate to have and express (Hochschild, 1979). Though feeling rules are not confined to the workplace, they are a powerful part of the emotional labor we do in our places of employment (Hochschild, 1983).[3] Feeling rules are typically discussed in the context of specific moments of social interaction among individuals, while in this article I apply them to our social relationships on both a micro- and a macro-level – that is, as evident not only in discrete individual social interactions but also in discourse about GenAI technologies that is informed by social relationships.

While feeling rules are usually established by those in positions of power, they are often internalized by those for whom the feeling rules are intended (Hochschild, 1979, 1983). In the case of GenAI, messages that librarians should be enthusiastic and optimistic about technological changes, which are frequently described as inevitable, often imply or outright assert that those who question or resist certain technological developments are simply overwhelmed by irrational fear and anxiety that they need to overcome. Giving too much attention to those unpleasant emotions or their underlying thoughts, the discourse often goes, risks making the profession obsolete.

Many of the feeling rules we observe in librarianship or higher education, of course, are influenced by social conditions and norms that extend beyond them. Search for the term “AI anxiety” and you will find articles explaining it as a psychological condition to be overcome by integrating AI technologies into your everyday life (Comer, 2023; Cox, 2023; Okamoto, 2023). The antidote to AI anxiety, according to its experts: accept and embrace the technology. For example, in the BBC article “AI Anxiety: The Workers who Fear Losing their Jobs to AI,” PricewaterhouseCoopers (PwC) Global AI and Innovation Technology Leader Scott Likens explains, “In order to feel less anxious about the rapid adoption of AI, employees must lean into the technology. … Instead of shying away from AI, employees should plan to embrace and educate” (Cox, 2023).

But what if emotional responses like “AI anxiety” are in large part deeply intelligent, a recognition of the unsettling facts about how most GenAI tools currently are built and deployed and what harmful impacts they already have? What if the cognitive dissonance that many of us experience when reading articles about AI anxiety or the necessity of AI adoption is worth our attention and curiosity? There is a stark mismatch between, on the one hand, imperatives to rapidly adopt and promote GenAI technologies and, on the other, the extensive documentation of the unethical labor practices upon which GenAI is built, as well as GenAI’s detrimental impacts on the environment, local communities, and society more broadly (Bender et al., 2021; Crawford, 2024; Electronic Privacy Information Center, 2023; Nguyen & Mateescu, 2024; Shelby et al., 2023). Despite this, librarians who are reluctant to adopt GenAI are frequently described as regressive and even harmful to a profession that must adapt to remain relevant. This shaming closes off open dialogue and critical thought.

For many librarians who teach, the calls to adopt GenAI, promote its use, and teach a kind of AI literacy that encourages others to do the same add to this dissonance. We repeatedly hear that GenAI is the future of work and the university and that we must therefore embrace it in our own work and teaching, regardless of our own views. Projects and initiatives at our places of employment and in our professional associations urge us to use these tools robustly, partly so we can help students, faculty, and community members keep up and succeed in our ever-changing world. Library vendors and the technology companies that libraries and universities pay for services and subscriptions continue to integrate GenAI tools into their platforms, usually offering people little to no choice in whether they use these extractive tools (though perhaps it’s time that libraries demand more choice from our vendors). The apparent lack of criticality toward these vendors and companies is further perpetuated by the refrain that librarians must teach the AI literacy skills that students and researchers will inevitably need. When we do hear about the problems with GenAI technologies, like the persistent inaccuracies in information generated by large language models (LLMs) or the extensive list of GenAI’s environmental and societal harms, the reservations are usually a short footnote, followed by a call for ethical GenAI use that sidesteps the fact that using GenAI technologies in their current forms inevitably means adding to their harmful impact.

While some AI technologies may be beneficial in specific domains and justified for narrow use cases – for example, machine learning in some instances of medical diagnosis and drug discovery (Ahmad et al., 2021; Suthar et al., 2022) – they are now being integrated widely and indiscriminately across domains, including in areas where they often hinder human thought more than they support it. As Shah and Bender argue, the LLMs being integrated into library search systems that supposedly save people precious time may actually prevent the exploration, discovery, and information literacy development that these resources have long been meant to enable (Shah & Bender, 2024). Their argument is further supported by accumulating research on the detrimental effects of the cognitive offloading of tasks to GenAI (Gerlich, 2025; Shukla et al., 2025).

I see detrimental impacts of GenAI reliance directly in my own work teaching academic research and information literacy. Increasingly, a large portion of students have turned to GenAI to do nearly all the cognitive work that previously would have taken them so much time and effort. In the process, many if not most of these students are not developing the critical thinking and writing skills that have long been considered foundational to higher education. I also see a smaller group of students who are deeply concerned about the costs of GenAI and who are choosing the more labor-intensive path of developing and articulating their own thinking, rather than immediately turning to chatbots. The latter group is learning far more and is far better prepared for the workplace and for meaningful participation in society more broadly. The contrasting perspectives and behaviors of my students reflect that students’ views and uses of GenAI are, like ours, not monolithic. And also like us, students hear many of the same simplistic messages: that GenAI is an amazing technology that will make work faster and easier and that the only way to be prepared for the workplace and relevant in the world is to embrace GenAI.

In academic libraries, those who want to take a slower and more cautious approach to GenAI are frequently criticized as holding the profession back, resisting the inevitability of technological change, inhibiting progress, neglecting to prepare students for the future, and denying reality. Such criticisms have a silencing effect, discouraging people from expressing their legitimate concerns about a technology that in the widest circulating discussions is surrounded by more hype than critical investigation.

But when we can free ourselves of shaming rhetoric, we are better positioned both to support one another as respected colleagues and to think critically, and imaginatively, about how we want to engage with and teach about GenAI technologies. Given the prevalence of hype and misunderstandings surrounding GenAI, unpacking discourse on GenAI and AI literacy is a powerful and necessary part of this work.

Rhetorics of AI Literacy

Calls for embracing GenAI in higher education and academic librarianship are frequently accompanied by declarations that AI literacy is one of the most essential skills that students must now develop to be prepared for the workforce and for the future in general. Definitions of AI literacy and related competencies regularly add to the AI hype that Benjamin cautions against, as they repeatedly misrepresent GenAI’s abilities, mandate GenAI adoption, and reinforce the message that GenAI is the inevitable future, which we must therefore embrace through adoption and active use. Like GenAI discourse more broadly, AI literacy rhetoric often includes brief asides on the potential risks of AI technologies and the need to use them ethically and responsibly. Like a perfunctory checklist, these acknowledgements rarely offer a meaningful examination of the extensive harms of GenAI, nor do they confront the reality that more ethical use will only be possible with radical changes to GenAI technologies and their infrastructures. With its emphasis on adoption and use, this discourse leaves little to no room for considering the possibility of non-use or for critically examining use cases that might not warrant AI use.

Consider, for example, the AI Literacy Framework developed by academic and technology teams at Barnard College. Based on Bloom’s taxonomy, it is composed of four levels: 1) Understand AI, 2) Use and Apply AI, 3) Analyze and Evaluate AI, and 4) Create AI. Here, using AI precedes considering critical perspectives on AI, such as ethical concerns. After students have engaged with level 3, where they “Analyze ethical considerations in the development and deployment of AI,” the next level (4) mandates creating more of these technologies (Hibbert et al., 2024). Stanford University Teaching Commons’ AI literacy framework, which emphasizes “human-centered values,” similarly begins with developing a basic understanding of AI tools, in part through AI use (“functional literacy”). Following functional literacy is “ethical AI literacy,” which involves “understanding ethical issues related to AI and practices for the responsible and ethical use of AI tools.” Again, non-use is not presented as an option. Instead, the framework authors explain, “You and your students can identify and adopt practices that promote individual ethical behavior and establish structures that promote collective ethical behavior” (Teaching Commons, Stanford University, n.d.).[4] As these AI literacy frameworks suggest, much of the literature on AI literacy reflects a strange mixture of the AI inevitability narrative, superficial acknowledgement of ethical concerns, and AI hype that frames GenAI as a transformative force that will better society.

AI literacy frameworks created within librarianship frequently share these characteristics. ACRL President Leo Lo’s 2025 “AI Literacy: A Guide for Academic Libraries” is one such influential document. It is described as “a guide to AI literacy that addresses technical, ethical, critical, and societal dimensions of AI, preparing learners to thrive in an AI-embedded world.” In this new world, librarians can “become key players in advancing AI literacy as technology shapes the future” (Lo, 2025, p. 120). What that future looks like, or what we want it to look like, is not discussed.

Like other AI literacy frameworks, Lo’s guide predicates AI literacy on AI use, as the document defines AI literacy as “the ability to understand, use, and think critically about AI technologies and their impact on society, ethics, and everyday life” [my emphasis] (Lo, 2025, p. 120). As with the previously mentioned AI literacy frameworks, this document presents AI as pervasive and socially beneficial, while omitting a meaningful examination of the material conditions on which creating and using these technologies currently rests. At various points, the guide briefly notes the need to consider the limitations and ethics of GenAI tools, statements that are quickly followed by an emphasis on AI adoption and promotion that supports the common good, social justice, and empowerment. Consider, for example, the section on the societal impact of AI on the environment and sustainability:

While AI remains resource-intensive with a notable environmental footprint, discussions on sustainability should encompass more than just reducing consumption. The real potential lies in using AI to drive systemic changes that promote social and environmental well-being. For example, AI can optimize energy management in cities, creating smarter, more sustainable urban environments. It also has the capacity to revolutionize agricultural supply chains, increasing efficiency, reducing waste, and supporting sustainable practices across production and distribution. By integrating sustainability into the societal dimension of AI literacy, we can better understand AI’s role not just as a technological advancement, but as a force capable of reshaping our economic, social, and environmental landscapes for the better. [my emphasis] (Lo, 2025, p. 122)

Here, a minimization of the costs of AI coexists with an idealization of a future made possible by AI. No references are made to the water-thirsty and energy-hungry data centers rapidly being built to power GenAI, or how these data centers disproportionately harm economically disadvantaged communities and areas that are especially prone to drought (Barringer, 2025). If such harms seem like a distant problem that does not affect most of us, we are likely to be proven wrong. For example, in my current home of Austin, Texas, which is prone to both drought and power grid failures, data centers are big business (Buchele, 2024).

The influential role of Lo’s AI Literacy Guide is further reflected in another key ACRL effort to promote the integration of AI in academic libraries: the ACRL AI Competencies for Academic Library Workers (“AI Competencies”) (ACRL AI Competencies for Library Workers Task Force, 2025). The first draft, published online this past March, builds on Lo’s guide. Like that guide, the AI Competencies document does not consider whether GenAI tools are the optimal technologies for information literacy education, library research, or critical inquiry.

While Lo’s aforementioned AI Literacy Guide is apparently designed for library instruction, the AI Competencies document concentrates on the abilities that library workers should possess. Despite this different focus, the task force also associates their work with information literacy and notes early in the document that while developing the competencies, they “recognized significant parallels between responsible AI use and the principles of critical information literacy, as outlined in documents like the ACRL Framework for Information Literacy for Higher Education” (p. 1). This suggests the potential relevance of the document to librarians’ instructional work.

Before engaging in a closer examination of the AI Competencies first draft, I should stress that upon releasing the document the authors solicited feedback from the library community to inform future revisions. At the Generative AI in Libraries (GAIL) Conference this past June, the task force co-chairs shared the feedback they received and the kinds of revisions they plan to make (Jeffery and Coleman, 2025). Much of that feedback mirrors my own concerns about common conceptions of AI literacy that I have discussed thus far, conceptions that are reflected in the AI Competencies first draft as well. A considerable number of responses challenged the implications that library workers must use AI, that AI literacy necessitates AI use, and that responsible GenAI use is possible. Some also commented that the document did not adequately acknowledge GenAI technologies’ harms and that the description of AI dispositions (which I discuss in more detail momentarily) was not appropriate for a competencies document. The task force’s receptiveness to this input – which contrasts with the professional discourse about GenAI that I described earlier – suggests that many in our profession may be eager and now better positioned for more open and honest conversations about GenAI technologies than in the earlier days of learning about them.

Regardless of how the final draft of the AI Competencies document develops, the dispositions outlined in the first draft are worth closer attention because of the feeling rules about GenAI that they imply (for example, the expectation that competent library workers will embrace GenAI technologies and feel positively about them).[5] As the AI Competencies task force explains, the document’s dispositions “highlight the importance of curiosity, adaptability, and a willingness to experiment with AI tools” (pp. 2-3). Library workers who demonstrate the appropriate AI literacy dispositions: “Are open to the potential of responsible human-AI collaboration to unlock a future of greater equity and inclusion,” “Seek uses of AI that center and enhance human agency rather than displace and inhibit it,” and “Pursue continuous professional reflection and growth, especially concerning ethical and environmental responsibilities” (p. 3). Implicit within these dispositions is the belief that use of AI tools in their current form can lead to greater equity and can enhance human agency rather than displacing it. The document does not discuss actions or responses one might take in light of the harmful impacts of GenAI technologies. Instead, questioning whether AI tools should be used appears antithetical to the AI competencies articulated in the document. Like many other AI literacy frameworks and guides, this document implies that reflection is sufficient for demonstrating the correct AI competency dispositions. Such rhetoric, not unique to this document, obfuscates the reality that people have limited control over or insight into what the AI companies that own most AI tools do to build and maintain them.

When AI literacy documents assume GenAI use and come to dominate conversations about GenAI in academic libraries and higher education, or even become codified through formal adoption by institutions or organizations, how does this position library workers and educators who disagree with the assumptions embedded within those documents? Should these individuals be considered “AI illiterate,” in need of developing proper GenAI practices, attitudes, and dispositions? Through the lens of these documents, resisting rapid adoption of GenAI tools or questioning their value might be considered incompetence, regardless of how well informed or thoughtful someone’s perspective on GenAI is.

The AI Competencies first draft provides a window into many of the feeling rules about GenAI currently circulating in academic librarianship. Fortunately, these rules may not ultimately be codified in the final version. The task force’s honesty and critical reflection about the academic library community’s feedback, including questions about the appropriateness of including AI dispositions, is evidence that feeling rules and the narratives that help to drive them are never fully solidified and are rarely universally accepted. Feeling rules are often sites of contestation. Moreover, they can shift and change as we learn more and as we engage in critical reflection and dialogue.

New Imaginings for Responding to GenAI

As the critical feedback on the AI Competencies suggests, alternatives to the dominant AI literacy discourse and its implied feeling rules exist, even when those different viewpoints are harder to find. As some educators demonstrate, when we challenge the feeling rules embedded in much of the higher education and library GenAI discourse, we can open new possibilities for thinking about and responding to calls for GenAI adoption and AI literacy instruction that promote this adoption. We can begin to imagine ways of acting that might be out of view when we are mired in a particular set of feeling rules about GenAI (rules that have largely been constructed by the tech companies that stand to profit from the continued use and growth of their data-extracting products).

Charles Logan is among the educators going against the grain of AI enthusiasm and inviting us to think differently about common conceptions of AI literacy. Building on Nichols et al.’s (2022) work on the limits of digital literacy, Logan interrogates the extent to which AI literacy is even possible, given GenAI’s opaqueness and the hegemonic systems on which these technologies are built (Logan, 2024; Nichols et al., 2022). Noting the assumption of AI use in AI literacy discussions, Logan cautions, “An AI literacy devoid of power analysis and civic action risks becoming a talking point for Big Tech, and … a means for corporations like OpenAI and Google to set the terms of how educators and students think about and use their chatbots” (Logan, 2024, p. 363). Instead, Logan proposes a “more heterogeneous approach to generative AI” that allows room for non-use and critical inquiry into GenAI. One pedagogical response is “mapping ecologies of GenAI” that illuminate “its social, technical, and political-economic relations” (Logan, 2024, p. 362). For example, Logan describes a classroom mapping activity developed by Pasek (2023), in which students locate a nearby data center and investigate questions such as, “What potential land use, energy, or water conflicts might exist because of the data center?” and “Who benefits from the data center being here? Who loses?” (Pasek, 2023, cited in Logan, 2024, p. 366).

Drawing from the work of educators and scholars like Logan, librarian Joel Blechinger pays particular attention to dominant framings of AI literacy, which are connected to a longer tradition of presenting literacy as an antidote to intractable social issues and structural problems. Reiterating the question of whether AI literacy is possible, Blechinger asks librarians, “to what extent are efforts to theorize—and proclaim a new era of—AI Literacy premature? Do these efforts instead reflect our own professional investment in the transcendent power of literacy—what Graff & Duffy (2014) have termed ‘the literacy myth’—more than the applicability of literacy to GenAI?” Similar to Logan, Blechinger proposes that one alternative pedagogical approach could be to draw from a politics of refusal, rather than assuming AI use (Blechinger, 2024).

While some may have a knee-jerk negative response to the term refusal, the concept is more nuanced than one might first think. Writing and rhetoric scholars and teachers Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes, who authored “Refusing GenAI in Writing Studies: A Quickstart Guide,” describe GenAI refusal as encompassing “the range of ways that individuals and/or groups consciously and intentionally choose to refuse GenAI use, when and where we are able to do so.” Such refusal, they write, “is not monolithic,” nor does it “imply a head-in-the-sand approach to these emergent and evolving technologies.” Moreover, “refusal does not necessarily imply the implementation of prohibitive class policies that ban the use of GenAI among students” (Sano-Franchini et al., 2024).

This conception of GenAI refusal is aligned with the work of scholars like Carole McGranahan, who explains in “Theorizing Refusal: An Introduction” (2016) that “[t]o refuse can be generative and strategic, a deliberate move toward one thing, belief, practice, or community and away from another. Refusals illuminate limits and possibilities, especially but not only of the state and other institutions.” Such a politics of refusal, embedded in the fields of critical and feminist data studies, can be a source for imagining new possibilities, while being informed about the material conditions that underlie and shape technologies and technological use (D’Ignazio, 2022; Garcia et al., 2022; Zong & Matias, 2024).

Sano-Franchini, McIntyre, and Fernandes’s act of refusal, supported by an extended analysis of GenAI’s material impacts on society in general and on writing studies and higher education more specifically, can also be understood as a refusal to accept the feeling rules implied in so much of the discourse on AI literacy. The authors present ten premises on which they ground refusal as a reasoned disciplinary response to GenAI technologies. The first of these – “Writing studies teacher-scholars understand the relationship between language, power, and persuasion” – is especially relevant to considering the feeling rules that drive much of generative AI discourse in higher education and in libraries. The authors observe that the metaphors often applied to these technologies obscure the human labor that goes into GenAI training and ascribe human abilities to these technologies in ways “designed to cultivate trust in corporate, exploitative, and extractive technologies.” I would add that the messages we hear from our employers and other educators positioned as experts in GenAI and AI literacy further encourage us to trust these technologies over our reservations about them. Instead, Sano-Franchini, McIntyre, and Fernandes write, “We must be critical of the ways that these metaphors and affective associations are used to exaggerate the abilities of these products in ways that strengthen the marketing efforts of Big Tech corporations like OpenAI.” With this criticality, writing studies scholars can “use language that most accurately—and transparently—reflects the actual technology and/or … [highlight] the discursive limitations of the language … [they] commonly use to describe these products.” The authors draw attention to the economics behind GenAI and the ways it is promoted and marketed. Asking us to examine who truly benefits from the increased use of GenAI in higher education, they note that people in the EdTech industry have largely shaped this discourse (for example, articles in Inside Higher Ed and The Chronicle of Higher Education written by individuals with close ties to the EdTech industry).

Such examinations of the language used to discuss GenAI in higher education help to illuminate what usually goes unspoken in that discourse. Sano-Franchini, McIntyre, and Fernandes’s critical examination of GenAI can be seen not just as a refusal to adopt GenAI technologies into their teaching. It is also a refusal to follow the feeling rules behind much of GenAI discourse. It is refusing to be shamed or to doubt oneself for having concerns about the value, ethics, and potential impacts of the GenAI technologies being so heavily promoted at our institutions. The authors choose critical thought over compliance with mandates that stifle critical inquiry and dialogue. Regardless of whether an individual or a group adopts a stance of GenAI refusal (a position that the authors stress looks different in practice for each individual in their context), examining and questioning the feeling rules implicit in much of GenAI discourse better enables us to make more intentional and informed choices about how we do or do not use these technologies and how we teach about them.

Examples of librarians challenging the feeling rules of dominant GenAI discourse exist, even if they remain outliers. ACRL’s invitation to Ruha Benjamin to give the 2025 conference keynote is just one example of an interest within our profession in hearing more critical perspectives. Library workers’ feedback on the ACRL AI Competencies for Academic Library Workers is another. Some librarians are also vocalizing a need for slower and more critical investigations into GenAI tools, even when doing so risks social ostracism.

In the April 2025 issue of College & Research Libraries News, Ruth Monnier, Matthew Noe, and Ella Gibson candidly discuss their concerns about the GenAI tools that are increasingly being used and promoted in their organizations. Drawing attention to both the hype and the many ethical questions surrounding GenAI, they note the unpopularity of expressing reservations about adopting GenAI into libraries. Noe reflects, “the hype cycle is real and here it often feels like the choices are to get on board or lay down on the tracks to become part of a philosophy joke.” Monnier concurs: “I agree it is weird how fast universities, corporations, and individuals have pushed for the adoption and usage of generative AI, especially in the context of the rhetoric about ‘how bad’ social media and cellphones are within a K-12 environment. What makes this technology so unique or special that we as a society feel the immediate need to use and adopt it compared to other previous technologies?” (Monnier et al., 2025). The scope of this article does not allow for a close examination of such work, but additional resources in which library workers challenge the dominant feeling rules of GenAI include Joel Blechinger’s “Insist on Sources: Wikipedia, Large Language Models, and the Limits of Information Literacy Instruction” (2024); Violet Fox’s zine “A Librarian Against AI” (2024); and Matthew Pierce’s “Academic Librarians, Information Literacy, and ChatGPT: Sounding the Alarm on a New Type of Misinformation” (2025). Such work does not deny the fact that GenAI tools exist, nor does it suggest we can or should ignore these tools’ existence. It does open space for thinking more critically about the actual capacities and impacts of GenAI and making more intentional and informed choices about how we (dis)engage with GenAI technologies.

Many in our profession likely will not agree with much of what I have said here, but regardless of our individual views of GenAI technologies, I hope we can all agree that we value critical inquiry, and that an essential part of that process is making space for a consideration of varied perspectives and experiences. Critical inquiry and dialogue become possible and richer when we investigate the feeling rules that may be shaping, and sometimes limiting, professional discourse and practice. As we expand critical conversations about GenAI, we have more power to imagine the futures we want to build and cultivate, as Ruha Benjamin invites us to do.

In the spirit of collective imagining, I close with some questions I would like to collectively explore in and beyond libraries. I have organized these questions into two main areas: professional conversations and interactions, and our teaching practices.

Professional conversations:

  • How can we be more inclusive of varied perspectives in our conversations about GenAI and related work, as we acknowledge the challenge of speaking honestly when one disagrees with dominant framings of GenAI and AI literacy?
  • How can we more critically examine our discourses and dialogues about GenAI, as we identify areas that may be unclear, inaccurate, or based on assumptions that need further investigation?
  • How do we practice a culture of care in these dialogic spaces and engage in constructive critique of ideas, not a critique of individuals?
  • How do we align our discourse about GenAI and related work with our professional and personal values, including those articulated in the ALA Core Values of Librarianship and the ALA Ethics of Librarianship?
  • How do we preserve time and energy for valuable work that may not be centered on GenAI, and that has been deprioritized because of the presently dominant focus on GenAI?  

Teaching practices:

  • Historically, what have we valued about librarianship and information literacy education that still remains vital to us? How do we continue our engagement with those dimensions of our work?
  • What agency do students, faculty, and library workers have in whether/how they choose to use GenAI tools? What might it look like for teaching about GenAI technologies to allow for choice in whether and when to use GenAI tools? How can opting for non-use be respected as a choice that may be well-informed and even strategic?
  • What skills, understandings, and practices are prioritized or deprioritized in our teaching? What might be gained and what might be lost through our different prioritizations of pedagogical content and learning experiences? What guides our decisions about what to teach and how?

Many of the resources referenced in this article’s section on alternative imaginings can be springboards for further dialogue and for imagining the futures we want to have and to help build.

In closing, I return now to the end of Ruha Benjamin’s 2025 ACRL keynote. Ultimately, Benjamin revised her opening question “Who owns the future?” to “Who shares the future?” This reframing invites us to imagine collectively. That imagining will inevitably include differing views and beliefs, and it will not always be comfortable. But it can be more generative (in the human sense) and more inclusive when we consider questions like those above, and when we remember that most of us want a future in which people and communities can pose and explore their own questions, find sources of information worth their trust, and work together to actively make informed choices that support the common good. Most of us will hopefully also agree that this collective work is worth the discomfort of looking honestly at the feeling rules embedded in much of GenAI discourse and librarianship. We may be better able to discover and work toward the futures we want when we break those rules in ways that are kind and affirmative of everyone’s humanity, and that prioritize human thought and action over automation.


Acknowledgements

Though this work lists one author, the reality is that many people helped shape it.

My sincere thanks to external reviewer Joel Blechinger and Lead Pipe internal reviewers Ryan Randall and Pamella Lach for the time, thought, and care they gave to providing constructive feedback on the various stages of this article. Thank you also to Pamella, as Publishing Editor, for facilitating all steps of the publishing process, and to all members of the Lead Pipe Editorial Board for their attention to this article, the opportunity to publish it here, and all the work that goes into sustaining this volunteer-driven, open access publishing venue. I also want to express my appreciation to Melissa Wong, who provided writing feedback on a separate article on dominant narratives about generative AI in librarianship and encouraged me to further develop that article’s discussion of GenAI and feeling rules.


References

ACRL AI Competencies for Library Workers Task Force. (2025). AI competencies for academic library workers (Draft—March 5, 2025). https://www.ala.org/sites/default/files/2025-03/AI_Competencies_Draft.pdf

Ahmad, Z., Rahim, S., Zubair, M., & Abdul-Ghafar, J. (2021). Artificial intelligence (AI) in medicine, current applications and future role with special emphasis on its potential and promise in pathology: Present and future impact, obstacles including costs and acceptance among pathologists, practical and philosophical considerations. A comprehensive review. Diagnostic Pathology, 16(1), 24. https://doi.org/10.1186/s13000-021-01085-4

American Library Association. (2024, January 21). Core values of librarianship. https://www.ala.org/advocacy/advocacy/intfreedom/corevalues

Baer, A. (2025). Unpacking predominant narratives about generative AI and education: A starting point for teaching critical AI literacy and imagining better futures. Library Trends, 73(3), 141-159. https://doi.org/10.1353/lib.2025.a961189

Barringer, F. (2025, April 8). Thirsty for power and water, AI-crunching data centers sprout across the West. Bill Lane Center for the American West, Stanford University. https://andthewest.stanford.edu/2025/thirsty-for-power-and-water-ai-crunching-data-centers-sprout-across-the-west

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

Benjamin, R. (2025, April 2). Opening keynote. ACRL 2025, Minneapolis, MN.

Birdsall, W. F. (2001). A political economy of librarianship? Progressive Librarian, 18. http://www.progressivelibrariansguild.org/PL/PL18/001.pdf

Blechinger, J. (2024, June 7). Insist on sources: Wikipedia, Large Language Models, and the limits of information literacy instruction. CAPAL 2024 (Canadian Association of Professional Academic Libraries), Online. https://doi.org/10.60770/36Y6-3562

Buchele, M. (2024, June 21). AI could strain Texas power grid this summer. KUT News. https://www.kut.org/energy-environment/2024-06-21/ai-texas-ercot-grid-conditions-artificial-intelligence-crypto

Comer, J. (2023, July 15). The psychological fears associated with AI. Psychology Today. https://www.psychologytoday.com/us/blog/beyond-stress-and-burnout/202307/the-psychological-fears-associated-with-ai

Cox, J. (2023, July 13). AI anxiety: The workers who fear losing their jobs to artificial intelligence. BBC. https://www.bbc.com/worklife/article/20230418-ai-anxiety-artificial-intelligence-replace-jobs

Crawford, K. (2024). Generative AI’s environmental costs are soaring—and mostly secret. Nature, 626(8000), 693. https://doi.org/10.1038/d41586-024-00478-x

D’Ignazio, C. (2022). Chapter 6: Refusing and using data. In Counting feminicide: Data feminism in action. MIT Press. https://mitpressonpubpub.mitpress.mit.edu/pub/cf-chap6

Electronic Privacy Information Center. (2023). Generating harms: Generative AI’s impact & paths forward. Electronic Privacy Information Center. https://epic.org/documents/generating-harms-generative-ais-impact-paths-forward

Espinel, R., & Tewell, E. (2023). Working conditions are learning conditions: Understanding information literacy instruction through neoliberal capitalism. Communications in Information Literacy, 17(2), 573–590. https://doi.org/10.15760/comminfolit.2023.17.2.13

Evans, L., & Sobel, K. (2021). Emotional labor of instruction librarians: Causes, impact, and management. In I. Ruffin and C. Powell (Eds.), The Emotional Self at Work in Higher Education (pp. 104–119). IGI Global. https://www.igi-global.com/chapter/emotional-labor-of-instruction-librarians/262882

Fox, V. (2024). A librarian against AI. https://violetbfox.info/against-ai

Garcia, P., Sutherland, T., Salehi, N., Cifor, M., & Singh, A. (2022). No! Re-imagining data practices through the lens of critical refusal. Proceedings of the ACM on Human-Computer Interaction, 6 (CSCW2, Article no. 315), 1–20. https://doi.org/10.1145/3557997

Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006

Glassman, J. (2017). The innovation fetish and slow librarianship: What librarians can learn from the Juicero. In the Library with the Lead Pipe. https://www.inthelibrarywiththeleadpipe.org/2017/the-innovation-fetish-and-slow-librarianship-what-librarians-can-learn-from-the-juicero

Graff, H. J., & Duffy, J. (2014). Literacy myths. In B. V. Street & S. May (Eds.), Literacies and language education: Encyclopedia of language and education. Springer. https://doi.org/10.1007/978-3-319-02321-2_4-1

Hibbert, M., Altman, E., Shippen, T., & Wright, M. (2024, June 3). A framework for AI literacy. Educause Review. https://er.educause.edu/articles/2024/6/a-framework-for-ai-literacy

Hochschild, A. R. (1979). Emotion work, feeling rules, and social structure. American Journal of Sociology, 85(3), 551–575. https://doi.org/10.1086/227049

Hochschild, A. R. (1983). The managed heart: Commercialization of human feeling (1st ed.). University of California Press. https://archive.org/details/managedheart00arli

Ivanova, I. (2025, May 20). Duolingo CEO says AI is a better teacher than humans—But schools will exist “because you still need childcare.” Fortune. https://fortune.com/2025/05/20/duolingo-ai-teacher-schools-childcare

Jeffery, K. & Coleman, J. (2025, June 16). ACRL AI competencies for library workers. Generative AI in Libraries (GAIL) Conference. Online. https://www.youtube.com/watch?v=PLvf_OhaWZg

Klein, N. (2025, April 14). Silicon Valley’s AI coup: “It’s draining our real world” [Podcast]. Retrieved May 28, 2025, from https://podcasts.apple.com/us/podcast/silicon-valleys-ai-coup-its-draining-our-real-world/id1748845345?i=1000703466413

Lo, L. (2025). AI literacy: A guide for academic libraries. College & Research Libraries News, 86(3), 120-122. https://doi.org/10.5860/crln.86.3.120

Logan, C. (2024). Learning about and against generative AI through mapping generative AI’s ecologies and developing a Luddite praxis. ICLS 2024 Proceedings (International Society of the Learning Sciences). https://repository.isls.org//handle/1/11112

McGranahan, C. (2016). Theorizing refusal: An introduction. Cultural Anthropology, 31(3). https://doi.org/10.14506/ca31.3.01

Merchant, B. (2023). Blood in the machine: The origins of the rebellion against Big Tech. Little, Brown and Company.

Monnier, R., Noe, M., & Gibson, E. (2025). AI in academic libraries, part one: Concerns and commodification. College & Research Libraries News, 86(4), Article 4. https://doi.org/10.5860/crln.86.4.173

Nguyen, A., & Mateescu, A. (2024). Generative AI and labor: Value, hype, and value at work. Data & Society. https://datasociety.net/library/generative-ai-and-labor

Nichols, T. P., Smith, A., Bulfin, S., & Stornaiuolo, A. (2022). Critical literacy, digital platforms, and datafication. In J. Z. Pandya, R. A. Mora, J. H. Alford, N. A. Golden, & R. S. de Roock (Eds.), The Handbook of Critical Literacies (pp. 345–353). Routledge. https://doi.org/10.4324/9781003023425-40

Okamoto, S. (2023, June 26). Worried about AI? You might have AI-nxiety – here’s how to cope. The Conversation. http://theconversation.com/worried-about-ai-you-might-have-ai-nxiety-heres-how-to-cope-205874

Pasek, A. (2023). Getting into fights with data centers: Or, a modest proposal for reframing the climate politics of ICT. Experimental Methods and Media Lab. https://emmlab.info/Resources_page/Data%20Center%20Fights_digital.pdf

Pierce, M. (2025). Academic librarians, information literacy, and ChatGPT: Sounding the alarm on a new type of misinformation. College & Research Libraries News, 86(2), Article 2. https://doi.org/10.5860/crln.86.2.68

Salvaggio, E. (2025, February 9). Anatomy of an AI coup. Tech Policy Press. https://techpolicy.press/anatomy-of-an-ai-coup

Sam Altman [@sama]. (2022, March 20). I think US college education is nearer to collapsing than it appears [Tweet]. Twitter. https://x.com/sama/status/1505597901011005442

Sano-Franchini, J., McIntyre, M., & Fernandes, M. (2024). Refusing GenAI in writing studies: A quickstart guide. Refusing GenAI in Writing Studies. https://refusinggenai.wordpress.com

Selber, S. A. (2004). Multiliteracies for a digital age. Southern Illinois University Press.

Shah, C., & Bender, E. M. (2024). Envisioning information access systems: What makes for good tools and a healthy web? ACM Transactions on the Web, 18(3), 33:1-33:24. https://doi.org/10.1145/3649468

Shelby, R., Rismani, S., Henne, K., Moon, Aj., Rostamzadeh, N., Nicholas, P., Yilla-Akbari, N., Gallegos, J., Smart, A., Garcia, E., & Virk, G. (2023). Sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 723–741. https://doi.org/10.1145/3600211.3604673

Shukla, P., Bui, P., Levy, S. S., Kowalski, M., Baigelenov, A., & Parsons, P. (2025, April 25). De-skilling, cognitive offloading, and misplaced responsibilities: Potential ironies of AI-assisted design. CHI EA ’25: Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (Article No. 171), 1–7. https://doi.org/10.1145/3706599.3719931

Shuler, S., & Morgan, N. (2013). Emotional labor in the academic library: When being friendly feels like work. The Reference Librarian, 54(2), 118–133. https://doi.org/10.1080/02763877.2013.756684

Sloniowski, L. (2016). Affective labor, resistance, and the academic librarian. Library Trends, 64(4), 645–666. https://doi.org/10.1353/lib.2016.0013

Sobel, K., & Evans, L. (2020). Emotional labour, information literacy instruction, and the COVID-19 pandemic. Journal of Learning Development in Higher Education, 19. https://journal.aldinhe.ac.uk/index.php/jldhe/article/view/607 

Suthar, A. C., Joshi, V., & Prajapati, R. (2022). A review of generative adversarial-based networks of machine learning/artificial intelligence in healthcare. In S. Suryanarayan Iyer, A. Jain, & J. Wang (Eds.), Handbook of Research on Lifestyle Sustainability and Management Solutions Using AI, Big Data Analytics, and Visualization. IGI Global Scientific Publishing. https://doi.org/10.4018/978-1-7998-8786-7.ch003

Teaching Commons, Stanford University. (n.d.). Understanding AI literacy. Teaching Commons, Stanford University. https://teachingcommons.stanford.edu/teaching-guides/artificial-intelligence-teaching-guide/understanding-ai-literacy

The White House. (2025, April 23). Fact sheet: President Donald J. Trump advances AI education for American youth. The White House. https://www.whitehouse.gov/fact-sheets/2025/04/fact-sheet-president-donald-j-trump-advances-ai-education-for-american-youth

Vance, J. (2021, November 2). The universities are the enemy. National Conservatism Conference 2, Orlando, Florida. https://nationalconservatism.org/natcon-2-2021/presenters/jd-vance

Zong, J., & Matias, J. N. (2024). Data refusal from below: A framework for understanding, evaluating, and envisioning refusal as design. ACM Journal on Responsible Computing, 1(1), 1–23. https://doi.org/10.1145/3630107


[1] For those who would argue we should not conflate the extreme views of a few politicians with those of the AI industry, it is worth noting statements by tech leaders who have argued AI can replace education (Ivanova, 2025; Sam Altman [@sama], 2022). 

[2] For an exploration of who the Luddites actually were and why the term’s pejorative use is misplaced, see Brian Merchant’s book Blood in the Machine (2023).

[3] Hochschild’s initial research on emotional labor focused on the experiences of flight attendants and debt collectors (Hochschild, 1983). Subsequent research by others building on Hochschild’s work examined the emotional labor of numerous caring professions, including librarianship, where workers are often expected to consistently display a friendly and cheerful demeanor (Evans & Sobel, 2021; Shuler & Morgan, 2013; Sloniowski, 2016; Sobel & Evans, 2020).

[4] The Stanford University Teaching Commons AI Literacy Framework is based partly on Selber’s 2004 multiliteracy framework, which includes three main dimensions of literacy: functional literacy, critical literacy (related to social and ethical issues), and rhetorical literacy.

[5] The choice to include dispositions in the AI Competencies for Academic Library Workers was likely inspired by the ACRL Framework for Information Literacy, which lists dispositions for each of its six conceptual frames.