Apr 1, 2026

Making Sense of GenAI Amidst AI Hype and AI Personalization

By Sarah Morris

In Brief: Artificial intelligence (AI) literacy frameworks emphasize the importance of understanding Generative AI (GenAI) technologies. But our collective and individual understanding of GenAI is heavily shaped and mediated by the hype narratives that surround it, where GenAI is depicted as powerful, magical, and inevitable. Amidst such compelling narratives, we can face challenges in navigating narrative extremes and exaggerations and in making informed decisions about using GenAI tools. Alongside AI hype, we are also experiencing AI personalization features which encourage trust and positive feelings towards GenAI tools. Taken together, AI hype and AI personalization can challenge and even hinder our ability to engage critically and thoughtfully with GenAI. In this article, I will explore how our understanding of GenAI is influenced by AI hype and AI personalization and consider how hype narratives and personalization features fuel one another and encourage trust in and awe towards GenAI. By centering AI hype and AI personalization as key components of understanding and exploring GenAI, and by incorporating critical media and information literacy skills into AI literacy, I feel that we can develop an AI literacy that better contextualizes GenAI and encourages reflective and critical approaches that can help learners make sense of their emotionally complex experiences with and reactions to GenAI.

Introduction 

Generative Artificial Intelligence (or GenAI) is often depicted in terms of superlatives. Compared to humans, it is described as smarter, faster, more efficient, more accurate, more personable, and even more dangerous. GenAI is a specific form of artificial intelligence, but many of the conversations and commentary surrounding AI in general are focused on or referring to GenAI. Essentially, GenAI can produce text, images, video, audio, or code in response to a user prompt (Stryker & Scapicchio, 2026). GenAI tools include chatbots like ChatGPT or Gemini, or image or video generation tools like Sora. Much of what we experience as AI in our daily lives is some form of GenAI. Our collective understanding of GenAI, whether as a force for good or as a force for apocalyptic-level disaster, is heavily shaped and mediated through narrative. In particular, the narratives that hype artificial intelligence, often extolling and anthropomorphizing its various virtues, greatly influence not only our understanding of these tools but also our ability to critically investigate, discuss, and respond to the entire artificial intelligence landscape. The hype narratives surrounding GenAI frequently minimize its harms, mischaracterize its capabilities, and distract from its failures (Baer, 2025b; Bender & Hanna, 2025). AI hype fundamentally shapes how we conceptualize and discuss GenAI via narratives that can be misleading or manipulative. But while we are experiencing and navigating GenAI through the lens of AI hype, we are also experiencing GenAI through the accompanying lens of AI personalization.

The personalized nature of various GenAI tools closely mirrors the tendencies we see with things like social media algorithms, where people are shown content that reinforces their views, all in an effort to keep people glued to a given platform (Bourne, 2024). With GenAI this personalization is even more insidious, increasingly leading users to rely on chatbots as confidants, friends, therapists, and even romantic partners (Garofalo & Vecchione, 2025). This appears to be by design, to an extent, as evidenced by a batch of recent commercials and offline advertising efforts like billboards depicting AI chatbots as friendly companions that can help you with mundane activities like preparing a meal, exercising, or deciding on home décor (Swant, 2025).  Whether these narratives are an effort to assuage fears about the dangerous capabilities of AI tools, highlight AI tools as powerful, albeit in a nonthreatening way, or attract new users with promises of both usefulness and fun, AI hype narratives seem increasingly intertwined with AI personalization features. I believe that the hype narratives surrounding artificial intelligence and the personalized nature of GenAI actually fuel one another, where hype narratives lead people to see these tools as powerful and magical and personalization features lead people to place their trust in these tools, and thus become more susceptible to believing hype narratives about artificial intelligence.

As librarians and educators seek to develop AI literacy frameworks to make sense of this emerging and evolving technological landscape, I argue that we need to give further attention to the ways in which we talk about, experience, emotionally respond to, and engage with GenAI tools. What effects do AI hype and AI personalization have on our ability to think critically, and even clearly, about these tools when we are being besieged by everything from relentless positivity to proclamations of inevitability to visions of doom, all while being swayed by sycophantic chatbots that reaffirm everything we type? How can we make informed decisions about using GenAI technologies in media environments where critical and even accurate information about AI can be hard to come by? We are in a situation where narratives about AI technologies and our varied experiences using these technologies both potentially hinder our ability to make informed decisions and to think critically about generative AI and AI technologies (Nguyen & Mateescu, 2024; Bender & Hanna, 2025). If we as librarians and educators strive to develop an AI literacy that is rooted in critical thinking, nuance, and ethics, as outlined in places like the AI Competencies for Academic Library Workers (ACRL, 2025), then we need to contend with AI hype and AI personalization and equip learners to approach GenAI with both a critical and a reflective lens.

In this article, I examine the interconnected trends of AI hype and AI personalization. I consider how the insights that arise from treating AI hype and AI personalization as key aspects of our understanding of GenAI can inform a more critical and human-centered approach to AI literacy, one that better situates GenAI in context, in line with approaches from critical information and media literacy that examine the social and political construction and dimensions of information (Tewell, 2015; Kellner & Share, 2005). First, I will explore the dynamics between emerging narratives about GenAI and emerging frameworks for conceptualizing AI literacy. While I believe that the narratives surrounding GenAI influence AI literacy, I also argue that AI literacy frameworks form their own sorts of narratives about AI tools and technologies and influence how many people, particularly educators and librarians, understand and respond to GenAI. Second, I will look at trends within AI hype, including themes of power, magic, and inevitability, that shape these narratives and our ensuing understanding of and reaction to AI technologies. I will then turn to examining trends of AI personalization within the framework of AI hype narratives and consider how the trust that AI personalization can inspire reinforces AI hype. To close, I will look at ways that we can equip learners to better unpack, interrogate, and understand GenAI through the lenses of AI hype and AI personalization.

A focus on AI hype and AI personalization can help us center the often complex, emotional, and confusing experiences people are having with GenAI and help us as librarians explore how we can best equip people to think critically amidst AI and information environments that often do not lend themselves to critical thought and reflective practices. Ultimately, I believe that librarians, educators, and learners can benefit from the introduction of two lenses into emerging AI literacy frameworks. First is a focus on contextual analysis, where we take a critical approach to analyzing the narratives surrounding technologies like AI and examine how those narratives mediate, shape, and influence our experiences with said technologies. Second is a focus on reflective practice that empowers learners to better recognize and critically engage with narratives and technological tools that might be personalized to an alarming degree. By grounding AI literacy in this sort of critical analysis, contextualization, and reflective practice, I feel that we can strengthen AI literacy, situate AI literacy within broader trends around critical media and information literacy, and equip learners to better engage with AI technologies in our complex and rapidly changing information environment.

The Evolving State of AI Literacy and AI Narratives

The hype narratives surrounding GenAI tend to exaggerate the benefits, capabilities, power, and successes of these technologies while minimizing their issues and flaws. And we can see these narratives emerging everywhere from commercials to public comments from AI companies to news articles to chatter on social media. But while the hype narratives promoting GenAI are increasingly ubiquitous, alternative narratives are emerging that question and criticize the relentless hype surrounding GenAI. To make sense of this cycle of hype and disillusionment, we can turn to a well-known graphical model of technological hype. The Gartner Hype Cycle, created in 1995 by Gartner analyst Jackie Fenn, provides a compelling framework for exploring AI hype narratives and places AI hype into context with previous technological hype cycles (Gartner). According to the Gartner Hype Cycle, new technologies tend to follow a certain track in terms of both narrative and public reception and perception. A given technology is praised, extolled, and exalted, to the point of peak absurdity, before careening downhill into the evocatively named “trough of disillusionment” (Gartner). Following this crash in expectations and sentiment, people accept the new technology as useful for some things and not for others, settling into more realistic expectations. Recent research has speculated that this hype cycle model may not hold true for different kinds of technologies and has also posited that the nature of our media landscape and our modern technology sector, with its emphasis on speed and rapid new developments, is leading to repeated, less linear, and more pervasive hype cycles (Dedehayir & Steinert, 2016; Van Lente et al., 2013; Goncalves & Bareis, 2025).

The hype we are seeing with GenAI seems to be reaching new heights thanks in part to the nature of our current media and information ecosystem. Social media thrives on virality, with hype narratives poised to find success amongst platforms and algorithms that favor attention-grabbing content and spectacle (Bareis, 2024). And hype narratives are nothing if not attention-grabbing. In some respects, the hype narratives surrounding GenAI have found an ideal home amidst our current online information environment (Bourne, 2024). Recognizing AI hype as part of a longer history of technological hype, market frenzy, and raised expectations can help us better critically analyze the current wave of hype narratives surrounding GenAI and recognize the ways in which AI technologies operate as part of a technological industry where hype cycles serve as expressions of power, ways to amass capital, and a central aspect of technological development and our media ecosystem (Hao, 2025; Bender & Hanna, 2025).

AI literacy has emerged in conjunction with the seemingly abrupt and all-encompassing arrival of GenAI itself, and AI literacy has continued to evolve alongside our shifting understanding of GenAI. We can deepen our insights into the shifts and trends within AI literacy by situating AI literacy within the broader milieu of AI hype narratives, as well as within the longer history of technological hype (Van Lente et al., 2013; Bender & Hanna, 2025). While AI literacy tends to call for critical thinking, ethical understanding, and thoughtful approaches, AI hype tends to highlight things like ease, speed, simplicity, convenience, and the lack of need for deep thought, complexity, or worry (Bareis, 2024). AI hype narratives also tend to heavily anthropomorphize AI technologies, to the extent that it can be difficult to discuss GenAI without using terms that ascribe these tools more ability, and more humanity, than is warranted (Barrow, 2024; Placani, 2024). These hype narratives also presume an inevitability to GenAI, as if the emergence and ensuing dominance of GenAI in our society are an inescapable fact (Baer, 2025b). While the humanizing language surrounding GenAI can influence or even limit the vocabulary we use to discuss GenAI, the inevitability narratives surrounding GenAI can dissuade critical discussion altogether. After all, why discuss or debate something that is inevitable? The nature of AI hype narratives poses challenges for more critical approaches, as these narratives often suggest that AI technologies are beyond questioning, beyond human foibles, and beyond reproach. The hype narratives that cast AI as somehow superior to humans can dissuade criticism and questions directed towards GenAI and those who have created it (Baer, 2025a; Baer, 2025b; Campolo & Crawford, 2020). Overall, the inviolability found within AI hype narratives can shape, and even hinder, the ways in which we question and criticize GenAI.
Given the persuasive nature of AI hype narratives, and the potential harms inherent within AI technologies, there is a real and growing need for more critical and nuanced approaches to GenAI in the face of relentless hype narratives that seem to dissuade thinking deeply about AI in the first place.   

Many AI literacy frameworks, including work from Leo Lo and organizations like UNESCO and the Digital Education Council, increasingly highlight ethics and critical thinking as tenets of what it means to be AI literate, alongside using and understanding various AI tools and technologies (Lo, 2025; Miao & Shiohira, 2024; Digital Education Council, 2025). However, these AI literacy frameworks exist within and amidst pervasive and compelling AI hype narratives and can echo the underlying assumption that GenAI is powerful, inevitable, and potentially transformative (Baer, 2025b). How can we encourage understanding of GenAI without dissecting the often-misleading hype narratives surrounding it? And how can we gain insights from using highly personalized GenAI tools without reflecting on that experience? It seems to me that AI literacy frameworks can benefit from incorporating more critical approaches that equip learners to engage more thoughtfully with GenAI and avoid inadvertently reinforcing AI hype narratives. Floridi, in work on the AI bubble that AI hype is creating, notes that we need to “[m]aintain a critical and balanced perspective about AI developments, no matter what people with vested interests may say, recognising the technology’s potential and limitations” (Floridi, 2024, p. 12). To me, this is a call for embracing critical information and media literacy approaches that investigate and question narratives of power as a way to navigate AI hype.

AI hype is introducing a degree of cognitive dissonance as well, with a contrast between the extreme expectations set by AI hype and the reality of AI tools not performing as promised (Baer, 2025a; Floridi, 2024). And this cognitive dissonance seems to be giving rise to increased criticism of GenAI. There are emerging frameworks and schools of thought that challenge the centrality of using GenAI, such as the AI refusal movement, which argues that using AI is ethically unacceptable in many instances (Fox, 2024). Resources from places like the Rutgers Critical AI initiative also illustrate ways to utilize critical information and media literacy approaches for exploring GenAI. And as AI hype grows and reaches new and more bombastic heights, critiques of the AI enterprise rooted in privacy, labor concerns, and eco-critical stances, among others, have grown in response (Nguyen & Mateescu, 2024). There seems to be an interplay between hype narratives and the eventual counter-narratives that emerge seeking to puncture the hype, whether through concern, disagreement, or just sheer exasperation with whatever outlandish claims are being raised by various trending hype narratives.

I think we can place AI hype itself, and conceptions of AI literacy, within a longer history of over-hyped technology and within the context of our current social media era. If we situate AI hype and AI personalization, as well as AI literacy, within this space, we can draw upon critical approaches and reflective practices that have emerged in media and information literacy spaces and put these lessons into conversation with GenAI (Soken & Nygreen, 2024). We are seeing calls for AI literacy to emphasize ethics and critical thinking (Lo, 2025). But to do that, I think we need to better contend with the context of AI: how AI is being discussed, perceived, and received (Sloane et al., 2024; Bourne, 2024; Baer, 2025). Thinking critically and ethically about AI involves understanding not just how this technology works but how AI is being packaged and presented, how people are experiencing and understanding AI, the culture into which AI is being unleashed, and how AI literacy itself is situated within this environment. I think we can bring these threads together with an eye to developing a more critical AI literacy that considers the influence of AI hype and personalization on our understanding of GenAI.

Understanding AI Hype

AI hype not only influences our understanding of AI, but it also sets up certain parameters for our conversations about AI. The crux of AI hype narratives seems to be a narrative of power, with a focus on the amazing and terrible things that GenAI can do, as well as an underlying theme of who is in power in this AI landscape (Hao, 2025; Bender & Hanna, 2025). Hype is about influence, about generating excitement, and about inspiring strong emotions (Sloane et al., 2024; Bourne, 2024). And, significantly, hype is not accidental but rather crafted to attract positive attention and funding (Goncalves & Bareis, 2025). Within these narratives, there seems to be an idea that GenAI is powerful, that using GenAI can make you better and more powerful (as if the sheen of GenAI can rub off on you), and that creating and developing GenAI tools imbues you with a degree of mysticism. In fact, some have started to note the uncanny similarities between AI hype and a religious movement, complete with commandments, origin myths, prophecies, a belief in the apocalypse, ritualistic practice, and acceptable forms of behavior and language (Epstein, 2024). 

Before delving further into what AI hype tends to say, it is worth noting who is crafting and sharing these narratives. Creators of AI technologies, including the heads of various technology companies and the marketing departments of those companies, inject a great deal of AI hype into our media environment (Bender & Hanna, 2025). And many of the companies who play major roles in the AI technology landscape already exercise undue influence in our media landscape, controlling our information discovery platforms (like Google) and our social media sites (like Meta). Many of our major technology companies, whether they are driving the development of GenAI technologies or are hopping on the bandwagon of GenAI developments, are creating and promoting AI hype narratives and are pushing GenAI features on their platforms, further contributing to the feeling that GenAI is inescapable and inevitable. From exclusive interviews with high-profile outlets, to commercials, to conveniently timed “leaks” about new features, to press releases, there is a never-ending stream of hype emerging from these companies (Duarte, 2024; Hao, 2025). If we apply critical analysis to these narratives, some motives emerge. Money, sustained power, and influence drive AI hype narratives of more corporate origin (Hao, 2025; Bender & Hanna, 2025). After all, a fantastic, useful, and powerful tool will attract users, investors, and more positive attention. AI hype narratives also emerge from media outlets, governments, other industries, and from users of AI technologies, all of whom echo and reinforce the hype produced by various corporate interests (Hao, 2025; Bareis & Katzenbach, 2022).

Interestingly, AI doom narratives arguably operate as another side of AI hype narratives (Vinsel, 2021). After all, GenAI must be powerful and incredible if it can potentially trigger the apocalypse. Here the doom narratives can feed into the overall hype narratives surrounding GenAI, potentially distracting us from more complex and nuanced challenges and issues associated with AI technologies (Hanna & Bender, 2023). As Sloane et al. (2024) note, “Although situated as polar opposites, stories of excitement and of terror are both integral to the practice of AI hyping because they grossly simplify AI narratives and pit them against the realities of AI design and use” (p. 670). This polarized interplay of terror and excitement, doom and joy, dystopia and utopia, forms the crux of AI hype narratives and creates challenges for discussing GenAI with nuance and critical discernment. Amid these outlandish claims lies a degree of confusion, unease, and even distaste. In an article in The Scholarly Kitchen, Jones (2025) posits what many of us have been wondering and asking: what exactly do AI tools actually do? Some recent studies have illustrated that people tend to like GenAI less the more they learn about it (Chen et al., 2024; Tully et al., 2025). While more research is needed in this area, recent surveys do indicate that there might be an inverse relationship between learning about AI and liking AI, which has implications for AI literacy education. If your motive is to get people using AI, then it stands to reason that the stories you tell about AI will gloss over its issues. AI hype narratives seem to discourage criticism and critical thought while encouraging unquestioned use of and enthusiasm towards GenAI (Duarte, 2024). In contrast, if your motive is to educate people about AI, then it seems you need to cut through the persuasive and distracting hype surrounding AI (Ndungu, 2024; Baer, 2025a; Soken & Nygreen, 2024).

What does it mean to critically engage with something in the midst of being inundated with outlandish propaganda? How can librarians and other educators equip learners to critically engage with AI technologies, to question them, and to potentially challenge claims made about and by GenAI technologies in information environments inundated with AI hype, where GenAI is positioned as authoritative? Within a hype cycle, embracing a critical approach involves not just information and understanding but the confidence and knowledge to make and share critical views and arguments (Baer, 2025a; Baer, 2025b; Soken & Nygreen, 2024). There are a few motifs and themes within the AI hype narratives that I feel are worth unpacking, and that have implications for how we can develop a critical AI literacy imbued with a focus on narrative, context, and reflection. To my mind, there are three areas that are key for understanding the current nature of AI hype and the ways that AI hype is shaping our understanding of and relationship with GenAI.

The first area is power. Power can of course be enticing, but it can also be prohibitive, in that the perception of power can squash dissent. As Duarte (2025) argues, our ability to think critically about GenAI can be “dramatically impeded by exposure to inaccurate information, especially when it is delivered confidently and compellingly by AI executives and other influential figures” (para. 4). Whether the narratives about GenAI are inaccurate, distracting, misleading, exaggerated, or some combination of those things, these narratives seem designed to influence more than inform. AI hype narratives promote the power of AI tools, but they also serve as expressions of power and influence from the individuals and groups, such as technology companies, crafting and sharing them (Hao, 2025). Power is central to the AI hype narratives we are currently seeing and to the emerging counternarratives, where critics of GenAI and the AI enterprise often dissect how GenAI tools do not actually work as advertised and are not as powerful as proclaimed (Bender & Hanna, 2025; Nguyen & Mateescu, 2024). And power leads us to a few other themes that are, in some respects, unique to AI hype narratives when compared to the hype narratives we have seen about other technologies.

Next is magical thinking. There is a degree of magic surrounding narratives about AI and hype narratives in particular. According to these narratives, AI can do an endless array of wondrous and wonderful things and can make astonishing leaps in performance (Mitchell, 2025). The sense of magic imbuing AI can lead people to believe in the capabilities and power of AI unquestioningly. And a belief in the magic of AI has been linked to lower levels of AI literacy, with a study from Tully et al. (2025) noting that individuals with lower levels of AI literacy are more likely to perceive AI as magical and more likely to be receptive towards using AI tools. Magic is a key aspect of AI hype narratives, and an aspect of AI personalization as well, with GenAI appearing as some sort of all-powerful and all-knowing companion, like a sort of technological fairy godmother. But magic also crops up in the nature of AI hype narratives themselves, not just in how AI tools allegedly perform. As David Morris (2024) notes in his work on AI and magic, “Magicians hack our attentional, perceptual, and cognitive tendencies to make us perceive and believe what is not there” (p. 3047). Here AI technologies function as magical tools while the creators of AI technologies function as magicians, using dazzling techniques to divert our attention. This sort of technique lies at the core of AI hype narratives, which arguably distract from real issues and complexities surrounding the development and deployment of GenAI (Hanna & Bender, 2023). Recognizing the magic running through narratives surrounding AI, and how it shapes our perception of these tools as immensely powerful, is a key aspect of approaching AI with a critical lens.

The final area worth considering is inevitability. Within AI hype narratives, AI technologies are presented as somehow inevitable and unquestionable (Baer, 2025b; Goncalves & Bareis, 2025). As noted, these narratives can take on a sort of religious fervor, as if AI technologies are somehow preordained (Epstein, 2024). A prevailing sentiment seems to be that AI is here, it is not going anywhere, and everyone must adapt to this new AI-driven reality. This sort of narrative can dissuade questioning, both through more overt prohibitions and through more subtle implications about futility (if AI is inevitable, then what use is complaining or questioning?) and progress (if you question progress, does that mean you are somehow backwards?) (Baer, 2025a). The hype narratives that emphasize the inevitability of GenAI can also hinder critical engagement with AI technologies and even cast the act of asking questions as unduly negative or as resisting inevitable technological progress.

Taken together, these trends within AI hype narratives can make critical thinking and critical engagement with AI incredibly challenging. Even critiquing GenAI in the midst of an environment dominated by AI hype runs the risk of giving too much credence to AI’s alleged power (Sloane et al., 2024). To critically engage with AI technologies, we need to cut through narratives of power, magic, and inevitability, which can involve taking the time to untangle and rebut various hype narratives and claims before moving on to things like actual critiques, policy proposals, or more nuanced arguments (Sloane et al., 2024). While AI hype can be a distraction, understanding and analyzing AI hype is a vital component of a more critical AI literacy. By borrowing from critical information and media literacy, we can weave skills in analysis and evaluation into AI literacy and better equip learners to ask questions, consider the context of GenAI, dissect narratives of power (with hype narratives at their core), and more thoughtfully consider how we are experiencing and understanding GenAI amidst the outlandish claims of AI hype.

Unpacking AI Personalization

Amidst the frenetic hype surrounding AI, which can beggar belief, is the emotionally appealing, persuasive, and at times manipulative nature of AI personalization. AI personalization can take the form of agreeableness, positivity, and even sycophancy (Hermann, 2022; Kaffee & Pistilli, 2025; Selvi, 2025). AI chatbots are endlessly helpful, rarely disagree or argue, and (if the hype is to be believed) always do what you ask. The personalized nature of AI tools, and the experience of using these seemingly friendly, agreeable, and helpful tools, can create feelings and emotions among users that I feel are important to recognize and consider as we strive to develop more human-centered approaches to AI literacy. A study from Data & Society notes that while “our participants know the chatbot is neither ‘real’ nor ‘intelligent,’ they also know that the feelings it elicits in them are genuine,” describing how users find chatbots safe, easy to talk to, and comforting (Garofalo & Vecchione, 2025). Even if people are aware of the nature of AI personalization, and the artifice of these tools, feelings of trust and fondness can still emerge. However, many users are not aware of the machinations behind AI tools and how personalized features are in many respects an effort to keep users glued to a given chatbot platform (Lupetti & Murray-Rust, 2024). We already face challenges in critically engaging with AI due to AI hype, where narratives present AI as powerful, magical, inevitable, and something that should not be questioned. But the personalized nature of AI adds further challenges. While AI hype narratives might strain credulity, the personalization of AI, and its emotional pull, can nevertheless make it difficult to question and challenge tools that feel emotionally resonant and appealing.

GenAI chatbots have a tendency towards positivity and agreeableness, which can foster trust and reliance. As Kaffee and Pistilli (2025) note, GenAI “systems already simulate care, empathy, and attentiveness” (para. 9). Meanwhile, Gary Marcus (2025) argues that GenAI chatbots fool people into thinking they can behave like humans, when in reality these tools are just mimicking humans. Constantly hearing that everything you say and think is fantastic can be enticing, if not addictive. In fact, when OpenAI released a ChatGPT update in the summer of 2025 that toned down the sycophancy, users complained (Tangermann, 2025). This personalization also seems to exacerbate trends we have already seen in social media spaces with things like filter bubbles and echo chambers, in which algorithms curate customized environments where you only see and hear what the algorithm thinks you want to see and hear. As AI gets further embedded into many of our existing online tools and spaces, from search engines to social media sites, what effect will this have on people’s ability to identify and critique GenAI? If someone is hearing what they want to hear, or feels trust towards the powerful, magical, and personalized tool they are using, will they be inclined to analyze or question that tool?

We can benefit from unpacking AI personalization within the context of AI hype narratives that emphasize power, magic, inevitability, and the superior nature of AI when compared to humans. Notably, the personalized nature and experience of GenAI reinforces many of the themes found within AI hype narratives. AI chatbots seem poised to act as the ultimate personal assistants, able to handle any task or question without complaint or fatigue. The speed with which AI chatbots respond, and the confidence with which they do so, belie the chronic issue of so-called AI hallucinations that has plagued AI chatbots since their launch (Hicks et al., 2024). AI chatbots give the impression of being powerful and wise, and the hype narratives surrounding AI reinforce the behavior of the chatbots themselves. As a result, we are seeing emerging issues with cognitive offloading, where people trust these tools and become overly reliant on their AI personal assistants, potentially degrading their own skills and cognitive abilities (Kulal, 2025; Skibba, 2025). Overall, this reliance on seemingly powerful GenAI tools can foster trust in, and affinity toward, those tools.

Magical thinking and the magic narratives surrounding GenAI also intersect with the personalized experience of using AI tools. As we have seen, AI hype narratives frequently imbue AI with a sense of mysticism and magic. And something that is always at the core of magical narratives is trust and belief (Morris, 2024). Endlessly cheery and agreeable AI tools ask for trust, even if the ideas they share are half-baked, the sources are made up, or the writing is mediocre. The underlying promise seems to be that if you don’t look too closely or delve too deeply, if you trust the magic and the speed and the power, if you accept the results that you are (quickly) given, if you place your trust and your cognition into AI’s hands, then you will have nothing to worry about. The overall personalized user experience and design of GenAI can contribute to a sense of “enchantment” with using AI tools (Lupetti & Murray-Rust, 2024). But this experience of enchantment goes beyond using AI tools and shapes the nature, and potential goals, of AI hype narratives as well. As Campolo and Crawford (2020) note, the experience of enchantment shields creators of AI tools from scrutiny and accountability. The user experience of GenAI often discourages reflection and deep thought, while the magic trick of AI hype narratives and AI user experience encourages trust and belief. The positive feelings generated (pun intended) towards AI by the personalization of AI technologies can reinforce AI hype narratives.

Just as we can experience challenges in critically engaging with GenAI amidst hype narratives that emphasize the amazing and powerful nature of AI technologies, we can experience difficulties with thinking critically and clearly about AI in the midst of the emotional experience of AI personalization. Additionally, the experience of using AI technologies can be quite emotionally complex, while our individual and collective responses to AI development are also rooted in strong emotions like fear, anxiety, enthusiasm, curiosity, and even frustration and anger (Bourne, 2024; Chen et al., 2024). I think it is important to recognize that we as librarians and educators might have strong feelings towards AI ourselves, just as our learners might have complicated emotions about AI (Baer, 2025a; Fox, 2024; Monnier et al., 2025). As we continue to develop AI literacy in response to AI trends, I think we have to acknowledge and even center the emotional aspects of our experiences with and reactions to AI.

One potential way forward is to borrow from critical information and media literacies, which emphasize the complex experiences people have with information and the ways that media shapes, and is shaped by, systems of power (Kellner & Share, 2005; Soken & Nygreen, 2024). If our understanding of GenAI is shaped by narratives of power in the guise of AI hype and by our experiences with using these tools under the influence of AI personalization, then I believe we can benefit from bringing critical approaches that address these facets of GenAI into AI literacy. AI hype might seek to present AI as unprecedented and amazing, but I feel that AI is part and parcel of broader trends in technological hype, personalization, and what Bourne (2024) calls “affective capitalism,” a capitalism rooted in emotional appeals and personalization (p. 758). And if GenAI is part of these broader trends, then I think we can situate AI literacy within existing trends and approaches found in critical information and media literacy.

In environments colored by ubiquitous AI hype narratives and the personalized effects of AI technologies, the ability to reflect is crucial. While it is important for learners to understand AI, I feel that it is also key for learners to be able to reflect upon and identify how AI is making them feel and how they are responding to AI. This skill is increasingly important given how appealing and persuasive AI hype and personalization can be. Incorporating reflection into AI literacy alongside skills like critical thinking will strengthen existing aspects of AI literacy like ethical reasoning and evaluation, and will highlight a skill set that can better enable people to navigate the emotional complexities of AI hype and AI personalization. By exploring both hype narratives and the personalized output of GenAI, we can develop richer approaches to AI literacy.

Developing Critical AI Literacies

The experiences and effects of AI hype and AI personalization complicate our efforts to engage critically and thoughtfully with generative AI tools and technologies and the many challenges and issues these technologies introduce. A more critical and reflective approach to AI literacy can help us unpack these narratives of power and influence. But I think a challenge for librarians and educators lies in finding ways to make that focus explicit, central, and sustained amidst all the other demands inherent in AI literacy, and in broader information literacy for that matter. In my own work as an instruction librarian, I have felt the pressure of time constraints and the enormity and complexity of the information literacy topics I am aiming to address. Personally, I feel that intentionality, and an emphasis on equipping learners to ask questions rather than settle on a single correct answer, can create space for the more critical and contextualized approaches to AI literacy I am advocating here. Librarians and educators can bring in examples of AI hype narratives or AI personalization, pose questions, and encourage learners to share their own experiences. Taking a little time, even when time is short during an instruction session, to spark curiosity and awareness can equip learners to better take in the bigger picture and context of GenAI, beyond simply using an individual tool. Ultimately, I believe that librarians, educators, and our learners can benefit from the introduction of two lenses into emerging AI literacy frameworks.

First is a focus on context and contextual analysis, where we take a critical approach to analyzing the narrative context surrounding technologies like AI and how that context mediates, shapes, and influences our experiences with those technologies. This concern with narratives of power is a framing that can be particularly beneficial for gaining a deeper and more critical understanding of AI technologies (Baer, 2025b; Soken & Nygreen, 2024). Many AI literacy frameworks, including the AI Competencies for Academic Library Workers (ACRL, 2025), include a call for developing an understanding of AI technologies, including how they work and how they are developed. But I believe that we can extend this understanding to include a focus on how AI technologies and tools are presented, received, and conceptualized by the public. The narratives hyping AI, whether through commercials, interviews, media coverage, or social media posts, greatly shape how we conceptualize and discuss AI, and can even dissuade us from criticizing or questioning AI technologies thanks to the aura of power, magic, and inevitability that AI hype narratives create around Generative AI. When teaching others about AI technologies, librarians and other educators can discuss trends in AI hype with students, encourage students to reflect on the AI hype narratives they have encountered, and share examples of AI hype narratives for analysis, reflection, and discussion (Ndungu, 2024; Soken & Nygreen, 2024). I believe that equipping students to think critically about AI and to feel confident in sharing their opinions and views is an important component of developing a more critical AI literacy and a broader and richer understanding of GenAI. And this approach has implications for information and media literacy more generally, where we can encourage learners to think critically about other technologies that might also be overhyped in the media or cast as powerful or beyond reproach.

The second lens that we can introduce to AI literacy is a focus on reflective practice that empowers learners to better recognize and critically engage with narratives and technological tools, like AI, that might be highly personalized. As we have seen, AI hype and the experience of using AI tools can discourage reflection and critical analysis and encourage trust and awe. Emphasizing reflection as a key component of AI literacy mirrors approaches increasingly used in broader media and information literacies (Ndungu, 2024; Soken & Nygreen, 2024). Researchers like Riesen (2025) have argued that reflective practices can help learners better contextualize and apply information literacy skills. I believe reflection can also help learners find personal meaning, value, and context for AI literacy skills. AI literacy frameworks generally include a call for evaluation of AI output. But I think we can also encourage an evaluation of our own thoughts and feelings towards AI, and a reflective approach to both using AI tools and consuming content about AI tools. What emotions are arising? Why might an AI tool foster a certain kind of user experience? What motivations underlie narratives surrounding AI? These questions can be part of a reflective practice in which students are encouraged to pause, consider, and reflect on their own experiences with AI as a way to better critically analyze AI technologies. AI literacy emphasizes using AI tools and analyzing the output of those tools. But by taking a step backward and outward, and by posing questions about the implications of GenAI, the narratives being woven about and around GenAI, and the experiences people are having with GenAI, we can encourage learners to ask questions, sort through their thoughts and feelings, share their ideas, and begin to engage more critically with not just individual GenAI tools but the entire GenAI enterprise.

Conclusion

Putting AI hype and AI personalization into conversation can help us develop an AI literacy that focuses not only on critical thinking but also on reflection, context, and the complex emotional experiences that we have with AI technologies. I think that a human-centered AI literacy can and should embrace the complicated, messy, and emotional aspects of our collective and individual experiences with GenAI and the stories we imbibe and tell ourselves about these tools. And by centering and acknowledging the emotional complexities of our experiences with, and reactions to, GenAI, we can better engage in conversations with learners and delve into issues surrounding GenAI and its development and use.

The personalized experience of using AI tools and the hype surrounding AI cannot be separated from our understanding of GenAI. Rather, AI hype and AI personalization deeply shape and influence our experience with GenAI and how we perceive, react to, and make decisions about GenAI, including when, where, and how we use these AI tools. By grounding AI literacy in this sort of critical analysis, contextualization, and reflective practice, I feel that we can strengthen both AI literacy and information literacy and equip learners to better engage in our complex and rapidly changing information environments. Librarians and other educators can work to develop an AI literacy that is concerned with and informed by the context in which AI technologies are developed and emerge, as well as the complex and emotional human experience of using, understanding, and responding to GenAI.


Acknowledgements 

I want to extend my sincere thanks to my internal reviewer Brea McQueen, my publishing editor Brittany Paloma Fiedler, and my external reviewer Rosalind Tedford for their time, attention to detail, constructive feedback, and support. Their thoughtful comments, ideas, and feedback proved invaluable throughout the stages of shaping this article. I am fortunate to have collaborated with Rosalind on previous projects related to information and AI literacy, and I’d like to extend a thank you to her and Dan Chibnall for serving as thought-partners and collaborators over the years. I’d also like to thank Andrea Baer and Brady Beard for their time, generosity, and willingness to discuss generative AI and librarianship with me. Their work has helped to shape and inspire my own. Finally, a thank you to the Lead Pipe Editorial Board for the opportunity to publish my work here.


Suggested Tags

Generative AI; AI literacy; AI hype

References

ACRL. (2025). AI competencies for academic library workers. https://www.ala.org/acrl/standards/ai

Baer, A. (2025a). Unpacking predominant narratives about generative AI and education: A starting point for teaching critical AI literacy and imagining better futures. Library Trends, 73(3), 141-159. https://muse.jhu.edu/pub/1/article/961189/pdf

Baer, A. (2025b). Investigating the ‘feeling rules’ of generative AI and imagining alternative futures. In the Library with the Lead Pipe. https://www.inthelibrarywiththeleadpipe.org/2025/ai-feeling-rules/

Bareis, J. (2024). Ask me anything! How ChatGPT got hyped into being. Preprint. Center for Open Science. https://doi.org/10.31235/osf.io/jzde2

Bareis, J., & Katzenbach, C. (2022). Talking AI into being: The narratives and imaginaries of national AI strategies and their performative politics. Science, Technology, & Human Values, 47(5), 855-881. https://doi.org/10.1177/01622439211030007

Barrow, N. (2024). Anthropomorphism and AI hype. AI and Ethics, 4(3), 707-711. https://doi.org/10.1007/s43681-024-00454-1

Bender, E.M., & Hanna, A. (2025). The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. Harper.

Bourne, C. (2024). AI hype, promotional culture, and affective capitalism. AI and Ethics, 4(3), 757-769. https://doi.org/10.1007/s43681-024-00483-w

Campolo, A., & Crawford, K. (2020). Enchanted determinism: Power without responsibility in artificial intelligence. Engaging Science, Technology, and Society. https://knowledge.uchicago.edu/record/6022?v=pdf

Chen, Y. S., Tang, Y. C., & Chen, C. (2024). The ethical deliberation of generative AI in media applications. Emerging Media, 2(2), 259-276. https://doi.org/10.1177/27523543241277563

Dedehayir, O., & Steinert, M. (2016). The hype cycle model: A review and future directions. Technological Forecasting and Social Change, 108, 28-41. https://doi.org/10.1016/j.techfore.2016.04.005

Digital Education Council (2025). Digital Education Council AI literacy framework. https://www.digitaleducationcouncil.com/post/digital-education-council-ai-literacy-framework

Duarte, T. (2024). As the AI bubble deflates, the ethics of hype are in the spotlight. Tech Policy Press. https://www.techpolicy.press/as-the-ai-bubble-deflates-the-ethics-of-hype-are-in-the-spotlight/

Epstein, G. (2024). Silicon Valley’s obsession with AI looks a lot like religion. The MIT Press Reader. https://thereader.mitpress.mit.edu/silicon-valleys-obsession-with-ai-looks-a-lot-like-religion/

Floridi, L. (2024). Why the AI hype is another tech bubble. Philosophy & Technology, 37(4). https://doi.org/10.1007/s13347-024-00817-w

Fox, V. (2024). A librarian against AI. https://violetbfox.info/against-ai

Garofalo, L., & Vecchione, B. (2025). All the lonely people: On being alone with digital companions. Data & Society. https://datasociety.net/points/all-the-lonely-people/

Gartner. (n.d.). Gartner hype cycle. https://www.gartner.com/en/research/methodologies/gartner-hype-cycle

Goncalves, A.B., & Bareis, J. (2025). Expanding hype literacy to protect democracy. Tech Policy Press. https://www.techpolicy.press/expanding-hype-literacy-to-protect-democracy/

Hanna, A., & Bender, E. (2023). AI causes real harm: Let’s focus on that over the end-of-humanity hype. Scientific American. https://www.scientificamerican.com/article/we-need-to-focus-on-ais-real-harms-not-imaginary-existential-risks/

Hao, K. (2025). Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Penguin Press.

Hermann, E. (2022). Artificial intelligence and mass personalization of communication content—An ethical and literacy perspective. New Media & Society, 24(5), 1258-1277. https://doi.org/10.1177/14614448211022702

Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(2), 38. https://doi.org/10.1007/s10676-024-09775-5

Jones, P. (2025). Three years after the launch of ChatGPT, do we know where this is heading? The Scholarly Kitchen. https://scholarlykitchen.sspnet.org/2025/10/13/three-years-after-the-launch-of-chatgpt-do-we-know-where-this-is-heading/

Kaffee, L., & Pistilli, G. (2025). Before AI exploits our chats, let’s learn from social media mistakes. Tech Policy Press. https://www.techpolicy.press/before-ai-exploits-our-chats-lets-learn-from-social-media-mistakes/

Kellner, D., & Share, J. (2005). Toward critical media literacy: Core concepts, debates, organizations, and policy. Discourse: Studies in the Cultural Politics of Education, 26(3), 369-386. https://doi.org/10.1080/01596300500200169

Kulal, A. (2025). Cognitive risks of AI: Literacy, trust, and critical thinking. Journal of Computer Information Systems, 1-13. https://doi.org/10.1080/08874417.2025.2582050

Lo, L. S. (2025). AI literacy for all: A universal framework [Preprint]. University of New Mexico Digital Repository. https://digitalrepository.unm.edu/cgi/viewcontent.cgi?article=1216&context=ulls_fsp

Lupetti, M. L., & Murray-Rust, D. (2024). (Un)making AI magic: A design taxonomy. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (pp. 1-21). https://doi.org/10.1145/3613904.3641954

Marcus, G. (2025). Why DO large language models hallucinate? Marcus on AI. https://garymarcus.substack.com/p/why-do-large-language-models-hallucinate

Miao, F., & Shiohira, K. (2024). AI competency framework for students. UNESCO Publishing. https://www.unesco.org/en/articles/ai-competency-framework-students

Mitchell, M. (2025). Magical thinking on AI. AI: A Guide for Thinking Humans. https://aiguide.substack.com/p/magical-thinking-on-ai

Monnier, R., Noe, M., & Gibson, E. (2025). AI in academic libraries, part one: Concerns and commodification. College & Research Libraries News, 86(4), 173. https://doi.org/10.5860/crln.86.4.173

Morris, D. (2024). Magical thinking and the test of humanity: We have seen the danger of AI and it is us. AI & SOCIETY, 39(6), 3047-3049. https://doi.org/10.1007/s00146-023-01775-1

Ndungu, M. W. (2024). Integrating basic artificial intelligence literacy into media and information literacy programs in higher education: A framework for librarians and educators. Journal of Information Literacy, 18(2), 1–18. https://doi.org/10.11645/18.2.641

Nguyen, A., & Mateescu, A. (2024). Generative AI and labor: Value, hype, and value at work. Data & Society. https://datasociety.net/library/generative-ai-and-labor

Placani, A. (2024). Anthropomorphism in AI: Hype and fallacy. AI and Ethics, 4(3), 691-698. https://doi.org/10.1007/s43681-024-00419-4

Riesen, K. (2025). Incorporating signature pedagogies into library instruction through reflective pedagogy. Portal: Libraries and the Academy, 25(1), 137-150. https://dx.doi.org/10.1353/pla.2025.a950012

Rutgers (2026). Critical AI. Rutgers School of Arts and Sciences Critical AI. https://sites.rutgers.edu/critical-ai/

Selvi, A. F. (2025). Meet your new AI teacher: hypes, promises, and realities in AI-powered language education platforms. Applied Linguistics Review. https://doi.org/10.1515/applirev-2025-0224

Skibba, R. (2025). Are we offloading critical thinking to chatbots? Undark. https://undark.org/2025/09/12/critical-thinking-chatbots/

Sloane, M., Danks, D., & Moss, E. (2024). Tackling AI hyping. AI and Ethics, 4(3), 669-677. https://doi.org/10.1007/s43681-024-00481-y

Soken, A., & Nygreen, K. (2024). Framing generative AI through a critical media literacy lens: A reflective practitioner-inquiry study. International Journal of Transformative Teaching and Learning in Higher Education, 1(1), 7. https://commons.library.stonybrook.edu/cgi/viewcontent.cgi?article=1010&context=ijttl

Stryker, C. & Scapicchio, M. (2026). What is generative AI? The 2026 Guide to Machine Learning. IBM. https://www.ibm.com/think/machine-learning#605511093

Swant, M. (2025). The surprising advertising strategy AI companies are investing in to stand out. Inc. https://www.inc.com/marty-swant/the-surprising-advertising-strategy-ai-companies-are-investing-in-to-stand-out/91281145

Tangermann, V. (2025). OpenAI announces that it’s making GPT-5 more sycophantic after user backlash. Futurism. https://futurism.com/openai-gpt5-more-sycophantic

Tewell, E. (2015). A decade of critical information literacy: A review of the literature. Communications in Information Literacy, 9(1), 2. https://doi.org/10.15760/comminfolit.2015.9.1.174

Tully, S. M., Longoni, C., & Appel, G. (2025). Lower artificial intelligence literacy predicts greater AI receptivity. Journal of Marketing. https://doi.org/10.1177/00222429251314491

Van Lente, H., Spitters, C., & Peine, A. (2013). Comparing technological hype cycles: Towards a theory. Technological Forecasting and Social Change, 80(8), 1615-1628. https://doi.org/10.1016/j.techfore.2012.12.004

Vinsel, L. (2021). You’re doing it wrong: Notes on criticism and technology hype. Medium. https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5