Digital, Algorithmic, and AI Literacies

Much of my research concerns what people know and think about digital technologies, particularly data-centric technologies. I challenge dominant literacy frameworks that focus on adapting to a technological status quo that often reinforces structural inequalities. Instead of centering technical skills alone, I approach literacy as dynamic, multifaceted, and shaped by social, emotional, and everyday experiences.

Critical Algorithmic Literacy

I am currently writing a book on critical algorithmic literacy, under contract with Oxford University Press. This book introduces a new framework for understanding algorithmic literacy—not as a fixed set of knowledges or skills, but as heterogeneous, situated, social in nature, and actively negotiated within power structures. I examine how people learn about and make sense of algorithms, how their social worlds shape knowledge building, and the possibility of algorithmic literacy as a counterforce to “algorithmic power.” The core premise of this book is that algorithmic literacy has the potential to serve as a form of bottom-up governance—a means of better ensuring that algorithms function fairly and justly for all. Fully realizing this potential, however, requires recognizing the value and legitimacy of a broader range of knowledges than has been considered thus far.

Algorithmic Knowledge Gaps

My research also examines disparities in algorithmic knowledge that mirror broader patterns of digital inequality. My work was among the earliest to point to what Bianca Reisdorf and I termed algorithmic knowledge gaps—uneven understandings of algorithms that emerge along existing socioeconomic and digital divides. We highlighted the existence of these gaps in the context of online search, while also showing that algorithms are “experience technologies,” best understood through a breadth of experiences with them. More recently, I have extended this work to study intra-rural algorithmic knowledge gaps in central Appalachia, with implications for digital upskilling and reskilling efforts.

Algorithmic Conspirituality

Another strand of my research investigates how people interpret social media algorithms in ways that go beyond rational or technical explanations. My colleagues and I introduced the concept of algorithmic conspirituality to describe how some users perceive algorithmic recommendations as divinely orchestrated—believing that certain content has reached them for a cosmically significant reason. This phenomenon blends a fetishization of data science with growing trends in conspiracy theorizing and New Age spirituality. My work has shown that even those who understand how algorithms function may temporarily suspend disbelief when faced with serendipitous algorithmic encounters experienced as deeply intimate.

Imaginaries and Discourses

In another strand of work, I explore the collective narratives, myths, and ideological frameworks that shape perceptions and behavior around technology. I examine how sociotechnical imaginaries urge particular relationships with technology, often grounded in technological determinism and solutionism. Additionally, I analyze the discourses used by technology developers to rationalize their growing influence—whether through political ad tech firms justifying microtargeting, Disney framing its AI-driven MyMagic+ system as a means of transcending reality, or social media platforms shaping the role of volunteer moderators in local communities.

Technology and Power

A further strand of research critically examines how digital technologies both reflect and reinforce existing structural inequalities. Here, I focus on the political order collectively constructed by social media users, advertisers, policies, and algorithmic systems.

In particular, I analyze how social media platforms conceptualize and implement policies and practices around content recommendation and moderation. In one study, my colleagues and I investigated how major platforms define and enforce policies related to harm and violence—often in ways that perpetuate ideological hegemony and shape normative understandings of harm. In another, we examined Facebook’s ad targeting system, which classifies users based on inferred interests. We argued that these classifications are not neutral; rather, they embed political choices that can reinforce biases and power imbalances. Most recently, my students and I explored the design of fat-positive technologies through participatory design workshops with fat liberationist organizers and community members.

Through these analyses, my work seeks to uncover and challenge the ways digital technologies contribute to and exacerbate societal inequalities, advocating for more transparent and equitable technological practices.

View/download my published work here