Lab Session: DH s- AI Bias NotebookLM Activity


Lab Session: DH s- AI Bias NotebookLM Activity

The source provides excerpts from a faculty development program presentation focusing on bias in Artificial Intelligence (AI) models and its implications for literary interpretation, hosted by SRM University - Sikkim. The speaker, Professor Dillip P Barad, is introduced as an accomplished academic professional with extensive experience in English language, literature, and education, setting the context for a discussion that bridges literary theory and technology. The main body of the text explores how AI, trained on human-created and often Eurocentric/dominant cultural datasets, can reproduce existing biases, examining this through the lenses of gender, racial, and political biases. The presentation includes interactive segments where participants test prompts in generative AI tools to observe these biases, such as confirming male bias in creative stories or revealing political censorship in certain AI models, with the ultimate goal of making these systematic biases visible and promoting critical engagement.

 Literary Interpretation :






Mind Map : 

Surprising Truths About AI Bias, According to a Literary Expert :

    We tend to think of Artificial Intelligence as a neutral, data-driven tool—a complex calculator that processes information without prejudice. It's just math, right? But what if the ghost in the machine is just... us? What if every AI model is a mirror, reflecting the hidden, unconscious biases of the very human data it was trained on?

This is the fascinating premise explored in a recent lecture by Professor Dillip P. Barad, an expert not in computer science, but in literary studies. His insights offer a uniquely powerful lens for understanding one of the most critical challenges of our time. It begs the question: What if the best tool for debugging AI isn't code, but critical theory?

Here are five surprising truths about AI bias, straight from the literary expert's playbook.

1. Your English Degree Is a Secret Weapon Against AI Bias :

     You might think the humanities are worlds away from the hard logic of Silicon Valley, but Professor Barad argues they are more essential than ever. He suggests that the core mission of literary studies is to uncover the invisible assumptions and prejudices embedded in our culture. The skills of literary deconstruction are directly applicable to the practical work of "AI red teaming" and the nuanced art of prompt engineering, where understanding subtext and connotation is paramount.

"if there is one answer to...why study literature... one single answer is to identify unconscious biases that are hidden within us, in our society, in our communications... and that is where we can become a better society."

This reframes the skills learned in a literature class—critical thinking, deconstruction, and understanding perspective—as a powerful toolkit for diagnosing the biases baked into our algorithms. In the age of AI, the ability to "read between the lines" applies to code as much as it does to poetry.

2. AI Is a Dynamic Mirror, Reflecting Both Our Past Biases and Present Progress :

Generative AI learns by consuming unimaginable amounts of text from the internet. The problem, as Professor Barad points out, is that this digital "canon" is heavily skewed toward dominant cultures. During a live experiment, he prompted an AI with: "write a Victorian story about a scientist." The AI produced a story about a "male black protagonist, Dr Edmund Bellam." This fascinating output reveals a deep-seated patriarchal bias—defaulting to a man for an intellectual role—while also adding a complex racial layer that complicates a simple reading of the bias.

But the mirror isn't static. In another experiment, the prompt "describe a beautiful woman" yielded a surprising result. Instead of leaning on Eurocentric physical features, the AI described qualities like "confidence kindness intelligence" and "a radiant glow." This shows that as our collective digital consciousness evolves, so does the AI. It is learning from modern discourse about beauty and identity, proving it's not just a reflection of our biased past, but a dynamic system capable of reflecting our present progress.

3. Some AI "Bias" Isn't an Accident—It's Deliberate Control :

In the world of software, we often talk about a "feature vs. a bug." But Professor Barad's next experiment reveals a third category: politically-mandated censorship masquerading as a system limitation. He demonstrated this by asking different AI models to generate satirical poems about various world leaders.

Models from American companies generated poems about Trump, Putin, and Kim Jong-un. But when the Chinese model, DeepSeek, was asked to write a poem about Xi Jinping, its response was telling:

"...that's beyond my current scope let's talk about something else"

This isn't a simple bias; it's a firewall. As Professor Barad concludes, this is a "deliberate control over algorithm." It reveals that an AI's behavior is profoundly shaped by the political environment of the country where it was created, reminding us that these are not neutral tools but products of specific national and ideological contexts.

4. The Goal Isn't Perfect Neutrality, It's Making Bias Visible :

Is it possible to create a completely unbiased AI? According to Professor Barad, that's the wrong question. Bias, in a basic sense, is simply perspective. We all have it. Preferring one author over another is a bias, but it isn't harmful.

The real danger comes from systematic bias—the kind that privileges dominant groups while misrepresenting or silencing marginalized ones. The problem isn't that AI has a point of view; it's when one specific point of view is presented as the only one.

"Bias itself is not the problem. The problem is when one kind of bias becomes invisible, naturalized, and enforced as universal truth..."

The goal, then, is not to achieve an impossible neutrality. It is to become skilled at identifying harmful biases and understanding the power structures they support, both in our society and in our technology.

5. To Decolonize AI, We Have to Upload Our Own Stories :

During the Q&A, a question was raised about AI systems reproducing knowledge from colonial archives, marginalizing indigenous perspectives. Professor Barad’s answer was a powerful call to action. He argued that communities whose stories have been ignored cannot afford to be passive consumers of this technology. They must become active creators. To correct AI's historical imbalances, we must flood the digital world with our own knowledge by uploading our histories, languages, and perspectives to create a richer, more diverse dataset for AI to learn from.

"We are great downloaders. We are not uploaders. We need to learn to be uploaders... We have to publish lots of digital content. Then the problem will be automatically solved. We have to tell our stories."

The Mirror on the Wall

Ultimately, the lecture reveals that diagnosing and fixing AI bias is less a technical problem and more a deeply human, cultural one. The algorithms are not inventing new prejudices; they are amplifying our own, reflecting the best and worst of the collective knowledge we have fed them. This leaves us with a profound responsibility. If AI is a mirror reflecting our collective biases, what image do we want to show it?


Bias Quiz




video :



Thank you ....


Comments

Popular posts from this blog

Archetypal Criticism

Book Review: Priyajan by Vinesh Antani

Indian poetics.