
Fighting Islamophobia with Context-Aware AI

How we built ShamelaGPT — an AI system grounded in verified Islamic scholarship that provides accurate, nuanced answers to combat misinformation.

The Challenge

Misinformation about Islam spreads faster than corrections ever can. Out-of-context quotes, distorted interpretations, and deliberate misrepresentations circulate freely across social media and search results. Mainstream AI chatbots, trained on general internet data, often reproduce these inaccuracies — amplifying harm rather than correcting it.

The Muslim community needed an AI tool that could provide accurate, scholarly responses — grounded in authentic texts and cultural context — accessible to anyone, anywhere.

Our Approach

We built ShamelaGPT as a context-aware AI system that differs from general-purpose chatbots in three fundamental ways: its answers are grounded in a curated corpus of authentic Islamic texts rather than general internet data; every response is cross-checked against those sources and tagged by verification status; and the product is built for its community's linguistic and cultural context, with multilingual, right-to-left, and offline support.

Technical Architecture

ShamelaGPT is a cross-platform mobile application built for native performance, using SwiftUI on iOS and Jetpack Compose on Android, with a unified design system shared across both platforms.

Stack: SwiftUI (iOS), Jetpack Compose (Android), RAG pipeline, source verification engine, speech-to-text, OCR, RTL layout support, offline-first architecture.
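The three input modalities in the stack (typed text, speech, and photographed text) can be thought of as a single normalization step before retrieval. The sketch below is illustrative only: `transcribe` and `ocr` stand in for platform speech-to-text and OCR services and are not ShamelaGPT's actual APIs.

```python
def transcribe(audio: bytes) -> str:
    """Placeholder for an on-device speech-to-text call (assumed, not real)."""
    raise NotImplementedError

def ocr(image: bytes) -> str:
    """Placeholder for an OCR call on a photographed page (assumed, not real)."""
    raise NotImplementedError

def normalize_query(payload, modality: str) -> str:
    """Route typed text, speech, or an image to one normalized query string."""
    if modality == "text":
        return payload.strip()
    if modality == "speech":
        return transcribe(payload).strip()
    if modality == "image":
        return ocr(payload).strip()
    raise ValueError(f"unknown modality: {modality}")
```

Collapsing every modality to a plain query string early keeps the rest of the pipeline (retrieval, generation, verification) identical regardless of how the question arrived.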

The source verification engine uses a Retrieval-Augmented Generation (RAG) pipeline that cross-references AI responses against a curated corpus of classical Islamic texts, modern scholarship, and verified hadith collections. Responses are tagged as verified or unverified in real time.
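The tagging step described above can be sketched in a few lines. This is a minimal illustration under assumed names and thresholds (`Passage`, `tag_response`, the 0.80 cutoff), not the production implementation: retrieved passages carry a similarity score against the user's query, and an answer is marked verified only when at least one curated source scores above the threshold.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source: str        # e.g. a hadith collection or classical text
    text: str
    similarity: float  # retrieval score against the user query

def tag_response(answer: str, passages: list[Passage],
                 threshold: float = 0.80) -> dict:
    """Tag an answer as 'verified' only when it is grounded in at least
    one curated passage whose retrieval score clears the threshold."""
    supporting = [p for p in passages if p.similarity >= threshold]
    return {
        "answer": answer,
        "status": "verified" if supporting else "unverified",
        "sources": [p.source for p in supporting],
    }

passages = [
    Passage("Sahih al-Bukhari 1:2", "…", 0.91),
    Passage("Modern commentary", "…", 0.62),
]
result = tag_response("…", passages)
print(result["status"], result["sources"])  # → verified ['Sahih al-Bukhari 1:2']
```

The design choice worth noting is that verification is a property of the retrieval evidence, not of the generated text itself, so an unsupported answer is surfaced to the user as unverified rather than silently presented as fact.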

3 languages supported · 2 native platforms · 3 input modalities

The Impact

ShamelaGPT represents a new category of AI: systems designed not just to answer questions, but to actively counter misinformation by providing verifiable, sourced, culturally aware responses.

By making authentic Islamic knowledge accessible and conversational, we're helping build a more tolerant, informed world — one query at a time. The system shows that AI can be a force for understanding rather than division when designed with the right intent and the right data.


Building something that fights misinformation?

We partner with teams applying AI to sensitive, high-stakes domains where accuracy and trust are essential.

Start a Conversation