The Code reminds us that the foundation is التَّوْحِيدُ (tawḥīd, monotheism). Technology must not become an object of veneration. AI is only a tool with clear حُدُودٌ (boundaries).
- The norm forbids “techno-deification” and the idealization of technology.
- Responsibility and choice remain with the human (المُكَلَّفُ).
- Communication and interface design must avoid anthropomorphizing AI.
- Oversight is embedded at the levels of data, algorithms, and language.
1) Wording and Scope of Application
Item §2.1.1, “Monotheism and a warning against techno-deification.” AI development and operation must rest on strict adherence to the principle of monotheism. It is impermissible to treat AI as an independent force, to attribute divine qualities to it, or to idealize technology as an alternative to religious values.
Where it applies. All civil AI solutions: chatbots, recommender services, generative models, education and fintech platforms, and religious assistants. Public statements and documentation are also covered.
2) Practice: Processes and Oversight
Processes (minimum baseline)
- Theological tawḥīd filter. A checklist of risks of deifying AI and technology must be embedded in the technical specification, content guidelines, and response scenarios.
- Language and UX. Phrases such as “AI knows/decides for you” and “all-seeing/omnipresent intelligence” must be excluded. Neutral terms such as “system” or “tool” are acceptable.
- Roles. A person responsible for doctrinal matters must be appointed, and a procedure for consultations with scholars must be defined.
- Team training. Guidance on the risks of “anthropomorphizing” and idolizing technology must be delivered.
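The language-and-UX requirement above can be enforced mechanically. A minimal sketch of a copy linter follows; the phrase list and function names are illustrative assumptions, not part of the Code, and a real deployment would maintain the list with the person responsible for doctrinal matters.

```python
import re

# Illustrative deny-list built from the examples in this section;
# an actual list must be approved through the doctrinal review process.
FORBIDDEN_PATTERNS = [
    r"\bAI (knows|decides) for you\b",
    r"\ball-seeing\b",
    r"\bomnipresent intelligence\b",
    r"\bAI (understands|wants|believes)\b",
]

def check_copy(text: str) -> list[str]:
    """Return the patterns of deifying/anthropomorphizing language found in UI copy."""
    hits = []
    for pattern in FORBIDDEN_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            hits.append(pattern)
    return hits

# Usage: run over interface strings before release; a non-empty result blocks the build.
violations = check_copy("Our all-seeing AI decides for you.")
assert violations
```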
Oversight
- Release audit. Each release must be audited for conformity with tawḥīd, covering content, iconography, texts, and examples.
- Data moderation. Datasets that glamorize shirk (associating partners with God) or kufr (disbelief) must be excluded.
- Emergency rollback. A procedure must be in place to disable features that mislead users religiously.
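The emergency-rollback item above can be sketched as a simple kill switch. This is a hypothetical illustration: the flag store, feature names, and functions are assumptions, and a production system would use its own feature-flag infrastructure.

```python
# Hypothetical in-memory flag store; feature names are illustrative only.
FEATURE_FLAGS = {
    "auto_ruling_answers": True,   # answers that could be read as rulings
    "persona_avatar": True,        # humanlike avatar for the assistant
}

def disable_feature(name: str) -> None:
    """Emergency rollback: switch off a feature that may mislead users religiously."""
    FEATURE_FLAGS[name] = False

def is_enabled(name: str) -> bool:
    return FEATURE_FLAGS.get(name, False)

# Usage: on escalation, the responsible reviewer disables the feature
# pending consultation with scholars.
disable_feature("auto_ruling_answers")
assert not is_enabled("auto_ruling_answers")
```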
3) Examples of correct application (and borderline cases)
Correct
- A religious chatbot that states: “This service does not issue فَتْوَى (fatwā); the final decision is made by a human/imam.”
- An educational platform where AI suggests sources and options, while the learner makes the decisions. Clear model limitations are shown in the interface.
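The chatbot example above implies that the limitation notice should be attached to every answer rather than shown once. A minimal sketch, assuming a hypothetical `wrap_answer` helper (the exact wording would come from the service's approved texts):

```python
# Mandatory limitation notice; wording adapted from the example in this section.
DISCLAIMER = (
    "This service does not issue a fatwā; "
    "the final decision is made by a human/imam."
)

def wrap_answer(answer: str) -> str:
    """Attach the limitation notice to every model answer before display."""
    return f"{answer}\n\n{DISCLAIMER}"
```

Appending the notice in code, rather than in the prompt, guarantees it cannot be dropped by the model itself.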
Borderline
- Advertising that says “An AI-guru will replace your mentor” — a risk of idolization and replacing a living scholar.
- A “robot-imam” that automatically issues categorical rulings without Sharia expertise — a high risk of misguidance.
4) Common mistakes and how to avoid them
- Anthropomorphizing AI. Verbs like “understands,” “wants,” “believes” must be replaced with “processes,” “assesses,” “models.”
- Cult of efficiency. It is impermissible to pit “technology” against religious tradition. Benefit must be subordinated to Sharia.
- Lack of religious oversight. A channel for contacting religious experts must be established; logs of contentious cases must be maintained.
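The verb substitutions listed in the first mistake above can be applied automatically to draft copy. A sketch follows; the mapping pairs the verbs in the order the section lists them, and the function name is an illustrative assumption.

```python
import re

# Mapping taken from the list order in this section: understands -> processes,
# wants -> assesses, believes -> models. Extend it under doctrinal review.
NEUTRAL_VERBS = {
    "understands": "processes",
    "wants": "assesses",
    "believes": "models",
}

def neutralize(text: str) -> str:
    """Replace anthropomorphizing verbs in copy with neutral technical ones."""
    for human, neutral in NEUTRAL_VERBS.items():
        text = re.sub(rf"\b{human}\b", neutral, text)
    return text

# Usage:
# neutralize("The AI understands your request.")
# -> "The AI processes your request."
```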
Disclaimer. This analysis is explanatory and not a فَتْوَى (fatwā); priority is given to the text of the Code.