AI can already generate realistic photos, videos, and audio. Many Muslims encounter clips that provoke anger or fear, and without verification, sharing such content can lead to sin and to harm against other people.
The Islamic Code for the Application of AI reminds us that preserving reason and protecting people's honor remain priorities. Any AI-generated content therefore demands careful attention and clear rules from the believer.
- Deepfakes have appeared that are difficult to distinguish from reality.
- Islam prohibits spreading gossip and slander.
- AI accelerates the spread of false news and rumors.
- A Muslim should check sources before sharing content.
- Decisions about people must be based only on verified data.
How deepfakes work and why it matters
Modern AI models are trained on large collections of images and audio. The system then generates a new image or voice that looks and sounds like a real person. This kind of content is called a deepfake.
A Muslim may watch a clip and assume the event really happened, when in fact the scene is entirely artificial. The risk is especially high when a deepfake concerns imams, scholars, or other well-known people.
The Islamic Code for the Application of AI links this topic to the principle of trust: working with information is treated as an amanah. Not every technological effect, therefore, is an acceptable way to convey information about people.
A sharia perspective: verifying news and protecting honor
The Qur’an points to the obligation to verify reports carefully: if news arrives from a wrongdoer, it must be checked so that people are not harmed unjustly (Qur’an 49:6). The same principle applies to content created by AI.
Spreading unverified videos and audio can amount to gossip or even serious slander, especially if a clip shows sins, violence, or supposed quotations from religious figures. In such cases, the harm strikes both personal honor and trust within the community.
The Islamic Code for the Application of AI stresses that preserving honor and dignity falls within the goals of sharia. Work with suspicious content should therefore be built around minimizing harm. Automatic ranking and recommendation systems do not lift personal responsibility from the believer.
What is acceptable, what requires caution, and what is unacceptable
| Category | Description | Examples |
|---|---|---|
| Acceptable | Careful viewing of news and AI clips while checking the sources. | Educational videos about AI from reputable institutes; expert explanations about deepfakes. |
| With caution | Sharing material only after additional checking and clarification. | News about a conflict where editing is possible; clips featuring well-known Muslims. |
| Unacceptable | Spreading suspicious videos and audio without verification and advice. | “Leaked” private conversations, indecent clips attributed to a believer, deepfakes with haram. |
Each family and community should appoint someone responsible for checking controversial material before it is posted in group chats. Official mosque pages should define a contact point for questions of accuracy and corrections, and community media should keep a log of how doubtful videos and news items were verified.
How to act in practice
- On receiving a clip or news item, pause and do not forward it immediately.
- Check the source: who published the material, whether there is context, and whether it carries a date.
- Compare the news with several reliable outlets or official statements.
- If a deepfake is suspected, mute the sound, look closely at the face, and watch the lip and hand movements.
- If the honor of a specific Muslim is in doubt, stop spreading the content until the situation is clarified.
- Community channels should publish their rules on the use of AI and the handling of personal data.
- Decisions about a person based on a clip must be left to a human being after consulting scholars; an algorithm or a crowd in the comments cannot bear that responsibility.
Common mistakes and how to avoid them
- Trusting a clip just because it is “viral”. To reduce this risk, consciously separate emotion from the analysis of the facts.
- Assuming that “if it was posted in a Muslim chat, it must be true”. To prevent this mistake, administrators should introduce verification rules and remind participants of them.
- Leaving offensive comments under a suspicious video. Here it helps to remember the prohibition of gossip and the responsibility for every word.
- Trying to “expose sinners” on one’s own without weighing the consequences. To avoid harm, remember the danger of sowing discord and do not fuel it with unnecessary actions.
When to consult an imam or specialist
Consulting an imam or a knowledgeable sharia specialist is appropriate when a clip touches on halal and haram, family relations, or the reputation of a specific person. Advice is also needed when a deepfake concerns religious teaching or supposed quotations from scholars.
For mass messages in community chats, the imam or sharia council should agree on a news-handling policy in advance, publishing clear rules for participants and a procedure for correcting mistakes. For complex technical questions, it is useful to involve cybersecurity and media specialists.
Brief guidance
Information now moves in a fast stream, and deepfakes have only intensified it. A Muslim who strives for God-consciousness treats every clip as a possible test; verifying information and protecting people’s honor become a constant form of worship.
Working with AI and media requires discipline, awareness of the goals of sharia, and responsibility before Allah ﷻ. Each community should establish a transparent system for handling news and complaints about content. Then technology stops leading the person and instead becomes a controlled tool.
The advice provided is for information only and does not constitute a fatwa; in doubtful situations, please consult a knowledgeable scholar.