Michigan bans intimate deepfakes: compliance with the Islamic AI Code

On August 26, 2025, the State of Michigan criminalized the creation and distribution of intimate deepfakes: synthetic images, video, or audio depicting a real person in a sexual context without consent. The new law, officially titled the Protection from Intimate Deep Fakes Act (HB 4047/4048), introduces both criminal and civil liability. Below, we explain in plain terms what exactly is prohibited, what sanctions apply, and how this relates to the "Islamic Code for the Application of AI": where there is full compliance, where divergences are possible, and in which processes compliance is conditionally achievable.

Contents

What happened and why it matters

Context and confirmed facts

Assessment under the Islamic Code for the Application of AI

Compliance with the Code

Divergences/non-compliance

Conditional compliance

Compliance matrix

Practical conclusions and actions (checklist)

Risks/limitations and what must not be done

Check outside the Code

Candidates for updating the Code

What’s next: monitoring and preparation

Section disclaimer

What happened and why it matters

On August 26, 2025, the Governor of Michigan signed a package of laws (HB 4047/4048) that directly prohibits the creation and distribution of nonconsensual intimate deepfakes (video/photo/audio) depicting a real person in a sexual context. The law provides for criminal penalties and a civil cause of action, as well as special rules on consent and limited "immunity" protection for infrastructure providers and technology developers when established conditions are met. legislature.mi.gov; FOX 47 News Lansing – Jackson (WSYM)

For Muslims, this topic is important because it concerns the protection of honor (حِفْظُ العِرْضِ — preservation of honor), privacy, and morality, as well as the policies of platforms and developers of AI tools. In the “Islamic Code for the Application of AI,” these are direct orientations toward protecting dignity and private life, preventing harm, and ensuring the responsibility of technology operators.

What is a deepfake?
A deepfake is synthetic content (an image, video, or audio clip) that realistically imitates a real person, created with AI or other digital means, including image/video/audio generation models and LLMs (large language models) as part of the pipeline. legislature.mi.gov

Context and confirmed facts

What is prohibited. The intentional creation or distribution of intimate deepfakes when: (1) the person knew or reasonably should have known that this would cause physical, emotional, reputational, or economic harm, or acted with the purpose of harassment/extortion/threats/causing harm; (2) the deepfake realistically depicts intimate parts or a sexual act; (3) the person is identifiable. legislature.mi.gov

Criminal liability.

Baseline — misdemeanor: up to 1 year of imprisonment and/or a fine of up to $3,000.

With aggravating circumstances — felony: up to 3 years and/or up to $5,000.
Aggravating factors include, in particular: deriving profit; repeat violation; publication on a website; operating a site/service/app to create/distribute; intent of harassment/extortion/threat/harm; causing financial damage. legislature.mi.gov

Civil cause of action. The victim may file a lawsuit (including confidentially, under a pseudonym) and seek compensation for economic and moral harm, disgorgement of the offender's profit, as well as a fine of up to $1,000 per day for violating a court order (TRO/injunction). legislature.mi.gov

Consent. Consent is not recognized as a defense unless it is documented in writing in plain language, voluntarily signed by the depicted person, and contains a general description of the intimate digital image/audiovisual work. legislature.mi.gov

Technological neutrality and re-uploads. The prohibition covers AI and any digital means, as well as repeated uploads/publications (for example, posting on a website is considered an aggravating factor for criminal qualification). legislature.mi.gov

Platform responsibility and exceptions. Exempted from liability (subject to conditions) are internet providers, networks, and technology providers/developers if the technology is not designed/promoted/deployed for illegal intimate deepfakes and if the service rules prohibit the corresponding content. legislature.mi.gov

Signing and news coverage. Local media and information services reported the signing of the package on August 26, 2025. WDIV; FOX 47 News Lansing – Jackson (WSYM); News From The States

Assessment under the Islamic Code for the Application of AI

Compliance with the Code

Protection of honor and privacy (حِفْظُ العِرْضِ). Criminalizing intimate deepfakes directly supports the protection of personal, family, and private dignity. This follows from the objectives of the Sharia and the Code’s principles on preserving lineage and honor.

Human and operator responsibility. Introducing criminal/civil liability aligns with the Code’s requirement to ensure redress for harm and personal responsibility of participants in the AI lifecycle.

Prohibition of deception (غَرَر) and prevention of evil (مُنْكَرٌ). Intimate deepfakes are a form of deception and harm; their suppression is consistent with the Code’s norm of “preventing means to sin” and the priority of harm elimination.

Divergences/non-compliance

Immunities for providers/developers. The law allows exceptions for infrastructure actors and creators of technologies when conditions are met. In the logic of the Code, proactive barriers, regular audit, and reporting are expected, even if formal immunity is possible. legislature.mi.gov

Consent standards. The law clearly requires written consent, but in Islamic ethics the bar may be higher: avoid doubt (shubha), ensure clarity and verifiability, including through independent recording in processes. legislature.mi.gov

Conditional compliance

Activity may be considered compliant if processes/controls are in place:

moderation logs and takedown policies;

deepfake detectors, digital watermarks, and hash banks for re-uploads;

an internal SLA for removing illegal content within 24–72 hours after a verified complaint (this is a compliance/Code recommendation, not Michigan law);

procedures for interaction with law enforcement and support for victims. legislature.mi.gov
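The "hash bank" control above can be illustrated in a few lines. This is a minimal Python sketch assuming a simple perceptual average-hash over small grayscale grids; the names `ahash` and `HashBank` are hypothetical, and a production system would use an image library and a robust perceptual-hashing scheme rather than this toy.

```python
# Minimal sketch of a hash bank for re-upload detection.
# Assumption: images arrive as 8x8 grayscale grids (real systems
# would downscale with an image library first).

def ahash(pixels):
    """Average hash: one bit per pixel, set if the pixel >= grid mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

class HashBank:
    """Bank of hashes of known illegal content; near-matches flag re-uploads."""
    def __init__(self, max_distance=5):
        self.known = set()
        self.max_distance = max_distance

    def add(self, h):
        self.known.add(h)

    def is_reupload(self, h):
        return any(hamming(h, k) <= self.max_distance for k in self.known)
```

Because matching is by Hamming distance rather than exact equality, lightly edited re-uploads (small crops, brightness shifts) can still be caught, which is the point of a perceptual hash bank as opposed to exact file hashing.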

Compliance matrix

Thesis/fact — Code clause(s) — Status — Comment/control
Criminalization of intimate deepfakes — Preservation of honor/privacy; prevention of harm — Compliance — Supports حِفْظُ العِرْضِ and private life.
Civil action and compensation — Responsibility, harm redress — Compliance — Including offender’s profit and fine for violating injunctions. legislature.mi.gov
Partial “immunity” of platforms/tech providers — Responsibility of developers/operators — Partial/conditional — Proactive measures, audit, reporting required. legislature.mi.gov
Requirement of explicit written consent — Transparency, respect for the person — Compliance — Emphasizes the need for written form and description. legislature.mi.gov

Practical conclusions and actions (checklist)

AI developers/providers

Include a prohibition on generating intimate deepfakes in ToS/rules and apply filters.

Enable synthetic-content labels/watermarks, connect hash banks; store moderation logs ≥ 12 months.

Appoint those responsible for compliance and moderation; conduct risk assessment and regular audit.

Implement an internal RRT process: targeted removal of clear violations ≤ 24–72 hours after a confirmed complaint (Code/best-practice recommendation, not a requirement of Michigan law).
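The RRT removal window can be tracked with a small record per complaint. A minimal sketch in Python; the 24–72 hour SLA is the checklist's own recommendation (not a requirement of the Michigan statute), and the names `Complaint` and `within_sla` are illustrative.

```python
# Sketch of tracking the recommended removal window after a verified
# complaint. The 72-hour limit below is the upper bound of the
# article's recommended 24-72h window, not a statutory deadline.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

SLA_LIMIT = timedelta(hours=72)

@dataclass
class Complaint:
    url: str
    verified_at: datetime
    removed_at: Optional[datetime] = None

    def deadline(self) -> datetime:
        """Latest removal time that still meets the recommended SLA."""
        return self.verified_at + SLA_LIMIT

    def within_sla(self, now: datetime) -> bool:
        """True if removal happened in time, or can still happen in time."""
        if self.removed_at is not None:
            return self.removed_at <= self.deadline()
        return now <= self.deadline()
```

Keeping `verified_at` and `removed_at` as explicit timestamps also gives the moderation log the audit trail mentioned above (logs retained ≥ 12 months).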

Platforms/social networks

Connect hash banks and cross-platform signature sharing.

Open a confidential complaint channel; pause dissemination during verification.

Publish quarterly moderation reports (transparency).
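At its core, a quarterly transparency report is a count of moderation actions grouped by calendar quarter. A toy Python sketch under that assumption (the action labels and function names are illustrative, not from any platform's API):

```python
# Toy aggregation of moderation actions into a quarterly report.
# Action labels ("removed", "rejected") are illustrative assumptions.
from collections import Counter
from datetime import date

def quarter_key(d: date) -> str:
    """Map a date to its calendar quarter, e.g. '2025-Q3'."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

def quarterly_report(actions):
    """actions: iterable of (date, action_label) pairs."""
    report = Counter()
    for d, action in actions:
        report[(quarter_key(d), action)] += 1
    return report
```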

Imams/community leaders

Explain the sinfulness of creating/distributing such content and ways to help victims.

Organize confidential consultations; help with evidence preservation.

Users

Do not create or share dubious materials; consider criminal liability.

Upon discovery — complain to the platform and record the URL/screenshots/metadata.

Risks/limitations and what must not be done

Extraterritoriality. State law applies within its jurisdiction; international dissemination requires interaction with other legal regimes.

False accusations. Fast expert procedures and protection against defamation are needed.

Over-blocking. Risk of excessive filtering of lawful content — careful detector tuning and appeals are required.

Check outside the Code

No significant new circumstances contradicting Islam (الشَّرِيعَةُ) and not reflected in the current version of the Code have been identified. We monitor the robustness of labels/watermarks against evasion and standards of evidence for courts and Sharia councils.

Candidates for updating the Code

SLA and platform response protocol (timelines, notification format, log retention, interaction with authorities).

Requirements for synthetic-content labels (interoperability, robustness, open specifications).

Digital evidence (methods of collection/storage/transfer for judicial and religious expertise).

What’s next: monitoring and preparation

Track law enforcement (first cases) and the compatibility of state requirements with global platform policies. WWMT

Align internal policies with the Code’s norms on privacy, responsibility, and harm prevention; conduct a repeat audit.

Follow federal initiatives on synthetic-content labeling and the NO FAKES Act (a U.S. Congress-level initiative, still in progress). americanbar.org ("Right of publicity")

Section disclaimer

This “AI News” section material is informational-analytical, prepared in the logic of the “Islamic Code for the Application of AI,” and is not a fatwa. For religious-legal decisions, consult competent ʿulamā’. Technical and legal measures require adaptation to your jurisdiction and project.
