Free Will and Personal Responsibility: Clause 2.1.3 of the Islamic AI Code

Artificial intelligence influences human choices. Final responsibility for decisions remains with humans. This walkthrough explains how clause 2.1.3 applies in practice.

Understanding free will supports trust and safety. Roles for people, teams, and institutions become clear. Users see the system's boundaries حُدُود (ḥudūd, limits). Organizations build processes that can be audited.

This clause is linked to clause 2.1.2 on القَدَرُ (al‑qadar, divine predestination): belief in destiny is compatible with personal responsibility. Automation does not cancel تَكْلِيفٌ (taklīf, the duty to act and be accountable).

Short Takeaways

  • Decision‑making should be left to humans in religiously significant scenarios.
  • A person responsible should be appointed for each AI function.
  • Logs should be kept and the model’s limitations should be published.
  • A user appeal point of contact should be defined.
  • Shifting blame to an algorithm’s “autonomy” is not acceptable.

Code Text (unchanged in meaning)

2.1.3. Free Will and Personal Responsibility
Each person is granted free will and bears full responsibility for decisions related to the use of AI. It is unacceptable to transfer blame or the final decision to automated systems.

Scriptural Sources and Reasoning

Theological foundations

Islam affirms human choice اِخْتِيَارٌ (ikhtiyār, choice) and responsibility مَسْؤُولِيَّةٌ (masʾūliyyah, accountability). Qurʾanic verses emphasise choice and requital: 18:29 states that truth is from the Lord and choice remains, but accountability follows. 76:3 sets the path before people, who then choose gratitude or ingratitude. 53:39 stresses personal effort. 37:39 links recompense to deeds.

Sunnah perspective

The Sunnah defines the scope of accountability. The hadith “Each of you is a shepherd” كُلُّكُمْ رَاعٍ assigns role‑based responsibility. The hadith “The pen is lifted” رُفِعَ الْقَلَمُ clarifies exceptions: a minor, a sleeping person, and one without reason. These texts mark who is accountable and when.

Conditions of taklīf

تَكْلِيفٌ (taklīf) requires three conditions: عَقْلٌ (ʿaql, reason), بُلُوغٌ (bulūgh, majority), and اِسْتِطاعَةٌ (istiṭāʿa, capacity). In AI work, teams and users remain مُكَلَّفٌ (mukallaf, legally responsible). They choose the architecture, risk thresholds, and datasets. A tool can assist, but it does not form نِيَّةٌ (niyya, intention).

Kasb and causality

Classical concepts كَسْبٌ (kasb, acquisition of acts) and أَسْبابٌ (asbāb, causes) link intent and outcome. Causes are considered, yet they do not excuse harm. In AI, causes appear as architecture, pipelines, metrics, and moderation. These elements create the causal chain. Offloading blame to “algorithmic autonomy” contradicts kasb.

Procedural inferences

Verses on oversight and caution (17:36; 2:286) ground procedure. One should not assert without knowledge. No one is burdened beyond capacity. Legal maxims apply: اليقين لا يزول بالشك (certainty is not removed by doubt) and الضرر يزال (harm must be removed). For AI this means verification, testing, error logs, and harm mitigation.

Alignment with the Code

These foundations align with the Code. Clause 2.1.8 requires verification of religious information. Clause 2.1.6 affirms عَدْلٌ (ʿadl, justice). Clause 2.2.31 places indirect responsibility on developers and operators. Clause 2.2.32 limits full automation in aḥkām. Clause 2.2.33 secures choice and appeal. Clause 2.2.35 requires expert review of religious data.

Historical practice

Sharīʿa practice has long engaged tools. Judges used scribes and reference works. Muftis checked answers against lexical works and ḥadīth corpora. The tool remains a witness, not a judge. The principle سَدُّ الذَّرائِعِ (sadd al‑dharāʾiʿ, blocking the means to harm) justifies technical limits. The principle المَصالِحُ (maṣāliḥ, welfare) supports deployments when harm is minimised.

Conclusion

AI participates in سَبَبِيَّةٌ (sababiyya, causality). It is not a bearer of legal responsibility. Sharīʿa evaluates human intention and choice. Hence traceability, clear roles, and the option to review should be in place. These measures strengthen free will and identify the real accountable party.

Practice: Processes and Controls

Roles and accountability

  • A person responsible should be appointed for each relevant AI function.
  • Approval logs and model change logs should be kept.
  • A point of contact for Islamic questions should be defined (a minimal sketch of these controls follows this list).
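
One possible, non‑normative way to record these controls in code is sketched below. The class, field names, and addresses are illustrative assumptions, not requirements of the Code; the point is simply that each AI function carries a named responsible person, a contact point, and an append‑only change log.

```python
# A minimal sketch of an accountability registry; names and addresses are
# illustrative assumptions, not requirements of the Code.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccountabilityRecord:
    function_name: str        # e.g. a drafting assistant or recommender
    responsible_person: str   # the appointed human owner of this function
    islamic_contact: str      # point of contact for Islamic questions
    change_log: list = field(default_factory=list)

    def log_change(self, author: str, description: str) -> None:
        """Append an approval or model-change entry with a timestamp."""
        self.change_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "author": author,
            "change": description,
        })

# Usage: register a function and record an approved model change.
record = AccountabilityRecord(
    function_name="fatwa_drafting_assistant",
    responsible_person="product.owner@example.org",
    islamic_contact="sharia.board@example.org",
)
record.log_change("product.owner@example.org", "Retrieval corpus updated to v2 after board approval.")
```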

Human control

  • Decision‑making should be left to humans in scenarios of religious and social significance.
  • For LLM (large language model) workflows, a Human‑in‑the‑Loop process should be in place (see the sketch after this list).
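
As a rough illustration, assuming a simple draft‑and‑review workflow (the names Draft and release are invented for this example), a Human‑in‑the‑Loop gate can be expressed as: nothing religiously significant reaches the user until a human has explicitly decided.

```python
# A minimal Human-in-the-Loop sketch; class and function names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    question: str
    model_answer: str
    religiously_significant: bool

def release(draft: Draft, human_approved: Optional[bool]) -> Optional[str]:
    """Return text to show the user; a human decision gates significant cases."""
    if not draft.religiously_significant:
        return draft.model_answer           # assistive, low-stakes use
    if human_approved is None:
        return None                         # pending review: nothing is released
    return draft.model_answer if human_approved else None

# The reviewer, not the model, makes the final call.
draft = Draft("Is this contract permissible?", "Draft answer ...", religiously_significant=True)
print(release(draft, human_approved=None))  # None: still awaiting the human reviewer
print(release(draft, human_approved=True))  # released only after explicit approval
```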

Transparency and wording

  • The model’s limitations should be published in clear language.
  • Interface claims such as “the AI decided for you” are not permitted.
  • Scenarios where the algorithm only assists should be described (a wording check is sketched below).
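
One illustrative way to enforce these wording rules is an automated check on interface copy. The prohibited phrases and the requirement of a limitations link below are assumptions made for the example, not an exhaustive policy.

```python
# An illustrative wording check for interface copy; the prohibited phrases and
# the limitations-link requirement are example assumptions, not a full policy.
PROHIBITED_CLAIMS = (
    "the ai decided for you",
    "fully autonomous ruling",
)

def check_interface_copy(copy_text: str, limitations_url: str) -> list:
    """Return a list of problems found in proposed interface wording."""
    problems = []
    lowered = copy_text.lower()
    for phrase in PROHIBITED_CLAIMS:
        if phrase in lowered:
            problems.append(f"prohibited claim: '{phrase}'")
    if not limitations_url:
        problems.append("no link to the published limitations page")
    return problems

# Usage: copy that presents the AI as the decider fails the check.
print(check_interface_copy("The AI decided for you.", limitations_url=""))
```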

Data and checks

  • Data sources should comply with 2.2.27 (honesty/“halal” sourcing).
  • Religious content should undergo expert review per 2.2.35.
  • Logs of refusals, escalations, and user appeals should be kept (one possible log format is sketched after this list).
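
The following is only a sketch of such a log, assuming an append‑only JSON Lines file; the file name, event kinds, and fields are illustrative, and a real deployment would add retention and access controls.

```python
# A sketch of an append-only event log for refusals, escalations, and appeals;
# the file name and fields are assumptions, not Code requirements.
import json
from datetime import datetime, timezone

LOG_PATH = "ai_events.jsonl"

def log_event(kind: str, detail: str, user_id: str) -> None:
    """Append one refusal / escalation / appeal event as a JSON line."""
    assert kind in {"refusal", "escalation", "appeal"}
    event = {
        "at": datetime.now(timezone.utc).isoformat(),
        "kind": kind,
        "detail": detail,
        "user": user_id,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(event, ensure_ascii=False) + "\n")

# Usage: a refusal and the subsequent user appeal both leave an auditable trace.
log_event("refusal", "Question requires a qualified mufti; answer withheld.", user_id="u123")
log_event("appeal", "User asked for human review of the refusal.", user_id="u123")
```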

Examples of correct application

  • Fatwa assistant. The model drafts answers; a human mufti makes the final decision. Decision logs are kept. Limitations are published.
  • Islamic banking. A recommender proposes options. An appointed product owner is accountable. Users are granted a right to appeal.
  • Education. A lesson generator flags uncertainty. Source logs are kept. A teacher approves the final result (this example is sketched below).
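
A minimal sketch of the education example follows, with invented class and field names: the generator's draft carries a source log and an uncertainty flag, and nothing is publishable without a named approving teacher.

```python
# An illustrative draft-and-approve structure for the lesson-generator example;
# class and field names are invented for this sketch.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LessonDraft:
    topic: str
    content: str
    sources: List[str] = field(default_factory=list)  # kept as a source log
    uncertain: bool = False                            # model flags low confidence
    approved_by: Optional[str] = None                  # teacher who signed off

def publish(draft: LessonDraft) -> bool:
    """A draft is publishable only with sources and a named approving teacher."""
    return bool(draft.sources) and draft.approved_by is not None

draft = LessonDraft(
    topic="Zakat basics",
    content="Draft lesson text ...",
    sources=["Textbook ch. 3"],
    uncertain=True,
)
print(publish(draft))            # False: no teacher has approved yet
draft.approved_by = "teacher@example.org"
print(publish(draft))            # True: source log present and approval recorded
```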

Debatable examples

  • Autonomous moderator of Sharīʿa content. Fully automated blocking with no right to appeal conflicts with 2.2.33. Whether such moderation is acceptable without any human review remains an open question.
  • Automatic issuance of fatāwā. The algorithm issues final rulings. This conflicts with 2.2.32.

Typical mistakes and how to avoid them

  • Blaming “the algorithm.” Each decision should have an appointed responsible person. Clear roles and logs prevent this.
  • No appeals. A contact point and a review procedure should be defined.
  • Marketing “full autonomy.” Limitations and the human role should be published.
  • Mixing advisory and deciding functions. In religious matters AI remains a supporting tool.

Mini‑FAQ

Can AI be trusted for everyday choices?
Yes, when limitations and the human role are clear. Logs should be kept and an appeal path should be available.

Who is responsible for harm from AI recommendations?
Under 2.2.31, developers and operators bear responsibility for the scenarios they design.

Where is the line between a hint and a decision?
Decision‑making should be left to humans. AI proposes options; a human accepts or rejects them.

Disclaimer
This walkthrough is explanatory and is not a fatwa; the priority lies with the Code’s text and the fatwas of official scholars.
