The state agenda for training IT and artificial‑intelligence personnel continues to expand. At the start of the academic year, it was announced that in‑depth IT training programs have launched in 26 Russian universities and that advanced AI training is running in 22. This raises the quality bar and accelerates the adoption of responsible AI practices.
For our audience, this means greater access to sound learning tracks with managed risks. Under the logic of the Code, officers responsible for compliance with شَرِيعَةٌ (Sharīʿah — Islamic law) should be appointed, and clear quality‑control mechanisms should be put in place. There is an opening to design courses that are halal by design.
In brief:
- 26 universities run in‑depth IT programs (Top‑IT).
- 22 universities run advanced AI training (Top‑AI).
- Enrollment for the current academic year is underway.
- Islamic organizations and students gain pathways to Sharīʿah‑aligned specializations.
Facts from primary sources (what happened; dates)
In early October 2025, the Government of the Russian Federation confirmed the launch of large‑scale education programs to prepare highly qualified IT personnel in 26 Russian universities. It was separately noted that advanced training for specialists working with AI had started in 22 universities. The initiatives belong to the Top‑IT and Top‑AI lines, linked to national projects on the digital economy and workforce sovereignty, as well as to university partnerships with IT companies. The announcements also referenced targets for the current year and expansion beyond capital‑city universities. Earlier, in April–May 2025, the relevant agencies reported the results of competitive selection and set the number of participating universities.
Taken together, these releases show steady deployment of new tracks with a focus on practice‑oriented competencies, including big‑data tasks and industrial AI.
Assessment under the Islamic Code for the Application of AI
Sharīʿah framework (مَقَاصِدُ الشَّرِيعَةِ — higher objectives of the law)
Training in AI and IT can serve the preservation of intellect (حِفْظُ الْعَقْلِ), property (حِفْظُ الْمَالِ), and public interest. Teaching should exclude ḥarām content and misuse patterns. When designing courses and labs, it is necessary that:
- limits and permitted use cases are documented;
- operation and audit logs are kept;
- students have a right to choose and appeal when interacting with automated assistants;
- decision‑making, for now, is left to humans.
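The four requirements above can be sketched as a minimal policy wrapper around an automated course assistant. This is an illustrative assumption only: the class name `AssistantPolicy`, the use‑case labels, and the response fields are invented for the sketch and are not part of the Code or any specific university system.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AssistantPolicy:
    """Hypothetical sketch: documented limits, audit logging,
    an appeal path, and human-made final decisions."""
    permitted_uses: set                      # documented permitted use cases
    audit_log: list = field(default_factory=list)

    def request(self, student_id, use_case):
        entry = {
            "ts": time.time(),
            "student": student_id,
            "use_case": use_case,
            "allowed": use_case in self.permitted_uses,
        }
        self.audit_log.append(entry)         # operation log kept for audit
        if not entry["allowed"]:
            # Refusals are logged and the student may appeal.
            return {"status": "refused", "appeal": "open"}
        # The assistant only drafts a recommendation; a human signs off.
        return {"status": "recommended", "final_decision": "pending_human_review"}

policy = AssistantPolicy(permitted_uses={"concept_explainer", "code_review_hint"})
ok = policy.request("s001", "concept_explainer")
bad = policy.request("s002", "exam_answering")
print(ok["final_decision"])   # pending_human_review
print(bad["appeal"])          # open
print(len(policy.audit_log))  # 2
```

The design point is that the automated layer never issues a final decision: it either refuses (leaving an appeal open) or forwards a recommendation for human review, while every interaction lands in the audit log.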
Social effects
- Faster digital integration. Top‑IT/Top‑AI build a critical mass of specialists. This speeds the move to domestic solutions and improves access to quality digital services in the regions. The effect grows where university–industry–regulator interfaces work well.
- Trust growth with proper governance. If universities enforce transparent rules and audits, public trust in AI will rise, as the 2030 Strategy model expects. This matters for religious groups that expect clear guarantees about permissible content.
- Lower inequality for Muslim youth. Access to modern AI competencies widens career paths in regions with large Muslim populations. It narrows the education gap and strengthens families’ economic independence.
- Managed risks beat delayed rollouts. Early logging, explainability, and appeal mechanisms reduce abuse and discrimination. They protect institutional reputation and improve graduate quality.
Disclaimer. This material is informational and analytical. It is not a fatwā (فَتْوَى) or legal advice. The text of the Code and decisions of competent authorities take precedence.


