Summary of Draft AI Guidelines and Rules
- Tech Reg Forum
- Nov 28
- 6 min read
This post collects portions of interest from India’s Draft AI Guidelines, alongside the Draft AI Rules proposed by the Ministry of Electronics and Information Technology (“MeitY”) in October 2025. For context, the Digital Personal Data Protection (“DPDP”) framework establishes obligations for lawful, purpose-specific processing of personal data, with clear consent requirements, enforceable rights for individuals, fiduciary responsibilities for organisations, and special safeguards for children and cross-border transfers. The Draft AI Rules focus on synthetically generated content, requiring platforms to implement labelling, metadata, user declarations, monitoring, and risk-mitigation measures to prevent misuse such as misinformation, deepfakes, and threats to public safety.
Draft AI Guidelines
India’s AI Governance Guidelines propose a governance framework designed to foster innovation while ensuring AI is developed and deployed safely, mitigating risks to individuals, society, and critical systems.
The framework is structured around four interlinked components:
1. Seven Guiding Principles (Sutras) for Ethical and Responsible AI
Trust: AI systems should operate reliably and predictably, instilling confidence in users.
People-First Design: AI must enhance human decision-making and preserve human dignity.
Innovation over Restraint: Encourages experimentation and creative solutions while maintaining safety and accountability.
Fairness: AI should prevent bias and discrimination; datasets and models must be representative.
Accountability: Clear assignment of responsibility for AI outcomes, including harms or failures; includes grievance redressal and impact assessment.
Understandability: AI outputs and functioning should be interpretable for users and regulators.
Safety and Resilience: Systems should be robust, secure, and able to withstand errors, attacks, or unintended consequences.
2. Key Recommendations Across Six Pillars of AI Governance
Infrastructure: Promote access to computational resources, national datasets, and integration with Digital Public Infrastructure (DPI) to support AI innovation.
Capacity Building: Develop AI literacy among public servants, developers, regulators, and citizens, with special focus on underserved areas.
Policy and Regulation: Use a risk-based, sector-specific approach, leveraging existing laws while allowing for targeted amendments rather than a sweeping AI-specific law.
Risk Mitigation: Implement risk-assessment frameworks, robust validation, testing, and monitoring; emphasize ethical use, privacy, fairness, and human-centered design.
Accountability: Establish grievance redressal, audits, and oversight mechanisms for high-risk AI systems; enforce responsibility for errors or harms.
Institutional Mechanisms: Set up multi-stakeholder bodies for governance, including oversight committees, advisory groups, and safety institutes.
3. Action Plan Mapped to Short, Medium, and Long-Term Timelines
Short-term: Establish governance structures, define risk frameworks, create awareness programs, and develop initial grievance mechanisms.
Medium-term: Pilot regulatory sandboxes, implement standards for AI design, integrate AI with DPI systems, and scale up capacity-building initiatives.
Long-term: Consider formal AI-specific regulations if necessary, expand safety testing and audit frameworks, foster international collaboration, and continuously update guidelines based on evolving AI risks and innovations. ([Feb 2024 PDF, pp. 26–29]; [Nov 2025 PIB PDF, pp. 16–19])
4. Practical Guidelines for Industry, Developers, and Regulators
Developers: Implement privacy-by-design, fairness checks, bias mitigation, and robust testing; ensure transparency of outputs.
Industry/Organizations: Adopt internal governance mechanisms, maintain documentation for accountability, and follow ethical standards aligned with the seven guiding principles.
Regulators: Monitor compliance, assess high-risk AI systems, and engage in multi-stakeholder consultations; promote safe deployment without stifling innovation.
Synthetic/Generative AI: Ensure proper labeling, metadata embedding, traceability, and safeguards against misuse or harmful outputs.
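For illustration, here is a minimal sketch of what visible labelling plus embedded metadata could look like for a single image, assuming Pillow is installed; the metadata key names and generator value are hypothetical, not prescribed by the Guidelines:

```python
# Minimal sketch: add a visible "AI-generated" label to an image and embed
# provenance metadata in the PNG. Key names are illustrative only.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path).convert("RGB")

    # Visible label, matching the expectation that labels on visual
    # content be perceptible to viewers.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 24), "AI-generated", fill=(255, 255, 255))

    # Embedded metadata identifying the content as synthetic.
    info = PngInfo()
    info.add_text("SyntheticContent", "true")       # hypothetical key
    info.add_text("Generator", "example-model-v1")  # hypothetical value
    img.save(dst_path, "PNG", pnginfo=info)

label_synthetic_image("render.png", "render_labelled.png")  # placeholder paths
```

Any real deployment would need a more tamper-resistant scheme (for example, C2PA-style signed provenance), since PNG text chunks are trivially stripped.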
The Guidelines emphasize human-centered design, inclusive innovation, and alignment with international best practices, while remaining adaptable to India’s socio-economic and digital context.
Overall, the framework balances innovation and risk mitigation, providing high-level ethical guidance as well as actionable measures for safe, transparent, and accountable AI development and deployment across sectors.
Draft AI Rules
In October 2025, MeitY proposed draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, colloquially referred to as the ‘Draft AI Rules’. The Draft Rules focus on synthetically generated content, defined as any text, audio, visual, or audiovisual information created, modified, or generated using AI or computational models in a way that may appear authentic or real. They apply to all intermediaries and digital platforms, with enhanced obligations for Significant Social Media Intermediaries (“SSMIs”).
Key Provisions
Labelling and Metadata: Platforms must ensure all synthetic content is clearly labelled or embedded with metadata identifying it as AI-generated. Labels for visual content must be visible, while audio content must carry an audible identifier. Metadata must be permanent and enable traceability, including when users modify content (a sketch of one possible approach follows this list).
User Declarations: Platforms should collect user confirmations when posting synthetic content and implement technical measures to verify compliance with labelling requirements.
Accountability and Monitoring: Intermediaries are required to establish internal monitoring and grievance redressal mechanisms. Acting in good faith, including removal of harmful content, preserves safe-harbor protections under the IT Act, 2000.
Risk Mitigation: The rules aim to prevent misuse of AI-generated content, including misinformation, deepfakes, impersonation, defamation, or threats to the electoral process. Platforms are responsible for detection, labelling, and traceability to mitigate societal risks.
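One way to read the permanence-and-traceability requirement is as a provenance record that survives edits by chaining content hashes, so a modified copy stays linked to its source. The sketch below is illustrative only; the manifest schema and field names are assumptions, and the Draft Rules do not mandate any particular format:

```python
# Illustrative traceability record: each version of a piece of synthetic
# content gets a manifest linking its hash to the version it was derived
# from. The schema is hypothetical, not mandated by the Draft Rules.
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def make_manifest(path: str, parent_hash: str | None = None) -> dict:
    return {
        "content_hash": sha256_of(path),
        "parent_hash": parent_hash,  # links a modified copy back to its source
        "synthetic": True,
        "user_declared": True,       # user confirmation captured at upload
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

original = make_manifest("render_labelled.png")
edited = make_manifest("render_edited.png", parent_hash=original["content_hash"])
print(json.dumps(edited, indent=2))
```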