SEBI Advisory on Emerging AI Vulnerability Detection Tools (Like Mythos): A Complete Guide for All Regulated Entities
Introduction: Why SEBI Issued This Advisory
SEBI has issued a landmark advisory addressing a new and rapidly escalating cybersecurity threat: AI-driven vulnerability detection tools such as Mythos (Claude Mythos). This advisory is addressed to every category of SEBI regulated entity, including stock exchanges, clearing corporations, depositories, stock brokers, AMCs, portfolio managers, investment advisers, RTAs, KRAs, credit rating agencies, custodians, merchant bankers, AIFs, and venture capital funds.
The advisory recognises that AI-powered tools capable of automatically identifying security weaknesses at high speed and massive scale represent a qualitatively different threat compared to traditional manual vulnerability discovery. When such tools are used by malicious actors against financial market infrastructure, the potential for cascading damage across the interconnected securities ecosystem is severe. SEBI has responded by constituting a dedicated industry task force - cyber-suraksha.ai - and issuing ten concrete directives for immediate action by all regulated entities.
This guide explains what the advisory says, what each directive requires, who is affected, and how SEBI regulated entities should respond to remain compliant and secure. ISECURION, a CERT-In empanelled and ISO 27001:2022 certified cybersecurity firm, has helped stock brokers, AMCs, depositories, and other regulated entities navigate SEBI's cybersecurity requirements and stands ready to assist with compliance under this new advisory.
Key Alert for All Regulated Entities
This advisory applies to every SEBI registered entity without exception - from large Market Infrastructure Institutions (MIIs) to individual investment advisers. The directives are immediate in nature and must be read alongside the existing SEBI CSCRF framework.
Non-compliance with SEBI advisories is treated as a regulatory gap during SEBI inspections and can result in adverse observations, escalated scrutiny, and operational restrictions.
What Are AI-Driven Vulnerability Detection Tools Like Mythos?
Traditional vulnerability assessment relies on security professionals manually running scans, reviewing configurations, and testing applications - a process that is time-consuming and inherently limited in scale. AI-powered vulnerability detection tools fundamentally change this model. Tools like Mythos (Claude Mythos) can autonomously analyse large and complex IT environments, identify exploitable weaknesses, and suggest or even execute attack paths - all at a speed and scale no human team can match.
Speed & Scale
AI tools can scan thousands of endpoints, APIs, and applications in the time it takes a traditional scanner to cover a handful. This means attackers can discover and exploit vulnerabilities far faster than defenders can patch them.
Heightened Exploitation Risk
By combining vulnerability identification with contextual understanding, AI tools can chain multiple weaknesses into complex attack scenarios - increasing the risk of successful breaches even in well-defended environments.
Data Confidentiality Concerns
When AI vulnerability tools process system configurations and application data, there is a risk of sensitive technical information being exposed - especially if the tool operates via external APIs or cloud infrastructure outside the entity's control.
Output Reliability Risk
AI-generated vulnerability findings may include false positives or miss context-specific issues, potentially leading security teams to misallocate remediation efforts or overlook genuine exposures.
SEBI's advisory acknowledges both the defensive potential of such tools - when used by regulated entities for their own vulnerability assessments - and the threat posed when such tools are weaponised by attackers against market infrastructure. The advisory addresses both dimensions.
The cyber-suraksha.ai Task Force – What It Is and What It Does
In response to the risks posed by AI-driven vulnerability tools, SEBI has constituted a dedicated industry task force named cyber-suraksha.ai. This task force comprises representatives from Market Infrastructure Institutions (MIIs), Qualified Registrars to an Issue and Share Transfer Agents (QRTAs), all Qualified Regulated Entities (QREs), and other related stakeholders.
Mandate 1: Examine AI Cybersecurity Risks
Closely examine the cybersecurity risks posed by AI-based models and devise a uniform mitigation strategy applicable across all regulated entities to address threats arising from tools like Mythos.
Mandate 2: Facilitate Threat Intelligence Sharing
Share threat intelligence, best practices on vulnerability management, use cases, and playbooks to respond to AI-driven threat vectors - creating a coordinated defence posture across the securities market ecosystem.
Mandate 3: Priority Incident Reporting
Report on a priority basis any cyber incidents, malicious activities, significant attack vectors, or newly discovered vulnerabilities that could impact the cybersecurity posture of India's securities markets.
Mandate 4: Review Third-Party Vendor Posture
Review the cybersecurity posture of third-party application service providers, including empanelled vendors - ensuring the entire supply chain is assessed for AI-related vulnerability risks, not just internal systems.
The 10 Key Directives from SEBI's Advisory – Explained
SEBI's advisory contains ten specific directives that all regulated entities must implement. Each directive is explained below with context on what it means practically for your organisation.
Directive 1: Immediate Patch Deployment
Update all operating systems and applications with the latest security patches on an immediate basis to address identified and known vulnerabilities. Where patches are not yet available from vendors, entities must implement virtual patching as an interim protective measure for systems and networks.
Practical implication: Patch management can no longer be a monthly or quarterly activity. AI-assisted attack tools compress the window between vulnerability disclosure and exploitation to hours. Entities must move to near-real-time patch monitoring and emergency patching procedures for critical vulnerabilities.
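The shift from cyclical to near-real-time patching can be operationalised as an automated SLA check over open vulnerability findings. The sketch below is illustrative only: the severity bands, SLA hours, and finding schema are assumptions for demonstration - the advisory does not prescribe numeric patch SLAs, so these values must come from your own policy.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical emergency-patch SLA (hours) per severity band -- a policy
# choice, not a SEBI-mandated figure. Tune to your own risk appetite.
EMERGENCY_SLA_HOURS = {"critical": 24, "high": 72, "medium": 168}

def overdue_vulnerabilities(findings, now=None):
    """Return CVE IDs whose disclosure age exceeds the SLA for their severity.

    `findings` is a list of dicts with 'cve', 'severity', and 'disclosed'
    keys -- an illustrative schema, not a standard scanner export format.
    """
    now = now or datetime.now(timezone.utc)
    overdue = []
    for f in findings:
        sla = EMERGENCY_SLA_HOURS.get(f["severity"])
        if sla is None:
            continue  # severities outside the policy bands are skipped
        if now - f["disclosed"] > timedelta(hours=sla):
            overdue.append(f["cve"])
    return overdue

now = datetime(2025, 1, 10, tzinfo=timezone.utc)
findings = [
    {"cve": "CVE-0000-0001", "severity": "critical",
     "disclosed": now - timedelta(hours=30)},  # 30h old, 24h SLA -> overdue
    {"cve": "CVE-0000-0002", "severity": "high",
     "disclosed": now - timedelta(hours=10)},  # 10h old, 72h SLA -> within SLA
]
print(overdue_vulnerabilities(findings, now=now))  # ['CVE-0000-0001']
```

A check like this, run continuously against scanner output, turns "immediate patching" from an aspiration into an alertable metric; breaches of the SLA can feed the emergency-patching or virtual-patching workflow.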
Directive 2: Enhanced Vulnerability Assessments Including AI-Based Tools
Conduct Vulnerability Assessments using both conventional methods and suitable AI-based vulnerability assessment tools where possible, on a regular and continuous basis, in accordance with SEBI's Cybersecurity and Cyber Resilience Framework (CSCRF).
Practical implication: This is a significant addition to the existing CSCRF VAPT requirement. Entities are now encouraged to use AI-powered vulnerability scanning tools defensively - to discover weaknesses before attackers do. Regulated entities should work with their CERT-In empanelled auditors to integrate AI-assisted scanning into the annual VAPT cycle.
Directive 3: Third-Party Vendor Patch Engagement & AI Risk Assessment
Engage with third-party vendors to ensure timely patch releases and appropriate deployment. Exchanges and Depositories must specifically direct their empanelled application vendors (those providing Commercial Off-The-Shelf (COTS) solutions to members) to undertake comprehensive assessment of risks arising from AI-led vulnerability detection models and implement safeguards including patching, VAPT, continuous monitoring, and hardening.
Practical implication: The obligation now extends beyond the regulated entity itself to the entire vendor ecosystem. Exchanges and depositories carry a specific supervisory responsibility over their empanelled software vendors - they must ensure those vendors have assessed and addressed AI-model risks in their products.
Directive 4: Rigorous Change Management
Any change to systems - including minor changes - must encompass full documentation, thorough impact analysis, structured review, rigorous testing, and secure deployment to ensure operational resilience and system stability.
Practical implication: The explicit inclusion of "minor changes" closes a common gap where small configuration tweaks, library updates, or patch deployments bypass the formal change management process. AI tools can exploit subtle misconfigurations introduced by unreviewed changes - every system modification must go through the full change control gate.
Directive 5: API Security – Four Specific Requirements
APIs are a primary attack surface for AI-driven tools. The advisory mandates four specific API security controls:
- API Inventory: Maintain a regularly updated inventory of all APIs and the applications using them.
- Strong Authentication & Authorisation: Implement robust mechanisms to verify end-user identity and enforce least-privilege access - limiting information access and transfer strictly to what each user or system requires.
- Rate Limiting & Throttling: Implement API rate limiting and throttling to prevent and detect abuse - including automated AI-driven enumeration attacks.
- Whitelist-Based Connections: All API connections must operate strictly on a whitelist basis - only explicitly approved sources may connect.
Practical implication: APIs are the primary interface through which AI tools probe systems. Unauthenticated or weakly authenticated APIs, lack of rate limiting, and undocumented shadow APIs are among the most exploited attack vectors. This directive requires a comprehensive API security programme, not just point fixes.
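Two of the four mandated controls - whitelist-based connections and rate limiting - can be illustrated in a few lines. The sketch below is a minimal token-bucket limiter with a whitelist gate; the IP addresses, rates, and `admit` helper are all hypothetical, and a production deployment would enforce these controls at the API gateway rather than in application code.

```python
import time

# Illustrative whitelist of approved API sources (Directive 5:
# whitelist-based connections). Real deployments would manage this
# centrally at the gateway, not hard-code it.
APPROVED_SOURCES = {"10.0.0.5", "10.0.0.6"}

class TokenBucket:
    """Minimal token-bucket rate limiter (Directive 5: rate limiting)."""
    def __init__(self, rate_per_sec, burst):
        self.rate, self.capacity = rate_per_sec, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens proportionally to elapsed time, capped at burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def admit(source_ip, bucket):
    # Whitelist check first, then per-source throttling.
    if source_ip not in APPROVED_SOURCES:
        return "rejected: not whitelisted"
    return "allowed" if bucket.allow() else "rejected: rate limited"

bucket = TokenBucket(rate_per_sec=5, burst=2)
print(admit("10.0.0.5", bucket))     # allowed
print(admit("203.0.113.9", bucket))  # rejected: not whitelisted
```

The throttling behaviour is what blunts automated AI-driven enumeration: once the burst allowance is spent, requests are refused until tokens refill, forcing any scanner down to the configured rate and making the probe visible in logs.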
Directive 6: Enhanced SOC Monitoring – Four Specific Requirements
SOC monitoring requirements have been significantly enhanced in light of AI-driven attacks:
- Rigorous Daily Monitoring: Day-to-day monitoring of systems and networks must be conducted rigorously - including examination of low-priority alerts, which AI-driven reconnaissance may generate as a precursor to a major attack.
- SOAR + SIEM Integration: Implement enhanced Security Orchestration, Automation and Response (SOAR) playbooks integrated with SIEM solutions - after thorough testing - to enable automated, rapid response to AI-accelerated attack patterns.
- M-SOC Onboarding: All eligible regulated entities not yet onboarded with the Market SOC (M-SOC) - established by NSE and BSE as a centralised 24x7 real-time monitoring platform - must expedite onboarding given the enhanced risks posed by AI-driven attacks.
- MII Awareness Programmes: MIIs are required to conduct awareness and handholding programmes, including periodic workshops, to facilitate smooth M-SOC onboarding and integration.
Practical implication: The emphasis on low-priority alerts is critical - AI tools often conduct low-and-slow reconnaissance that individually triggers only minor SOC alerts. Entities must tune their SIEM to correlate these signals and treat them as potential precursors to a coordinated attack.
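The correlation logic described above can be sketched simply: group low-severity alerts by source and escalate any source that accumulates more than a threshold number within a sliding window. The window, threshold, and alert-tuple format below are all assumptions for illustration - a real SIEM correlation rule would be expressed in the SIEM's own rule language.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative tuning: N low-severity alerts from one source within the
# window are treated as possible low-and-slow reconnaissance. Both values
# are assumptions, not advisory-prescribed figures.
WINDOW = timedelta(hours=24)
THRESHOLD = 3

def escalate_low_and_slow(alerts):
    """alerts: list of (timestamp, source_ip, severity) tuples -- a
    hypothetical SIEM export shape. Returns sources worth escalating."""
    by_source = defaultdict(list)
    for ts, src, sev in alerts:
        if sev == "low":
            by_source[src].append(ts)
    escalations = set()
    for src, times in sorted(by_source.items()):
        times.sort()
        for i in range(len(times)):
            # Count alerts falling inside the window starting at times[i].
            j = i
            while j < len(times) and times[j] - times[i] <= WINDOW:
                j += 1
            if j - i >= THRESHOLD:
                escalations.add(src)
                break
    return escalations

alerts = [
    (datetime(2025, 1, 1, 1),  "198.51.100.7", "low"),
    (datetime(2025, 1, 1, 9),  "198.51.100.7", "low"),
    (datetime(2025, 1, 1, 20), "198.51.100.7", "low"),
    (datetime(2025, 1, 1, 2),  "10.0.0.8",     "low"),
]
print(escalate_low_and_slow(alerts))  # {'198.51.100.7'}
```

The point is that no single alert here would cross a severity threshold - only the aggregation across time exposes the pattern, which is exactly why the advisory insists low-priority alerts be examined rather than discarded.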
Directive 7: AI-Inclusive Risk Assessment
The CSCRF-mandated periodic Risk Assessment - covering regulated entities and their third-party service providers - must now include comprehensive scenario-based testing for both internal and external cybersecurity risks. Critically, the capability of AI-based models must be explicitly considered as one of the risk scenarios in the assessment.
Practical implication: Existing risk assessments that do not include AI threat scenarios are now incomplete under SEBI requirements. Entities must update their risk assessment methodology to include AI-accelerated attack scenarios, model the potential impact of AI-assisted exploitation on their systems, and adjust their risk register accordingly.
Directive 8: System Hardening & Zero Trust
Implement system hardening by adopting secure configurations, disabling unnecessary services and default accounts, and enforcing controls such as least privilege and Zero Trust Network Architecture (ZTNA) to minimise the attack surface available to AI-driven scanning and exploitation tools.
Practical implication: AI-driven tools are highly effective at finding and exploiting overly permissive configurations, unnecessary open services, and default credentials - low-hanging fruit that many organisations leave unaddressed for years. System hardening based on established benchmarks (CIS, NIST) and ZTNA implementation directly reduces the attack surface AI tools can exploit.
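Hardening against this "low-hanging fruit" is amenable to automated baseline checks. The sketch below compares a host snapshot against a small allow/deny baseline; the account names, service names, and snapshot format are purely illustrative - real CIS or NIST benchmarks run hundreds of checks per platform, and tooling such as a configuration-compliance scanner would be used in practice.

```python
# Hypothetical hardening baseline: accounts that must not exist and the
# only services permitted to run. Illustrative values, not a CIS benchmark.
BASELINE = {
    "forbidden_accounts": {"admin", "guest", "root_default"},
    "allowed_services": {"sshd", "ntpd", "auditd"},
}

def hardening_findings(host):
    """Return deviations of a host snapshot from the baseline.

    `host` is a dict with 'accounts' and 'services' lists -- an assumed
    snapshot format for this sketch.
    """
    findings = []
    for acct in host["accounts"]:
        if acct in BASELINE["forbidden_accounts"]:
            findings.append(f"default account present: {acct}")
    for svc in host["services"]:
        if svc not in BASELINE["allowed_services"]:
            findings.append(f"unnecessary service enabled: {svc}")
    return findings

host = {"accounts": ["ops", "guest"], "services": ["sshd", "telnetd"]}
print(hardening_findings(host))
# ['default account present: guest', 'unnecessary service enabled: telnetd']
```

Running checks like these continuously, rather than at audit time, is what keeps the attack surface small in the window between an AI-driven scan and a human review.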
Directive 9: Asset Inventory & Software Bill of Materials (SBOM)
Periodically update the Asset Inventory and Software Bill of Materials (SBOM) for all critical applications, including open-source components. AI vulnerability tools are particularly effective at identifying known vulnerabilities in open-source libraries - an untracked open-source dependency is an invisible attack surface.
Practical implication: Most organisations significantly underestimate their open-source dependency footprint. Log4Shell and similar vulnerabilities demonstrated how a single widely-used library can affect thousands of applications simultaneously. SBOM maintenance enables rapid impact assessment when new vulnerabilities affecting open-source components are disclosed.
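The rapid impact assessment that an SBOM enables can be shown in miniature: given a component inventory per application and a newly disclosed vulnerable component and version set, list the affected applications. The inventory below is a toy stand-in for a real CycloneDX or SPDX SBOM, and the application and version values are invented for illustration.

```python
# Toy SBOM: application -> list of (component, version) pairs. A real SBOM
# would be a CycloneDX/SPDX document; this dict only mirrors its essence.
SBOM = {
    "trading-gateway": [("log4j-core", "2.14.1"), ("jackson-databind", "2.15.2")],
    "investor-portal": [("log4j-core", "2.17.1")],
}

def impacted_apps(component, vulnerable_versions):
    """Applications carrying a vulnerable version of `component`."""
    return sorted(
        app for app, deps in SBOM.items()
        if any(name == component and ver in vulnerable_versions
               for name, ver in deps)
    )

# Log4Shell-style question: "which of our apps ship an affected log4j?"
print(impacted_apps("log4j-core", {"2.14.0", "2.14.1"}))  # ['trading-gateway']
```

With an up-to-date SBOM this query takes seconds; without one, the same question during a Log4Shell-class event becomes days of manual discovery - exactly the gap an AI-assisted attacker exploits.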
Directive 10: Long-Term AI Security Strategy & IT Committee Guidance
All Regulated Entities must seek guidance from their respective IT Committees for mitigating risks from AI-led vulnerability detection models. All entities must prepare a long-term plan for the use of AI in detection and autonomous/agentic mitigation. Additionally, entities must undertake measures including: recalibration of risks for AI-accelerated threats, AI-augmented SOC transformation, and continuous vulnerability management using AI tools.
Practical implication: This is the most forward-looking directive - SEBI is signalling that AI will become a permanent fixture of both the threat landscape and the defensive toolkit. Regulated entities cannot treat this as a one-time compliance exercise; they must build AI security strategy into their governance, budget, and technology roadmap.
Who Must Comply With This Advisory?
The advisory is addressed to every category of SEBI registered entity. There are no size-based or category-based exemptions - if your organisation holds a SEBI registration, this advisory applies to you immediately.
Stock Exchanges & MIIs
NSE, BSE, and all recognised exchanges carry additional obligations - including directing empanelled vendors to conduct AI risk assessments and facilitating M-SOC onboarding for members.
Clearing Corporations
As Market Infrastructure Institutions, clearing corporations must implement all ten directives and participate actively in the cyber-suraksha.ai task force.
Depositories & DPs
CDSL, NSDL, and all depository participants must assess AI-tool risks across their demat infrastructure and direct empanelled vendors to implement required safeguards.
Stock Brokers
All trading members must patch systems immediately, implement API security controls, ensure M-SOC onboarding, and update vulnerability assessments to include AI threat scenarios.
AMCs & Mutual Funds
Asset management companies managing investor folios and NAV systems must apply all ten directives across trading, back-office, and investor-facing platforms.
RTAs, KRAs & Others
Registrar and Transfer Agents, KYC Registration Agencies, portfolio managers, investment advisers, research analysts, credit rating agencies, custodians, and all other SEBI registered entities are covered.
How This Advisory Relates to the Existing SEBI CSCRF
SEBI's advisory on AI vulnerability tools does not replace the existing Cybersecurity and Cyber Resilience Framework (CSCRF). It supplements and strengthens it by adding AI-specific requirements. Understanding the relationship helps entities plan their compliance response efficiently.
| Aspect | Existing SEBI CSCRF | New AI Advisory |
|---|---|---|
| VAPT Requirement | Annual VAPT using conventional tools | VAPT using both conventional and AI-based tools where possible |
| Risk Assessment | Periodic risk assessment of RE and third parties | Must now include AI-model capabilities as an explicit risk scenario |
| SOC Monitoring | 24x7 SOC monitoring with SIEM | Enhanced: SOAR integration, low-priority alert review, M-SOC onboarding |
| Patch Management | Defined patch timelines for critical/high vulnerabilities | Immediate patching emphasis; virtual patching where patches unavailable |
| Third-Party Vendors | Vendor risk assessment and security clauses in contracts | Exchanges/depositories must direct empanelled vendors to conduct AI risk assessments |
| Asset Inventory | IT asset inventory required | SBOM for all critical applications including open-source stack now required |
| API Security | Covered broadly under access controls | Four specific API security controls now explicitly mandated |
| Long-Term Strategy | Annual compliance cycle | Long-term AI security roadmap including agentic/autonomous mitigation now required |
Entities that are already CSCRF-compliant have a strong foundation - but they must now layer on these AI-specific requirements to meet the full compliance expectation set by this advisory.
Common Compliance Gaps Under the New AI Advisory
Based on ISECURION's CSCRF audit experience and the requirements of this advisory, the following gaps are most likely to exist in regulated entities that have not yet assessed their posture against the ten directives.
No SBOM Maintained
Most regulated entities have not yet implemented a Software Bill of Materials. Open-source libraries in trading platforms, back-office systems, and mobile apps are invisible attack surfaces for AI-driven vulnerability scanners.
APIs Not Inventoried
Many entities have undocumented or "shadow" APIs - integrations built over time that are no longer actively managed. These are prime targets for AI-driven enumeration and exploitation tools.
Low-Priority SOC Alerts Ignored
SEBI explicitly highlights the need to examine low-priority SOC alerts. Most SOC teams focus on high and critical alerts - the low-and-slow reconnaissance pattern of AI tools is designed to stay below these thresholds.
Not Onboarded with M-SOC
Eligible regulated entities that have not onboarded with the Market SOC (M-SOC) established by NSE and BSE must expedite the process - the advisory specifically calls this out given AI-driven attack risks.
Vendor AI Risk Not Assessed
Third-party application vendors - including COTS solution providers empanelled with exchanges - have typically not been asked to assess or address AI-tool risks. This is now a specific obligation for exchanges and depositories.
No Long-Term AI Security Roadmap
SEBI now requires a long-term plan for AI use in detection and autonomous mitigation. Most entities have not yet started thinking about agentic AI security tools - let alone building a roadmap and seeking IT committee approval.
How ISECURION Helps With the New SEBI AI Advisory
AI Advisory Gap Assessment
Comprehensive gap assessment mapping your current posture against all ten directives in the SEBI advisory - identifying priority remediation actions before your next CSCRF audit or SEBI inspection.
AI-Enhanced VAPT
VAPT incorporating both conventional and AI-based vulnerability assessment tools as now recommended by SEBI - covering trading platforms, web applications, APIs, mobile apps, and network infrastructure.
API Security Assessment
Comprehensive API security review covering inventory, authentication and authorisation controls, rate limiting configuration, and whitelist policy - directly addressing Directive 5 of the advisory.
SBOM & Asset Inventory
Development and implementation support for Software Bill of Materials (SBOM) covering all critical applications including open-source stack - addressing Directive 9 and reducing your AI-exploitable attack surface.
AI Risk Scenario Assessment
Update of your existing CSCRF risk assessment to include AI-model threat scenarios as now explicitly required by SEBI - including scenario-based testing for AI-accelerated attack paths against your specific infrastructure.
CSCRF Annual Audit
Formal annual CSCRF audit by CERT-In empanelled auditors - now incorporating the new AI advisory requirements alongside the existing five-pillar framework, with SEBI submission-ready reporting.
ISECURION is a CERT-In empanelled and ISO 27001:2022 certified cybersecurity firm with proven audit experience across SEBI regulated entities in Bengaluru, Mumbai, Delhi, Kolkata, Hyderabad, and across India.
Respond to the SEBI AI Advisory With Confidence
SEBI's advisory on AI vulnerability tools requires immediate action from all regulated entities. ISECURION helps stock brokers, AMCs, depositories, RTAs, exchanges, and all other regulated entities assess their gap, remediate priority issues, and meet their CSCRF audit obligations - now inclusive of AI threat requirements.