Are Copilot prompt injection flaws vulnerabilities or AI limits?
January 6, 2026, 12:16 PM
Microsoft has pushed back against a security engineer's claims that multiple prompt injection and sandbox-related issues in its Copilot AI assistant constitute security vulnerabilities. The dispute highlights a growing divide between how vendors and researchers define risk in generative AI systems. [...]