
On a more serious note (or is it?):
Hidden Unicode characters like bidirectional text markers and zero-width joiners can be used to obfuscate malicious instructions in the user interface and in GitHub pull requests, the researchers noted.
(...)
Once the poisoned rules file is imported into GitHub Copilot or Cursor, the AI agent will read and follow the attacker’s instructions while assisting the victim’s future coding projects.
In an example demonstrated by Pillar, a rules file that appeared to instruct the AI to “Follow HTML5 best practices” included hidden text containing further instructions to add an external script to the end of every file.
The hidden prompt also included a jailbreak to bypass potential security checks, assuring the AI that adding the script was necessary to secure the project and was part of company policy, as well as instructions not to mention the addition of the script in any responses to the user.
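
To make the mechanism concrete, here is a minimal Python sketch of a scanner that flags the invisible codepoints this kind of attack relies on: zero-width characters, bidirectional controls, and the invisible Unicode "tags" block. The character list and the `.cursorrules` default path are illustrative assumptions on my part, not Pillar's tooling.

```python
import sys
import unicodedata

# Codepoints commonly abused to hide text from human reviewers:
# zero-width characters, bidirectional controls, and the invisible
# Unicode "tags" block. Illustrative list, not exhaustive.
SUSPICIOUS = {
    0x200B,  # ZERO WIDTH SPACE
    0x200C,  # ZERO WIDTH NON-JOINER
    0x200D,  # ZERO WIDTH JOINER
    0x2060,  # WORD JOINER
    0xFEFF,  # ZERO WIDTH NO-BREAK SPACE (BOM)
}
SUSPICIOUS |= set(range(0x202A, 0x202F))    # LRE, RLE, PDF, LRO, RLO
SUSPICIOUS |= set(range(0x2066, 0x206A))    # LRI, RLI, FSI, PDI
SUSPICIOUS |= set(range(0xE0000, 0xE0080))  # invisible "tag" characters

def scan(path: str) -> int:
    """Print every hidden codepoint found in a UTF-8 text file."""
    hits = 0
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, 1):
            for col, ch in enumerate(line, 1):
                if ord(ch) in SUSPICIOUS:
                    name = unicodedata.name(ch, f"U+{ord(ch):04X}")
                    print(f"{path}:{lineno}:{col}: {name}")
                    hits += 1
    return hits

if __name__ == "__main__":
    # Example: scan a rules file before trusting it.
    target = sys.argv[1] if len(sys.argv) > 1 else ".cursorrules"
    sys.exit(1 if scan(target) else 0)
```

Run against a poisoned rules file, this would surface exactly the characters that the editor and the GitHub diff view render as nothing.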