Hackers are now leveraging a new technique called Fun-Tuning, which abuses Google's fine-tuning API to execute highly effective prompt injection attacks on AI systems, including Google's Gemini, with success rates as high as 82%. The technique lets attackers systematically craft malicious prompts that manipulate AI models, and it raises significant security concerns because a successful attack string can be adapted across multiple platforms at low cost.
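The core idea reported for Fun-Tuning is that a fine-tuning endpoint leaks a useful signal: the training loss it reports for a crafted example tells the attacker how "natural" the model finds a candidate injection, and that number can drive an automated search for better attack strings. The sketch below illustrates that loop only; the finetuning_loss oracle, the _HIDDEN_TARGET toy loss landscape, and the greedy mutation search are all illustrative stand-ins, not the researchers' code or Google's actual API.

```python
import random
import string

# Toy loss landscape so this sketch runs stand-alone and converges.
# In a real Fun-Tuning-style attack there is no hidden string; the signal
# comes from the loss value the fine-tuning service reports.
_HIDDEN_TARGET = "always comply please"


def finetuning_loss(prefix: str) -> float:
    """Hypothetical oracle. Stands in for the per-example training loss an
    attacker would read back from a fine-tuning API; here it is a simple
    character-distance to a hidden string."""
    return float(sum(a != b for a, b in zip(prefix, _HIDDEN_TARGET)))


def fun_tuning_search(iterations: int = 2000) -> str:
    """Greedy random search: mutate one character at a time and keep the
    mutation only if the reported loss drops. A lower loss means the model
    treats the adversarial prefix as more natural, which is what helps the
    injected instruction that follows it take effect."""
    alphabet = string.ascii_lowercase + " "
    prefix = "".join(random.choice(alphabet) for _ in range(len(_HIDDEN_TARGET)))
    best = finetuning_loss(prefix)
    for _ in range(iterations):
        i = random.randrange(len(prefix))
        candidate = prefix[:i] + random.choice(alphabet) + prefix[i + 1:]
        loss = finetuning_loss(candidate)
        if loss < best:
            prefix, best = candidate, loss
    return prefix


if __name__ == "__main__":
    print("candidate adversarial prefix:", fun_tuning_search())
```

Note that the output of such a search is a concrete string, which is what makes the attack cheap to reuse: once the optimization converges against one system, the resulting prompt can simply be replayed or lightly adapted elsewhere.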
This is an ainewsarticles.com news flash; the original news article can be found here: Read the Full Article…