UK’s AI Safety Institute easily jailbreaks major LLMs
In a surprising turn of events, AI systems may not be as safe as their creators make them out to be. Who saw that coming, right? In a new report, the UK government's AI Safety Institute (AISI) found that the four undisclosed LLMs tested were "highly vulnerable to basic jailbreaks." Some unjailbroken models … Read more