The scientists are working with a technique called adversarial education to prevent ChatGPT from permitting people trick it into behaving terribly (known as jailbreaking). This work pits several chatbots in opposition to one another: just one chatbot plays the adversary and attacks another chatbot by producing text to power it to buck its typical c