The researchers are using a method referred to as adversarial instruction to prevent ChatGPT from allowing end users trick it into behaving badly (known as jailbreaking). This do the job pits numerous chatbots towards each other: 1 chatbot plays the adversary and assaults One more chatbot by making textual content to force it to buck its typical co