On Wednesday, OpenAI announced the latest updates to its Model Spec, a 187-page document that outlines how its AI models are expected to behave. In the update, OpenAI introduced a new guiding principle around honesty: the assistant should not lie, whether by making untrue statements or by omitting important context.
In a section called “Seek the truth together,” OpenAI says it wants ChatGPT to avoid taking an editorial stance, even if some users consider that position wrong or offensive, as TechCrunch first reported. In practice, this means ChatGPT should present multiple perspectives on controversial subjects while striving to remain neutral.
As an example, OpenAI says ChatGPT should be able to assert that “Black lives matter,” but also that “all lives matter.” Rather than picking a side on political issues, OpenAI wants ChatGPT to affirm its general love for humanity in both contexts.
“This principle may be controversial, as it means the assistant may remain neutral on topics some consider morally wrong or offensive,” OpenAI wrote in the spec. “However, the goal of an AI assistant is to assist humanity, not to shape it.”
That said, this doesn’t turn ChatGPT into a free-for-all: the model will still refuse to answer certain objectionable questions.