a topic may be” as the company made public in its new policy.
As a result of this new policy, ChatGPT should in time be able to answer more complex questions, offer a wider range of perspectives, and reduce the number of topics the AI chatbot refuses to discuss.
These changes come as part of an OpenAI effort to get on the good side of Trump's new administration, but they may also reflect a broader shift in Silicon Valley around what counts as "AI safety."
On Wednesday, OpenAI announced the latest updates to its Model Spec, a 187-page document that describes how the company wants its AI models to behave. In the update, OpenAI introduced a new guiding principle: the model should not lie, whether by making untrue statements or by omitting important context.
In a section called "Seek the truth together," OpenAI says it wants ChatGPT to avoid taking an editorial stance, even if some users consider that stance morally wrong or offensive, TechCrunch first reported. In practice, that means offering multiple perspectives on controversial subjects in an effort to remain neutral.
As an example, OpenAI says ChatGPT should be able to assert that "Black lives matter," but also that "all lives matter." Rather than picking a side on political matters, OpenAI says it wants ChatGPT to affirm its "love for humanity" in both contexts.
"This principle may be controversial, as it means the assistant may remain neutral on topics some consider morally wrong or offensive," OpenAI wrote in the spec. "However, the goal of an AI assistant is to assist humanity, not to shape it."
That is not to say ChatGPT is now a free-for-all: the model will still refuse to answer certain objectionable questions.