According to a filing with the IRS, OpenAI is funding academic research into algorithms that can predict the moral judgments of humans. OpenAI Inc., the company's nonprofit arm, recently disclosed that it awarded Duke University a grant to support research on "AI Ethics and Morality".
When the ChatGPT maker was contacted for comment, an OpenAI spokesperson said the support is part of a three-year grant worth about $1 million for Duke professors studying "making moral AI".
Few details about this "AI ethics" research are publicly available, beyond the fact that the grant runs through 2025. Walter Sinnott-Armstrong, a professor of practical ethics at Duke and the study's principal investigator, said he could not share further details about the project.
Sinnott-Armstrong and the project's co-investigator, Jana Borg, have already produced several studies and a book on artificial intelligence's potential to act as a moral navigator that helps humans make better decisions and judgments. As part of larger teams, they have also built a "morally aligned" algorithm intended to assist in difficult situations where humans struggle to decide for themselves.
According to the press release, the goal of the research is to train algorithms to predict the judgments humans make in various scenarios, such as conflicts arising in medicine, law, or business.
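Neither OpenAI nor the Duke team has published any methods or data for this work, so the following is only a minimal, hypothetical sketch of what "predicting human moral judgments" could mean in practice: a text classifier trained on scenario descriptions labeled with a majority human verdict. The scenarios, labels, and model choice below are illustrative assumptions, not anything attributed to the actual project.

```python
# Hypothetical illustration only: a toy classifier that maps scenario text to a
# label representing the judgment most people reported. None of this reflects
# the (undisclosed) methods of the OpenAI-funded Duke research.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up scenarios and made-up majority judgments.
scenarios = [
    "A doctor lies to a patient to spare their feelings.",
    "A company hides a safety defect to protect quarterly profits.",
    "A bystander breaks a car window to rescue a trapped child.",
    "A lawyer shares a client's confidential files with a journalist.",
]
judgments = ["acceptable", "unacceptable", "acceptable", "unacceptable"]

# Bag-of-words features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, judgments)

# Predict the judgment for a scenario the model has not seen.
print(model.predict(["A nurse ignores a patient's request to save time."]))
```

Real research in this area would of course involve far larger datasets, richer models, and careful treatment of disagreement among human raters; the sketch only shows the basic supervised-learning framing.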
The ChatGPT maker and the researchers it is funding face a genuinely hard problem, because morality is a broad, subjective concept that resists easy definition. Philosophers have debated the merits of competing ethical theories for thousands of years, and no universally accepted framework has emerged.
It remains to be seen what the study will deliver once it concludes. AI ethics and morality is still an under-researched area, so it is hard to know what to expect.
Stay tuned for more updates on this OpenAI ethics research!
By Raluca Matei • November 25, 2024 4:00 PM