From talking to Amazon Alexa to watching self-driving cars hit the road, and even robots performing surgeries, artificial intelligence (AI) is becoming more prevalent in our daily lives – and we've all seen these advancements. Not long ago, these concepts seemed like ideas living only in the imagination of science fiction writers.
Ever since the early days of AI development, we have been focused on creating chatbots, virtual assistants, and voice assistants that can complete tasks in a human-like manner. We have pursued this through RPA applications, generative AI, and even human-like robots (we all remember Sophia the Robot, right?). However, as RPA and generative AI were integrated into society, ethical concerns began to rise.
Indeed, artificial intelligence has its benefits, but let's not get ahead of ourselves: AI also brings plenty of ethical challenges. As the technology continues to evolve, some people worry about its potential misuse, while others question issues of bias, privacy, and accountability. Either way, it's essential to understand the consequences of overrelying on artificial intelligence. After all, we wouldn't want to end up in the same scenario as Terminator 2.
Let’s face it – the ethical considerations surrounding AI have evolved over time!
As AI technology became a substantial part of our lives, we started seeing beyond its benefits. With the growing popularity of ChatGPT, AI chatbots, and voice assistants such as Amazon Alexa, Siri, or Google Home, we see people exploiting the technology, to the point where students are using it to write their papers. Cheating has always been a problem in schools and universities, and generative AI is making it even easier to perpetuate this toxic cycle.
In fact, a philosophy professor at Furman University caught a student who turned in a paper generated by AI. The professor began questioning the authenticity of the essay when he saw how well-written it was and, at the same time, how wrong it was. Soon after, more and more professors realized that their students were doing the same.
Even journalists and content writers have started using AI chatbots and virtual assistants in their work, from processing and analyzing data to generating reports and, in some cases, writing entire articles. But is it ethical? The answer is subjective. One thing is clear, though: regardless of what anyone says, ChatGPT and other generative AI technologies are here to stay. So be smart about how you use them!
This issue also brings up questions regarding privacy.
Have you ever reflected on what happens to all the information you give out online? Of course you have – especially with the rise of docu-series that explore this very subject. Yes, we're talking about The Social Dilemma.
Well, with the rise of artificial intelligence and the increasing amount of personal data being collected by companies and governments, there is growing concern about who has access to this data – and how it is used. After all, we all know that nowadays our data is one of our most valuable assets. To be honest, that's a little scary. If all that personal data falls into the wrong hands, it could be used for some genuinely harmful purposes.
And it has been! The Cambridge Analytica scandal showed exactly that – how personal data can be used to push us toward certain actions. The political consulting firm obtained personal data from millions of Facebook users without their consent and used it during the 2016 US Presidential election to target voters with advertising designed to persuade them to vote for Donald Trump. This shows how easily AI-powered targeting can influence people's opinions and beliefs.
And it's not just politics that is affected. The use of artificial intelligence is being questioned in every field. Because these systems rely on large amounts of data, people worry that this data may contain sensitive information that some of us are not comfortable sharing – or are not even aware we're sharing.
But when it comes to Artificial Intelligence, who is in charge?
This is a question we need to ponder and answer if we expect AI to be developed in a safe, reliable manner. Because let's be honest: things can go very wrong with this technology, and we need to ensure that someone is held accountable when they do.
So, who is going to take the blame when things go sideways with AI? Well, it depends. The designers, developers, and users of AI systems all have a part to play in ensuring that these technologies are used responsibly. Designers and developers are the ones who build the technology, and they have to rigorously verify that an AI system is ready to be released to the world.
But in practice, it is often the company itself that answers for violating our rights. Look at TikTok, for example! In April 2023, TikTok was fined £12.7 million ($15.9 million) for illegally misusing the data of 1.4 million children under the age of 13. According to the Information Commissioner's Office, TikTok did not do enough to prevent underage children from using its app and was held accountable for processing their data without parental consent.
Artificial intelligence will continue to advance, and we will be part of it – whether we like it or not. As users, we are responsible for how we use this technology. That means understanding the potential repercussions of our actions and making sure we use AI systems in a way that aligns with our values and the values of society as a whole.
By Bill O'Neill • September 2, 2024 8:00 AM