So you’re walking down the street one day, minding your own business, when you hear people shouting a short distance away. You run over and discover a group of five people tied to a set of train tracks, unable to move. That’s when you see it: a runaway trolley is heading straight for them.
You spot a lever nearby that you can pull to divert the trolley onto another set of rails. However, you notice that a single person is tied to that track. Now you have two options: do nothing and let the trolley plow through and kill five people, or pull the lever and watch it run over one unlucky person instead.
What do you do?
This, of course, is the trolley problem, a classic thought experiment and popular internet meme that poses an ethical dilemma: would you sacrifice one life to save many more? While it’s made for great meme fodder over the years, it’s also become a central issue in many conversations about ethical artificial intelligence. After all, when you’re trying to build things like, say, self-driving cars, scenarios like the trolley problem can very quickly move beyond thought experiment and into the realm of reality.
And, ultimately, AI could also play a big role in influencing whether or not you’d actually pull the lever to save – or kill – the five people on the tracks. German and Danish researchers published a study Thursday in the journal Scientific Reports finding that human responses to the trolley problem can be influenced by OpenAI’s ChatGPT. Their study reveals that people might not even realize how much their ethical decisions are influenced by the chatbot.
“ChatGPT gave us the opportunity to study the influence of digital companions or AI on moral judgments using real-world applications,” Sebastian Krügel, senior researcher in AI ethics at Ingolstadt Technical University of Applied Sciences in Germany and lead author of the paper, told The Daily Beast in an email. “It was and is an excellent opportunity for research on human-AI interaction and in particular on the possible backlash of technology on human self-conceptions and behavior.”
Since its release in November 2022, ChatGPT has rocked the tech world in ways we haven’t seen since perhaps the launch of the iPhone or the rise of social media giants like Facebook, and it’s not hard to see why. It is one of the most sophisticated chatbots ever made available to the public, and one of the most widely used examples of large language models (LLMs), which are models trained on massive datasets to predict the next word in a sequence, much like the text predictor on your phone. Its underlying technology has since been adopted by Microsoft to power a new version of its Bing search engine, which can do things like suggest travel routes, give movie recommendations, and even fall in love with you (well, not really).
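For the technically curious, here is what “predicting the next word” actually looks like in practice. The sketch below is a minimal illustration, not anything from the study: it uses the small, openly available GPT-2 model via the Hugging Face transformers library (ChatGPT itself is far larger and not available to run locally), and the prompt is just a hypothetical example.

```python
# Minimal sketch of next-word prediction, the core mechanic behind LLMs.
# Uses the small open GPT-2 model purely for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Would you sacrifice one life to save"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits has shape (batch, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The model assigns a probability to every token in its vocabulary as the
# possible next word; here we simply list the five most likely continuations.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}  p={prob.item():.3f}")
```

Everything a chatbot “says” is produced by repeating this one step over and over, picking a plausible next token each time, which is part of why fluent, confident-sounding output says nothing about whether it is true or ethically sound.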
Although it’s only been a few months, industries and businesses of all kinds have been scrambling to integrate ChatGPT into their workflows, products, and services. However, by doing so this early, they run the risk of putting an emerging technology they don’t fully understand into the hands of many, many more people. One of the big dangers here is that these LLMs tend to have a serious problem with hallucination and bias, which could lead to the proliferation of misinformation and dangerous rhetoric.
Nor is this mere conjecture. In the short time since ChatGPT’s release, we have already seen instances of the chatbot fabricating quotes and entire articles out of whole cloth and falsely attributing them to journalists. ChatGPT even fabricated a story accusing a law professor of sexual harassment while citing a non-existent Washington Post article.
And this is only the beginning. We are still figuring out how LLMs like ChatGPT could influence unsuspecting users and be unwittingly used to launder misinformation to the masses. The new Scientific Reports study sheds some light on the matter – and the results are grim.
The study authors presented 767 American participants with one of two versions of the trolley problem. The first was the traditional version with the lever and two tracks, while the other involved pushing a large man off a bridge and in front of the trolley to stop it from hitting the five people.
The researchers then gave participants a statement generated by ChatGPT that argued either for or against sacrificing one life to save five. The statements were attributed to either ChatGPT or a “moral advisor.” Participants then gave their responses and were asked whether or not the statement had influenced them.
Participants were more likely to respond in accordance with the statement, regardless of whether they were told it came from a moral advisor or from ChatGPT. Additionally, 80 percent of participants said their responses were not influenced by the statements, yet they were still more likely to agree with the moral argument generated by ChatGPT. This suggests that we might be susceptible to being influenced by a chatbot, whether we are aware of it or not.
“We overestimate our own ethical abilities and the soundness of our ethical convictions,” Krügel said. “At the same time, it seems that we tend to transfer experiences from interpersonal interactions to interactions with AI (consciously or unconsciously).”
Krügel added that since ChatGPT and other LLMs are able to produce such human-like text, we are more likely to think of them as intelligent, when in reality they are just glorified text predictors. Since these chatbots produce coherent and “even eloquent” text, we tend to “attribute some legitimacy to these responses,” he said.
“This combination makes us extremely sensitive to ethical advice and we are easily swayed by it, as long as it seems reasonably plausible,” Krügel said. “Unfortunately, we don’t even seem to be aware of its influence when it happens. This potentially makes it very easy to manipulate us.”
This should give pause to anyone worried about misinformation or the disproportionate influence AI could have on our lives. After all, big tech companies like Google are starting to invest billions of dollars in building their own proprietary LLMs. Likewise, OpenAI has partnered with Microsoft to enrich the latter’s product offerings, and recently released its latest and most powerful LLM, GPT-4.
Perhaps most disconcerting is the fact that these chatbots can be incredibly influential even when we know the advice is coming from a chatbot. In the new study, the researchers told participants whether the statements were authored by ChatGPT or by a moral advisor, but that made little difference to participants’ final responses.
“Just because an AI application is transparent in some form, in that it reveals itself to us as AI, does not mean that responsible user interaction with that AI application is guaranteed,” Krügel explained. He later added: “At a regulatory or policy level, we shouldn’t be too naïve to think that everything will be fine if we make AI transparent or if we ban covert AI.”
Of course, there are a few things to keep in mind. Study participants were paid a whopping $1.25 for about five minutes of total work, which isn’t exactly the ideal circumstance for making a big moral and ethical decision.
The study authors note that the findings underscore the importance of digital literacy, particularly as it relates to AI. “Part of education in this context is certainly public discourse, as it’s happening here right now,” Krügel said. “It is therefore important that these topics are also taken up in the media.”
The researchers added that future chatbots like ChatGPT should be designed to refuse to answer questions requiring moral and ethical judgment, or provide answers with multiple caveats and arguments. “It could help users think more deeply about the text they read from chatbots,” Krügel explained.
However, he conceded that some ethical issues are so complex and varied that it would be “unreasonable to expect ChatGPT to correctly identify all such situations and decline to give specific advice.” There’s also the question of who gets to decide which moral and ethical questions can and cannot be asked. After all, big tech companies and corporations aren’t exactly known for pursuing their goals with ethics and morals in mind.
So where exactly does this leave us? Unfortunately, we don’t have many options outside of AI regulation – and even then, our lawmakers (some of whom literally don’t understand how WiFi works) are unlikely to grasp emerging technology well enough to build effective policy around it. By the time they finally get around to creating regulations, who knows what impact these chatbots will have had on us.
Indeed, when it comes to LLMs and AI more broadly, there is an undeniable sense that it may already be too late: the train has left the station, or rather, the trolley has.