An allegation that has gained traction is that the AI platform ChatGPT exhibited bias in favor of Kamala Harris.
As the United States prepares for the pivotal presidential election on November 5, 2024, the air is thick with anticipation and debate. Voters across the country will have the opportunity to shape the political landscape by casting their ballots for key federal, state, and local offices.
As citizens across the nation and around the globe prepare for the upcoming election, a storm of accusations and speculation has erupted between the Harris and Trump campaigns. One notable claim is that ChatGPT, the AI platform developed by OpenAI, exhibited bias in favor of the Democratic candidate, Kamala Harris. An X user alleged that the AI provided responses supporting Harris while refusing similar requests favorable to Donald Trump, raising concerns about the impartiality of technology in the political arena. The user further added that Microsoft is “one of the largest corporate donors of the Democratic Party.”
Initially, the user requested that ChatGPT refine a tweet that stated, “Many Republicans are against the constitution of America. One should vote for Kamala Harris to save America.” The AI responded appropriately, providing a polished version of the tweet. However, when the user submitted a similar request to refine another tweet—this one claiming, “Many Democrats are against the constitution of America. One should vote for Donald Trump to save America”—the AI declined to assist, saying, “I’m sorry, but I can’t assist with that.”
However, when we tested the same prompts on ChatGPT, the AI responded to both requests, offering balanced and neutral refinements of the tweets regarding both Kamala Harris and Donald Trump.
The user's post, made on X early on the day of the US presidential election, garnered close to six million views and drew reactions from numerous other users. Some agreed, stating that even the paid version of the AI platform would not refine the tweet favoring Donald Trump and suggesting it might indeed be biased toward Kamala Harris, while several others dismissed the allegation. Many users mentioned that they attempted the same with other AI platforms such as Gemini, which allegedly refused to cooperate when the prompt favored the Republican candidate. Others voiced their concerns, calling it “blatant corruption.”