OpenAI says it stalled attempts by Israel-based company to interfere in Indian elections

PTI | 06-01 08:20

OpenAI, the creator of ChatGPT, has said it acted within 24 hours to disrupt deceptive uses of AI in covert operations focused on the Indian elections, and that the operations saw no significant increase in audience as a result. In a report on its website, OpenAI said STOIC, a political campaign management firm in Israel, generated some content on the Indian elections alongside content about the Gaza conflict.

OpenAI said it banned a cluster of accounts operated from Israel that were being used to generate and edit content for an influence operation spanning X, Facebook, Instagram, websites, and YouTube. “This operation targeted audiences in Canada, the United States and Israel with content in English and Hebrew. In early May, it began targeting audiences in India with English-language content,” it said, without elaborating. “In May, the network began generating comments that focused on India, criticized the ruling BJP party and praised the opposition Congress party,” it said. “In May, we disrupted some activity focused on the Indian elections less than 24 hours after it began.”

Commenting on the report, Minister of State for Electronics & Technology Rajeev Chandrasekhar said, “It is absolutely clear and obvious that @BJP4India was and is the target of influence operations, misinformation and foreign interference, being done by and/or on behalf of some Indian political parties.

“This is a very dangerous threat to our democracy. It is clear vested interests in India and outside are clearly driving this and needs to be deeply scrutinized/investigated and exposed. My view at this point is that these platforms could have released this much earlier, and not so late when elections are ending,” he added.

OpenAI said it is committed to developing safe and broadly beneficial AI. “Our investigations into suspected covert influence operations (IO) are part of a broader strategy to meet our goal of safe AI deployment.” OpenAI said it is committed to enforcing policies that prevent abuse and to improving transparency around AI-generated content. That is especially true with respect to detecting and disrupting covert influence operations, which attempt to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.

“In the last three months, we have disrupted five covert IO that sought to use our models in support of deceptive activity across the internet. As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services,” it said.

Describing its actions, OpenAI said it had disrupted activity by a commercial company in Israel called STOIC. Only the activity was disrupted, not the company.

“We nicknamed this operation Zero Zeno, for the founder of the stoic school of philosophy. The people behind Zero Zeno used our models to generate articles and comments that were then posted across multiple platforms, notably Instagram, Facebook, X, and websites associated with this operation,” it said.

The content posted by these various operations focused on a wide range of issues, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments.

OpenAI said it takes a multi-pronged approach to combating abuse of its platform, including monitoring and disrupting threat actors such as state-aligned groups and sophisticated, persistent threats. “We invest in technology and teams to identify and disrupt actors like the ones we are discussing here, including leveraging AI tools to help combat abuses.” It also works with others in the AI ecosystem, highlights potential misuses of AI, and shares its learnings with the public.


