
OpenAI Removes Ban on Military Use and Signs a Controversial Deal

OpenAI, the maker of the ChatGPT chatbot, has removed the clause forbidding military use from the usage policies page of its website. The move sparked shock among those who worry about unrestricted uses of AI.

OpenAI's Removal of the Ban on Military Use Sparks Controversy

On Tuesday, January 16th, Bloomberg reported that the U.S. Defense Department and OpenAI, one of the nation’s leading AI companies, signed a significant deal.

It comes after The Intercept reported last week that OpenAI had quietly removed mentions of a ban on “military and warfare” from its “usage policies.” Among the changes, the revised webpage states that the policies are intended to be “clearer” and “more readable.”

The company’s spokesperson, Niko Felix, told The Intercept, “Using our technology, including by the military, to develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system, is disallowed.”

Speaking with Bloomberg at the World Economic Forum in Davos, the company’s vice president of global affairs, Anna Makanju, discussed the company’s newfound willingness to work with the military. “Because we previously had what was essentially a blanket prohibition on the military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world,” she said, as quoted by Bloomberg.

The company has come under fire for its decision to remove the ban on military use, which has sparked a heated debate within the AI and technology community and beyond. Many argue that OpenAI’s decision undermines its commitment to ethical development and could contribute to the misuse of AI for military purposes. Critics contend that OpenAI’s systems could be used in applications such as targeting or facial recognition, raising human rights and privacy concerns.

In addition, ChatGPT has repeatedly made headlines over reports that nation-state threat actors have misused it to support cyber attacks.

OpenAI’s Response to the Removal of Ban on Military Use

OpenAI’s Justification:

OpenAI has stood by the removal of the military-use language from its usage policies. The backlash appears to have caught the company off guard; defending the change, however, officials said, “Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission.”

They added, “For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions,” according to a Fox News report.

OpenAI Signs a Major Deal with U.S. Department of Defense/Pentagon

Despite the controversy and concerns, the company has defended its decision to remove the ban on military use and sign a deal with the Pentagon. The organization argues that it is necessary to ensure the safety and development of AI technologies and that collaborating with government agencies can help promote responsible AI development.

The firm likewise insists that its AI systems will be used only for non-offensive purposes, such as logistics and data analysis.

Moreover, the Pentagon deal is expected to be followed by others. These developments have led critics to question exactly what the company is up to and to whom it is answerable. Could the Pentagon deal presage a scenario in which AI technologies developed by OpenAI end up in the hands of the US Department of Defense and are deployed in offensive warfare?

Still, many remain sceptical of the company’s arguments. They note that the company has disclosed little about the deal and has lifted the ban on use in warfare, which raises the spectre of the technology’s misuse. They also point out that the company should be about ‘developing AI to benefit humanity’, not entering into partnerships that undermine it.

All things considered, OpenAI’s move to drop a ban on military work and sign a contract with the Pentagon has sparked a bitter discussion within the AI community and the research community at large. It has highlighted just how contentious the development of ethical AI can be, as well as the significant dangers of applying AI technologies to military ends. In this regard, it is crucial for organisations to work towards ethically grounded decision-making and foster meaningful dialogues to ensure that AI technologies are developed and deployed in a socially and culturally responsible way.

For more cybersecurity news and updates, follow us on Cybersecurity – The SOC Labs.


Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The SOC Labs assumes no liability for the accuracy or consequences of using this information.

