This month, the California-based AI company OpenAI announced the final release of its GPT-2 model, whose full release had previously been withheld as too dangerous.

The trained GPT-2 model is capable of producing alarmingly human-like text from just a small sample prompt. Cornell University researchers assigned the released version a credibility score of 6.91 out of 10, which makes its output remarkably hard to distinguish from the product of human creativity.
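For a sense of how the model completes a prompt in practice, here is a minimal sketch using the third-party Hugging Face transformers wrapper rather than OpenAI's original TensorFlow release (github.com/openai/gpt-2); the prompt text and sampling settings are illustrative, not OpenAI's:

```python
# Minimal sketch: completing a short prompt with the released GPT-2 weights.
# Uses the third-party Hugging Face "transformers" wrapper, not OpenAI's
# original TensorFlow code; the prompt below is an arbitrary example.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # "gpt2-xl" is the full 1.5B-parameter release
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Scientists have discovered", return_tensors="pt")

# Sample a continuation token by token from the prompt.
output = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Swapping in the `gpt2-xl` checkpoint loads the full-capacity version discussed in this article; the smaller `gpt2` default corresponds to the earlier staged releases.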

In February, OpenAI decided to postpone the full release “due to concerns about malicious applications of the technology” and published “a much smaller model” so researchers could test the system. Since then the company has released progressively larger models, culminating in the full-capacity version.

The Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) has warned that the algorithm could be used to generate “synthetic propaganda”. The model could also be put to work on good old spam, or simply erode trust in written text altogether.

OpenAI’s developers admit that “content-based detection” of AI-written text remains a challenge, and they predict it will only grow harder. The company is inviting researchers to join in developing countermeasures against misuse of the technology, both technical and non-technical, so that AI remains “beneficial” to humanity.
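One such countermeasure OpenAI published alongside the model is a detector: a RoBERTa classifier fine-tuned to flag GPT-2 output. Below is a minimal sketch of running it, assuming the checkpoint name as it is hosted on the Hugging Face hub:

```python
# Minimal sketch of content-based detection using the GPT-2 output detector
# OpenAI released with the model (a fine-tuned RoBERTa classifier).
# The model name below is an assumption about how the checkpoint is
# currently hosted on the Hugging Face hub.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

# The classifier labels text "Real" (human-written) or "Fake"
# (model-generated) with a confidence score; as the developers note,
# such detection is unreliable, especially on short snippets.
print(detector("The quick brown fox jumps over the lazy dog."))
```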

Amid wider concerns over AI-created video, audio, photos, and now text, researchers hope the public will become “more skeptical” of what it sees on the internet and social media.

The developers also highlighted that the model, having been trained on human-produced text, has inherited the biases of that text, including gender, racial, and religious biases. OpenAI has released the code so that other developers can contribute to making the system less biased.