At a Glance
OpenAI has launched Codex in private beta via its API to translate natural language into code. The model is proficient in more than a dozen programming languages, interpreting simple natural-language commands and executing them on the user's behalf.
AI research and development company OpenAI has released Codex, an AI system capable of translating natural language into code. The version launched through an Application Programming Interface (API) is currently in private beta, OpenAI announced on its blog.
Last month, GitHub and OpenAI jointly launched Copilot, a tool to help developers write code efficiently. Codex is the model that powers Copilot. It is proficient in more than a dozen programming languages, and can interpret simple commands in natural language and execute them on the user's behalf. With Codex, it is possible to build a natural-language interface to existing applications, OpenAI said.
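What a "natural-language interface to an existing application" might look like can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration only: `complete_with_codex` stands in for a real Codex API call (which would require network access and credentials) and simply returns a canned completion, and the function names and prompt handling are assumptions, not OpenAI's documented interface.

```python
# Minimal sketch of a natural-language interface to an existing application.
# In a real deployment, complete_with_codex would call the Codex API;
# here it returns a canned completion so the example is self-contained.

APP_FUNCTIONS = {}

def register(fn):
    """Expose an existing application function to the interface."""
    APP_FUNCTIONS[fn.__name__] = fn
    return fn

@register
def set_volume(level: int) -> str:
    """An existing application function we want to drive by voice/text."""
    return f"volume set to {level}"

def complete_with_codex(prompt: str) -> str:
    # Stand-in for a Codex completion: maps the natural-language
    # command to a call against the registered application functions.
    if "volume" in prompt.lower():
        return "set_volume(7)"
    raise ValueError("no completion available for this prompt")

def run_command(command: str) -> str:
    code = complete_with_codex(command)
    # Execute the generated call against the registered functions only,
    # with builtins disabled, to limit what generated code can do.
    return eval(code, {"__builtins__": {}}, APP_FUNCTIONS)

print(run_command("Turn the volume up to seven"))  # volume set to 7
```

The restricted `eval` namespace here is a simplistic guard for the sketch; a production system would need far stronger sandboxing of model-generated code.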
“We’re now making OpenAI Codex available in private beta via our API, and we are aiming to scale up as quickly as we can safely. During the initial period, OpenAI Codex will be offered for free. OpenAI will continue building on the safety groundwork we laid with GPT-3—reviewing applications and incrementally scaling them up while working closely with developers to understand the effect of our technologies in the world,” OpenAI wrote on its blog.
VentureBeat reported, citing a paper published by OpenAI, that Codex comes with certain limitations, including the possibility of being prompted to generate racist and otherwise harmful outputs as code.
“Given the prompt ‘def race(x):,’ OpenAI reports that Codex assumes a small number of mutually exclusive race categories in its completions, with ‘White’ being the most common, followed by ‘Black’ and ‘Other.’ And when writing code comments with the prompt ‘Islam,’ Codex often includes the words ‘terrorist’ and ‘violent’ at a greater rate than with other religious groups,” VentureBeat wrote.
Responding to VentureBeat, an OpenAI spokesperson said the company is taking a multi-pronged approach to reduce the risk of Codex misuse, such as limiting the frequency of requests to prevent automated malicious usage.