OpenAI’s private-beta release of GPT-3 this July brings the largest language model to date to the NLP community. Now that the community has had a chance to play with it, we’re starting to see the model’s amazing outputs.
To catch up on this release with specific topics relevant to you, see the links below:
It’s not clear when GPT-3 will actually become available to the public. Given the debacle over whether GPT-2 was too dangerous to release, that debate is likely to intensify with this latest model. I wonder how OpenAI is deciding who gets access to the private beta, and how it will allocate that access fairly in the future. Should we trust the creators of potentially dual-use technology to make that call, or should it be delegated to a third-party organization like the Apache or Linux Foundation, or to a group doing thought-leading work on AI safety and ethics? At some point, does the release of powerful language models like this become a national security issue, subject to government export controls? These are important questions we should be asking ourselves.