ChatGPT is an artificial-intelligence (AI) chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3.5 and GPT-4 families of large language models (LLMs) and has been fine-tuned using both supervised and reinforcement learning techniques. ChatGPT was launched as a prototype on November 30, 2022. It garnered attention for its detailed responses and articulate answers across many domains of knowledge. Its uneven factual accuracy, however, has been identified as a significant drawback. Following the release of ChatGPT, OpenAI's valuation was estimated at US$29 billion in 2023.
The original release of ChatGPT was based on GPT-3.5. A version based on GPT-4, the newest OpenAI model, was released on March 14, 2023, and is available to paid subscribers on a limited basis.
ChatGPT could prove mathematical theorems, perform medical diagnoses, invent new recipes, and draft legal documents, perhaps even court judgments. This is not entirely new: expert systems are the ancestors of this technology. Creating featured pictures or videos to include in articles would certainly be useful, avoiding licensing fees and potential copyright battles. However, would the synthesized images themselves be subject to copyright? Could a news article about your recent DUI, featuring a synthesized picture of you, be challenged in court as a privacy violation? How is that different, from a legal point of view, from a hand-drawn picture of your face? Or from a synthesized, digital drawing of your face?
Some people worry that ChatGPT will increase the volume of fake news, since publishers may use it and it relies partly on Internet searches to generate content. But the opposite may well be true: ChatGPT could be better than Facebook at detecting fake news, and better than Google at search, which indeed has Google worried. Another concern is AI art, especially when it mimics work produced by human artists, be it a song or a painting. Like many technologies, it can be used for good or for ill.
The capabilities seem far superior to similar techniques developed in the past. Many people worry about seeing their jobs automated. As an employee, one should be proactive, see the writing on the wall, and have a plan B. That said, this is part of a long evolutionary process. A tool such as ThatsMathematics.com/mathgen/ can generate math research articles; some of its papers, despite consisting of mathematical gibberish, have been accepted by journals. WolframAlpha is a platform that can compute integrals and solve various math problems, even providing step-by-step solutions for school homework, and it is free.
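As a rough illustration of the kind of task WolframAlpha automates, open-source symbolic math libraries can also compute integrals programmatically. A minimal sketch in Python using sympy (the integrand here is an arbitrary example, not taken from any particular tool):

```python
# Illustrative only: symbolic integration with sympy, loosely
# analogous to asking WolframAlpha for an antiderivative.
import sympy as sp

x = sp.symbols("x")
integrand = x * sp.exp(x)

# Compute the indefinite integral of x*e^x with respect to x.
result = sp.integrate(integrand, x)
print(result)

# Sanity check: differentiating the result recovers the integrand.
assert sp.simplify(sp.diff(result, x) - integrand) == 0
```

Unlike a chatbot, such a system is built on exact symbolic rules, so its answers can be verified mechanically, as the differentiation check above shows.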
One area for improvement is getting these tools to produce less dull answers, with some personality. They also tend to lack common sense, which sometimes has serious consequences: mathematical models unaware that home prices doubling in four years are bound to collapse, or the early-Covid inference that the absence of statistics about recovered patients meant everyone ended up very sick, when quite the contrary was true. Some plane crashes have been caused by absurd autopilot behavior.
Also, adding watermarks to synthesized images makes it easier to trace the work and protect it against unauthorized copies. A similar mechanism can be used for synthesized text or sound.
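To make the watermarking idea concrete, here is a deliberately naive sketch of least-significant-bit (LSB) embedding in Python. The `pixels` array is a stand-in for raw grayscale pixel bytes; real watermarking schemes are far more robust (e.g., spread-spectrum or frequency-domain methods), so treat this only as an illustration of the principle:

```python
# Naive LSB watermarking sketch (illustration only, not a real scheme).
def embed_watermark(pixels: bytearray, mark: bytes) -> bytearray:
    """Hide `mark` in the least significant bits of a copy of `pixels`."""
    out = bytearray(pixels)
    # Unpack the watermark into individual bits, most significant first.
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(out):
        raise ValueError("image too small to hold the watermark")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the LSB of each pixel
    return out

def extract_watermark(pixels: bytearray, n_bytes: int) -> bytes:
    """Read back `n_bytes` of watermark from the LSBs of `pixels`."""
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i : i + 8]))
        for i in range(0, len(bits), 8)
    )

pixels = bytearray(range(64))              # fake 8x8 grayscale image
marked = embed_watermark(pixels, b"(c)2023")
print(extract_watermark(marked, 7))        # b'(c)2023'
```

Because only the lowest bit of each pixel changes, the marked image is visually indistinguishable from the original, yet the hidden mark survives verbatim copying, which is what makes copies traceable.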
QUESTION I: Could we call ChatGPT creative?
QUESTION II: What is the scope for improvement in ChatGPT?