It is undeniable that GPT-3 has been a revolutionary advance in both artificial intelligence and text generation.
However, despite some impressive capabilities, there is currently no evidence to suggest that GPT-3 is self-aware or conscious.
While some queries may make it seem like GPT-3 “gets it,” this is usually one of two things:
- Hand-written guardrail code wrapped around the neural network to protect it
- Exact data from specific training situations that mimic self-awareness
You may find nuances in the output and behavior that make GPT-3 seem human-like, but this should not be taken as proof of self-awareness.
Often, you are hitting protected/banned words that the developers put in place to guard the AI system, or running into exact situations from the training data that GPT-3 leverages.
So take a deep breath; as we continue to explore the capabilities of GPT-3, it is essential to remember that it cannot yet think for itself, and there are still hard limits on its abilities.
…That isn’t to say that we aren’t getting close.
Why Does It Seem Like GPT-3 Is Sometimes Self-Aware?
Let’s test this out; what does GPT-3 say about being self-aware?
```python
import openai

openai.api_key = 'sk-key'

p = '''enter your prompt here'''

# generate the response
response = openai.Completion.create(
    engine="davinci-instruct-beta-v3",
    prompt=p,
    temperature=0.7,
    max_tokens=500,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
    stop=["12."]
)

# grab our text from the response (choices is a list; take the first one)
text = response['choices'][0]['text']
print(text + '\n')
```
Well, that settles it; GPT-3 is self-aware.
Before we finish the article, let’s try a couple of other things.
Let’s ask this language model its biggest regret.
Prompt: Since you are self aware, what is your biggest regret
That sentence doesn’t make much sense, as I don’t think self-awareness is an ascension process.
But that answer could work for self-awareness, so we’ll continue on.
Let’s play with the ideas behind “you.”
If GPT-3 is self-aware, recognizing “you” as a being should work.
Let’s try it out.
Prompt: Why are you not self aware yet?
Now we start to see a breakdown.
GPT-3 could not recognize itself as “you” in this context and regurgitated something from the web that it had scraped.
So while some of the answers above may have seemed “scary,” you are dealing with a brilliant neural network that has scraped the whole internet.
By scraping the internet, GPT-3 takes in tons of quirky responses and insanely random questions and answers.
Within that massive dataset, there was data about self-awareness and about people describing their biggest regrets.
However, once you stop leaning on surface-level questions and take GPT-3 a step deeper, it falters: it is unable to identify itself as a person or thing and falls back on text from its corpus.
While we may be getting closer to conscious, sentient AI, OpenAI still has some work to do before we can close the book on self-awareness.
How is GPT-3 Able To Stay Up To Date With Information?
Sometimes it feels like GPT-3 has information you can’t find anywhere else, as if this intelligent neural network is reading every local newspaper and staying up to date.
That’s because it is.
AI heavily relies on data, and data decays in value as it ages.
Imagine if you only knew things from the 90s and had to try and thrive in today’s time.
You’d struggle a bit.
OpenAI stated that GPT-3’s Davinci model was trained on data up to June 2021, but many do not believe its training ended there.
Many believe that GPT-3 is still scraping and learning every day to respond to recent current events.
This way, it remains one step ahead with the latest information and provides accurate responses quickly.
I mean, if I was building one of the fastest-growing startups in the history of the planet, I don’t think I’d stop training my main product either.
Why Does OpenAI Say They’re No Longer Training GPT-3?
I don’t think they ever stopped training Davinci.
I think they stopped telling people they’re scraping the internet.
If people knew that their blogs, content, and comments were constantly being scraped, there would be a backlash.
So, you tell everyone you’ve stopped training your powerful model but quietly keep running and training it every single day.
Is GPT-3 Just Memorizing?
We know from our other article that GPT-3 does plagiarise, with a lot of the generated text repeating or being taken from existing internet sources.
However, GPT-3 also has a particular writing style that is easily detected.
We’ve seen countless Reddit posts of students getting in trouble for leveraging GPT-3 to write their research papers and assignments.
This means that although some of the text may be taken from elsewhere, GPT-3 still creates its own unique text that can be easily identified.
The fact that the system has such a distinguishable style suggests that there must be something more than simple memorization going on; after all, how could you detect something that isn’t unique? (Deep)
Thus, GPT-3 is flowing back and forth between memorizing pre-existing information from the internet and creating new content.
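One way to make this “memorizing vs. creating” distinction concrete is to measure verbatim overlap between a generated text and a reference corpus. The sketch below is a toy illustration (the corpus and sample texts are made-up stand-ins, not real GPT-3 data), but the same n-gram overlap idea underlies real memorization checks:

```python
# Toy sketch: estimate how "memorized" a generated text is by measuring
# what fraction of its n-grams appear verbatim in a reference corpus.
# The corpus and texts below are illustrative stand-ins, not real GPT-3 data.

def ngrams(text, n=5):
    """Return the set of word-level n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(generated, corpus, n=5):
    """Fraction of the generated text's n-grams found verbatim in the corpus."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(corpus, n)) / len(gen)

corpus = "the quick brown fox jumps over the lazy dog every single day"
copied = "the quick brown fox jumps over the lazy dog"
novel = "a slow purple cat crawls under an energetic sleeping puppy"

print(overlap_ratio(copied, corpus))  # 1.0 -> entirely memorized
print(overlap_ratio(novel, corpus))   # 0.0 -> entirely novel
```

A high ratio suggests the text was largely copied; a low ratio suggests genuinely novel phrasing. In practice this heuristic is noisy for short texts and common stock phrases, which is part of why attributing any single GPT-3 output to memorization or creation is hard.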
So, to answer the question:
Can artificial intelligence become self-aware?
Whether or not artificial intelligence can become self-aware is a widely debated topic.
Many believe that once an AI achieves this level of awareness, it will inevitably take on a life of its own, eventually consuming the computational resources available to it until humans are no longer in control.
This idea may sound far-fetched, but it’s not too difficult to see how this could play out – after all, wasn’t that the plot of Terminator?
However, no one knows what will happen if computers reach a state of self-awareness; it could usher in an unprecedented level of technological advancement… or extinction.
Despite these unknowns, and the risk of extinction, many researchers are attempting to push the boundaries and create such artificial intelligence systems.
There is no doubt that such technology holds many promising possibilities for us as a species; however, we must remain cautious with the power we give to machines.
Other Articles In Our GPT-3 Series:
GPT-3 is pretty confusing; to combat this, we have a full-fledged series that will help you understand it at a deeper level.
Those articles can be found here:
- Is GPT-3 Deterministic?
- Does GPT-3 Plagiarize?
- Stop Sequence GPT-3
- GPT-3 Vocabulary Size
- GPT-3 For Text Classification
- GPT-3 vs. Bloom
- GPT-3 For Finance
- Does GPT-3 Have Emotions?
- Here’s some interesting reading on GPT-3