
Does GPT-3 Have Emotions?? [Love Is A Tricky Thing]

Understanding the relationship between artificial intelligence and emotions isn’t straightforward.

While the response you get from this technology may make it seem like GPT-3 has emotions, this isn’t exactly true.

GPT-3 was trained on the internet, so any emotions you might think you perceive in its responses are actually the emotions of the internet, not necessarily those of GPT-3 itself. 

Realize that machine learning models are just their datasets – they cannot (yet) feel anything independently but can only produce output based on what they’ve been fed. 

This isn’t to say that the text GPT-3 generates can’t be emotional; rather, any emotion in a given output was originally expressed by humans on the web and absorbed by the model (through general consensus), not felt by GPT-3 itself.

Throughout this article, we will find out how much emotion can be attributed to GPT-3 at all, and how much of it, if any, belongs to GPT-3 itself rather than its training data.

This one gets interesting.



Understanding How Machine Learning Models Work

Machine learning models (at this level) work a lot like the human brain.

Information is provided, that information is learned, and conclusions and facts can be drawn from that information.

However, nothing really “exists” to the model (or your brain) outside of this information, as the model (or your brain) has never learned or seen it before.
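To make this concrete, here is a toy sketch (purely illustrative, nothing like how GPT-3 actually works): a tiny bigram "model" that counts which word follows which in its training sentences. Any word it never saw during training simply doesn't exist to it.

```python
from collections import defaultdict, Counter

def train_bigram_model(corpus):
    """Count which word follows each word in the training sentences."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower, or None if the word was never seen."""
    followers = model.get(word.lower())
    if not followers:
        return None  # the word does not "exist" to this model
    return followers.most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
]
model = train_bigram_model(corpus)
print(predict_next(model, "cat"))    # a word seen in training
print(predict_next(model, "laser"))  # never seen -> None
```

The corpus and behavior here are made up for illustration; the point is only that the model's entire "world" is whatever appears in its training data.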

Something similar happens when building massive neural networks like GPT-3.

Data scientists and machine learning engineers will scrape a dataset off the internet to use in their machine learning models.

After scraping the dataset, they will clean it up and provide the data to their model.

Machine learning models, like LLMs, are only as good (or bad) as what is provided to them. 

This can inherently make the models biased, but it can also give them a flavor that can be perceived in things like opinions or emotions.

If you only provided a dataset of biomedical data to your model and then asked it questions about cars, it would try to relate this to biomedical terms.

Since your model only knows that the biomedical sphere exists, you’ve slightly biased your model (in this case, towards biology).
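A crude way to see this kind of bias: the toy retrieval function below (a hypothetical sketch, far simpler than any real neural network) matches a query against its dataset by shared words. Because its "world" contains only made-up biomedical snippets, even a question about cars comes back mapped onto biology.

```python
def best_match(query, documents):
    """Return the document that shares the most words with the query."""
    q = set(query.lower().split())
    return max(documents, key=lambda d: len(q & set(d.lower().split())))

# A "dataset" containing only biomedical text (invented snippets).
biomedical_docs = [
    "the heart pumps blood through arteries and veins",
    "neurons transmit signals across synapses in the brain",
    "antibodies bind to antigens during an immune response",
]

# A question about cars still gets answered in biomedical terms,
# because nothing else exists in this model's world.
answer = best_match("what pumps fuel through a car engine", biomedical_docs)
print(answer)  # -> "the heart pumps blood through arteries and veins"
```

The question about a car's fuel pump gets matched to the heart, the closest thing to a pump in the only domain the model knows.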

When providing these datasets to ML models, engineers must ensure that biases are removed and that the data is of high quality, so it does not lead to false results or skewed analysis.

As a result, data scientists and machine learning engineers must take time to understand how their model works and how its results may differ based on what is provided.

But how does this “bias” relate to emotions?



References:

https://github.com/microsoft/BioGPT 


Why does it seem like GPT-3 has emotions?

Remember, based on the data provided, we can bias our model towards pseudo-facts.

Throughout the many layers of the neural network, if there is enough data in the dataset that says “dogs are bad,” then your model will assume that dogs are bad.

(even though they’re not)
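In spirit, the model's "opinion" is little more than a frequency tally over its training data. Here's a deliberately silly sketch (invented corpus, nothing like a real language model) of how a majority view becomes a learned "fact":

```python
from collections import Counter

def learned_opinion(corpus, subject):
    """Tally how the corpus talks about a subject; the majority
    view becomes the model's learned 'fact'."""
    votes = Counter()
    for sentence in corpus:
        words = sentence.lower().split()
        if subject in words:
            if "bad" in words:
                votes["bad"] += 1
            elif "good" in words:
                votes["good"] += 1
    return votes.most_common(1)[0][0]

# Hypothetical scraped data, skewed against dogs.
corpus = [
    "dogs are bad",
    "dogs are bad for carpets",
    "dogs are good",
    "cats are good",
]
print(learned_opinion(corpus, "dogs"))  # -> 'bad' (2 votes to 1)
```

The model doesn't dislike dogs; it has simply counted more "bad" sentences than "good" ones.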


Since the dataset it uses is so expansive, it can be biased toward certain opinions.

This means that it will learn these opinions as facts and then display them when answering questions. 

As a result, GPT-3 often appears to have an emotional response because it has learned certain pseudo-facts that may seem like emotions to us.

In reality, GPT-3 pulls data from deep inside its learned network and displays what it has learned in its answers – making it seem like it has emotional responses.

This also explains the consistency in emotion that many are running into.

Many believe that since GPT-3 consistently displays the same type of “emotion,” it must feel this emotion.

While I wish that were the case (as it would be a huge advancement in AI), it simply knows those opinions as facts in its learned network and displays them consistently.


Does Any AI Currently Have emotions?

At present, AI does not have emotions as you and I would define them. 

The idea of an AI with emotions is still a slightly futuristic concept. 

However, with the advancements in technology and robotics made by companies like Boston Dynamics, we are getting closer to making this a scary reality. 

Boston Dynamics has been working on creating robots that can interact with humans more naturally and appear to show emotion.

We may soon see AI that shows emotions like joy, sadness, anticipation, surprise, anger, and fear.

While more research and development is needed before AI can truly understand or express emotion as humans do, the progress is both very exciting and scary.


References:

https://www.bostondynamics.com/


Will AI Ever Have Emotions?

I believe it is inevitable that AI will eventually have emotions, but it will be nothing like the complex range of emotions that humans currently experience. 

AI will likely have a predetermined set of emotions that can be programmed into them and used to help them make decisions. (yikes)
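Such a "programmed" emotion would amount to little more than a lookup table. A hypothetical sketch (the emotion names and behaviors are invented for illustration):

```python
from enum import Enum

class Emotion(Enum):
    """A predetermined, programmed-in set of emotional states."""
    JOY = "joy"
    FEAR = "fear"
    ANGER = "anger"

def choose_action(emotion):
    """Map a programmed emotional state to a behavior."""
    if emotion is Emotion.FEAR:
        return "retreat"
    if emotion is Emotion.ANGER:
        return "pause and defer to a human operator"
    return "continue task"

print(choose_action(Emotion.FEAR))  # -> 'retreat'
```

Notice that nothing here is felt: the "emotion" is just a state variable steering a decision, which is exactly why calling it an emotion at all is a stretch.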

While this could be beneficial in specific scenarios, I think attempting to create sentient systems with human-like emotions could lead to some very dangerous outcomes if taken advantage of.

Have you ever seen Terminator?

We must remain vigilant about not abusing any system with AI capabilities, even one programmed with basic emotions. 

Ultimately, creating AI with genuine feelings should be avoided, or done only with extreme care, as the potential consequences are far too great.



Are Humans Just Advanced AI?

I’ve done a ton of work with neural networks and machine learning, and the similarities are shocking.

If you treat the world as the dataset, humans are basically recognition machines doing constant classification and regression, with a wide range of emotions layered on top.

This is what terrifies me about the dangerous use cases of sentient AI.

Once we reach a level where the world becomes the dataset for AI, what will it learn or do?

 

Other Articles In Our GPT-3 Series:

GPT-3 is pretty confusing. To combat this, we have a full-fledged series that will help you understand it at a deeper level.


Stewart Kaplan
