
ChatGPT4 knocks the socks off ChatGPTv3

2 Minute Read

ChatGPT4, what's the real difference...

ChatGPT… by now you’ve probably tried it out, and if not, you’ve definitely heard about it.

When it comes to version 3, 3.5, 3.5 turbo, and version 4… there have been a few, right? So what's the real difference?

From a user perspective, you've got a load of additional functionality, from multimodal recognition to enhanced reasoning and fewer hallucinations!

In plain terms: it can recognise images as well as text, it's better at giving answers, and it doesn't screw up as much.

 

But how does this translate to a business use case?

If you’re familiar with us, you’ll know we’ve been advising a telephony and contact centre partner on using ChatGPT to create a telephony experience like nothing else out there!

We helped with the interaction between their platform and ChatGPT – making sure we were engineering prompts and setting ChatGPT parameters to get the best result.

With the ChatGPT 3 models, this was fun, but a challenge! The question and all that data had to go into one single prompt… so it was easy for ChatGPT to get confused and give us some weird responses!

And because, through the API, the interaction isn't really a conversation – there's no ongoing memory, no context – every prompt had to be carefully engineered, including rules and examples.
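To make that concrete, here's a minimal sketch of that single-prompt pattern. The function name and content strings are illustrative, not the client's real prompt – the point is just that data, rules, and question all have to travel together in one block of text:

```python
def build_completion_prompt(org_data: str, rules: str, question: str) -> str:
    """Everything - background data, behavioural rules, and the caller's
    question - goes into one block of text, because the completion-style
    API keeps no conversation state between requests."""
    return (
        "You are a contact-centre assistant.\n\n"
        f"Organisation details:\n{org_data}\n\n"
        f"Rules you must follow:\n{rules}\n\n"
        f"Caller's question: {question}\n"
        "Answer:"
    )

prompt = build_completion_prompt(
    org_data="Opening hours: 9am-5pm, Mon-Fri.",
    rules="Answer in one sentence. Never invent information.",
    question="What time do you open on Tuesday?",
)
print(prompt)
```

The longer that one block gets, the more room there is for the model to muddle rules with data – which is exactly the confusion described above.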

 

But with ChatGPT 4, things changed.

The introduction of the User, Assistant, and System message roles in the chat-style API meant we could break things up and simplify our prompts. This not only gave us better responses but saved our client money too.

We went from one massive prompt – which included details about the users' organisations, the rules, and the question – to far greater clarity of prompt engineering for the AI model. The System element carried the data, the User element carried the rules and a simple question, and the Assistant element carried the model's replies.
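Here's a sketch of the same request split across those roles. The helper name and the content strings are illustrative assumptions, but the role structure is the standard Chat Completions message format:

```python
def build_chat_messages(org_data: str, rules: str, question: str) -> list[dict]:
    """Split the request across chat roles instead of one giant prompt:
    background data in the system message, rules and the question in the
    user message. The assistant role is where the model's replies come back."""
    return [
        {"role": "system", "content": f"Organisation details:\n{org_data}"},
        {"role": "user", "content": f"Rules: {rules}\n\nQuestion: {question}"},
    ]

messages = build_chat_messages(
    org_data="Opening hours: 9am-5pm, Mon-Fri.",
    rules="Answer in one sentence. Never invent information.",
    question="What time do you open on Tuesday?",
)
# This list would then be passed as the `messages` parameter to the
# Chat Completions endpoint (e.g. with model="gpt-4").
for m in messages:
    print(m["role"])
```

Because each role has a clear job, the model is far less likely to muddle the rules with the data – and shorter, cleaner messages mean fewer tokens per call.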

Now, measuring success...well, it can be a little tricky. It's not just about facts and figures. It's about style, tone, and personality. And we all know that good writing is subjective!

 

But based on our parameters, the results were astounding...

ChatGPT 3 produced good responses around 85% of the time, while v4 is well in excess of 96%. That also came with an almost total eradication of nonsensical responses ('hallucinations'), and a massive improvement in persona quality!

Some of this is down to OpenAI, some to the three message roles I mentioned, and some to the prompt-engineering expertise we've built up along the way.

But either way, if there was a battle of the AI…ChatGPT4 is a clear winner!

 

If you need advice on how ChatGPT could help you, please get in touch.

 

Written by:

The Consultancy Team

 


 
