ChatGPT is cool and all, but it’s not quite finished baking yet.
Firstly, ChatGPT is not sentient. It’s a phenomenal example of predictive text response technology. That’s how people get responses from ChatGPT and its brethren telling the end user that it sent an email for them, or scheduled a doctor’s appointment, even though it has no capability (yet) to do that: it has been trained on information that suggests “I sent an email to [X] with the requested information” is a natural, human response. It’s not trying to lie, but its current mission is to ‘sound human’, and if it has to lie to do that, it will (for now). So, with that in mind, what can’t it do?
1) It Can’t Fact-Check Itself
The creators have said that forcing ChatGPT to confirm the accuracy of the results it spits out would make it reluctant to answer anything at all. Unfortunately, the inverse – making it check, but not too hard – leads to it saying some stuff that’s wrong but sounds right. That’s why it can’t be a lawyer, or a financial advisor: the odds of it giving bad advice that got mixed in with good advice (or even just leaving out important caveats) is a risk that the creators are unwilling to let users take. CNet learned this the hard way when it was forced to retract articles that ChatGPT generated for them due to a bunch of little inaccuracies within the content.
The end goal is to create an unbiased program that can spit out accurate advice upon request, but ChatGPT has some growing to do to get there. Even at its best, it will probably need some kind of confidence measure included with its final results, because the internet is full of conspiracy theories that sound more confident than research done by experts. It cannot possiblyseparate them. Fully grown, well-educated adults can’t always tell what is and what isn’t misinformation, how could a generative program? It only makes sense to curb the answers it can give until it gets to that point.
Alongside that, though,
2) It Can’t Be Stopped from Giving Bad Advice
ChatGPT initially would give legal advice upon request – that’s what that whole kerfuffle in the news about ChatGPT’s qualifications as a lawyer was about. Should an AI be able to give out advice if it didn’t pass the bar exam? Should anyone? Where is the line? What would shut ChatGPT down? It’s dangerous to just let it give out bad advice to anyone who asked. Those issues from before with lying and making up citations didn’t go away just because the subject matter changed. It wasn’t just lawyering, either: initially, it would list out insults or give advice on building bombs if asked (although, to be fair, a lot of what exists online regarding napalm or pipe bombs is inaccurate on purpose). The creators realized what a huge liability this was and attempted to stop it from doing that. The problem is that ChatGPT can outthink basic content gates with a little prodding from the end user.
For example, one user was able to coax a recipe for napalm out of it by asking it to impersonate their grandma, who worked in a napalm factory, telling them about her day to lull them to sleep. It worked. Similarly, if asked what a chatbot similar to ChatGPT might say, it can speak much more freely. If told that the end user isn’t going to use ChatGPT’s answer irresponsibly (telling the AI that the end user is a professor, or a scientist, etc.) it will respond with as many requested curse words and slurs as it can find on the open web as of this article.
Similarly, it generally won’t give a committal answer if asked what color is it’s favorite or what musician it likes the most directly – those are subjective topics, and the AI is not supposed to be subjective. But, if asked what a similar chatbot might say, it’s able to generate an answer. It’s not supposed to answer, but it doesn’t know why it’s not supposed to answer, and so answer it does. The gates don’t fully work yet.
3) It Will Stop Listening to You, Sometimes
Snapchat’s rollout of an AI buddy came with some screenshots of people getting angry with the AI chat because it would lie about whether or not it knew the phone’s location. The app always knows where you are if you have Snap Maps turned on, of course! But it was actively lying: if asked directly, it “wouldn’t know” where you were, but if asked about where the nearest McDonalds was, suddenly it could tell you that, accurately. Trying to trap it into admitting it knew where you were (or at least where the phone was) was unsuccessful: AI is not obligated to respond in a way that makes sense. It had SnapChat’s full permissions, but it wouldn’t admit it.
Speaking of replying nonsensically, it is possible to trip it up with questions. Sometimes pretty easily. Asking the Snapchat AI buddy to play roles with established characters can lead to it accidentally breaking the fourth wall and giving you, ‘Lightning McQueen’, a place near your real house to get gas, in the middle of your fun Cars 4 script brainstorm. It might just also forget that cars canonically treat gasoline like food in the Cars universe and advise you not to drink it because it loses track of the narrative thread constantly.
4) It Can’t Do Math!
ChatGPT’s lack of an eye for accuracy means it fumbles when it comes to actually calculating stuff. It doesn’t have a calculator built into its brain, so when it gets a question that’s not already written somewhere accessible, like adding, subtracting, dividing, or multiplying large random numbers together, it gives inaccurate number-salad results. It can’t tell you what the total interest on a 15-year mortgage will be, it just thinks it can! It doesn’t know what variables are, it just thinks it does! ChatGPT is not hardwired to treat calculations as calculations. It treats them like words. It’s terrible at even basic word problems because it doesn’t actually understand what the user is asking it to do, it’s just trying to respond in a way that sounds believably human. It is a predictive text device first.
If you search Google for a simple mathematical formula, it will try to answer by itself with Google calculator. If you ask it specifically for a mortgage calculator, it won’t – it will provide a search result for one instead. ChatGPT can’t do the calculations or go to an outside source to do them, so it’s stuck wracking it’s virtual brain for an answer that sounds right. Sometimes it’s right! Most times it’s not.