Plato’s allegory of the cave is a philosophical tool used to describe, roughly, how a person can fail to grasp an idea even when they can describe it. In the allegory, people chained inside a cave can see shadows of people and objects passing in front of a light source, but cannot see the people or the objects directly; they can see the outline of an apple, but they can’t tell you what color it is, or that the apple is food. How could they, when all they see is a shadow? In fact, when they finally do see an apple for the first time, the real thing is so different from the shadow that they may not even realize what they’re looking at.
A similar concept, John Searle’s “Chinese Room”, is a thought experiment in which a person in a closed room is passed slips of paper under the door. The paper has questions on it written in Chinese characters, and the recipient doesn’t read Chinese, but they have a large book full of symbols paired together as questions and answers. When they find the characters from the slip in their book, they copy out the matching response and pass the slip back under the door. From outside it looks like a conversation, but no actual understanding has taken place. If the person in the room is supposed to have learned something from the words on the paper, they haven’t.
There are, of course, multiple ways to interpret these thought experiments, but for the purposes of this article, what matters is how they apply to education in the modern world, especially with ChatGPT and other generative AI programs.
In the cave, V = πr²h is just a string of letters, and you solve for one of them by using the other two that are there. Any time a student sees this formula, they know what the shape of the shadow looks like: it’s the formula for the volume of a cylinder. See a picture on a test with two letters labeled and one not, then solve for the one that isn’t, just like every other formula. But if they were to actually see the thing in daylight, they’d understand its value. A potter who wants to know how many ounces their mug holds, for example, could make and re-make mugs, over and over, until they got the number they wanted, or they could do a little bit of math up front and make the mugs they want, correctly, right out of the gate.
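To make that concrete, here is a minimal sketch of the potter’s “math up front.” The mug dimensions and the cubic-inches-per-ounce figure below are purely illustrative assumptions, not measurements from any real mug:

```python
import math

# Hypothetical inner dimensions of the mug, in inches (illustrative only).
radius_in = 1.75
height_in = 4.0

# Volume of a cylinder: V = pi * r^2 * h
volume_cubic_in = math.pi * radius_in ** 2 * height_in

# One US fluid ounce is roughly 1.805 cubic inches.
volume_fl_oz = volume_cubic_in / 1.805

print(f"{volume_cubic_in:.1f} cubic inches, or about {volume_fl_oz:.1f} fl oz")
```

A few lines of arithmetic stand in for an afternoon of trial and error at the wheel, which is exactly the gap between the shadow and the thing itself.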
But if a student can only grasp the shadow, they will spend time searching for answers and equations they would already have if they could just apply their knowledge the way they need to. If the systems in place were fair and focused on a deeper understanding of the subject rather than the memorization required for tests, we’d see more capable adults doing better at their jobs and hobbies, whatever those jobs or hobbies are. But they’re not. Given the number of tests a student has to complete, it would be a waste of time to try to ‘understand’ something when the shadow will do, and when that particular shadow won’t be appearing on later tests. Remember Twelfth Night, or Othello? Did you revisit those plays in school after that first semester of English class? Understanding is incidental to passing, basically by design.
The testing system we have today reduces children to numbers, and chasing those numbers discourages them from working like humans. Why practice? Why bother? ChatGPT has only made this a more stubborn problem. Ask a kid to solve a geometry problem anywhere outside of a written, in-person test (ask them how many board-feet of lumber they need for a treehouse, for example), and a lot of them flounder, because they’re used to ChatGPT, or parents, or teachers, holding their hand and steering them in the right direction. They learned to pass a test; the real applications don’t ‘click’. The equation is easy enough, but they don’t know what they’re looking at. They don’t ‘get’ word problems when the word problems aren’t labeled as word problems, so they can’t translate the shadow into the reality. This is only an extension of the larger issue: ChatGPT didn’t create this barrier to critical thought, but there is so little intentional higher-level thinking exercised in school that ChatGPT is revealing huge gaps in comprehension. Even less work needs to be done to ‘get by’ now. A very convincing chatbot can fake it well enough that the kid passes. Do some students still do the work? Of course. But the ones who were already drifting along and getting by on shadows alone are still doing that, and just hiding it better instead of failing outright.
The Chinese Room is practically already here with ChatGPT. Anecdotally, teachers are growing increasingly worried about how much students rely on new tech to do their thinking for them: the teacher asks a question, the student pastes it directly into ChatGPT, sometimes before they’ve even read it, and then hands the answer it spits out back to the teacher, again without reading it. The original thought experiment at least assumed the language had to be foreign before the responses would be incomprehensible to the person in the room. That’s no longer the case. You cannot make someone comprehend something, and you can’t comprehend it for them. At some point, the easier path is giving up and facing your parents with bad grades, because you genuinely don’t understand the concepts and don’t know how to learn to understand them.
Is AI the cause of these problems? Probably not. But it’s not helping, either. It makes it easy to hide. Kids will remain trapped in the room, or the cave, well into adulthood. The world around them won’t be something they can measure, because they won’t know what the numbers should be. Things like researching a trip ahead of time, or accurately reading a nutrition label, will become skills instead of a baseline.
Interested in having us manage your business’s IT? Book a call here: ElixisTechnology.com