Michał Sapka's website

ML Is Still a Parlor Trick

Nabil Alouani:

Imagine you lock a newborn child alone inside a library. Let’s call him Loki, and suppose he doesn’t need food, water, sleep, or love. You have Loki watch thousands of books all day, every single day, for 20 years non-stop. You don’t teach him anything about grammar, and you never explain what English words mean.

Now imagine you come back two decades later and, under the library’s locked door, you slip a piece of paper that says, “Hello Loki, what’s your favorite color?”

Do you expect Loki to understand your question?

Loki may recall your question from one of the dialogues he’d previously seen. Remember, Loki doesn’t read words; he sees them the same way you see Japanese/Arabic/Hebrew characters without being able to tell what they mean.

I’m glad I’m not the only one who considers large language models to be nothing more than a parlor trick. Judging by the front page of Hacker News, though, you’d think we had reached the next stage of evolution at the very least.