It’s not like human beings don’t make stuff up. We do it all the time. In fact, one of my LinkedIn core competencies is the ability to stitch together 4 threads of fact with 100 threads of creative fabrication. Eyewitnesses regularly make stuff up under oath. Godmen regularly claim to confabulate with the divine. Children make up excuses with hilariously cute incompetence. Maybe we are also probabilistic auto-complete machines powered by wetware instead of software?
https://krishashok.me/2023/03/13/the-butter-crypto-nft-project/
There are a few people in the world who have the ability to depress you with their all-round awesomeness, and if you ask me, Krish Ashok is near the top of the list. Excellent at everything he does, including being the author of a most kick-ass blog. (He could be awesomer by updating his blog more often, but such is human nature – it always wants a little bit more.) Anyway, please go read the whole post later – it would be a most excellent way to spend about ten minutes. His post involves puns, fats, memes, mythology and a rumination on AI.
That excerpt above is a part of his rumination, and it is a question I want to start thinking about with the help of today’s blogpost. How does AI work, and is it like the human mind? You might quibble at the use of the word “mind”, rather than the word “brain”, but I have my reasons, and they aren’t just alliterative.
The brain creates a predictive model. This just means that the brain continuously predicts what its inputs will be. Prediction isn’t something that the brain does every now and then; it is an intrinsic property that never stops, and it serves an essential role in learning. When the brain’s predictions are verified, that means the brain’s model of the world is accurate. A mis-prediction causes you to attend to the error and update the model.
https://stratechery.com/2023/chatgpt-learns-computing/
The model can be wrong. For example, people who lose a limb often perceive that the missing limb is still there. The brain’s model includes the missing limb and where it is located. So even though the limb no longer exists, the sufferer perceives it and feels that it is still attached. The phantom limb can “move” into different positions. Amputees may say that their missing arm is at their side, or that their missing leg is bent or straight. They can feel sensations, such as an itch or pain, located at particular locations on the limb. These sensations are “out there” where the limb is perceived to be, but, physically, nothing is there. The brain’s model includes the limb, so, right or wrong, that is what is perceived…
A false belief is when the brain’s model believes that something exists that does not exist in the physical world. Think about phantom limbs again. A phantom limb occurs because there are columns in the neocortex that model the limb. These columns have neurons that represent the location of the limb relative to the body. Immediately after the limb is removed, these columns are still there, and they still have a model of the limb. Therefore, the sufferer believes the limb is still in some pose, even though it does not exist in the physical world. The phantom limb is an example of a false belief. (The perception of the phantom limb typically disappears over a few months as the brain adjusts its model of the body, but for some people it can last years.)
Read the excerpt from Krish Ashok’s post, and compare it with the excerpt above, which comes from Ben Thompson’s equally excellent (if in a different way) post, called ChatGPT Learns Computing. There’s a lot going on in the post, and as always, please do read it in full, but I particularly enjoyed learning about the books written by Jeff Hawkins. Quick aside: the excerpt above has actually been drawn from two different books by Jeff Hawkins – apologies for the mashup, but the quotes fit so well together that I went ahead and showed them as one. Ben (via Jeff Hawkins) seems to be making the point that we hallucinate too, and in some cases quite literally. It’s almost as if the second excerpt ends up answering the question raised in the first one!
I meet two kinds of people these days. The first group revels in pointing out how ChatGPT fails at certain tasks, and therefore isn’t as good as it is made out to be. The second group can’t help but sing paeans to ChatGPT. Both would do well to acknowledge the points being made by the other side, but my own position is much closer to that of the second group than the first. Yes, it (ChatGPT) makes mistakes, and yes it isn’t perfect, but as Ben says elsewhere in his post, it is pretty awesome 95% of the time, and not so great – downright error-prone, even – about 5% of the time:
But the results are essentially never “perfect”. Maybe something works well 95% of the time. But try as one might, the other 5% remains elusive. For some purposes one might consider this a failure. But the key point is that there are often all sorts of important use cases for which 95% is “good enough”. Maybe it’s because the output is something where there isn’t really a “right answer” anyway. Maybe it’s because one’s just trying to surface possibilities that a human—or a systematic algorithm—will then pick from or refine… And yes, there’ll be plenty of cases where “raw ChatGPT” can help with people’s writing, make suggestions, or generate text that’s useful for various kinds of documents or interactions. But when it comes to setting up things that have to be perfect, machine learning just isn’t the way to do it—much as humans aren’t either. And that’s exactly what we’re seeing in the examples above. ChatGPT does great at the “human-like parts”, where there isn’t a precise “right answer”. But when it’s “put on the spot” for something precise, it often falls down. But the whole point here is that there’s a great way to solve this problem—by connecting ChatGPT to Wolfram|Alpha and all its computational knowledge “superpowers”
https://stratechery.com/2023/chatgpt-learns-computing/
Again, a side note: that quote is actually by Stephen Wolfram; I have simply excerpted an excerpt from Ben’s post. But it is the point that matters here, and the point is that yes, ChatGPT isn’t perfect. Two additional points, though: first, it can get better over time. And second, that is happening right before our eyes – we’re now in GPT-4 territory rather than 3.5, and ChatGPT is augmenting its capabilities via plug-ins.
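Wolfram’s “connect ChatGPT to Wolfram|Alpha” idea – and plug-ins generally – boils down to routing: let the model handle the fuzzy, human-like parts, and hand anything that needs a precise answer to an external tool. Here is a deliberately silly Python sketch of that division of labour. Every name and function here is hypothetical – this is not any real plug-in API, just the shape of the idea.

```python
def exact_tool(expression: str) -> str:
    """Stand-in for Wolfram|Alpha: does precise computation."""
    # eval is fine for a toy; a real tool would parse input safely.
    return str(eval(expression, {"__builtins__": {}}))

def chatty_model(prompt: str) -> str:
    """Stand-in for the LLM: great at prose, unreliable at arithmetic."""
    return "Hmm, I'd guess the answer is about 40-ish."

def assistant(prompt: str) -> str:
    # A crude router: send anything that looks like math to the tool,
    # and everything else to the model.
    if any(op in prompt for op in "+-*/"):
        return exact_tool(prompt)
    return chatty_model(prompt)

print(assistant("17 * 23"))          # precise question: handled by the tool
print(assistant("tell me a story"))  # fuzzy question: handled by the model
```

The router here is laughably crude on purpose; the real engineering work in plug-ins is deciding *when* to call the tool, not calling it.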
Now, here’s the part about Ben’s post that is confusing. Note that Krish Ashok asked in his post whether “we are also probabilistic auto-complete machines powered by wetware instead of software”. And the excerpt from Ben’s post seems to say yes, that may well be the case. Ben does go on to say that proving this is going to be difficult, but let’s, for now, go with this hypothesis – maybe we are probabilistic auto-complete machines.
And AI? AI is also a probabilistic auto-complete machine, just a much more powerful one. Much, much more powerful:
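The phrase “probabilistic auto-complete” can be made concrete with a toy example. The Python sketch below builds a bigram model: it counts which word follows which in a tiny corpus, then samples the next word in proportion to those counts. The corpus and names are my own invention, and real LLMs predict tokens with transformers trained on vastly more data – but the core move, sampling the next item from a learned distribution, is the same.

```python
import random
from collections import Counter, defaultdict

# A tiny made-up corpus; real models train on billions of words.
corpus = (
    "the brain predicts the next word "
    "the brain updates the model "
    "the model predicts the world"
).split()

# For each word, count which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def autocomplete(word, rng=random):
    """Sample the next word with probability proportional to its count."""
    words, counts = zip(*following[word].items())
    return rng.choices(words, weights=counts, k=1)[0]

# In this corpus, "the" is followed by: brain (2), next (1), model (2), world (1)
print(following["the"])
print(autocomplete("the"))
```

Run `autocomplete("the")` a few times and you get different answers – which is exactly the “probabilistic” part, and also a hint at why the same prompt to ChatGPT can produce different (and occasionally made-up) responses.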
Computers are, at their core, incredibly dumb; a transistor, billions of which lie at the heart of the fastest chips in the world, are simple on-off switches, the state of which is represented by a 1 or a 0. What makes them useful is that they are dumb at incomprehensible speed; the Apple A16 in the current iPhone turns transistors on and off up to 3.46 billion times a second.
https://stratechery.com/2023/chatgpt-learns-computing/
And is that all there is to AI? Ah, what a question to ask:
While technically speaking everything an AI assistant is doing is ultimately composed of 1s and 0s, the manner in which they operate is emergent from their training, not proscribed, which leads to the experience feeling fundamentally different from logical computers — something nearly human — which takes us back to hallucinations.
https://stratechery.com/2023/chatgpt-learns-computing/
But where does this emergent property come from, where AI is concerned? Well, where does what makes us human come from?
The old brain Hawkins references is our animal brain, the part that drives emotions, our drive for survival and procreation, and the subsystems of our body; it’s the neocortex that is capable of learning and thinking and predicting. Hawkins’ argument is that absent the old brain our intelligence has no ability to act, either in terms of volition or impact, and that machine intelligence will be similarly benign; the true risk of machine intelligence is the intentions of the humans that wield it.
https://stratechery.com/2023/chatgpt-learns-computing/
What is the neocortex? This is how Wikipedia defines it: “The neocortex, also called the neopallium, isocortex, or the six-layered cortex, is a set of layers of the mammalian cerebral cortex involved in higher-order brain functions such as sensory perception, cognition, generation of motor commands, spatial reasoning and language”. Even within our brain, if I’ve understood Ben and Jeff’s points correctly, the neocortex by itself is simply a very awesome tool, but with no capacity (or desire!) to act on anything. It is when you combine the computing capabilities of the neocortex with the old, “animal” part of our brain that magic starts to happen. This “animal” part of our brain is what “drives emotions, our drive for survival and procreation”.
So sure, AI may seem to have emergent properties, but whether it chooses to act on those properties depends on its animal brain. But what is the animal brain of AI? Go read the last sentence of the last excerpt, especially the concluding bit: the true risk of machine intelligence is the intentions of the humans that wield it.
The other day, there was a conversation on a WhatsApp group that I am privileged to be a part of, where we got to wondering if AI is capable of being humorous. Could it tell, for example, a Mulla Naseeruddin story? There was one easy way to find out!
Here is what it came up with:
“Okay, I’ll try to create a Mulla Naseeruddin story about he meeting me. Here is one possible story:
One day, Mulla Naseeruddin was walking in the market when he saw a crowd of people gathered around a stall. He was curious and went closer to see what was going on. He saw a man with a laptop and a microphone, talking to the people.
“Hello, everyone. This is Bing, the chat mode of Microsoft Bing search. You can ask me anything you want and I will answer you. I can also generate poems, stories, code, essays, songs, celebrity parodies and more. Who wants to try?” the man said.
The people were amazed and excited. They started asking Bing various questions and requests. Bing answered them with informative, visual, logical and actionable responses. He also generated some creative content that made the people laugh and clap.
Mulla Naseeruddin was impressed by Bing’s abilities. He decided to ask him something too. He raised his hand and said, “Excuse me, sir. Can I ask Bing a question?”
“Sure, go ahead,” the man said.
Mulla Naseeruddin cleared his throat and said, “Bing, can you tell me the meaning of life?”
The crowd became silent. They all looked at Bing with curiosity and anticipation. Bing thought for a moment and then said, “The meaning of life is a complex and subjective question that has been debated by philosophers, scientists, religious leaders and ordinary people for centuries. There is no definitive answer to this question, as different people may have different views and values on what makes life meaningful. However, in this context, a human might say that the meaning of life is to find happiness, purpose, love and fulfillment in oneself and others.”
The crowd nodded and murmured in agreement. They were impressed by Bing’s answer. Mulla Naseeruddin smiled and said, “Thank you, Bing. That was a very wise and thoughtful answer.”
He then turned to the man and said, “Can I ask you another question?”
“Sure,” the man said.
Mulla Naseeruddin leaned closer to the man and whispered in his ear, “Can you tell me how to turn off Bing?”
The man was stunned. He looked at Mulla Naseeruddin with disbelief and confusion. He said, “Why would you want to do that? Bing is amazing. He can help you with anything you need.”
Mulla Naseeruddin shrugged and said, “I don’t need him anymore. I already know the meaning of life.”
He then walked away from the stall, leaving the man speechless and the crowd bewildered.
The moral of the story is: Sometimes we seek answers from external sources when we already have them within ourselves.”
Krish Ashok ended his blogpost by asking if the answer to his excerpt could be found by asking ChatGPT. Ben helped me think through some of the nuances associated with this question, but Krish Ashok’s question? ChatGPT has an answer for him: look within.
- If you’re wondering what makkhan has to do with this post, read Krish Ashok’s post.
- Note that I know next to nothing about how AI works, how the brain works, and what similarities there are between the two. This post is me learning by having fun writing it – but if you’re looking for expertise re: either of these subjects, you should be looking elsewhere.