My Thoughts on Michio Kaku’s Statements from His Recent Interview with Joe Rogan

Sergey Nes
3 min read · May 3, 2023

Disclaimer: This was a great interview, I enjoyed it, and I'm looking forward to reading Michio's new book! However, a few claims really bothered me, so I decided to write down my thoughts.

The claims are:

1. LLMs/AI Chatbots are merely plagiarizing machines;
2. LLMs don’t truly understand what is true or false;
3. We need quantum computers to control AI.

I disagree with Michio Kaku's claim that LLMs like ChatGPT are fundamentally different from humans in how they create writing, images, or music. He says that ChatGPT is like a teenager plagiarizing someone else's essay. However, if you look closer at how humans learn, the process is not so different from an LLM's.

An LLM is trained by processing various forms of data, such as books, articles, posts, and tweets, converting this information into weights and forming a model whose connections resemble the neural connections in the human brain. Similarly, humans read books and articles, watch movies, and listen to music, and our brains store this information by forming neural links. Then, when we talk or write a letter, article, or post, we reuse the rules, patterns, words, and phrases that we have remembered and liked. So all of us, consciously or not, plagiarize. Everyone creates things that are strongly influenced by the works of previous generations. There might be a few geniuses capable of producing an entirely original idea or creation without any reference to pre-existing works, but they are rare. "The Seven Basic Plots" by Christopher Booker is a good illustration of my argument: there are only seven archetypal storylines that recur throughout literature, films, plays, and myths.
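To make the analogy concrete, here is a minimal, hypothetical sketch of the core training objective behind an LLM: next-token prediction. The toy corpus and model are my own illustrative assumptions (a tiny GRU standing in for a transformer), but the principle is the same: the training text is never stored verbatim; it is distilled into weights.

```python
import torch
import torch.nn as nn

# Toy corpus; a real LLM trains on billions of documents.
text = "we reuse the rules patterns words and phrases we have remembered and liked"
words = text.split()
vocab = sorted(set(words))
stoi = {w: i for i, w in enumerate(vocab)}
tokens = torch.tensor([stoi[w] for w in words])

class TinyLM(nn.Module):
    """A tiny next-word predictor (a GRU stand-in for a transformer)."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Given each word, learn to predict the next one. The "knowledge" ends up
# spread across the weights rather than stored as copied text.
x, y = tokens[:-1].unsqueeze(0), tokens[1:].unsqueeze(0)
for _ in range(200):
    logits = model(x)                                  # (1, seq_len, vocab_size)
    loss = loss_fn(logits.view(-1, len(vocab)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```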

So, I think most people are not as different from LLMs as Michio Kaku believes.

Regarding the claim that LLMs don't truly understand what is true or false: isn't that similar to humans? Many people choose a side based on personal preferences, moods, and beliefs, rather than scientifically testing and proving the accuracy of their opinions.

On the other hand, the biases and incorrect responses of LLMs can be minimized with good prompt engineering. You can ask an LLM to construct a causality chain explaining how it arrived at a particular conclusion, request a proof, and ask for references to sources such as laws of physics, mathematical formulas, or statistical data. This is very similar to how humans are expected to prove that their ideas or claims are correct and will withstand the test of reality.
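As a hedged illustration of that technique (the model name, the sample claim, and the prompt wording are my own assumptions, not from the interview), here is what such a request might look like with the OpenAI Python client:

```python
# Sketch: asking an LLM to justify its verdict with a causality chain and
# references to physical law, rather than just asserting an answer.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "Claim: doubling the length of a pendulum doubles its period.\n"
    "1. Build a step-by-step causality chain leading to your verdict.\n"
    "2. Cite the physical law or formula each step relies on.\n"
    "3. Conclude clearly: is the claim true or false?"
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
# A sound answer would cite T = 2*pi*sqrt(L/g): doubling L multiplies the
# period by sqrt(2), about 1.41, not 2 — so the claim is false.
```

Making the model show its derivation doesn't guarantee correctness, but it gives you something concrete to check against reality.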

Another claim is that we need Quantum Computing to align and control AI. Although Quantum Computing, much like LLMs, represents a next frontier of a technological singularity, the situation is similar to other technological breakthroughs: hardly anyone can predict the changes we'll see in our lives, the economy, the job market, and so on, when or if Quantum Computing becomes available.

What if a powerful Quantum Computer itself develops some kind of consciousness? Roger Penrose and Stuart Hameroff's "Orchestrated Objective Reduction" theory suggests this is at least conceivable: in short, it says that consciousness arises from quantum processes occurring within microtubules inside brain neurons, and that these processes lead to a collapse of the quantum wave function, producing conscious experience. So, by building a powerful quantum computer, we could accidentally summon an AI entity that we might not be able to control at all!

Hopefully, we’ll be able to manage and align AI even without the need for Quantum Computers, enabling AI to help humanity build a better world and more accurately predict the consequences of our actions and inventions!


Sergey Nes

Android and iOS Expert, LLM Applications Enthusiast. Follow me on LinkedIn for insights: https://www.linkedin.com/in/sergey-neskoromny