Here is the full text of neuroscientist Henning Beck’s talk, titled “What is a Thought? How the Brain Creates New Ideas,” given at the TEDxHHL conference. In this insightful talk, Dr. Beck explains what happens in the brain during thought and tells you how you can create new ideas.
Henning Beck – TEDx Talk TRANSCRIPT
What is an idea? What is a thought? And how do we think of these great and new ideas that are worth spreading?
My name is Henning Beck. I’m a brain researcher, and I want to show you what is going on in your mind when you use information to give rise to new thoughts.
And this is important because information is all around us. Many people think it all starts with data. Data, the resource of the 21st century.
Data is everywhere. Companies collect our data; we do data analysis and data correlation. But in fact, data itself is pretty simple: it’s just a collection of letters and numbers, signs you can process electronically but that have no meaning. And you can measure data, but you cannot measure an idea.
Because when are you really creative or innovative? When, out of a thousand thoughts, you have the one that is the real game changer. So maybe information is more important.
We have so many tools nowadays to acquire information. We have smartphones, mobile devices, the internet everywhere. But never mix up information with having an idea or knowledge, because you can Google information, but you cannot Google an idea.
Because having an idea, acquiring knowledge, understanding stuff, this is what is happening in your mind when you use information to change the way you think.
So, what is that kind of thinking? Well, everything you see here is just the surface of what is going on in your mind when you think. Most of it runs subconsciously, which makes it damn hard to investigate, but even more interesting.
So, let’s zoom into the brain to check out what is going on when we think. Many people think the brain is something like a supercomputer, like the ultimate calculating engine. It is supposed to be extremely fast, super connected, and highly accurate.
When you have something on your mind, like right now hopefully, a picture or an image or something like that, you can see it very sharply and precisely and switch very easily, much faster than a computer, right?
Because what can you do with this one? This one’s at least stylish, it has appeal on your desktop, but that’s it, right?
Well, look at this again, and you see it’s totally the other way around. A computer you can put on your desktop easily performs 3.4 billion calculations a second. Brain cells are much slower and only manage about 500 operations per second at maximum speed.
Computers hardly make any mistakes: a rough estimate is one error in a trillion operations. Brains, as you probably know from your personal life, are much more error-prone and make mistakes a billion times as often.
A computer you can plug into the internet, and you’re connected with the world. The brain is the opposite: it is 99 percent self-oriented. Most of its nerve fibers never get outside of your skull, and most of its brain cells never see what’s going on in reality.
So, from this perspective you have to say, “Okay, the brain is anything but perfect. It is lame, it is lousy, and it is selfish, and it still works.”
Look around you: working brains wherever I look, more or less. But still, each one of you has the power to outperform every computer system with a very simple experiment, which I can show you in a minute.
So, what do you see here? A face, you might say. Totally correct. I could also say it’s just a collection of fruits and vegetables, but you see a face.
And what’s interesting is not that you do it, but how fast you do it. Because if your brain cells are really that slow, you can only perform 20, 30, maybe 40 operations within that split second.
Computer software needs many more steps, thousands, even millions of steps to come to the same result. This leads us to the fundamental principle of how we think. Because it’s totally different from anything we know of in our world.
So, how would a computer approach that kind of problem? Well computers use algorithms. Algorithms are basically stepwise recipes telling you what to do.
So, when a computer faces a certain problem, for instance recognizing a face or solving an equation or whatever, the basic principle goes like this: You have an input, then you process that input according to the algorithm, finally reaching an output. Input, processing, output. That’s great.
That’s great, as long as you don’t make any mistakes. Because if you make a mistake at the beginning, you’re screwed at the end. That’s why computers sometimes break down and end up in a blue screen. What a sad face, by the way, poor guy.
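The input–processing–output principle the talk describes can be sketched in a few lines of code. This is a minimal illustration of my own, not something from the talk: the function name and the arithmetic steps are arbitrary, chosen only to show how each step feeds the next, so that an error at the input is carried all the way through to the output.

```python
def algorithm(data):
    """A stepwise recipe: input -> processing -> output."""
    # Step 1: process the input exactly as the recipe says.
    doubled = data * 2
    # Step 2: each step consumes the previous step's result,
    # so any earlier error is carried forward unchanged.
    result = doubled + 10
    # Output.
    return result

# A correct input gives the intended output:
print(algorithm(5))      # 20
# A flawed input is processed just as faithfully: garbage in, garbage out.
print(algorithm(-1000))  # -1990
```

The point is that the recipe has no way to notice that its input was wrong; it simply executes every step, which is why an early mistake ends in a broken result, or, on a real machine, sometimes a blue screen.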