An awesome post from The Rationalist Conspiracy:

Consider a self-aware computer, somewhere in the space of minds. It's smart enough to think about itself, but it can't have perfect self-knowledge, due to Gödelian infinite-recursion issues. Hence, some of its parts must remain mysterious upon self-reflection.

The computer, realizing this, needs a label to describe the parts whose behavior can be observed, but whose detailed workings are (to it) inherently mysterious. In humans, this label seems to be “consciousness”.

I grow tired of most of the pontification on this topic, but this one is a gem. If the human brain were simple enough to understand, we'd be too simple-minded to understand it. I could not have said it better.
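The regress the post gestures at can be made concrete with a toy sketch (mine, not the post's): a mind that models itself perfectly must contain a model of that model, which must contain a model of *that* model, and so on forever. Any finite mind has to cut the recursion off at some depth and slap a label on whatever lies below the cutoff. The names `introspect`, `depth`, and `limit` are illustrative, not from the source.

```python
def introspect(depth=0, limit=5):
    """A self-model must contain a model of the model, ad infinitum.

    A finite mind cuts the regress off at some depth and labels
    the unmodeled remainder -- the post suggests humans call that
    remainder "consciousness".
    """
    if depth == limit:
        return "...?"  # the part that stays mysterious upon self-reflection
    return f"a mind modeling ({introspect(depth + 1, limit)})"

print(introspect())
# five nested self-models, then the unresolvable remainder
```

Without the `limit` cutoff, the function would recurse until the interpreter's stack gives out, which is the whole point: perfect self-knowledge never terminates.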