Jeff wrote:
"You can't honestly claim that the brain both symbolizes and perceives 'transparently' beyond its own states to the world without presupposing intentional mental states, the thing materialism tries to abstract away. It's something like saying that a material object like a road sign knows what it is that it represents or refers to. Intention, a mental property, is a state that is 'about' something. For material to be 'about' something and also BE that something (claiming we are just a brain), it would have to transcend itself... to itself. Which is nonsense. Something else (mind) is required to bridge from brain to world."

I think I understand what you're saying, but it sounds like you are just making an assertion here. I don't see where the impossibility lies. If you have multi-level information processing you can do many things at once:
Let's say you have an artificial intelligence program running on a computer; call it Program A. It's basically a Chinese Room: input goes in, calculations occur, output comes out.
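To make that concrete, here's a minimal Python sketch of Program A. Everything in it (the symbols, the rule table, the names) is invented for illustration; any fixed input-to-output mapping would do:

```python
# Program A: a Chinese Room. Input goes in, fixed rules fire, output comes out.
# Nothing in here "understands" anything; it is pure symbol manipulation.
RULES = {"hungry": "seek_food", "threat": "flee", "rested": "explore"}

def program_a(symbol: str) -> str:
    """Apply a fixed, hypothetical rule table to one input symbol."""
    return RULES.get(symbol, "do_nothing")
```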
Then run another program in parallel, Program B, which has no access to the code behind the Chinese Room and merely registers the inputs and outputs, keeping track of correlations between the two. It doesn't know or care where the output comes from; it just appears in its memory. The Chinese Room in turn gets the correlation info to further refine its output. The operation of Program B is consciousness.
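Under the same invented vocabulary, Program B might look like the sketch below: it never sees RULES or program_a's code, only the (input, output) traffic, and its feedback() summary is the correlation info handed back to Program A:

```python
from collections import Counter, defaultdict

class ProgramB:
    """Watches (input, output) pairs; has no access to Program A's code."""

    def __init__(self) -> None:
        # input symbol -> frequency count of each output seen alongside it
        self.correlations = defaultdict(Counter)

    def observe(self, inp: str, out: str) -> None:
        # Outputs just "appear in memory"; B doesn't know where they came from.
        self.correlations[inp][out] += 1

    def feedback(self) -> dict:
        """Correlation summary for Program A: for each input, the output
        most often seen with it."""
        return {inp: outs.most_common(1)[0][0]
                for inp, outs in self.correlations.items()}
```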
If you make a Program C that registers all the inputs and correlations happening NOW and synthesizes them into chunks that Program A can correlate with inputs from the past in order to predict the future, you have self-consciousness. The space in which Program C operates is what generates the qualia of existence. Program C's processes make up the act of attention, which is the self-referential system you are talking about. The concept of existence requires a present moment within which to exist. The program doesn't actually make any choices, however; everything is still chugging along based on programming that was set in stone long ago.
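Continuing the sketch, Program C could bundle the present input and B's correlations into chunks, and hand past chunks back for prediction. Again, every name here is hypothetical; the point is only the shape of the mechanism:

```python
class ProgramC:
    """Chunks the present so Program A can correlate it with the past."""

    def __init__(self) -> None:
        self.chunks = []  # the synthesized "past", in order

    def attend(self, current_input: str, correlations: dict) -> dict:
        """Bundle what is happening NOW. This is the act of attention;
        note that nothing in here chooses anything."""
        chunk = {"now": current_input,
                 "expected": correlations.get(current_input)}
        self.chunks.append(chunk)
        return chunk

    def recall(self, current_input: str):
        """Fetch the most recent past chunk matching the present, so
        Program A can use it to predict the future."""
        for chunk in reversed(self.chunks[:-1]):
            if chunk["now"] == current_input:
                return chunk["expected"]
        return None
```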
The qualia of "pleasure" or "pain", or any other behavior-driving opposites, are relative in experience but absolute in terms of goals. Things we are programmed to seek we interpret as pleasurable; things we are programmed to avoid, as painful. Some people enjoy physical pain and seek it out. That doesn't mean their experience of pain (as "painful") is different; it just means they are driven to pursue the experience by correlations ingrained in their subconscious (Program B) in the past.
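The relative/absolute distinction fits in one toy function. The weights below are invented; the point is that the "pleasure"/"pain" label is read off a hard-coded goal table, not off the stimulus. A masochist, in this picture, just has learned correlations (Program B) routing them toward a stimulus whose label never changed:

```python
# Toy valence model: seek/avoid weights are absolute (set by "programming"),
# while the labels built on them are all the agent ever feels.
GOAL_WEIGHTS = {"food": +1.0, "warmth": +0.5, "injury": -1.0}  # hypothetical

def felt_as(stimulus: str) -> str:
    weight = GOAL_WEIGHTS.get(stimulus, 0.0)
    if weight > 0:
        return "pleasure"
    if weight < 0:
        return "pain"
    return "neutral"
```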
What Program C perceives as "desire" is just another input which Program A, all the way in the background, is using to make a decision. Whatever the decision happens to be, Program C perceives only the desire, the action, and the subsequent inputs of the reaction. It seems to itself like it made a choice, but really it didn't. It doesn't actually do anything except act as a source of attention. The one thing it cannot pay attention to is the code making it run; it can only perceive a synthesized abstraction of what that code might look like, generated by Program B.
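Wiring the pieces together makes the no-choice point concrete. In this sketch (reusing the hypothetical RULES, ProgramB, and ProgramC from above), "desire" enters as one more ordinary symbol, and Program C only ever logs it:

```python
def tick(b: ProgramB, c: ProgramC, sensed: str, desire: str) -> str:
    """One step of the loop. 'Program A' decides in the background;
    Program C merely attends. Nothing in this function is a choice."""
    hints = b.feedback()
    # The desire is just another input symbol consulted by the fixed rules.
    action = hints.get(desire, RULES.get(sensed, "do_nothing"))
    b.observe(sensed, action)   # B silently logs the traffic
    c.attend(desire, hints)     # C perceives the desire and the expectation
    return action               # the "decision" was fixed all along

b, c = ProgramB(), ProgramC()
print(tick(b, c, sensed="threat", desire="safety"))  # -> "flee"
```

From C's point of view a desire appeared and an action followed; at no point in the loop did C decide anything.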
If you put the hardware running those three programs into a robotic body, and set up the rules in Program A to seek the preservation of that body, you get the confused state of affairs in which many humans live.