Yudkowsky is a postrat, LW isn't

When Yudkowsky talks about postrational concepts, he is not received well by LW. Example: "The Bedrock of Morality: Arbitrary?", which drew 120 comments but only 16 upvotes.

The quote below is very postrat, and I think LW-ers would either fail to understand it or reject it altogether:

So which of these two perspectives do I choose?  The human one, of course; not because it is the human one, but because it is right. I do not know perfectly what is right, but neither can I plead entire ignorance.

Things are neither well-defined nor completely unknowable, which is in line with Meaningness.

And further, in a comment:

It's been suggested that I should have spoken of "p-right" and "h-right", not "p-right" and "right".

But of course I made a very deliberate decision not to speak of "h-right".  That sounds like there is a general license to be human.

It sounds like being human is the essence of rightness. It sounds [but EY is arguing against it —Artyom] like the justification framework is "this is what humans do" and not "this is what saves lives, makes people happy, gives us control over our own lives, involves us with others and prevents us from collapsing into total self-absorption, keeps life complex and non-repeating and aesthetic and interesting, dot dot dot etcetera etcetera".

And from No License to Be Human:

So it's not that reflective coherence is licensed in general, but that it's a good idea if you start out with a core of truth or correctness or good priors. Ah, but who is deciding whether I possess good priors?  I am!  By reflecting on them! The inescapability of this strange loop is why a broken mind can't heal itself—because there is no jumping outside of all systems.

I can only plead that, in evolving to perform induction rather than anti-induction, in evolving a flawed but not absolutely wrong instinct for simplicity, I have been blessed with an epistemic gift.

I can only plead that self-renormalization works when I do it, even though it wouldn't work for Anti-Inductors. I can only plead that when I look over my flawed mind and see a core of useful reasoning, that I am really right, even though a completely broken mind might mistakenly perceive a core of useful truth.

Reflective coherence isn't licensed for all minds. It works for me, because I started out with an epistemic gift.

It doesn't matter if the Anti-Inductors look over themselves and decide that their anti-induction also constitutes an epistemic gift; they're wrong, I'm right.

And if that sounds philosophically indefensible, I beg you to step back from philosophy, and consider whether what I have just said is really truly true.

Begging doesn't work on rationalists. The concept of "do something because it's true, not because you can defend it in an argument" doesn't work on rationalists. (I think this is because there are good reasons people adopt rationality forcefully, and without that forcefulness rationality stops serving as a good defense mechanism.)

And here is Chapman's quote (from here), which aligns with what Yudkowsky says:

My answer to “If not Bayesianism, then what?” is: all of human intellectual effort. Figuring out how things work, what’s true or false, what’s effective or useless, is “human complete.” In other words, it’s unboundedly difficult, and every human intellectual faculty must be brought to bear.

Deciding what is right and wrong—"this is what saves lives, makes people happy, gives us control over our own lives, involves us with others and prevents us from collapsing into total self-absorption, keeps life complex and non-repeating and aesthetic and interesting, dot dot dot etcetera etcetera"—is a human-complete endeavor. Yudkowsky embraces this, because that is the only way not to fail.