Utilitarianism makes it easy to convince yourself that things you want are unimportant and you shouldn't want them. Specifically, utilitarianism makes it easy to convince yourself that you shouldn't care about how you're doing things, only about what gets done in the end. But you do care! You care about the thoughts that go on inside your head, and they do have interesting effects on your future life and actions—but utilitarianism pushes you towards not caring.
For example—with utilitarianism it is easy to do the tactically right thing while forgetting to explore. Why?
Normally you will explore automatically—because you have a desire for novelty—but somehow this gets thrown under the bus with utilitarianism, because novelty gets reduced to "keep myself happy". And you correctly observe that there are easier ways to keep yourself happy.
Okay, but why would novelty be reduced to "keep myself happy"? See the next section.
Utilitarianism's schtick is "by treating some things as interchangeable, you can get more things than you would have otherwise". If nothing is interchangeable, utilitarianism becomes very very limited and only lets you choose between "one X" and "two X", which is—duh. So utilitarianism pushes you towards treating more and more things as interchangeable and fewer and fewer things as sacred, because each time you do so, it gains power and applicability.
How does that happen?
Technically, utilitarianism can be the right ideal for decision-making—as in, "look at all states of the world and pick the one you like most". It's fine. I'm not saying it's the right or wrong ideal. I'm saying that if you, a human, will try to approximate it, you will be pushed towards a nasty failure mode. You can resist this failure mode, and if you resist it you're fine and you can keep applying utilitarianism for fun and profit. But it's hard to resist.
I don't pick one ethics theory and stick to it. That said: right now I'm moving towards more and more of "do things that train good character qualities", even if the things themselves are not important.
That is—I explicitly restrain myself from thinking about short-term consequences of each particular action, because I want to fix myself long-term.
You can definitely apply utilitarianism here, but deontology fits better because this is literally deontology, "the right thing to do in situation X is Y regardless of the stakes". But then sometimes I do care about the stakes tactically!
So that's where the meta-strategy appears. What I think in reality is "hmmmm, I think I care about short-term stakes too much and it has led me to a bad life", and then I try to empirically figure out what's the right mix of consequences-thinking vs character-building-thinking. And not just "what's the right mix", but "in what domains is it better to look at consequences? in what domains is it better to look at [what society thinks]? in what domains is it better to Do No Harm? etc etc".
At this point you can say "this meta-strategy—that is itself utilitarianism". Okay. Maybe you will say that, maybe not.
However, arriving at this meta-strategy requires you to have the skill of [suspending your disbelief and not using utilitarianism], which is hard to acquire if you're dead set on utilitarianism being awesome, like I was.
It's the same cursed issue as "girls don't like guys who are desperate, so to get laid more you need to care less about getting laid". And you can say "ok wait, no, I won't give up my values, I will keep being desperate, but will learn how to act as if I'm not desperate, that's the right solution". Maybe it will work for some people—but it never worked for me.
(And at the same time, nobody can deny that in ideal circumstances, for any person who doesn't care so much about getting laid there always exists someone who cares more and will get laid more. Similarly, for any non-utilitarian agent there always exists a utilitarian agent who gets better results, while the opposite is false.)