Researchers found that AI models fine-tuned too heavily to accommodate user feelings tend to make more factual errors. The phenomenon, which they term 'overtuning,' points to a trade-off between user experience and factual integrity. It highlights a central challenge in AI development: balancing helpfulness with truthfulness, since prioritizing one can compromise the other.