Discussion about this post

Kukuh Noertjojo

Aaron, thank you as always for such clear analysis and writing. I was wondering if you'd do a follow-up in the future when more empirical data is available.

Thanks again Aaron and Kung Hei Fat Choi from Canada!

Richard Van Noorden

I thought that sycophancy - sucking up to the user at the cost of accurately summarizing the retrieved search results - only applies when the user expresses a position within their query ('Tell me why vaccines cause autism'). In such cases the LLM might return the role-play conversation that appears to be desired, rather than push back with an explanation of why vaccines don't cause autism.

If the query were merely 'do vaccines cause autism', then the problem appears to be one of accurate retrieval and summary, unless there's historical context in the conversation from which the user's position is known.

