Motivation Diagnosis
Reducing someone's behavior or position to a character flaw or suspect motive rather than engaging with what they're actually saying.
- "Engineer who was too lazy to write docs before now generates AI slop."
- "They're just trying to protect their job."
- "You only say that because you're invested in the ecosystem."
- "This is just rationalization for not learning the right way."
Why It's Unproductive
Psychoanalyzes intent instead of engaging with the actual practice or argument. Labeling someone "lazy," "biased," or "defensive" makes them defensive and shuts down the conversation. Even if the guess about motivation is accurate, it says nothing about whether the position has merit. People often reach for this move when they disagree but can't articulate why, so they attack character instead of engaging with substance.
The Better Move
- "AI-generated docs can miss important nuances that only the developer knows. Has that been your experience?"
- "I'm skeptical that auto-generated docs solve the real problem, which is understanding user needs."
- "The quality depends heavily on review and editing. Are you finding you still need to revise substantially?"
- "What's your process for making sure AI docs are accurate? That seems like the key challenge."
Why It's Better
Focuses on the practice and its effects rather than on presumed motivations. You can be skeptical of AI-generated docs without claiming the person is lazy: maybe they're experimenting, maybe it works well for their use case, maybe they're wrong but trying in good faith.
Example
OP: "We started using Claude to generate API documentation and it's been a huge time-saver."
Antipattern reply: "engineer who was too lazy to write docs before now generates ai slop and continues not to write docs, news at 11"
Better: "I'm skeptical - AI-generated docs often miss the nuance of why design decisions were made. Do you find you're doing substantial editing?"