How to Say No to Junk AI: Protecting Research Integrity in the Age of Synthetic Data
- March 2, 2026
- Posted by: Josh Speyer
- Category: Competitive research
Just because a technology is new doesn’t mean it belongs anywhere near scientific or insights research.
We’ve all watched the cycle: shiny tools appear, executives get excited, and suddenly everyone wants “AI-powered insights” delivered yesterday—preferably without fieldwork, sampling, or cost.
But here’s the part no one says out loud: bad AI doesn’t just produce bad research. It can slowly cut your agency—or your internal research team—out of the insights function entirely.
This isn’t a blog about hating AI. It’s about helping insights professionals push back against the wrong uses of AI, without sounding defensive, old-fashioned, or anti-innovation.
The Problem With Bad AI
Clients asking for quick-fix Gen-AI ideas often create more risk than value. These requests typically come from someone who’s heard a buzzword, seen a flashy demo, or is under pressure to “do something with AI,” but hasn’t thought through what it actually takes to produce reliable insights. The first problem is that these solutions almost always collapse under scrutiny. They look efficient on the surface, but because they aren’t built on validated data or transparent methodology, they can’t withstand basic quality checks. When it comes time to present the insights, you’re left defending something that was flawed from the beginning.
Another issue is that low-quality AI can quietly distort decision-making. It produces answers that look authoritative but rest on shaky foundations. Teams start using those answers to guide product strategy, customer research, or marketing decisions, and suddenly the organization is driving blind. It’s not the kind of mistake people catch early either; the errors accumulate slowly until someone asks why the numbers don’t line up with reality.
There’s also a reputational cost. When researchers accept bad AI requests, they inherit responsibility for whatever comes out of the system. If the insights fall apart later, leadership won’t remember who pushed which shortcut — they’ll remember who delivered the work. That’s how talented analysts end up looking careless, even when the underlying problem was the tool they were pressured to use.
These requests also tend to create unrealistic expectations. Once a client sees something that delivers outputs instantly, they assume that’s the new normal. They start expecting every analysis to be immediate, every dataset to be magically complete, and every nuance to be automated. That mindset devalues the real craft of research and sets up every future project for disappointment.
Finally, low-grade AI tools make your expertise invisible. When bad Gen-AI takes center stage, it replaces thoughtful investigation with generic outputs that anyone could have produced. The more an organization leans on these shortcuts, the less people recognize the unique value an insights professional brings — until they realize too late that the work has lost its depth.
Responding to Common Arguments
You’ll hear it all the time: “AI will save us time and money,” “We just need directional insights,” or “Other companies are using AI panels.” It sounds appealing, but savings vanish when decisions are made on flawed data. Cheap insights are only cheap until they break something.
Directional insights from synthetic data rarely reveal anything new. They confirm biases, smooth out nuance, and hide the subtle patterns that drive real understanding. Popularity doesn’t equal validity either. Regulated industries avoid these methods for a reason—they can’t audit or validate synthetic data.
The first step is understanding what’s driving the request. Are executives chasing cost savings, trying to appear innovative, or looking for self-service tools? Often the real driver is that data is missing, or that they want analysis without the time and expense of fieldwork. This is your chance to open a conversation and offer real solutions—like analyzing legacy data or using emerging tools that don’t fabricate respondents.
Position yourself as the authority. Market research is a science, and your team holds the expertise. AI can enhance your work, but it can’t replace judgment, methodology, or critical thinking. Cutting corners risks the credibility of every insight your team produces.
Learn about Responsible AI Tools
Staying informed about new and emerging AI technologies allows you to meet client demands while maintaining rigorous research standards. By understanding these tools, you can leverage AI to enhance analysis and efficiency—without compromising the integrity of your insights. The key is using technology thoughtfully, as an aid to research, not a replacement for sound methodology.