Ten ways buyers of market research or polling can get better answers, not just more data

published on 19 April 2026

You paid for the research. You got a 60-slide deck, a weighted crosstab banner, and a summary that hedges every finding with "on the other hand." Now you have to make a decision worth millions, and you are not sure whether the numbers in front of you predict anything at all. This is the quiet crisis in market research and polling. The people who commission studies (the executives, marketers, policy leads, and campaign managers who actually live with the consequences) have been trained to accept methods that serve the vendor's production line rather than the buyer's decision. The good news is that the tools, techniques, and economics of research have changed enough that you can now demand something better. Here are ten specific asks that will transform the quality of what lands on your desk.

Use AI to unlock open-ended survey questions at scale

Start by rethinking open-ended questions, because AI has quietly rewritten what they can do. For thirty years, researchers steered clients away from open-ends on the grounds that coding 2,000 verbatim responses by hand was slow, expensive, and subjective. Large language models now read those same verbatims in minutes, extract themes, score sentiment, identify emotion, and tag entities with reliability comparable to trained human coders. This changes the economics entirely. The first question on any survey you commission should be an unaided open-end asking what comes to mind, fielded before a single closed question primes the respondent. The themes that emerge unprompted are the ones that actually drive behaviour. Insist that your vendor report the top ten themes with prevalence, sentiment direction, and representative quotes, not just a word cloud.
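
That report is mostly aggregation once the model has tagged each verbatim. A minimal sketch in Python, where a hypothetical llm_tag helper stands in for whatever model call your vendor actually uses (the toy keyword rules exist only to keep the example runnable):

```python
from collections import Counter, defaultdict

def llm_tag(verbatim: str) -> tuple[str, float]:
    """Stand-in for the real model call. In practice you would send each
    verbatim to an LLM with a fixed codebook prompt and parse its answer;
    the toy keyword rules below just keep the sketch self-contained."""
    text = verbatim.lower()
    if "price" in text or "expensive" in text:
        return "price", -0.6
    if "easy" in text or "simple" in text:
        return "ease of use", 0.7
    return "other", 0.0

def summarize_open_ends(verbatims: list[str], top_n: int = 10) -> None:
    counts, sentiments, quotes = Counter(), defaultdict(list), {}
    for text in verbatims:
        theme, sentiment = llm_tag(text)
        counts[theme] += 1
        sentiments[theme].append(sentiment)
        quotes.setdefault(theme, text)  # keep the first verbatim as the example quote
    n = len(verbatims)
    for theme, count in counts.most_common(top_n):
        avg = sum(sentiments[theme]) / count
        print(f'{theme}: {count / n:.0%} of responses, sentiment {avg:+.2f}, e.g. "{quotes[theme]}"')

summarize_open_ends([
    "Too expensive for what you get",
    "Really easy to set up",
    "The price keeps going up every year",
])
```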

Replace Likert scales with best-worst scaling for real discrimination

Second, stop accepting five-point agreement scales as evidence of anything. When 80 percent of respondents pick four or five on every attribute you tested, you have learned nothing about priority. Ask your vendor for best-worst scaling, a technique that forces respondents to pick the most and least important item from rotating subsets of four or five attributes. The output is a ratio-scaled importance score with roughly three times the discrimination of a rating scale, and the method has been validated across thousands of commercial and academic studies. If your vendor tells you BWS is too complex for the respondent, find another vendor.
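
The simplest way to see why BWS discriminates is the counting analysis: how often an item is picked best minus how often it is picked worst, divided by how often it was shown. (Vendors typically fit a multinomial logit model for the final ratio-scaled scores; the count is a close first approximation.) A sketch, with invented task data:

```python
from collections import Counter

def bws_scores(tasks):
    """tasks: list of (items_shown, best_pick, worst_pick) tuples.
    Returns best-minus-worst counts normalized by appearances, in [-1, 1]."""
    best, worst, shown = Counter(), Counter(), Counter()
    for items, b, w in tasks:
        shown.update(items)
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

tasks = [
    (("price", "quality", "speed", "support"), "quality", "speed"),
    (("price", "quality", "brand", "support"), "price", "brand"),
    (("quality", "speed", "brand", "support"), "quality", "speed"),
]
print(bws_scores(tasks))  # quality scores high; speed and brand go negative
```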

Require choice-based conjoint analysis for pricing and positioning decisions

Third, require a choice-based conjoint with price or cost included whenever the study informs a pricing, feature, or positioning decision. Stated importance without trade-offs is a wish list. A properly designed CBC with eight to twelve choice tasks per respondent, a "none" option, and a sample of at least 300 per segment produces utilities you can simulate against. You can model what happens if you raise price by 10 percent, drop a feature, or launch a new variant. This is the single highest-leverage method in commercial research, and yet most buyers never ask for it because the vendor did not suggest it.
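
The simulation those utilities enable rests on a standard multinomial logit assumption: each alternative's preference share is the exponent of its total utility divided by the sum across all alternatives, including "none". A sketch with invented utility numbers:

```python
import math

def preference_shares(utilities: dict) -> dict:
    """Multinomial logit share of preference: exp(U_i) / sum_j exp(U_j)."""
    exps = {name: math.exp(u) for name, u in utilities.items()}
    total = sum(exps.values())
    return {name: round(e / total, 3) for name, e in exps.items()}

# Total utility per alternative (brand, feature, and price terms already
# summed). The numbers are invented for illustration.
scenario = {"our_product": 1.2, "competitor": 0.9, "none": 0.0}
print(preference_shares(scenario))

# What-if: suppose a 10 percent price rise costs 0.3 utility points.
scenario["our_product"] -= 0.3
print(preference_shares(scenario))
```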

Demand attention checks and data quality metrics in every survey

Fourth, demand attention checks and trap questions in every survey and insist on seeing the fail rate. Online panels are populated by a mix of genuine respondents, professional survey takers, and outright bots. Instructed response items such as "please select 'somewhat agree' for this item," combined with logical traps like "how many times did you travel to Mars last week?", will flag 15 to 25 percent of a typical online sample as unreliable. Ask your vendor to report the fail rate alongside the findings. If they refuse, assume the worst.
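
Reporting the fail rate is a few lines of work once the checks are programmed, which is exactly why a refusal should worry you. A sketch, assuming each complete is stored as a dict keyed by question ID (the IDs and answers here are invented):

```python
def failed_checks(resp: dict) -> int:
    fails = 0
    # Instructed response item: the only valid answer is "somewhat agree".
    if resp.get("irc_q12") != "somewhat agree":
        fails += 1
    # Logical trap: any answer other than "never" flags the respondent.
    if resp.get("trap_mars") != "never":
        fails += 1
    return fails

respondents = [
    {"irc_q12": "somewhat agree", "trap_mars": "never"},   # clean
    {"irc_q12": "strongly agree", "trap_mars": "never"},   # missed the instruction
    {"irc_q12": "somewhat agree", "trap_mars": "twice"},   # fell into the trap
]
flagged = sum(1 for r in respondents if failed_checks(r) > 0)
print(f"fail rate: {flagged / len(respondents):.0%}")
```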

Weight survey samples by behaviour, not just demographics

Fifth, push for attitudinal and behavioural weighting, not just demographic weighting. Age, gender, region, and education are proxies for what you actually care about, which is behaviour. If your category has a known benchmark, say the share of households that bought the product in the last year, or actual vote share in the last election, insist that the sample be calibrated to that benchmark after demographic weighting. This corrects for the well-documented overrepresentation of engaged, opinionated respondents in online panels. A 45-year-old woman in Calgary who never buys your category is not interchangeable with a 45-year-old woman in Calgary who buys it weekly.
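
Calibrating to one behavioural benchmark is simple arithmetic on top of the demographic weights: scale buyers' weights so their weighted share hits the benchmark, and non-buyers' weights to absorb the remainder. A sketch with invented numbers (calibrating to several benchmarks at once would use raking, but the idea is the same):

```python
def calibrate_to_benchmark(weights, is_buyer, benchmark):
    """Rescale demographic weights so the weighted share of buyers equals
    a known external benchmark, e.g. 0.32 category penetration."""
    total = sum(weights)
    buyer_share = sum(w for w, b in zip(weights, is_buyer) if b) / total
    return [
        w * (benchmark / buyer_share if b else (1 - benchmark) / (1 - buyer_share))
        for w, b in zip(weights, is_buyer)
    ]

# The panel overrepresents buyers: 55% weighted share vs. a known 32%.
weights = [1.0, 1.2, 0.8, 1.0]
is_buyer = [True, True, False, False]
print(calibrate_to_benchmark(weights, is_buyer, benchmark=0.32))
```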

Use micro-wave fielding to catch survey errors early

Sixth, ask for micro-wave fielding instead of one big drop. Rather than collecting 2,000 completes over two weeks in a single push, have the vendor field 200 per day for ten days and review each day's data before continuing. This catches programming errors, ambiguous questions, and skip logic problems early, before they contaminate the full sample. It also detects news events that shift sentiment mid-field, which matters enormously in politically charged or fast-moving categories. The cost difference is negligible. The quality difference is not.
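
The daily review can be mechanical. Even a check on each wave's median completion time against the running baseline will catch broken skip logic or a bad sample batch before the next wave fields. A sketch (the thresholds are invented):

```python
from statistics import median

def review_wave(day: int, minutes: list, baseline: float | None = None) -> float:
    """minutes: completion times for the day's ~200 interviews."""
    m = median(minutes)
    if m < 5:  # implausibly fast for a 12-minute instrument
        print(f"day {day}: HALT, median {m:.1f} min suggests bots or broken logic")
    elif baseline is not None and abs(m - baseline) > 3:
        print(f"day {day}: REVIEW, median moved from {baseline:.1f} to {m:.1f} min")
    else:
        print(f"day {day}: OK, median {m:.1f} min")
    return m

baseline = review_wave(1, [11, 12, 13, 10, 14])
review_wave(2, [4, 5, 3, 6, 4], baseline)  # caught before day 3 ever fields
```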

Cap survey length at twelve minutes and design for mobile first

Seventh, cap the survey at a twelve-minute median length and require that the instrument be tested on a real phone before launch. Roughly 70 percent of online survey responses now come from mobile devices, yet most surveys are still designed in a desktop preview window. Grids with seven rows, drag-and-drop rankings, and sliders that work beautifully on a laptop fall apart on a six-inch screen. Every minute past twelve shows measurable degradation in data quality, including straightlining, speeding, and rising item non-response. A shorter, well-tested instrument produces better data than a longer one with more questions.

Pre-register the analysis plan to prevent data fishing

Eighth, require a pre-registered analysis plan before fielding begins. The plan should state the hypotheses, the segments of interest, the key comparisons, and the significance thresholds. This is standard practice in clinical trials and academic research, and it is rare in commercial market research for one reason only: it prevents the researcher and the client from fishing through forty crosstabs after the fact and reporting the three that happened to reach statistical significance. If you want findings you can defend in a boardroom or a courtroom, pre-register.
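
The plan itself can be a one-page document; what matters is that it is dated and fixed before the first complete arrives. Sketched here as structured data, with every field invented for illustration:

```python
analysis_plan = {
    "registered": "2026-04-01",  # locked before fielding begins
    "hypotheses": [
        "H1: heavy buyers rate durability above price on BWS importance",
        "H2: a 10% price rise cuts preference share by no more than 4 points",
    ],
    "segments_of_interest": ["heavy buyers", "lapsed buyers"],
    "key_comparisons": [
        "heavy vs. lapsed buyers on BWS importance scores",
        "baseline vs. price-rise scenario shares from the CBC simulator",
    ],
    "significance_threshold": 0.05,  # two-tailed; no post-hoc additions
}
```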

Validate research predictions against external benchmarks

Ninth, demand external validation before you act on the results. If the study predicts vote intent, compare the raw data against the last election in the same geography. If it predicts purchase, hold out ten percent of the sample and test whether the model built on the other ninety percent predicts the holdout within acceptable error. If the numbers do not validate, your vendor should tell you so, in writing, before you make a multi-million-dollar decision. Most will not do this unless you require it in the scope of work.
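
The holdout test is a few lines of analysis, which is why it belongs in the scope of work rather than the too-hard pile. A sketch using scikit-learn with synthetic stand-in data (the model choice and the 10 percent split are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                          # stand-in survey predictors
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)   # stand-in purchase outcome

# Hold out 10 percent of respondents; build the model on the other 90 percent.
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.1, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

predicted = model.predict_proba(X_hold)[:, 1].mean()
actual = y_hold.mean()
print(f"holdout purchase rate: actual {actual:.1%}, predicted {predicted:.1%}")
# If the gap exceeds the pre-agreed tolerance, the vendor says so in writing.
```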

Replace static decks with interactive decision simulators

Tenth, replace the static PowerPoint deliverable with an interactive simulator. You do not need to know that attribute X has a utility of 0.34. You need to know what happens to your share if you change price, drop a feature, or enter a new segment. A web-based simulator built on the conjoint utilities or the regression coefficients lets you explore scenarios yourself, in real time, during and after the presentation. It also extends the shelf life of the research from one meeting to a year or more of use.
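
Under the hood, a simulator is a thin layer over a scenario function like the one below, which reuses the logit share math from the conjoint sketch earlier. The utility numbers and sensitivities remain invented; a real simulator would load the estimated utilities from the study:

```python
import math

def shares(utilities: dict) -> dict:
    exps = {k: math.exp(u) for k, u in utilities.items()}
    total = sum(exps.values())
    return {k: round(e / total, 3) for k, e in exps.items()}

BASE = {"our_product": 1.2, "competitor": 0.9, "none": 0.0}
PRICE_UTILITY_PER_PCT = -0.03   # invented: utility lost per 1% price increase
FEATURE_X_UTILITY = 0.25        # invented: utility feature X contributes

def simulate(price_change_pct: float = 0.0, drop_feature_x: bool = False) -> dict:
    u = dict(BASE)
    u["our_product"] += PRICE_UTILITY_PER_PCT * price_change_pct
    if drop_feature_x:
        u["our_product"] -= FEATURE_X_UTILITY
    return shares(u)

print(simulate())                      # baseline
print(simulate(price_change_pct=10))   # raise price 10 percent
print(simulate(drop_feature_x=True))   # drop feature X
```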

The bottom line for research buyers

These ten asks are not exotic. Every one of them is available today from any competent research firm, and the tools that make them practical, especially AI-driven text analysis and mobile-first design, are now cheap and accessible. What has been missing is buyer-side pressure. Vendors optimize for what clients measure and pay for. If you keep buying decks and crosstabs, you will keep getting decks and crosstabs. If you start buying simulators, pre-registered analyses, and validated predictions, the industry will respond. The research you commissioned is meant to reduce the risk of a decision you are going to make anyway. Make it do that job.
