Lauren Leek insists that surveys aren't obsolete, but warns that they face serious challenges from declining participation and the increasing use of AI agents. Leek points out that leading survey companies are already working on innovative solutions to these issues. "If we want surveys to survive the twin identified threats, we need to collectively put full effort into increasing data quality," she says.
Once the foundation of market research, political polling, and public policy, surveys are in a quiet but profound crisis. According to social data scientist Lauren Leek, the situation has been driven by two intertwined trends: a sharp drop in human response rates, and a growing influx of artificial-intelligence agents completing surveys in place of real people.
Survey participation has dropped dramatically over the past decades. In the 1970s and 1980s, response rates typically ranged from 30 to 50 percent. Today, they can be as low as 5 percent.
However, declining human engagement is only half the problem. Leek illustrates just how accessible survey automation has become by building a simple Python pipeline that enabled her own AI agent to complete surveys on her behalf.
She explains that the process requires only access to a powerful language model – she used OpenAI's API – a basic survey parser (to read, say, a .txt or JSON export from Qualtrics or Typeform), and a persona generator that rotates between different respondent types like "urban lefty," "rural centrist," or "climate pessimist."
The most time-consuming part, she notes, is making the agent interact with the survey interface. “That’s it. With a bit more effort, this could scale to dozens or hundreds of bots. Vibe coding from scratch would work perfectly too,” Leek adds. Although Leek did not deploy her agent on a real platform, she says others have.
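The pipeline Leek describes can be sketched in a few dozen lines of Python. The following is a hypothetical reconstruction, not her code: the survey JSON format, the `parse_survey` and `ask_model` names, and the stubbed model call (which in her setup would wrap OpenAI's API) are all assumptions for illustration.

```python
# Hypothetical sketch of the three components Leek names: an LLM hook,
# a survey parser, and a persona generator. Structure is assumed.
import itertools
import json

# Persona generator: rotates between the respondent types Leek mentions.
PERSONAS = ["urban lefty", "rural centrist", "climate pessimist"]
persona_cycle = itertools.cycle(PERSONAS)

def parse_survey(raw_json: str) -> list[str]:
    """Pull question texts out of a simplified JSON survey export."""
    data = json.loads(raw_json)
    return [q["text"] for q in data["questions"]]

def build_prompt(question: str, persona: str) -> str:
    """Ask the model to answer in character as the rotated persona."""
    return f"You are a {persona}. Answer this survey question briefly: {question}"

def answer_survey(raw_json: str, ask_model) -> list[str]:
    """ask_model would wrap a real LLM API call; here it is injected."""
    persona = next(persona_cycle)
    return [ask_model(build_prompt(q, persona)) for q in parse_survey(raw_json)]

# Example run with a stub standing in for the real model call:
survey = '{"questions": [{"text": "Do you support carbon taxes?"}]}'
answers = answer_survey(survey, ask_model=lambda prompt: "Yes, strongly.")
```

Swapping the stub for an actual API call, and adding the browser automation Leek calls the most time-consuming part, is all that separates this sketch from a working bot.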
These trends have significant downstream effects. Leek explains that many political polls rely on statistical weighting to correct for underrepresented groups. As response rates decline and AI-generated responses rise, "the core assumptions behind these corrections collapse."
Synthetic agents tend to mimic the mainstream opinion found on high-traffic internet sources, producing models that "overfit the middle and underpredict edges." The result is stable but systematically biased predictions that miss minority perspectives.
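The weighting correction Leek refers to can be illustrated with a minimal post-stratification sketch. The age groups, population shares, and sample counts below are invented for illustration: each respondent in an underrepresented cell is weighted up so the sample matches known population shares.

```python
# Minimal post-stratification sketch (illustrative numbers, not real data).
# Population shares come from a census-style source; the sample is skewed.
population_share = {"18-29": 0.20, "30-64": 0.60, "65+": 0.20}
sample_counts = {"18-29": 5, "30-64": 80, "65+": 15}

n = sum(sample_counts.values())
# Weight = population share / sample share for each group.
weights = {g: population_share[g] / (sample_counts[g] / n)
           for g in population_share}
# Young respondents are scarce, so each one counts roughly 4x.
```

This is exactly where the corrections collapse: if AI agents flood the scarce "18-29" cell, the 4x weight amplifies synthetic opinion instead of correcting for missing humans.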
The market research industry faces a similar problem. AI-generated responses may be fluent and consistent, but they lack the unpredictable nature of human behavior. "Synthetic consumers will never hate a product irrationally, misunderstand your user interface, or misinterpret your branding," Leek observes. The result is that products are designed for an imaginary average user and often fail to meet the needs of actual market segments, especially those that are underserved or difficult to model.
Public policy is also at risk. Governments use surveys to plan services and allocate resources. If AI-generated answers dominate, vulnerable populations could become "statistically invisible," leading to under-provisioning of services where they are most needed.
Worse yet, Leek warns of feedback loops. Her first proposed remedy is to redesign surveys to be more engaging. "We need to move past bland, grid-filled surveys and start designing experiences people actually want to complete. That means mobile-first layouts, shorter runtimes, and maybe even a dash of storytelling."
Leek then discusses the growing toolkit for detecting AI-generated responses. Methods include analyzing response patterns, writing styles, and metadata such as keystroke timing. She recommends combining more of these tools and adding elements that only humans can complete, such as requiring prizes to be collected in person. She cautions, however, that "these bots can easily be designed to find ways around the most common detection tactics such as Captchas, timed responses and postcode and IP recognition. Believe me, way less code than you suspect is needed to do this."
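One timing-based tactic of the kind Leek alludes to might look like the sketch below. The 2-second and 0.3-second thresholds are illustrative assumptions, and, as she warns, a bot author can evade this simply by randomizing delays.

```python
# Hedged sketch of a timing-metadata check: flag respondents whose
# per-question completion times are suspiciously fast or uniform.
from statistics import mean, pstdev

def looks_automated(times_sec: list[float]) -> bool:
    """Flag too-fast averages or unnaturally uniform timing."""
    too_fast = mean(times_sec) < 2.0      # illustrative threshold
    too_uniform = pstdev(times_sec) < 0.3  # humans vary; scripts often don't
    return too_fast or too_uniform

human_times = [6.1, 3.4, 9.8, 4.2, 7.5]   # varied, plausible reading times
bot_times = [1.1, 1.0, 1.2, 1.1, 1.0]     # fast and near-constant
```

In practice such heuristics would be one signal among many, combined with writing-style and IP checks rather than used alone.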
Leek calls for smarter, dynamic incentives to attract real participants, particularly from underrepresented groups. "If you're only offering 50 cents for 10 minutes of mental effort, don't be surprised when your respondent pool consists of AI agents and sleep-deprived gig workers," she notes.
Lastly, Leek urges a broader rethinking of how organizations gather insight about people. Surveys, she argues, are not the only tools available: digital traces, behavioral data, and administrative records can provide a richer understanding, even if they are messy. "Think of it as moving from a single snapshot to a fuller, blended picture. Yes, it's messier – but it's also more real," she says.