What’s Happening Nowadays With Survey Samples? (Part 1)

What is The Op4G / Slice MR Scandal?

Op4G (Opinions4Good) and its offshoot Slice were US-based market research companies whose senior leaders were indicted in April 2025 for selling fake market research over a 10-year period, generating $10M in fraudulent revenue.  While they marketed a business model built on maintaining “a quality, engaged membership panel” of individuals eligible to participate in surveys, in 2014 they began recruiting individuals called “ants” to complete surveys and boost revenue, even though this produced fabricated market data.  Companies that purchased survey data from Op4G or Slice between 2014 and 2024 are encouraged to contact the U.S. Attorney’s office.

The scheme raises questions about how far this fraudulent data has permeated the industry, especially since Op4G and Slice presented their survey findings as high quality, backed by ISO certification.  It underscores the importance of upholding transparency and accountability in the market research industry, even when shortcuts to cut cost and time are readily available.
What is Enshittification?

The Op4G / Slice MR scandal is perhaps emblematic of the enshittification of platforms.  The term was popularized by Canadian writer Cory Doctorow in a 2022 blog post; Wikipedia defines enshittification as “a process in which two-sided online products and services decline in quality over time.”  JD Deitch, who in a Greenbook podcast cited Doctorow’s article as the inspiration for his ebook, described enshittification as “what happens in platforms when they start to seek yield and profitability and growth.”

On that Greenbook podcast, together with Lenny Murphy, Deitch touched on how enshittification compounds the long-standing issues the sample market faces in producing high-quality, reliable market data: participant engagement and polling representativeness.  The participant experience has been neglected and treated as an afterthought by the industry for so long that attracting a wide, diverse pool of engaged and relevant respondents remains a constant challenge.  When participants aren’t sufficiently incentivized to engage with the survey experience, the quality of the resulting data and insights risks falling short of its true potential.  And when you simply aren’t attracting enough respondents, or giving those disinclined to participate a reason to change their minds, you miss the opportunity to tap into subsets of the population that could have offered new and interesting perspectives.

The emergence of AI exacerbates these issues and attitudes toward the participant experience.  When client companies hold not just years but decades’ worth of survey data and studies, they can simply shift spending away from participant-driven research toward developing AI that produces synthetic data from that stock.
And when market research companies don’t own or have access to that kind of survey archive, desperate firms might resort to shortcuts like programmatic sampling or, as in the case of Op4G and Slice, fraudulent means of generating survey data and revenue.

The quality of the synthetic data produced from all that past data and research comes to mind, too.  Yes, it depends on the quality of the training data the Large Language Models (LLMs) are fed.  Excellent synthetic data would enable scaling and efficiency.  However, excellent synthetic data is tethered to the subject matter it excels at; deviating from that subject matter may produce less-than-desired outputs, far from any potential breakthroughs or new discoveries.  And despite AI’s best attempts to optimize based on what it was trained on, there is always the risk of hallucination.  For anyone who cares enough to understand, working or investing with flawed data is simply intolerable.