
The EU Rule on AI: How it Impacts Market Research

Updated: Dec 15, 2023

Anyone who has been paying attention over the last year has seen a radical increase in the use of AI in market research, and that growth is only likely to accelerate. Although the US and other countries have been fairly lax in regulating the use of AI, the European Union has been working on a policy for AI safety for the last few years. Last Friday (Dec 8), the European Parliament and Council arrived at a deal that merged their two different proposals. That deal has the potential to significantly impact our industry going forward.



The regulation is quite lengthy, so we read through it so you don't have to. Here are the key things we think insights leaders need to know:


Breadth of Scope

  • The new EU rule supplements existing rules that protect individual privacy and personal data, as well as other protective regulations such as labor laws – all of those rules still apply.

  • The current draft rule applies to AI use within the EU and, to a significant extent, outside it as well. The rule prohibits both the export of banned systems from the EU and the use of AI systems in third countries whose outputs are intended to be used in the EU. Thus, processing data outside the EU does not protect you from fines.

  • Any system that uses an AI component and could not function as intended without it is considered an AI system. That means that if an AI model is a piece of a market research tool, then the market research tool is itself an AI system.

  • There are some exclusions – scientific research, open source, military – but market research doesn’t appear to qualify for any of these (though specific applications might).

 

Specifically Forbidden Activities Relevant to Market Research


BANNED: Use of certain AI systems with the objective or effect of materially distorting human behavior in a way that makes physical or psychological harm likely to occur.


This implies the ban would affect research that uses tools incorporating AI systems and designed to measure subliminal or subconscious thought, if those tools in any way modify the respondent's thoughts or behaviors.


While the ban seems to be primarily targeted at active AI systems that influence specific individuals (like social media algorithms that unconsciously influence people), it does seem to apply to AI and biometric systems (like facial or emotion recognition) that measure the effectiveness of unconscious messaging or imagery for the purpose of designing ad campaigns which then influence people. Questions around the use of these tools may revolve around immediacy (whether the tools are designed to provide general learning about how to subconsciously influence people, or to serve up content that implements that learning) and primacy of purpose. This one could get interesting.


BANNED: AI systems that categorize natural persons by assigning them to specific categories according to known or inferred sensitive or protected characteristics; such systems are particularly intrusive, violate human dignity, and hold great risk of discrimination.


This implies that AI systems (including machine learning tools) designed for market segmentation could be affected, especially if they use protected characteristics like gender and ethnicity as inputs. Practitioners should consider limiting the inputs they use to train these models.


Notably, even if models do not explicitly use protected categories as training inputs, they may still pose a risk if some of their inputs are highly correlated with a protected category and do not otherwise serve an important purpose. That is, variables that are effectively proxies for a protected category could violate this rule (as well as similar rules in countries like the US, in areas like housing finance and banking).
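
To make this concrete, here is a minimal sketch of the kind of screening a practitioner might do before training a segmentation model. It is written in Python with pandas and scikit-learn; the file name, column names, and correlation threshold are illustrative assumptions on our part, not anything prescribed by the rule.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative names -- substitute your own survey/CRM fields.
PROTECTED = ["gender", "ethnicity"]           # protected characteristics to exclude
respondents = pd.read_csv("respondents.csv")  # hypothetical input file

# 1. Exclude protected characteristics from the model's training inputs.
features = respondents.drop(columns=PROTECTED).select_dtypes("number")

# 2. Flag remaining inputs that may act as proxies for a protected category.
#    Here: absolute correlation with a one-hot-encoded protected attribute
#    above an arbitrary threshold. A real review would be more thorough.
protected_dummies = pd.get_dummies(respondents[PROTECTED], drop_first=False)
PROXY_THRESHOLD = 0.6
proxy_flags = {}
for col in features.columns:
    max_corr = protected_dummies.apply(
        lambda d: features[col].corr(d.astype(float))
    ).abs().max()
    if max_corr > PROXY_THRESHOLD:
        proxy_flags[col] = round(max_corr, 2)

if proxy_flags:
    print("Review these potential proxy variables before training:", proxy_flags)

# 3. Fit the segmentation model on the screened feature set only.
X = StandardScaler().fit_transform(features)
respondents["segment"] = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
```

The point is not this particular threshold or clustering method but the workflow: screen inputs before training, and document why any flagged variable is retained.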


BANNED: AI systems providing social scoring of natural persons for general purposes may lead to discriminatory outcomes and the exclusion of certain groups.


This probably applies more to marketing teams than insights folks, but it does suggest that AI systems that might effectively exclude or discriminate against certain groups when targeting offers, deals, memberships, or similar benefits could be at risk.


Additionally, fraud scoring tools are given particular mention in this context.  If AI systems are used for fraud detection in surveys (and thus restrict access to rewards), specific care should be taken that these systems do not rely at all on data about protected groups.


There are a lot of other banned applications (like emotion recognition in the workplace, specific uses of biometrics, facial recognition in public places, etc.), but fortunately these do not seem to apply to market research.

 

Additional Provisions for High-Risk AI and Foundation Models (including LLMs)


Fortunately, market research does not fall into the high-risk domains identified in Annex III, but many MR tools increasingly make extensive use of generative AI tools like LLMs and image generators. These tools are built on "foundation models," which are given special consideration and requirements under the rule.


If you are using a foundation model from a third party (through an API, for example), that third party will need to provide documentation around data training sources and register with the EU.  If you are building or training your own LLM (as a research supplier or company), this may apply to you as well.



The regulatory burden for foundation models could be quite high.  This includes examining training data sources for bias, applying mitigations, ensuring output does not violate other EU rules, and much more.  Given how inherently biased LLMs today are (because they’re trained on the internet), this could be interesting. There are additional requirements around copyrights in training data and even energy usage.


A Stanford analysis of the prior draft observed that none of the existing in-market LLMs were compliant.  We're waiting for a new analysis of the updated rule, but based on our initial read, it seems that OpenAI and other foundation model providers will need to invest significantly in order for their LLMs to comply.  Luckily, we still have time – the rule won't go into effect until at least 2025.

 

What Does it All Mean?


At the end of the day, what does this mean for most market research suppliers and clients?  A few key points stand out:


  • If you do anything in the EU, this will probably affect you.

  • If you’re building your own LLM, be prepared to do a lot of compliance work.  If you’re using an API, make sure the provider is prepared to put in the work to comply with the rule.

  • Offline applications for LLMs (like coding data or summarizing findings) are probably OK, but be a little cautious about using LLMs and image generators for real-time applications, like directly interacting with customers using chatbots or auto-probes.  Dynamically generating content in real time for a discussion or survey could be held to a higher standard.  At a minimum, the system may need to identify itself as an AI tool and take measures to avoid bias, insensitivity, and any appearance of manipulating the customer (see the sketch after this list).

  • Be careful when using biometric or AI tools to measure the effectiveness of unconscious messaging or imagery, especially if the output is likely to be used to manipulate people’s thoughts or behaviors.  Certain areas like advertising measurement may be affected, but the scope is still unclear.

  • Avoid using sensitive data on protected categories as inputs to pretty much any machine learning model – even segmentation models or fraud scoring tools.
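
On the real-time point above, here is a minimal sketch of one mitigation: showing an AI self-identification message before an LLM-based auto-probe interacts with a respondent. It is plain Python; the disclosure wording, function name, and message format are hypothetical illustrations, not language from the rule.

```python
from typing import Dict, List

# Hypothetical disclosure text -- your legal/compliance team should supply the real wording.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant that asks follow-up "
    "questions about your survey answers. Your responses are used for "
    "research purposes only."
)

def build_probe_messages(question: str, respondent_answer: str) -> List[Dict[str, str]]:
    """Assemble a chat-style message list for an LLM-based auto-probe.

    The first message shown to the respondent identifies the system as AI,
    and the system prompt instructs the model to avoid leading or
    manipulative phrasing.
    """
    return [
        {"role": "system",
         "content": "You are a neutral market-research interviewer. Ask one short, "
                    "open-ended follow-up question. Do not attempt to persuade, "
                    "upsell, or change the respondent's opinion."},
        {"role": "assistant", "content": AI_DISCLOSURE},
        {"role": "user",
         "content": f"Survey question: {question}\nRespondent answer: {respondent_answer}"},
    ]

# Example usage -- the resulting list would be passed to whichever LLM API you use.
messages = build_probe_messages(
    "How satisfied are you with the checkout process?",
    "It was fine, I guess.",
)
print(messages[1]["content"])  # the AI disclosure shown to the respondent
```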


On the whole, the regulation is reasonably friendly to market researchers and probably less disruptive than prior regulations like GDPR. As we learn more, we’re hopeful that the new rule will help us leverage the best aspects of new AI tools while avoiding their pitfalls.


Do you have questions about this article? Want to get in touch with the Intuify team to discuss AI in market research? Click here to contact us!
