Contents
- Why use participants skilled at AI tasks?
- What AI skills are available?
- Using participants skilled at AI tasks in your study
- How much should I pay participants skilled at AI tasks?
These are Prolific participants who have verified experience and/or have passed a targeted skill assessment in areas that are essential for AI training and evaluation, such as reasoning, fact-checking, and image and video annotation.
Why use participants skilled at AI tasks?
Many AI research tasks require judgment that goes beyond surface-level responses. For example:
- Evaluating reasoning chains
- Identifying inaccuracies in model outputs
By recruiting participants with proven experience and skills in AI-specific tasks, you ensure that the core competencies needed to develop, train, evaluate, and deploy your AI system are consistently met. Feedback from participants experienced in the specific task you need them to complete improves the quality and reliability of your datasets.
What AI skills are available?
Group | Definition | How they are qualified | Ideal for |
---|---|---|---|
Qualified AI taskers (deprecated) | Participants with proven proficiency across comparative reasoning, fact-checking, and writing. This group will soon be removed. | A combination of skill assessments: • Fact-checking: checks detection accuracy and the ability to precisely locate errors in long texts. • Comparative reasoning: checks the ability to provide nuanced feedback that improves AI system performance. • Structured writing: checks the ability to communicate through clear, logically organized written content. | General AI tasks that require strong attention to detail and the ability to follow complex task instructions |
Fact-checking | Fact-checkers can catch subtle errors, inaccuracies, hallucinations, and misleading statements that automated systems often miss. | Skill assessment in which participants must identify factual inaccuracies, highlight problematic text segments, and evidence each inaccuracy with valid references. | • Training data curation • Reinforcement learning from human feedback (RLHF) • AI safety work • Content moderation • Ground truth validation • Constitutional AI feedback • Model evaluation and testing • Red teaming exercises • Quality assurance • Benchmark dataset creation |
Image annotation | Image annotators can interpret ambiguous images, apply annotation rules consistently, and spot errors and inconsistencies across datasets. | Experience completing image annotation tasks on Prolific, demonstrating familiarity and proficiency with this type of work. | • Training data preparation • Model evaluation and testing • Domain adaptation • Continuous improvement • Quality assurance |
Video annotation | Video annotators can apply labeling rules consistently across long video sequences, track frames carefully, and follow labeling guidelines. | Experience completing video annotation tasks on Prolific, demonstrating familiarity and proficiency with this type of work. | • Training data preparation • Model evaluation and testing • Domain adaptation • Continuous improvement • Quality assurance |
Comparative reasoning | Participants who can evaluate logical consistency, identify subtle errors in multi-step processes, and provide nuanced feedback that improves AI system performance. | Skill assessment where participants are required to evaluate AI-generated responses, and choose and justify the superior output according to a set of criteria. | • Reinforcement learning from human feedback (RLHF) • Red teaming • Constitutional AI feedback • Content moderation • Chain-of-thought annotation • Benchmark creation • Training data generation • Mathematical problem-solving data creation • Code generation evaluation • Complex question-answering assessment |
Using participants skilled at AI tasks in your study
To recruit participants skilled at AI tasks, you can select them on either the 'Study set up' or the 'Participants' page, or filter for them through our API.
From the 'Study set up' page:

- Navigate to 'Recruit participants' on your study set-up page
- Select 'Find AI Taskers on Prolific'
- Choose your required area of expertise from the dropdown menu
You'll see the number of available participants for your selected expertise before launching your study, and the recommended pay is shown under 'Cost'.
From the 'Participants' page:

- Select the type of participant(s) you need in the 'Do you want verified AI Taskers' section, along with any other relevant screening filters, or
- Describe what you need in 'Tell us what you need' (e.g. "I need fact-checking participants who live in the US") and click 'Find participants'
If you’re running studies via the API, please see our API docs for instructions on how to add these filters programmatically.
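For example, here's a minimal sketch in Python of creating a draft study with an AI-skills filter applied. Treat it as illustrative only: the field names and especially the `filter_id` value are assumptions based on the general shape of the Studies API, and the API docs remain the authoritative reference for the exact endpoints and filter IDs.

```python
import requests

API_TOKEN = "YOUR_API_TOKEN"  # generated in your Prolific workspace settings

# Draft a study that targets participants with a verified AI skill.
# NOTE: the filter_id below is a placeholder for illustration only;
# look up the real filter IDs (and current base URL) in the API docs.
response = requests.post(
    "https://api.prolific.com/api/v1/studies/",
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={
        "name": "Fact-checking evaluation study",
        "description": "Review AI-generated answers for factual accuracy.",
        "external_study_url": "https://example.com/my-task",
        "reward": 150,  # per submission, in the smallest currency unit
        "total_available_places": 30,
        "estimated_completion_time": 10,  # minutes
        "filters": [
            {
                "filter_id": "fact-checking-skill",  # placeholder ID
                "selected_values": ["1"],
            }
        ],
    },
)
response.raise_for_status()
print(response.json()["id"])  # the draft study's ID
```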
How much should I pay participants skilled at AI tasks?
Because these participants have verified experience and/or have passed targeted skill assessments, they are a premium resource, and we recommend offering higher compensation rates. You can view the recommended payment amounts in the 'Cost' section on the study set-up page.
