Carrier Documentation

Carrier is a browser-based platform for designing, deploying, and monitoring controlled studies of mixed-agent, multimodal group interaction. This guide covers everything you need to design and run experiments.

What is Carrier?

Carrier supports controlled studies involving any mix of human participants, LLM-powered AI assistants, and rule-based scripted agents, communicating in real time over text and audio. Researchers use the Experiment Builder to configure study conditions, participant compositions, and interaction sequences, then monitor live sessions through the Experimenter Dashboard.

Who Participates

  • Human participants
  • AI assistants (LLM-driven)
  • Scripted agents (rule-based)

What They Do

  • Communicator (primary interactant)
  • Mediator (group-facing broadcasts)
  • Processor (draft-time assistant)

How They Interact

  • Text chat
  • Audio conversation
  • Segment activities (votes, rankings, tasks)
Key Concepts

Experiments are organised using a small set of composable building blocks. Understanding their hierarchy helps you navigate the rest of this guide.

Concept Description
Experiment Top-level container for a complete research study
Chamber Line A condition track (e.g., treatment vs control)
Chamber Container for sequential segments where matched participants interact
Segment A single activity within a chamber timeline
Participant Types Humans, AI assistants, and scripted agents
Roles Functional capabilities: communicator, mediator, processor
Triggers Condition-response rules for agent automation
Run / Session A participant's end-to-end journey through an experiment
Chatroom Runtime instantiation of a chamber
Matching Process of grouping participants into chatrooms
Properties Key-value pairs for conditional logic
Surveys Questionnaires at global, chamber, or segment level

Experiment
Top-level container — global settings, surveys, and one or more condition tracks. Set up your first experiment
Chamber Line
A condition track (e.g., treatment vs control). Each participant is assigned to exactly one. Set up conditions
Chamber
A matched-group container. Participants are grouped once and stay together across all segments. Configure chambers
Segment
A single activity: chat, vote, ranking, survey, instruction slide, and more. Explore segment types
Participants
Humans, AI assistants, or scripted agents — each with a role (communicator, mediator, processor). Configure participants
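
The hierarchy above can be sketched as a configuration skeleton. This is an illustrative outline, not a literal export — field names follow the schema reference later in this guide, and all values are made up:

```json
{
  "name": "My First Study",
  "status": "draft",
  "chamberlines": [
    {
      "name": "control",
      "chambers": [
        {
          "chamberId": "chamber-1",
          "communicationChannel": "text",
          "maxParticipants": 2,
          "segments": [
            { "segmentId": "chat-1", "type": "chat", "order": 0 }
          ],
          "participants": []
        }
      ]
    }
  ]
}
```

One experiment contains chamber lines (conditions), each line contains chambers (matched groups), and each chamber contains ordered segments (activities).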

Communication Channels

Text Audio Video (coming soon)

Your First Experiment

This walkthrough takes you from zero to a working two-person text chat experiment. You can adapt it for more complex designs later.

1
Create an experiment

Open the Experiment Builder, click New Experiment, give it a name and description.

2
Add a chamber line

A chamber line represents one experimental condition. For your first experiment, one line is enough.

3
Add a chamber with a chat segment

Inside your chamber line, add a chamber. Add a chat segment — this is where participants will interact in real time.

4
Add 2 human participant slots

In the chamber's participant configuration, add 2 human communicator slots. Participants will be matched automatically.

5
Deploy and activate the experiment

Change the experiment status from draft to active. This enables participant access. Copy the participant URL (/experiment/[your-experiment-id]) and send it to participants. They'll be matched and placed in a chatroom.

Inviting Collaborators

Share your experiment with other researchers so they can help design, configure, and monitor it.

From the Experiment Builder, use the collaborators panel to add researchers by email. They'll get full edit and dashboard access.

Action Owner Collaborator
View & edit experiment Yes Yes
View dashboard & data Yes Yes
Add/remove collaborators Yes No
Delete experiment Yes No
Only the experiment owner can manage collaborators and delete the experiment.

Set Up Conditions

Chamber lines represent between-subjects conditions (e.g., treatment vs control). An experiment can have multiple chamber lines, and each participant is assigned to exactly one based on the experiment's assignment method.

Method Description When to Use
random Uniform random assignment Default for most studies
counterbalance Balances participant counts across conditions When equal group sizes matter
survey-based Uses a survey response to determine assignment When conditions depend on participant characteristics
fixed Always assigns to a specific chamber line For pilot testing or single-condition studies
Configure the assignment method in the experiment's global settings.
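
As a sketch, the assignment method sits under the experiment's global settings (per the Experiment schema reference). Nesting surveyField inside chamberLineAssignment is an assumption; the value shown is illustrative:

```json
{
  "globalSettings": {
    "chamberLineAssignment": {
      "method": "survey-based",
      "surveyField": "priorExperience"
    }
  }
}
```

With method set to random, counterbalance, or fixed, no surveyField is needed.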

Add Chambers & Segments

A chamber is a container where matched participants interact together. Each chamber holds one or more segments — the individual activities participants complete in sequence. Participants are matched once at chamber start and stay grouped across all segments within it.

Segment Types

Type Description AI/Bot Compatible
chat Real-time text or audio conversation Yes
selection Multiple choice voting Yes
ranking Drag-and-drop ranking Yes
input Free text input No
slide Display static or dynamic content No
instruction Markdown instructions with continue button No
media Audio/video playback No
timer Countdown or waiting period No
task Custom interactive task Yes
survey Embedded mini-survey No
attention-check Face/survey-based participant verification No

Chamber Parameters

Field Type Description
chamberId String Unique identifier within the experiment
name String Display name
communicationChannel Enum
text audio video
segments [Segment] Ordered array of segment activities
participants [ParticipantConfig] Slot definitions for humans, AI assistants, and agents
maxParticipants Number Total participant slots
preSurvey [SurveySchema] Survey shown before this chamber
postSurvey [SurveySchema] Survey shown after this chamber
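
A minimal chamber sketch using the fields above — a two-person text chamber with a single chat segment. Values are illustrative:

```json
{
  "chamberId": "discussion-1",
  "name": "Group Discussion",
  "communicationChannel": "text",
  "maxParticipants": 2,
  "segments": [
    { "segmentId": "chat-1", "type": "chat", "order": 0 }
  ],
  "participants": [
    { "participantType": "human", "role": "communicator" },
    { "participantType": "human", "role": "communicator" }
  ]
}
```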

Embed Activities in Chat

The displayMode setting on a segment controls how it appears to participants:

  • Standalone (default) — replaces the current view entirely.
  • Embedded — renders as an overlay on top of a running chat segment. Chat continues beneath.

Non-chat segments placed after a chat segment in the timeline become embedded children of that chat. They activate while the parent chat is running, appearing as overlays that participants interact with without leaving the conversation.

Completion Behaviours

Behaviour Description
dismiss Overlay disappears, chat continues
end-chat Parent chat segment also ends
lock Content stays visible but becomes read-only
minimize Collapses to a small indicator
Use embedded mode for activities during a conversation (e.g., a quick vote mid-discussion). Use standalone for full-screen activities that need the participant's full attention.
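
A minimal sketch of an embedded segment using the displayMode field described above (segmentId and type values are illustrative):

```json
{
  "segmentId": "midpoint-vote",
  "type": "selection",
  "order": 1,
  "config": { "displayMode": "embedded" }
}
```

Because it follows a chat segment in the timeline, this selection overlays the running chat; choose one of the completion behaviours above to control what happens when it ends (the exact key name for that setting is not shown in this guide).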

Configure Timing & Transitions

Timing Fields

Field Type Description
timing.duration Number (ms) Total segment length. null = unlimited
timing.minDuration Number (ms) Minimum time before participants can advance
timing.warningTime Number (ms) Warning shown before auto-advance

Transition Modes

Mode Description When to Use
auto Advances when duration expires Timed activities with fixed length
manual Participant clicks to advance Self-paced reading or tasks
sync Waits for all participants Group coordination points
host Experimenter advances from dashboard Researcher-controlled pacing

Additional fields: transition.countdown (ms before advance) and transition.allowEarlyAdvance (boolean, whether participants can skip ahead).
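
Combining the timing and transition fields, a sketch of a two-minute auto-advancing instruction segment (durations illustrative):

```json
{
  "segmentId": "instructions",
  "type": "instruction",
  "timing": {
    "duration": 120000,
    "minDuration": 15000,
    "warningTime": 10000
  },
  "transition": {
    "mode": "auto",
    "countdown": 5000,
    "allowEarlyAdvance": true
  }
}
```

Participants must stay at least 15 s, see a warning 10 s before the end, and may advance early once minDuration has elapsed.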

Add Surveys

Surveys can be placed at three levels in the experiment structure:

  • Global pre/post — shown once at the start or end of a participant's session. Use for demographics, consent, or debrief.
  • Chamber pre/post — shown before matching or after a chamber ends. Use for manipulation checks or mood measures.
  • Segment survey — embedded in the chamber timeline as a segment of type survey. Use for in-context questionnaires.

Surveys use the Survey.js JSON format. You can design surveys visually in the Survey.js creator and paste the JSON into the experiment builder.

How survey responses drive logic:
  • Chamber line assignment — the survey-based method reads a specific field to determine which condition a participant enters.
  • Property rules — assign participant properties from responses (see Properties).
  • Prompt interpolation — use {{fieldName}} in agent system prompts to inject survey values.
  • Completion redirect — completionRedirectUrl sends participants to an external URL with their ID appended.
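
For instance, if the global pre-survey has a question named favoriteTopic (a hypothetical field name), its response can be interpolated into an agent's system prompt:

```json
{
  "systemPrompt": "You are a discussion partner. The participant's stated interest is {{favoriteTopic}}. Ask an opening question about it."
}
```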

Add Human Participants

Define human participant slots in each chamber. Participants are real people interacting through the browser. Each slot specifies how the participant's identity is determined.

Source Description
user_input Participant chooses their own display name and avatar
configured Researcher pre-sets the display name
auto_generated System generates a random name

When participants join an active experiment, they enter a matching queue. The system groups them into chatrooms based on each chamber's slot requirements. See Property-Based Matching for advanced configuration.

Add an Agent

An agent is an automated participant whose behaviour is defined by its triggers. Triggers can produce responses in three ways:

  • Scripted responses — fixed messages or random selection from an array. No API calls.
  • LLM-driven responses — the llm-driven trigger type sends context to an LLM for a dynamic response.
  • Mixed — combine both: keyword triggers with fixed replies plus an llm-driven fallback.
The Builder presents two paths — “LLM Agent” and “Scripted Bot” — as a convenience. An LLM Agent pre-configures botMode: 'llm' with response logic settings, while a Scripted Bot starts with an empty trigger list. Both are the same underlying agent type.

LLM Configuration

Field Type Description
provider Enum
openai anthropic google custom
aiModel String Model ID (e.g., gpt-4, claude-3-haiku)
systemPrompt String System prompt (supports {{fieldName}} interpolation)
temperature Number LLM temperature (default: 0.7)
maxTokens Number Max response tokens (default: 1000)
contextWindow Number Recent messages included (default: 10)
responseDelay {min, max} Simulated typing delay in ms
responseLogic.triggerOnFirstMessage Boolean Respond to first human message
responseLogic.respondToEveryMessage Boolean Respond to every message
responseLogic.timeoutTrigger Object Auto-respond after silence
responseLogic.initialSalute Object Greeting on chamber start
responseLogic.respondOnMention Boolean Only respond when mentioned
chainEnabled Boolean Enable multi-step LLM chain

LLM agents respond with {content, rationale, actions}. content is the message (null = stay silent). rationale is logged for research. actions is mediator-only. See Simulation for trigger details.
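
A sketch of an LLM agent configuration assembled from the fields above. Provider, model, prompt, and delay values are illustrative:

```json
{
  "provider": "anthropic",
  "aiModel": "claude-3-haiku",
  "systemPrompt": "You are a friendly study partner. Keep replies under two sentences.",
  "temperature": 0.7,
  "maxTokens": 1000,
  "contextWindow": 10,
  "responseDelay": { "min": 1000, "max": 3000 },
  "responseLogic": {
    "triggerOnFirstMessage": true,
    "respondToEveryMessage": true
  }
}
```

The responseDelay range simulates human typing so the agent's replies do not arrive instantly.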

Set Up a Mediator

The mediator role observes all messages and broadcasts information to the group. Mediators can control chat flow by disabling and enabling chat or prompting specific participants to respond.

Field Type Description
broadcastMode Enum
sequential aggregated triggered
synthesizeResponses Boolean Synthesize participant responses before broadcasting
synthesisPrompt String Prompt for synthesis
broadcastFrequency Number How often to broadcast
triggerKeywords [String] Keywords that trigger a broadcast
Mediators have special trigger types (periodic, aggregate, topic-detected, discussion-phase) and can issue actions (disable_chat, enable_chat, prompt_participant). See Simulation for details.
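
A sketch of a mediator configuration using the fields above (all values illustrative):

```json
{
  "broadcastMode": "aggregated",
  "synthesizeResponses": true,
  "synthesisPrompt": "Summarise the main points of agreement and disagreement in one sentence.",
  "broadcastFrequency": 5,
  "triggerKeywords": ["vote", "decide"]
}
```

In aggregated mode the mediator collects participant messages and broadcasts a synthesis rather than relaying each message individually.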

Set Up a Processor

The processor role assists communicators privately during draft time. Processor feedback is only visible to the paired communicator — other participants do not see it.

Field Type Description
targetCommunicators [Number] Which communicator slots to assist
feedbackVisibility Enum
private public
phases [Phase] Ordered processing phases

Phase Definition

Field Type Description
phaseId String Unique identifier
mode Enum
review generate real-time-assist disabled
transitionTrigger.type Enum
on-start message-count time-elapsed keyword participant-event manual on-end
transitionTrigger.value Mixed Trigger-specific threshold
aiConfig Object {provider, model, systemPrompt, temperature, maxTokens}
contextLevel Enum
none partial full
reviewSettings Object {trigger, pauseTimeout, feedbackFormat, mandatory, maxRounds}
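
A sketch of a processor with a single review phase, using the fields above. Whether targetCommunicators slots are 0- or 1-indexed is an assumption; prompt and model values are illustrative:

```json
{
  "targetCommunicators": [0],
  "feedbackVisibility": "private",
  "phases": [
    {
      "phaseId": "draft-review",
      "mode": "review",
      "transitionTrigger": { "type": "on-start" },
      "contextLevel": "partial",
      "aiConfig": {
        "provider": "openai",
        "model": "gpt-4",
        "systemPrompt": "Review the draft for clarity and tone before it is sent."
      }
    }
  ]
}
```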

How Triggers Work

A trigger is a condition-response pair that fires when something happens in the chatroom. Every trigger has three parts:

  • Condition — what to watch for (a trigger type + value).
  • Response or Action — what to do when matched (send a message or submit to a segment).
  • Lifecycle controls — cooldown, maxTriggers, priority.

Trigger Definition Fields

Field Type Description
triggerId String Unique identifier (required)
enabled Boolean Whether active (default: true)
condition.type Enum One of the trigger types
condition.value Mixed Type-specific match value
condition.caseSensitive Boolean Case-sensitive matching (default: false)
condition.matchMode Enum
any all
— for multi-value conditions
condition.senderFilter Enum
human specific
— filter by sender
priority Number Higher evaluated first (default: 0)
cooldown Number Min ms between firings (default: 0)
maxTriggers Number Max times this can fire (null = unlimited)
Triggers can be scoped to specific segments or active globally across the chamber.
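
Putting the fields together, a sketch of a keyword trigger (messages, keywords, and thresholds are illustrative):

```json
{
  "triggerId": "offer-hint",
  "enabled": true,
  "condition": {
    "type": "keyword",
    "value": ["help", "stuck"],
    "caseSensitive": false,
    "matchMode": "any",
    "senderFilter": "human"
  },
  "response": {
    "message": "It sounds like you could use a hint. Try rephrasing your last point.",
    "delay": 2000
  },
  "cooldown": 30000,
  "maxTriggers": 3,
  "priority": 1
}
```

This fires when any human message contains "help" or "stuck", waits 2 s before replying, and fires at most three times with at least 30 s between firings.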

Trigger Types Reference

Message-based

Type Description condition.value
keyword Matches keywords/phrases in message text String or [String]
regex Matches a regular expression pattern String (regex pattern)
sequence Fires when messages match an ordered sequence [String]

Counting

Type Description condition.value
message-count Fires after N total messages Number
participant-message-count Fires after a participant sends N messages {target, count, countMode}

For participant-message-count: target can be 'any', 'human', 'bot', or a specific name. countMode: 'total', 'consecutive', 'since-reset'. Supports resetOnTrigger.
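
A sketch of a participant-message-count condition using the keys above. Placing resetOnTrigger at the top level of the trigger is an assumption:

```json
{
  "condition": {
    "type": "participant-message-count",
    "value": {
      "target": "human",
      "count": 5,
      "countMode": "consecutive"
    }
  },
  "resetOnTrigger": true
}
```

This fires when any human sends five messages in a row, then resets its count.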

Time-based

Type Description condition.value
time Fires after delay (ms) from segment/chamber start Number (ms)
activity-timeout Fires after a period of inactivity Number (ms)

Event-based

Type Description condition.value
participant-action Fires on join, leave, etc. String (event name)
after-bot-message Fires after another bot sends a message String (bot name)
event-monitor Monitors chatroom events and chains triggers Object
participant-count Fires based on participant count thresholds {operator, count}

LLM

Type Description condition.value
llm-driven Sends context to LLM for dynamic response Object (LLM config)

Mediator-specific

Type Description condition.value
periodic Fires at regular intervals Number (interval ms)
aggregate Fires after collecting N messages Number (message count)
topic-detected Fires on topic/keyword pattern String or [String]
discussion-phase Fires at specific discussion phases String (phase name)

Meta

Type Description condition.value
chain-only Only fires when chained from another trigger (none)

Trigger Responses

When a trigger fires, it can send a message to the chatroom.

Response Fields

Field Type Description
response.message String Single response message
response.messages [String] Array — one selected randomly
response.delay Number Delay in ms before sending
response.probability Number Chance of firing (0–1, default: 1.0)

LLM-driven path: For llm-driven triggers, context is sent to the LLM which returns {content, rationale, actions}. content is the message (null = stay silent). rationale is logged only. actions is mediator-only (see Mediator Actions).

Segment Actions

Beyond sending messages, triggers can submit to interactive segments via the segmentAction field. This enables bots to participate in votes, rankings, and other interactive activities.

Supported segment types: selection (including slider mode), ranking.

Data Modes

Mode Description
static Hardcoded values (index-based selection)
random Pick randomly from options (optional weighting)
referenced React to human submissions using a strategy

Referenced Strategies

Strategy Description
match-first-human Copy the first human's choice
match-majority Copy the most popular choice
oppose-majority Pick the least popular choice
random-different Pick something different from humans

Submission Metadata

Field Type Description
countTowardTotal Boolean Bot's submission counts toward completion
showInResults Boolean Submission appears in results display
tagAsBot Boolean Show bot indicator in UI
In embedded segments, action triggers activate while response triggers stay disabled. After the embedded segment ends, parent triggers resume.
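
A sketch of a trigger that submits to a selection segment. The mode and strategy key names follow the table headings above and are assumptions; metadata fields come from the Submission Metadata table:

```json
{
  "triggerId": "bot-vote",
  "condition": { "type": "time", "value": 10000 },
  "segmentAction": {
    "mode": "referenced",
    "strategy": "match-majority",
    "countTowardTotal": false,
    "showInResults": true,
    "tagAsBot": true
  }
}
```

Ten seconds into the segment, the bot copies the most popular human choice, appears in results with a bot indicator, but does not count toward completion.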

Mediator Actions

Mediator-role agents can issue actions that control the chat flow, returned in the actions array of their JSON response.

disable_chat

Mutes a participant's input. Include release conditions to control when chat is re-enabled.

Condition Description
timeout Re-enable after N milliseconds
participant_message After the target sends a message
message_count After N total messages
all_others_responded After all others have sent a message
keyword_mentioned When a keyword appears
mediator_release Only mediator can re-enable

Release logic: type: "any" (first condition met) vs type: "all" (all conditions must be met).

enable_chat

Re-enables a muted participant. Clears all restrictions and pending timers.

prompt_participant

Sends a private message visible only to the target participant.

Example Action

{
  "action": "disable_chat",
  "target": "participant_name",
  "rationale": "reason (logged)",
  "release_conditions": {
    "type": "any",
    "conditions": [
      { "type": "timeout", "value": 30000 },
      { "type": "keyword_mentioned", "value": "ready" }
    ]
  },
  "on_release": {
    "notify": true,
    "message": "You can now type again."
  }
}

Chaining & Lifecycle

Triggers can be linked together to create multi-step behaviour:

  • chainTrigger — fires a follow-up trigger after the current one completes.
  • chain-only — a trigger type that only fires when chained from another trigger.
  • after-bot-message — fires when another bot sends a message (cross-bot coordination).
  • event-monitor — monitors events and chains based on observations.

Lifecycle Controls

Control Type Description
cooldown Number Min ms between firings (default: 0)
maxTriggers Number Max total fires (null = unlimited)
priority Number Higher evaluated first (default: 0)
enabled Boolean Activate/deactivate without removing
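
A sketch of a two-step chain: a message-count trigger fires once, then chains into a chain-only follow-up (messages and thresholds illustrative):

```json
[
  {
    "triggerId": "opening-question",
    "condition": { "type": "message-count", "value": 3 },
    "response": { "message": "What does everyone think so far?" },
    "chainTrigger": "follow-up",
    "maxTriggers": 1
  },
  {
    "triggerId": "follow-up",
    "condition": { "type": "chain-only" },
    "response": { "message": "Feel free to disagree with each other.", "delay": 15000 }
  }
]
```

The follow-up can never fire on its own; it only runs 15 s after opening-question completes.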

Participant Properties

Properties are key-value pairs that accumulate on participants throughout their session, used for matching, chamber visibility, and dynamic assignment.

Initial Properties

Assigned at chamber line assignment. Each property uses a strategy to determine its value.

Strategy Description
fixed Always assigns fixedValue
random Picks randomly from options array
counterbalance Picks least-used option (random tie-breaking)
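
A sketch of two initial property definitions. The strategy, options, and fixedValue names come from the table above; the key wrapper and list shape are assumptions, and the values are illustrative:

```json
[
  { "key": "donorType", "strategy": "counterbalance", "options": ["high", "low"] },
  { "key": "studyWave", "strategy": "fixed", "fixedValue": "wave-1" }
]
```

Here donorType is balanced across participants while studyWave is the same for everyone.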

Property Rules

Evaluated after survey responses. Each rule checks a survey field and assigns a property when the condition is met.

Field Type Description
ruleId String Unique identifier
source Enum
global-pre-survey chamber-pre-survey segment-survey
surveyField String Survey question name to evaluate
condition.operator Enum
eq neq in gt gte lt lte between
condition.value Any Comparison value
assigns.key String Property key to set
assigns.value Any Value to assign
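
A sketch of a property rule using the fields above (field name, threshold, and assigned value are illustrative):

```json
{
  "ruleId": "flag-experts",
  "source": "global-pre-survey",
  "surveyField": "yearsExperience",
  "condition": { "operator": "gte", "value": 5 },
  "assigns": { "key": "expertise", "value": "expert" }
}
```

After the global pre-survey, participants reporting five or more years of experience get the property expertise: "expert".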

Condition Operators

Operator Description
eq Equality (string)
neq Inequality (string)
in Value in target array
gt Greater than (numeric)
gte Greater than or equal
lt Less than
lte Less than or equal
between Between min and max inclusive
Properties persist for the entire session.

Route by Survey Response

Two mechanisms let you route participants based on survey answers:

Survey-based Assignment

Set the experiment's assignment method to survey-based and specify the surveyField. The global pre-survey response determines which chamber line the participant enters.

Property Rules (more flexible)

Assign a property from any survey, then use it for visibility or matching. Works with chamber and segment surveys too — not just the global pre-survey. See Participant Properties for rule definitions.

Show/Hide Chambers

Each chamber can define visibilityConditions — an array of property conditions that must ALL be satisfied (AND logic) for the chamber to appear in a participant's run plan.

Visibility Condition Fields

Field Type Description
key String Property key to check
operator String Condition operator
value Any Target value
If no conditions are set, the chamber is visible to everyone. Use visibility conditions to create branching flows where different participants see different chambers.
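
A sketch of a chamber visible only to a subset of participants (property keys and values are illustrative):

```json
{
  "chamberId": "expert-debrief",
  "visibilityConditions": [
    { "key": "expertise", "operator": "eq", "value": "expert" },
    { "key": "condition", "operator": "neq", "value": "control" }
  ]
}
```

Both conditions must hold (AND logic), so only non-control experts see this chamber in their run plan.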

Property-Based Matching

Chamber slots can define requiredProperties. The matching algorithm uses most-constrained-first ordering: slots with the fewest eligible candidates are filled first, with FIFO within tiers.

Required Property Fields

Field Type Description
key String Property key required
operator String Condition operator
value Any Required value
A chamber has two human slots. Slot 1 requires donorType: 'high', slot 2 has no requirements. The matcher fills slot 1 first (more constrained), then slot 2 from remaining candidates.
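
The two-slot example above can be sketched as a participant configuration (slot shape follows the chamber parameter tables; values are illustrative):

```json
{
  "participants": [
    {
      "participantType": "human",
      "role": "communicator",
      "requiredProperties": [
        { "key": "donorType", "operator": "eq", "value": "high" }
      ]
    },
    { "participantType": "human", "role": "communicator" }
  ]
}
```

The first slot only accepts participants with donorType: "high"; the second accepts anyone remaining.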

Launch Your Experiment

When your experiment design is ready, change its status from draft to active in the Experiment Builder. This enables participant access and starts the matching system for your experiment.

Data Collection Mode

Choose between two data collection modes when activating:

Mode Description
testing Data is marked as test data and can be filtered out during export. Use this for pilot runs and debugging.
live Production data collection. Use this when running with real participants.

Sharing the Participant URL

Share the participant URL with your participants: /experiment/[experimentId]. When the first participant joins, they progress through: session initialization, identity setup, global pre-survey, chamber line assignment, and into the matching queue.

External Platform Integration

When recruiting from external platforms (e.g., Prolific, MTurk), append participant IDs via query parameters. The parameter name is configurable via participantIdParam (defaults to pid). For example: /experiment/[experimentId]?pid=PROLIFIC_PID

Content Filtering

For experiments using LLM agents, content filters are available in experiment settings. These include profanity filtering, PII detection, and harmful content filters to help maintain safe interactions.

Monitor Live Sessions

The Experimenter Dashboard provides real-time visibility into your running experiment. Access it from /dashboard/experiment/[experimentId].

Dashboard Features

The dashboard displays active sessions with real-time participant status, chatroom activity, and connection information. You can see which participants are in which chambers, their current segment, and how long they have been active.

Matching Queue

View who is waiting in the matching queue, how long they have been waiting, and which experiment they belong to. This helps you identify bottlenecks when participants are waiting too long for a match.

Alerts

The dashboard surfaces alerts for situations that may need attention: disconnected participants, long wait times in the matching queue, participant drop-outs, and idle sessions where no activity has occurred for an extended period.

Manage Participants

From the dashboard you can take action on individual participant sessions.

Session Actions

Pause a session to temporarily suspend a participant's progress. Resume a paused session to let them continue. End a session to terminate a participant's run early. These actions are available per participant from the dashboard session list.

Participant Status

View each participant's current phase (e.g., identity setup, pre-survey, chamber execution), their chamber progress within the run plan, and their connection status (online, offline, or disconnected).

Chatroom Inspection

Inspect any active chatroom to view its contents: the live chat history, participant list, and current segment. This is useful for verifying agent behaviour and monitoring the quality of interactions during a live session.

Export Data

Export your experiment data for analysis from the dashboard.

Formats

Data can be exported in JSON or CSV format.

Data Types

Type Description
participants Participant records with status, properties, and session metadata
chatrooms Chatroom records including full chat history and participant lists
responses Survey responses from global, chamber, and segment-level surveys
all Combined export of all data types

Completion Redirect

Configure a completionRedirectUrl in global settings to redirect participants to an external URL when they finish. The configurable participantIdParam is appended to the redirect URL so external platforms can match completions back to their records.

Two-Person Text Chat

The simplest experiment — two humans chatting in real time.

1
Create an experiment with one chamber line

Open the Builder, create a new experiment, and add a single chamber line.

2
Add a chamber with a chat segment

Inside the chamber line, add a chamber and give it a chat segment. See Chambers & Segments.

3
Add 2 human communicator slots

Configure the chamber with 2 human communicator participant slots. See Human Participants.

4
Activate and share

Set the experiment to active and share the participant URL. See Launch Your Experiment.

Human-AI Collaboration

One human paired with an LLM-driven agent for a conversational study.

1
Create an experiment

Create a new experiment with one chamber line and a chamber containing a chat segment.

2
Add 1 human + 1 agent slot

Add one human communicator slot and one agent communicator slot. See Add an Agent.

3
Configure the agent

Set the agent's systemPrompt and response logic (trigger on first message, respond to every message, typing delay).

4
Add a post-chat survey

Attach a chamber post-survey to collect participant feedback after the interaction. See Surveys.

5
Activate

Set the experiment to active. Each human participant will be paired with an AI agent automatically.

Multi-Condition Study

A between-subjects design with 3 conditions: control, treatment-A, and treatment-B.

1
Create 3 chamber lines

Add three chamber lines — one for each condition (control, treatment-A, treatment-B). See Conditions & Chamber Lines.

2
Use counterbalance assignment

Set the chamber line assignment method to counterbalance to distribute participants evenly across conditions.

3
Configure each line

Give each line the same chamber structure but with different agent configurations (or no agent for the control condition).

4
Add global pre/post surveys

Add global pre-survey and post-survey to collect demographics and outcome measures across all conditions.

5
Track condition assignment

Use property rules to record which condition each participant was assigned to for later analysis. See Properties & Rules.

Mediated Group Discussion

Four humans with an AI mediator managing the conversation flow.

1
Add a chamber with a chat segment

Create a chamber with a chat segment for the group discussion.

2
Add 4 humans + 1 mediator

Add 4 human communicator slots and 1 agent mediator slot. See Mediator Role.

3
Configure mediator behaviour

Set up the mediator with periodic broadcasts and the disable_chat action to manage turn-taking. See Mediator Actions.

4
Add an embedded voting segment

Add an embedded selection segment for mid-discussion voting. See Embed Activities.

5
Set segment actions for the mediator

Configure segment actions so the mediator bot also participates in the voting segment. See Segment Actions.

Experiment

Top-level container for a complete research study. Holds global settings, surveys, chamber line definitions, and bot templates. Each experiment has an owner and optional collaborators.

Field Type Description
name String Experiment name (required)
description String Detailed description
status Enum
draft active paused completed archived
version Number Configuration version (default: 1)
globalSettings.timezone String Timezone for timestamps (default: UTC)
globalSettings.dataRetentionDays Number Days to retain data (default: 90)
globalSettings.chamberLineAssignment.method Enum
random counterbalance survey-based fixed
globalSettings.completionRedirectUrl String URL to redirect participants after completion
globalSettings.participantIdParam String Query parameter name for participant ID (default: pid)
globalSettings.dataCollectionMode Enum
testing live
globalPreSurvey [Survey] Survey.js surveys shown before any chambers
globalPostSurvey [Survey] Survey.js surveys shown after all chambers
experiment.chamberlines [ChamberLine] Array of chamber line configurations
experiment.botTemplates [BotTemplate] Reusable bot/AI configurations

Chamber Line

A condition track comprising an ordered sequence of chambers. Participants are assigned to one chamber line based on the experiment's assignment method.

Field Type Description
name String Display name (e.g., control, treatment-a)
chambers [Chamber] Ordered array of chamber configurations

Chamber

A container for one or more sequential segments that matched participants progress through together. At runtime, becomes a chatroom. See Add Chambers & Segments.

Field Type Description
chamberId String Unique identifier within the experiment
name String Display name
communicationChannel Enum
text audio video
segments [Segment] Ordered segments
participants [Config] Slot definitions for humans, AI, and agents
maxParticipants Number Total participant slots
preSurvey [Survey] Survey shown before this chamber
postSurvey [Survey] Survey shown after this chamber
visibilityConditions [Condition] Property conditions for visibility

Segment

A single activity within a chamber timeline. See Add Chambers & Segments and Embed Activities.

Field Type Description
segmentId String Unique identifier within the chamber
name String Display name
type Enum
chat selection ranking input slide instruction media timer task survey attention-check
order Number Position in the chamber timeline
config.displayMode Enum
standalone embedded
timing.duration Number Duration in ms (null = unlimited)
timing.minDuration Number Minimum time before advance
timing.warningTime Number Warning before auto-advance
transition.mode Enum
auto manual sync host
transition.countdown Number Countdown before advance (ms)
transition.allowEarlyAdvance Boolean Whether participants can skip ahead
agentOverrides [Override] Per-segment agent behaviour overrides

Participant

An entity in a chamber. See Add Human Participants and Add an Agent.

Field Type Description
participantId String Unique identifier
participantType Enum
human agent
role Enum
communicator mediator processor
displayName String Name shown in chat (max 30 chars)
avatar String Avatar image URL or identifier
connectionStatus Enum
offline online disconnected
matchingStatus Enum
not_ready ready_for_matching waiting_for_match matched in_chatroom completed
status Enum
active completed dropped_out paused
properties Map Key-value properties for conditional logic

Roles

Functional capabilities assigned to any participant. See Mediator and Processor for role-specific configuration.

Role Description Visibility
communicator Primary interactant. Sends and receives messages directly. All participants
mediator Observes all messages, broadcasts to group, controls chat flow. All participants
processor Private draft-time assistant for paired communicators. Paired communicator only

Triggers

Condition-response rules for agent automation. See How Triggers Work, Trigger Types, Segment Actions, and Mediator Actions.

| Field | Type | Description |
| --- | --- | --- |
| triggerId | String | Unique identifier (required) |
| enabled | Boolean | Whether active (default: true) |
| condition.type | Enum | Trigger type (see reference) |
| condition.value | Mixed | Type-specific match value |
| response.message | String | Response message (or .messages for random selection) |
| response.delay | Number | Delay before sending (ms) |
| response.probability | Number | Chance of firing (0–1) |
| segmentAction | Object | Segment submission config |
| cooldown | Number | Minimum ms between firings |
| maxTriggers | Number | Maximum fire count (null = unlimited) |
| priority | Number | Evaluation order (higher first) |
| chainTrigger | String | triggerId to fire after this one |
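
As an illustration, a trigger record and the gating logic implied by the cooldown, maxTriggers, and probability fields might be sketched as follows. The condition type name and all values here are invented for illustration, and the evaluation function is a minimal sketch, not the platform's actual implementation.

```python
import random

# Hypothetical trigger; field names follow the table, values are invented.
trigger = {
    "triggerId": "nudge-on-silence",
    "enabled": True,
    "condition": {"type": "silence", "value": 30_000},  # assumed type name
    "response": {"message": "What does everyone think?",
                 "delay": 2_000, "probability": 0.8},
    "cooldown": 60_000,
    "maxTriggers": 3,
    "priority": 10,
}

def may_fire(trig, last_fired_ms, now_ms, fire_count):
    """Gate a trigger on enabled, maxTriggers, cooldown, then probability."""
    if not trig["enabled"]:
        return False
    if trig["maxTriggers"] is not None and fire_count >= trig["maxTriggers"]:
        return False
    if last_fired_ms is not None and now_ms - last_fired_ms < trig["cooldown"]:
        return False
    return random.random() < trig["response"]["probability"]
```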

Run / Session

A single participant's end-to-end journey through an experiment. See Launch Your Experiment.

| Field | Type | Description |
| --- | --- | --- |
| runId | String | Human-readable unique identifier |
| experimentId | ObjectId | Parent experiment |
| participantId | String | Reference to participant |
| assignedChamberLine | String | Which chamber line this run follows |
| currentPhase | Enum | initialization, identity_setup, global_pre_survey, chamber_line_execution, global_post_survey, completed, terminated |
| currentChamberIndex | Number | Position in chamber sequence |
| status | Enum | active, paused, completed, dropped, terminated |
| runPlan | Object | {chamberLineId, chambers: [{chamberId, order, status}]} |
| surveyResponses | Object | {globalPreSurvey, globalPostSurvey, chamberSurveys} |
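
A sketch of how these fields fit together, assuming currentChamberIndex indexes into runPlan.chambers. All identifiers here are invented for illustration:

```python
# Hypothetical run record; shape follows the table's runPlan and
# surveyResponses descriptions, identifiers are made up.
run = {
    "runId": "run-2024-0042",
    "participantId": "p-17",
    "assignedChamberLine": "treatment",
    "currentPhase": "chamber_line_execution",
    "currentChamberIndex": 1,
    "status": "active",
    "runPlan": {
        "chamberLineId": "treatment",
        "chambers": [
            {"chamberId": "intro", "order": 0, "status": "completed"},
            {"chamberId": "discussion", "order": 1, "status": "active"},
        ],
    },
    "surveyResponses": {"globalPreSurvey": {}, "globalPostSurvey": None,
                        "chamberSurveys": {}},
}

# Looking up the current chamber from the run plan:
current = run["runPlan"]["chambers"][run["currentChamberIndex"]]
```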

Chatroom

Runtime instantiation of a chamber, created when participants are matched. See Monitor Live Sessions.

| Field | Type | Description |
| --- | --- | --- |
| chatroomId | String | Unique identifier |
| experimentId | ObjectId | Parent experiment |
| chamberId | String | Source chamber template |
| status | Enum | waiting, ready, active, paused, completed, closed |
| participants | [Entry] | {participantId, slot, role, joinedAt, isActive} |
| chatHistory | [Message] | All messages with sender info, type, timestamps |
| processorInteractions | [Record] | Review/generate/suggestion records |
| settings | Object | {allowParticipantChat, maxMessageLength, chatDuration, enableReactions} |

Message senderType values: participant, human, system, mediator, agent, mediator_bot.
Message messageType values: text, system, broadcast, bot_response, ai_response, processor_suggestion.

Matching

The runtime process that groups participants into chatrooms. Runs every 5 seconds. See Property-Based Matching.

| Strategy | Description |
| --- | --- |
| Simple FIFO | First-in-first-out — matches in queue order |
| Chatroom-based | Matches based on chamber slot requirements (human count, roles) |
| Conditional | Matches based on property conditions on slots |

Participants enter the queue by emitting ready-for-matching. The matching manager creates a chatroom when enough participants are available to fill the chamber's required slots.
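
The Simple FIFO strategy can be sketched as a queue drained in arrival order whenever enough participants are waiting to fill a chamber's slots. This is an illustrative sketch of the described behaviour, not the platform's actual matching code:

```python
from collections import deque

def match_fifo(queue, slots_required):
    """Pop one chatroom's worth of participants, oldest first,
    or return None if the queue cannot yet fill all slots."""
    if len(queue) < slots_required:
        return None  # not enough participants; try again on the next tick
    return [queue.popleft() for _ in range(slots_required)]

queue = deque(["p1", "p2", "p3", "p4", "p5"])
room = match_fifo(queue, 3)       # → ["p1", "p2", "p3"]
leftover = match_fifo(queue, 3)   # → None (only p4, p5 remain)
```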

Annotator Documentation

The Annotator is a batch LLM annotation engine for processing text data at scale. Upload a CSV, configure LLM annotators, and download structured results.

What is the Annotator?

The Annotator is a batch LLM annotation engine. Upload a CSV, configure one or more LLM annotators with prompt templates, run the task at scale, and download structured results.

Common use cases include text classification, sentiment analysis, content coding, and replicating published annotation schemes from peer-reviewed research.

Key Concepts

| Concept | Description |
| --- | --- |
| Task | Top-level container holding CSV data, LLM configs, and processing settings |
| Row | One CSV record, processed independently |
| LLM Config | A provider + model + prompt template combination |
| Repetition | Running each row through each config multiple times for reliability |
| Template | Reusable annotation configuration that can be shared |
| Work Unit | One row × one config × one repetition = one API call |

Your First Annotation Task

Get started in four steps:

1. Upload a CSV with a text column. Your CSV should contain the text you want annotated. Column names become template variables.
2. Add an LLM config with a classification prompt. Choose a provider and model, then write a prompt template using {{columnName}} syntax to reference your data.
3. Run the task. Start processing. The engine sends each row through your LLM config and stores the results.
4. Download results. Export your annotated data as CSV, Excel, or JSON.
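
Step 1's column-to-variable mapping can be seen in miniature below. This is a client-side sketch using Python's csv module; the Annotator performs this mapping for you on upload:

```python
import csv
import io

# A two-row CSV: the header names ("id", "text") become the
# {{id}} and {{text}} template variables described above.
data = "id,text\n1,Great product!\n2,Terrible service.\n"
rows = list(csv.DictReader(io.StringIO(data)))
template_variables = list(rows[0].keys())   # ["id", "text"]
```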

Providing API Keys

The Annotator requires API keys for the LLM providers you use: OpenAI, Anthropic, and/or Google.

User-level keys are set in your account settings and reused across all your tasks. Per-task keys can be provided when creating or editing a task and override user-level keys for that task only.

API keys are never visible to collaborators. Each user must provide their own keys.

Upload & Preview CSV Data

Upload a CSV file (max 10 MB). After upload you can preview the headers and the first rows of data. Column names become {{columnName}} template variables for use in your prompt templates.

Configure LLM Annotators

Add one or more LLM configurations to a task. Each configuration specifies a provider (OpenAI, Anthropic, or Google), a model, and prompt templates. You can add multiple configs to compare models or prompt strategies side by side.

Each config supports temperature and maxTokens settings to control response variability and length.

Write Prompt Templates

Each LLM config has a system prompt and a user prompt. Use {{columnName}} syntax to insert values from each CSV row into the prompt.

Tips for effective prompts: request structured output (e.g., JSON or a single label), define clear categories with descriptions, and provide examples of expected classifications in the system prompt.
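
A minimal sketch of how {{columnName}} substitution works, assuming simple whole-value replacement (the platform's actual templating engine may differ in details such as escaping):

```python
import re

def render(template, row):
    """Replace each {{columnName}} with the matching value from a CSV row."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(row[m.group(1)]), template)

prompt = ("Classify the sentiment of: {{text}}\n"
          "Answer with one label: positive, negative, or neutral.")
rendered = render(prompt, {"text": "Great product!"})
```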

Set Repetitions

Set between 1 and 20 repetitions per row per config. Multiple repetitions let you measure reliability and use majority voting to determine final labels.

The total number of work units (API calls) is: rows × configs × repetitions.
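
For example, a 500-row CSV annotated by two configs with three repetitions each costs 3,000 API calls:

```python
def total_work_units(rows: int, configs: int, repetitions: int) -> int:
    """Total API calls = rows × configs × repetitions."""
    return rows * configs * repetitions

total_work_units(500, 2, 3)   # 500 × 2 × 3 = 3000 work units
```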

Estimate Costs

Before running a full task, use the cost estimator. It runs a sample of up to 10 rows, measures the tokens consumed, and extrapolates to give you an estimated cost for the complete task.
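
The extrapolation amounts to averaging the sampled rows' token usage and scaling to the full task. The sketch below makes that arithmetic explicit; the per-token price is a placeholder assumption, not a real provider rate:

```python
def estimate_cost(sample_tokens, total_rows, price_per_1k_tokens):
    """Average tokens over the sample, then scale to all rows."""
    avg_tokens = sum(sample_tokens) / len(sample_tokens)
    return avg_tokens * total_rows / 1000 * price_per_1k_tokens

# 3 sampled rows averaging 500 tokens, 1,000 rows total, $0.002 per 1k tokens:
estimated = estimate_cost([400, 500, 600], total_rows=1000,
                          price_per_1k_tokens=0.002)
```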

Standard Processing

Standard mode streams results in real time using 1–20 parallel workers. Failed requests are retried automatically with exponential backoff. Processing is crash-safe — results are saved per row, so progress is never lost.
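
Retry with exponential backoff can be sketched as follows; the retry count and base delay here are assumptions, not the engine's actual settings:

```python
import time

def with_retries(call, max_retries=5, base_delay=1.0):
    """Retry a failing call, doubling the wait each attempt."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)   # 1 s, 2 s, 4 s, ...

# A request that fails twice, then succeeds:
attempts = {"n": 0}
def flaky_request():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retries(flaky_request, base_delay=0)   # retried twice, then "ok"
```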

Batch Processing

Batch mode uses the OpenAI and Anthropic batch APIs for approximately 50% cost savings with a 24-hour turnaround. Google requests fall back to standard processing automatically.

Batch jobs cannot be paused or resumed. Use standard mode if you need fine-grained control over execution.

Pause, Resume & Cancel

In standard mode, you can pause processing at any time. All completed results are preserved. Resume picks up where you left off. Cancel stops the task permanently but keeps all results that were completed before cancellation.

Pause and resume are only available in standard processing mode. Batch jobs run to completion or can only be cancelled.

Use Research Templates

The Annotator includes 25+ peer-reviewed annotation presets from published research. Select a template to pre-fill your LLM configs with validated prompt designs.

| Authors | Configs | Domain |
| --- | --- | --- |
| Gilardi et al. (2023) | 7 annotators | Text classification |
| Rathje et al. (2024) | 6 annotators | Psychological text analysis |
| Bhatia et al. (2025) | 3 annotators | Choice dilemma annotation |
| Bojic et al. (2025) | 5 annotators | Latent content analysis |
| Kumar et al. (2026) | 4 annotators | Empathic communication evaluation |

Create Custom Templates

Save any task configuration as a reusable template. Custom templates are private by default and available only to you. They capture the full LLM config including prompts, model settings, and repetition count.

Share Templates

Submit a custom template for public review. An administrator reviews and approves or rejects the submission. Approved templates become available to all users. Usage is tracked so you can see how often your shared templates are being used.

Monitor Progress

A progress bar shows real-time completion status. Each task follows a status lifecycle: pending → processing → completed (or cancelled). In standard mode, a paused state is also available.

Download Results

Export results in CSV, Excel, or JSON format. You can download partial results while the task is still running — useful for spot-checking quality before the full run completes.

Understanding Output Format

Results use a flattened format with one row per input record. Columns include all original input data, the rendered prompts, and response columns for each config and repetition combination.

For analysis in R, read the CSV directly with read.csv(). In Python, use pandas.read_csv(). In Excel, open the Excel export for automatic column formatting. Response columns follow the naming pattern [configName]_rep[N].
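
With multiple repetitions, a common analysis step is majority voting across the repetition columns. The sketch below follows the [configName]_rep[N] naming pattern; the example row and config name are invented for illustration:

```python
from collections import Counter

def majority_label(row, config_name, n_reps):
    """Majority vote over one config's repetition columns for a result row."""
    votes = [row[f"{config_name}_rep{n}"] for n in range(1, n_reps + 1)]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical flattened result row with three repetitions of a "gpt" config:
row = {"text": "Great product!",
       "gpt_rep1": "positive", "gpt_rep2": "positive", "gpt_rep3": "neutral"}
label = majority_label(row, "gpt", 3)   # → "positive"
```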