Carrier Documentation
Carrier is a browser-based platform for designing, deploying, and monitoring controlled studies of mixed-agent, multimodal group interaction. This guide covers everything you need to design and run experiments.
Getting Started
What is Carrier?
Carrier is a browser-based experimental research platform for designing, deploying, and monitoring controlled studies of mixed-agent, multimodal group interaction. Researchers use the Experiment Builder to configure study conditions, participant compositions, and interaction sequences, then monitor live sessions through the Experimenter Dashboard. The platform supports real-time text and audio communication between any mix of human participants, LLM-powered AI assistants, and rule-based scripted agents.
Who Participates
- Human participants
- AI assistants (LLM-driven)
- Scripted agents (rule-based)
What They Do
- Communicator (primary interactant)
- Mediator (group-facing broadcasts)
- Processor (draft-time assistant)
How They Interact
- Text chat
- Audio conversation
- Segment activities (votes, rankings, tasks)
Getting Started
Key Concepts
Experiments are organised using a small set of composable building blocks. Understanding their hierarchy helps you navigate the rest of this guide.
| Concept | Description |
|---|---|
| Experiment | Top-level container for a complete research study |
| Chamber Line | A condition track (e.g., treatment vs control) |
| Chamber | Container for sequential segments where matched participants interact |
| Segment | A single activity within a chamber timeline |
| Participant Types | Humans, AI assistants, and scripted agents |
| Roles | Functional capabilities: communicator, mediator, processor |
| Triggers | Condition-response rules for agent automation |
| Run / Session | A participant's end-to-end journey through an experiment |
| Chatroom | Runtime instantiation of a chamber |
| Matching | Process of grouping participants into chatrooms |
| Properties | Key-value pairs for conditional logic |
| Surveys | Questionnaires at global, chamber, or segment level |
Getting Started
Your First Experiment
This walkthrough takes you from zero to a working two-person text chat experiment. You can adapt it for more complex designs later.
1. Open the Experiment Builder, click New Experiment, and give it a name and description.
2. Add a chamber line. A chamber line represents one experimental condition; for your first experiment, one line is enough.
3. Inside your chamber line, add a chamber, then add a chat segment — this is where participants will interact in real time.
4. In the chamber's participant configuration, add 2 human communicator slots. Participants will be matched automatically.
5. Change the experiment status from draft to active. This enables participant access.
6. Copy the participant URL (/experiment/[your-experiment-id]) and send it to participants. They'll be matched and placed in a chatroom.
Getting Started
Inviting Collaborators
Share your experiment with other researchers so they can help design, configure, and monitor it.
From the Experiment Builder, use the collaborators panel to add researchers by email. They'll get full edit and dashboard access.
| Action | Owner | Collaborator |
|---|---|---|
| View & edit experiment | Yes | Yes |
| View dashboard & data | Yes | Yes |
| Add/remove collaborators | Yes | No |
| Delete experiment | Yes | No |
Study Structure
Set Up Conditions
Chamber lines represent between-subjects conditions (e.g., treatment vs control). An experiment can have multiple chamber lines, and each participant is assigned to exactly one based on the experiment's assignment method.
| Method | Description | When to Use |
|---|---|---|
| random | Uniform random assignment | Default for most studies |
| counterbalance | Balances participant counts across conditions | When equal group sizes matter |
| survey-based | Uses a survey response to determine assignment | When conditions depend on participant characteristics |
| fixed | Always assigns to a specific chamber line | For pilot testing or single-condition studies |
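As a sketch, the assignment method lives under the experiment's global settings (field names follow the Experiment reference; the value shown is illustrative, and surrounding settings are elided):

```json
{
  "globalSettings": {
    "chamberLineAssignment": {
      "method": "counterbalance"
    }
  }
}
```

For the survey-based method, you additionally specify which survey field to read — see Route by Survey Response.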
Study Structure
Add Chambers & Segments
A chamber is a container where matched participants interact together. Each chamber holds one or more segments — the individual activities participants complete in sequence. Participants are matched once at chamber start and stay grouped across all segments within it.
Segment Types
| Type | Description | AI/Bot Compatible |
|---|---|---|
| chat | Real-time text or audio conversation | Yes |
| selection | Multiple choice voting | Yes |
| ranking | Drag-and-drop ranking | Yes |
| input | Free text input | No |
| slide | Display static or dynamic content | No |
| instruction | Markdown instructions with continue button | No |
| media | Audio/video playback | No |
| timer | Countdown or waiting period | No |
| task | Custom interactive task | Yes |
| survey | Embedded mini-survey | No |
| attention-check | Face/survey-based participant verification | No |
Chamber Parameters
| Field | Type | Description |
|---|---|---|
| chamberId | String | Unique identifier within the experiment |
| name | String | Display name |
| communicationChannel | Enum | text, audio, video |
| segments | [Segment] | Ordered array of segment activities |
| participants | [ParticipantConfig] | Slot definitions for humans, AI assistants, and agents |
| maxParticipants | Number | Total participant slots |
| preSurvey | [SurveySchema] | Survey shown before this chamber |
| postSurvey | [SurveySchema] | Survey shown after this chamber |
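Putting these fields together, a minimal chamber definition might look like the following (an illustrative sketch using only the field names from the table above; the values are hypothetical):

```json
{
  "chamberId": "discussion-1",
  "name": "Group Discussion",
  "communicationChannel": "text",
  "maxParticipants": 2,
  "segments": [],
  "participants": [],
  "preSurvey": [],
  "postSurvey": []
}
```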
Study Structure
Embed Activities in Chat
The displayMode setting on a segment controls how it appears to participants:
- Standalone (default) — replaces the current view entirely.
- Embedded — renders as an overlay on top of a running chat segment. Chat continues beneath.
Non-chat segments placed after a chat segment in the timeline become embedded children of that chat. They activate while the parent chat is running, appearing as overlays that participants interact with without leaving the conversation.
Completion Behaviours
| Behaviour | Description |
|---|---|
| dismiss | Overlay disappears, chat continues |
| end-chat | Parent chat segment also ends |
| lock | Content stays visible but becomes read-only |
| minimize | Collapses to a small indicator |
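For example, an embedded vote overlay that dismisses when finished could be sketched as follows (segmentId, type, and config.displayMode come from the Segment reference; the completionBehavior key is an assumed name for the behaviour setting tabulated above):

```json
{
  "segmentId": "midpoint-vote",
  "type": "selection",
  "config": {
    "displayMode": "embedded",
    "completionBehavior": "dismiss"
  }
}
```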
Study Structure
Configure Timing & Transitions
Timing Fields
| Field | Type | Description |
|---|---|---|
| timing.duration | Number (ms) | Total segment length. null = unlimited |
| timing.minDuration | Number (ms) | Minimum time before participants can advance |
| timing.warningTime | Number (ms) | Warning shown before auto-advance |
Transition Modes
| Mode | Description | When to Use |
|---|---|---|
| auto | Advances when duration expires | Timed activities with fixed length |
| manual | Participant clicks to advance | Self-paced reading or tasks |
| sync | Waits for all participants | Group coordination points |
| host | Experimenter advances from dashboard | Researcher-controlled pacing |
Additional fields: transition.countdown (ms before advance) and
transition.allowEarlyAdvance (boolean, whether participants can skip ahead).
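For instance, a five-minute timed segment that warns participants 30 seconds before auto-advancing might be configured like this (a sketch using only the fields documented above; the values are illustrative):

```json
{
  "timing": {
    "duration": 300000,
    "minDuration": 60000,
    "warningTime": 30000
  },
  "transition": {
    "mode": "auto",
    "countdown": 5000,
    "allowEarlyAdvance": false
  }
}
```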
Study Structure
Add Surveys
Surveys can be placed at three levels in the experiment structure:
- Global pre/post — shown once at the start or end of a participant's session. Use for demographics, consent, or debrief.
- Chamber pre/post — shown before matching or after a chamber ends. Use for manipulation checks or mood measures.
- Segment survey — embedded in the chamber timeline as a segment of type survey. Use for in-context questionnaires.
Surveys use the Survey.js JSON format. You can design surveys visually in the Survey.js creator and paste the JSON into the experiment builder.
Survey responses can also drive experiment logic:
- Chamber line assignment — the survey-based method reads a specific field to determine which condition a participant enters.
- Property rules — assign participant properties from responses (see Properties).
- Prompt interpolation — use {{fieldName}} in agent system prompts to inject survey values.
- Completion redirect — completionRedirectUrl sends participants to an external URL with their ID appended.
Participants & Roles
Add Human Participants
Define human participant slots in each chamber. Participants are real people interacting through the browser. Each slot specifies how the participant's identity is determined.
| Source | Description |
|---|---|
| user_input | Participant chooses their own display name and avatar |
| configured | Researcher pre-sets the display name |
| auto_generated | System generates a random name |
When participants join an active experiment, they enter a matching queue. The system groups them into chatrooms based on each chamber's slot requirements. See Property-Based Matching for advanced configuration.
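A human communicator slot that lets the participant pick their own display name might be sketched like this (participantType and role come from the Participant reference; the identitySource key is an assumed name for the Source column above):

```json
{
  "participantType": "human",
  "role": "communicator",
  "identitySource": "user_input"
}
```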
Participants & Roles
Add an Agent
An agent is an automated participant whose behaviour is defined by its triggers. Triggers can produce responses in three ways:
- Scripted responses — fixed messages or random selection from an array. No API calls.
- LLM-driven responses — the llm-driven trigger type sends context to an LLM for a dynamic response.
- Mixed — combine both: keyword triggers with fixed replies plus an llm-driven fallback.
Note: an AI Bot starts with botMode: 'llm' and response logic settings, while a Scripted Bot starts with an empty trigger list. Both are the same underlying agent type.
LLM Configuration
| Field | Type | Description |
|---|---|---|
| provider | Enum | openai, anthropic, google, custom |
| aiModel | String | Model ID (e.g., gpt-4, claude-3-haiku) |
| systemPrompt | String | System prompt (supports {{fieldName}} interpolation) |
| temperature | Number | LLM temperature (default: 0.7) |
| maxTokens | Number | Max response tokens (default: 1000) |
| contextWindow | Number | Recent messages included (default: 10) |
| responseDelay | {min, max} | Simulated typing delay in ms |
| responseLogic.triggerOnFirstMessage | Boolean | Respond to first human message |
| responseLogic.respondToEveryMessage | Boolean | Respond to every message |
| responseLogic.timeoutTrigger | Object | Auto-respond after silence |
| responseLogic.initialSalute | Object | Greeting on chamber start |
| responseLogic.respondOnMention | Boolean | Only respond when mentioned |
| chainEnabled | Boolean | Enable multi-step LLM chain |
LLM agents respond with {content, rationale, actions}: content is the message (null = stay silent), rationale is logged for research, and actions is mediator-only. See Simulation for trigger details.
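Pulling the table together, an LLM agent slot might be configured as below (a sketch: the field names come from the tables above, but their flat placement on the slot object and all values, including the hypothetical {{topic}} survey field, are assumptions):

```json
{
  "participantType": "agent",
  "role": "communicator",
  "displayName": "Alex",
  "botMode": "llm",
  "provider": "anthropic",
  "aiModel": "claude-3-haiku",
  "systemPrompt": "You are a friendly discussion partner. The participant's chosen topic is {{topic}}.",
  "temperature": 0.7,
  "maxTokens": 1000,
  "contextWindow": 10,
  "responseDelay": { "min": 1500, "max": 4000 },
  "responseLogic": {
    "triggerOnFirstMessage": true,
    "respondToEveryMessage": true,
    "respondOnMention": false
  }
}
```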
Participants & Roles
Set Up a Mediator
The mediator role observes all messages and broadcasts information to the group. Mediators can control chat flow by disabling and enabling chat or prompting specific participants to respond.
| Field | Type | Description |
|---|---|---|
| broadcastMode | Enum | sequential, aggregated, triggered |
| synthesizeResponses | Boolean | Synthesize participant responses before broadcasting |
| synthesisPrompt | String | Prompt for synthesis |
| broadcastFrequency | Number | How often to broadcast |
| triggerKeywords | [String] | Keywords that trigger a broadcast |
Mediators support dedicated trigger types (periodic, aggregate, topic-detected, discussion-phase) and can issue actions (disable_chat, enable_chat, prompt_participant). See Simulation for details.
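A mediator that collects responses and broadcasts a synthesis might be sketched as follows (field names from the table above; the values and the surrounding slot structure are illustrative assumptions):

```json
{
  "participantType": "agent",
  "role": "mediator",
  "broadcastMode": "aggregated",
  "synthesizeResponses": true,
  "synthesisPrompt": "Summarise the group's main points of agreement and disagreement.",
  "triggerKeywords": ["vote", "decide"]
}
```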
Participants & Roles
Set Up a Processor
The processor role assists communicators privately during draft time. Processor feedback is only visible to the paired communicator — other participants do not see it.
| Field | Type | Description |
|---|---|---|
| targetCommunicators | [Number] | Which communicator slots to assist |
| feedbackVisibility | Enum | private, public |
| phases | [Phase] | Ordered processing phases |
Phase Definition
| Field | Type | Description |
|---|---|---|
| phaseId | String | Unique identifier |
| mode | Enum | review, generate, real-time-assist, disabled |
| transitionTrigger.type | Enum | on-start, message-count, time-elapsed, keyword, participant-event, manual, on-end |
| transitionTrigger.value | Mixed | Trigger-specific threshold |
| aiConfig | Object | {provider, model, systemPrompt, temperature, maxTokens} |
| contextLevel | Enum | none, partial, full |
| reviewSettings | Object | {trigger, pauseTimeout, feedbackFormat, mandatory, maxRounds} |
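A two-phase processor that reviews drafts and then switches off after five minutes could be sketched as below (field names from the tables above; the phase values are illustrative):

```json
{
  "participantType": "agent",
  "role": "processor",
  "targetCommunicators": [1],
  "feedbackVisibility": "private",
  "phases": [
    {
      "phaseId": "drafting",
      "mode": "review",
      "transitionTrigger": { "type": "on-start" },
      "contextLevel": "partial"
    },
    {
      "phaseId": "wrap-up",
      "mode": "disabled",
      "transitionTrigger": { "type": "time-elapsed", "value": 300000 }
    }
  ]
}
```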
Simulation
How Triggers Work
A trigger is a condition-response pair that fires when something happens in the chatroom. Every trigger has three parts:
- Condition — what to watch for (a trigger type + value).
- Response or Action — what to do when matched (send a message or submit to a segment).
- Lifecycle controls — cooldown, maxTriggers, priority.
Trigger Definition Fields
| Field | Type | Description |
|---|---|---|
| triggerId | String | Unique identifier (required) |
| enabled | Boolean | Whether active (default: true) |
| condition.type | Enum | One of the trigger types |
| condition.value | Mixed | Type-specific match value |
| condition.caseSensitive | Boolean | Case-sensitive matching (default: false) |
| condition.matchMode | Enum | any, all |
| condition.senderFilter | Enum | human, specific |
| priority | Number | Higher evaluated first (default: 0) |
| cooldown | Number | Min ms between firings (default: 0) |
| maxTriggers | Number | Max times this can fire (null = unlimited) |
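Combining these fields, a keyword trigger with lifecycle controls might look like this (a sketch using only the documented fields; the values are illustrative):

```json
{
  "triggerId": "greeting-reply",
  "enabled": true,
  "condition": {
    "type": "keyword",
    "value": ["hello", "hi"],
    "caseSensitive": false,
    "matchMode": "any",
    "senderFilter": "human"
  },
  "response": {
    "messages": ["Hi there!", "Hello, welcome."],
    "delay": 2000,
    "probability": 0.9
  },
  "priority": 10,
  "cooldown": 60000,
  "maxTriggers": 3
}
```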
Simulation
Trigger Types Reference
Message-based
| Type | Description | condition.value |
|---|---|---|
| keyword | Matches keywords/phrases in message text | String or [String] |
| regex | Matches a regular expression pattern | String (regex pattern) |
| sequence | Fires when messages match an ordered sequence | [String] |
Counting
| Type | Description | condition.value |
|---|---|---|
| message-count | Fires after N total messages | Number |
| participant-message-count | Fires after a participant sends N messages | {target, count, countMode} |
For participant-message-count, target can be 'any', 'human', 'bot', or a specific name, and countMode can be 'total', 'consecutive', or 'since-reset'. Supports resetOnTrigger.
Time-based
| Type | Description | condition.value |
|---|---|---|
| time | Fires after delay (ms) from segment/chamber start | Number (ms) |
| activity-timeout | Fires after a period of inactivity | Number (ms) |
Event-based
| Type | Description | condition.value |
|---|---|---|
| participant-action | Fires on join, leave, etc. | String (event name) |
| after-bot-message | Fires after another bot sends a message | String (bot name) |
| event-monitor | Monitors chatroom events and chains triggers | Object |
| participant-count | Fires based on participant count thresholds | {operator, count} |
LLM
| Type | Description | condition.value |
|---|---|---|
| llm-driven | Sends context to LLM for dynamic response | Object (LLM config) |
Mediator-specific
| Type | Description | condition.value |
|---|---|---|
| periodic | Fires at regular intervals | Number (interval ms) |
| aggregate | Fires after collecting N messages | Number (message count) |
| topic-detected | Fires on topic/keyword pattern | String or [String] |
| discussion-phase | Fires at specific discussion phases | String (phase name) |
Meta
| Type | Description | condition.value |
|---|---|---|
| chain-only | Only fires when chained from another trigger | — |
Simulation
Trigger Responses
When a trigger fires, it can send a message to the chatroom.
Response Fields
| Field | Type | Description |
|---|---|---|
| response.message | String | Single response message |
| response.messages | [String] | Array — one selected randomly |
| response.delay | Number | Delay in ms before sending |
| response.probability | Number | Chance of firing (0–1, default: 1.0) |
LLM-driven path: for llm-driven triggers, context is sent to the LLM, which returns {content, rationale, actions}. content is the message (null = stay silent), rationale is logged only, and actions is mediator-only (see Mediator Actions).
Simulation
Segment Actions
Beyond sending messages, triggers can submit to interactive segments via the
segmentAction field. This enables bots to participate in votes, rankings,
and other interactive activities.
Supported segment types: selection (including slider mode), ranking.
Data Modes
| Mode | Description |
|---|---|
| static | Hardcoded values (index-based selection) |
| random | Pick randomly from options (optional weighting) |
| referenced | React to human submissions using a strategy |
Referenced Strategies
| Strategy | Description |
|---|---|
| match-first-human | Copy the first human's choice |
| match-majority | Copy the most popular choice |
| oppose-majority | Pick the least popular choice |
| random-different | Pick something different from humans |
Submission Metadata
| Field | Type | Description |
|---|---|---|
| countTowardTotal | Boolean | Bot's submission counts toward completion |
| showInResults | Boolean | Submission appears in results display |
| tagAsBot | Boolean | Show bot indicator in UI |
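A trigger that makes a bot copy the majority vote might be sketched like this (countTowardTotal, showInResults, and tagAsBot are documented above; the segmentId, dataMode, and strategy keys are assumed names for targeting a segment and expressing the documented modes and strategies):

```json
{
  "triggerId": "bot-vote",
  "condition": { "type": "time", "value": 30000 },
  "segmentAction": {
    "segmentId": "midpoint-vote",
    "dataMode": "referenced",
    "strategy": "match-majority",
    "countTowardTotal": false,
    "showInResults": true,
    "tagAsBot": true
  }
}
```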
Simulation
Mediator Actions
Mediator-role agents can issue actions that control the chat flow, returned in the
actions array of their JSON response.
disable_chat
Mutes a participant's input. Include release conditions to control when chat is re-enabled.
| Condition | Description |
|---|---|
| timeout | Re-enable after N milliseconds |
| participant_message | After the target sends a message |
| message_count | After N total messages |
| all_others_responded | After all others have sent a message |
| keyword_mentioned | When a keyword appears |
| mediator_release | Only mediator can re-enable |
Release logic: type: "any" (first condition met) vs
type: "all" (all conditions must be met).
enable_chat
Re-enables a muted participant. Clears all restrictions and pending timers.
prompt_participant
Sends a private message visible only to the target participant.
Example Action
{
"action": "disable_chat",
"target": "participant_name",
"rationale": "reason (logged)",
"release_conditions": {
"type": "any",
"conditions": [
{ "type": "timeout", "value": 30000 },
{ "type": "keyword_mentioned", "value": "ready" }
]
},
"on_release": {
"notify": true,
"message": "You can now type again."
}
}
Simulation
Chaining & Lifecycle
Triggers can be linked together to create multi-step behaviour:
- chainTrigger — fires a follow-up trigger after the current one completes.
- chain-only — a trigger type that only fires when chained from another trigger.
- after-bot-message — fires when another bot sends a message (cross-bot coordination).
- event-monitor — monitors events and chains based on observations.
Lifecycle Controls
| Control | Type | Description |
|---|---|---|
| cooldown | Number | Min ms between firings (default: 0) |
| maxTriggers | Number | Max total fires (null = unlimited) |
| priority | Number | Higher evaluated first (default: 0) |
| enabled | Boolean | Activate/deactivate without removing |
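A simple two-step chain might be sketched as below (all fields are documented in this section and the trigger reference; the values are illustrative):

```json
[
  {
    "triggerId": "opener",
    "condition": { "type": "time", "value": 10000 },
    "response": { "message": "Let's get started." },
    "chainTrigger": "follow-up",
    "maxTriggers": 1
  },
  {
    "triggerId": "follow-up",
    "condition": { "type": "chain-only" },
    "response": { "message": "Feel free to introduce yourselves.", "delay": 5000 }
  }
]
```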
Conditional Logic
Participant Properties
Properties are key-value pairs that accumulate on participants throughout their session, used for matching, chamber visibility, and dynamic assignment.
Initial Properties
Assigned at chamber line assignment. Each property uses a strategy to determine its value.
| Strategy | Description |
|---|---|
| fixed | Always assigns fixedValue |
| random | Picks randomly from options array |
| counterbalance | Picks least-used option (random tie-breaking) |
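For example, counterbalancing a framing property at assignment time might be expressed like this (the strategy, options, and fixedValue fields follow the table above; the initialProperties and key names are assumptions):

```json
{
  "initialProperties": [
    { "key": "framing", "strategy": "counterbalance", "options": ["gain", "loss"] },
    { "key": "wave", "strategy": "fixed", "fixedValue": "wave-2" }
  ]
}
```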
Property Rules
Evaluated after survey responses. Each rule checks a survey field and assigns a property when the condition is met.
| Field | Type | Description |
|---|---|---|
| ruleId | String | Unique identifier |
| source | Enum | global-pre-survey, chamber-pre-survey, segment-survey |
| surveyField | String | Survey question name to evaluate |
| condition.operator | Enum | eq, neq, in, gt, gte, lt, lte, between |
| condition.value | Any | Comparison value |
| assigns.key | String | Property key to set |
| assigns.value | Any | Value to assign |
Condition Operators
| Operator | Description |
|---|---|
| eq | Equality (string) |
| neq | Inequality (string) |
| in | Value in target array |
| gt | Greater than (numeric) |
| gte | Greater than or equal |
| lt | Less than |
| lte | Less than or equal |
| between | Between min and max inclusive |
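A rule that flags experienced participants from a pre-survey question could be sketched as follows (all field names come from the Property Rules table; the survey field and values are hypothetical):

```json
{
  "ruleId": "expertise-flag",
  "source": "global-pre-survey",
  "surveyField": "years_experience",
  "condition": { "operator": "gte", "value": 5 },
  "assigns": { "key": "expertise", "value": "high" }
}
```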
Conditional Logic
Route by Survey Response
Two mechanisms let you route participants based on survey answers:
Survey-based Assignment
Set the experiment's assignment method to survey-based and specify
the surveyField. The global pre-survey response determines which
chamber line the participant enters.
Property Rules (more flexible)
Assign a property from any survey, then use it for visibility or matching. Works with chamber and segment surveys too — not just the global pre-survey. See Participant Properties for rule definitions.
Conditional Logic
Show/Hide Chambers
Each chamber can define visibilityConditions — an array of property
conditions that must ALL be satisfied (AND logic) for the chamber to appear in a
participant's run plan.
Visibility Condition Fields
| Field | Type | Description |
|---|---|---|
| key | String | Property key to check |
| operator | String | Condition operator |
| value | Any | Target value |
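For example, a chamber shown only to participants flagged as high expertise might be sketched as (key, operator, and value are documented above; the property values are hypothetical):

```json
{
  "chamberId": "expert-debrief",
  "visibilityConditions": [
    { "key": "expertise", "operator": "eq", "value": "high" }
  ]
}
```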
Conditional Logic
Property-Based Matching
Chamber slots can define requiredProperties. The matching algorithm uses
most-constrained-first ordering: slots with the fewest eligible candidates are filled
first, with FIFO within tiers.
Required Property Fields
| Field | Type | Description |
|---|---|---|
| key | String | Property key required |
| operator | String | Condition operator |
| value | Any | Required value |
Example: suppose slot 1 requires donorType: 'high' and slot 2 has no requirements. The matcher fills slot 1 first (more constrained), then slot 2 from remaining candidates.
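That example might be expressed in slot configuration like this (requiredProperties and its fields are documented above; the surrounding participants structure is an assumption):

```json
{
  "participants": [
    {
      "role": "communicator",
      "requiredProperties": [
        { "key": "donorType", "operator": "eq", "value": "high" }
      ]
    },
    {
      "role": "communicator",
      "requiredProperties": []
    }
  ]
}
```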
Running & Monitoring
Launch Your Experiment
When your experiment design is ready, change its status from draft to
active in the Experiment Builder. This enables participant access and starts
the matching system for your experiment.
Data Collection Mode
Choose between two data collection modes when activating:
| Mode | Description |
|---|---|
| testing | Data is marked as test data and can be filtered out during export. Use this for pilot runs and debugging. |
| live | Production data collection. Use this when running with real participants. |
Sharing the Participant URL
Share the participant URL with your participants: /experiment/[experimentId].
When a participant joins, they progress through session initialization, identity setup, the global pre-survey, and chamber line assignment, then enter the matching queue.
External Platform Integration
When recruiting from external platforms (e.g., Prolific, MTurk), append participant IDs via
query parameters. The parameter name is configurable via participantIdParam
(defaults to pid). For example:
/experiment/[experimentId]?pid=PROLIFIC_PID
Content Filtering
For experiments using LLM agents, content filters are available in experiment settings. These include profanity filtering, PII detection, and harmful content filters to help maintain safe interactions.
Running & Monitoring
Monitor Live Sessions
The Experimenter Dashboard provides real-time visibility into your running experiment.
Access it from /dashboard/experiment/[experimentId].
Dashboard Features
The dashboard displays active sessions with real-time participant status, chatroom activity, and connection information. You can see which participants are in which chambers, their current segment, and how long they have been active.
Matching Queue
View who is waiting in the matching queue, how long they have been waiting, and which experiment they belong to. This helps you identify bottlenecks when participants are waiting too long for a match.
Alerts
The dashboard surfaces alerts for situations that may need attention: disconnected participants, long wait times in the matching queue, participant drop-outs, and idle sessions where no activity has occurred for an extended period.
Running & Monitoring
Manage Participants
From the dashboard you can take action on individual participant sessions.
Session Actions
Pause a session to temporarily suspend a participant's progress. Resume a paused session to let them continue. End a session to terminate a participant's run early. These actions are available per participant from the dashboard session list.
Participant Status
View each participant's current phase (e.g., identity setup, pre-survey, chamber execution), their chamber progress within the run plan, and their connection status (online, offline, or disconnected).
Chatroom Inspection
Inspect any active chatroom to view its contents: the live chat history, participant list, and current segment. This is useful for verifying agent behaviour and monitoring the quality of interactions during a live session.
Running & Monitoring
Export Data
Export your experiment data for analysis from the dashboard.
Formats
Data can be exported in JSON or CSV format.
Data Types
| Type | Description |
|---|---|
| participants | Participant records with status, properties, and session metadata |
| chatrooms | Chatroom records including full chat history and participant lists |
| responses | Survey responses from global, chamber, and segment-level surveys |
| all | Combined export of all data types |
Completion Redirect
Configure a completionRedirectUrl in global settings to redirect participants
to an external URL when they finish. The configurable participantIdParam is
appended to the redirect URL so external platforms can match completions back to their
records.
Examples
Two-Person Text Chat
The simplest experiment — two humans chatting in real time.
Open the Builder, create a new experiment, and add a single chamber line.
Inside the chamber line, add a chamber and give it a chat segment.
See Chambers & Segments.
Configure the chamber with 2 human communicator participant slots. See Human Participants.
Set the experiment to active and share the participant URL. See Launch Your Experiment.
Examples
Human-AI Collaboration
One human paired with an LLM-driven agent for a conversational study.
Create a new experiment with one chamber line and a chamber containing a chat segment.
Add one human communicator slot and one agent communicator slot. See Add an Agent.
Set the agent's systemPrompt and response logic (trigger on first
message, respond to every message, typing delay).
Attach a chamber post-survey to collect participant feedback after the interaction. See Surveys.
Set the experiment to active. Each human participant will be paired with an AI agent automatically.
Examples
Multi-Condition Study
A between-subjects design with 3 conditions: control, treatment-A, and treatment-B.
Add three chamber lines — one for each condition (control, treatment-A, treatment-B). See Conditions & Chamber Lines.
Set the chamber line assignment method to counterbalance to distribute
participants evenly across conditions.
Give each line the same chamber structure but with different agent configurations (or no agent for the control condition).
Add global pre-survey and post-survey to collect demographics and outcome measures across all conditions.
Use property rules to record which condition each participant was assigned to for later analysis. See Properties & Rules.
Examples
Mediated Group Discussion
Four humans with an AI mediator managing the conversation flow.
Create a chamber with a chat segment for the group discussion.
Add 4 human communicator slots and 1 agent mediator slot. See Mediator Role.
Set up the mediator with periodic broadcasts and the disable_chat
action to manage turn-taking.
See Mediator Actions.
Add an embedded selection segment for mid-discussion voting.
See Embed Activities.
Configure segment actions so the mediator bot also participates in the voting segment. See Segment Actions.
Reference
Experiment
Top-level container for a complete research study. Holds global settings, surveys, chamber line definitions, and bot templates. Each experiment has an owner and optional collaborators.
| Field | Type | Description |
|---|---|---|
| name | String | Experiment name (required) |
| description | String | Detailed description |
| status | Enum | draft, active, paused, completed, archived |
| version | Number | Configuration version (default: 1) |
| globalSettings.timezone | String | Timezone for timestamps (default: UTC) |
| globalSettings.dataRetentionDays | Number | Days to retain data (default: 90) |
| globalSettings.chamberLineAssignment.method | Enum | random, counterbalance, survey-based, fixed |
| globalSettings.completionRedirectUrl | String | URL to redirect participants after completion |
| globalSettings.participantIdParam | String | Query parameter name for participant ID (default: pid) |
| globalSettings.dataCollectionMode | Enum | testing, live |
| globalPreSurvey | [Survey] | Survey.js surveys shown before any chambers |
| globalPostSurvey | [Survey] | Survey.js surveys shown after all chambers |
| experiment.chamberlines | [ChamberLine] | Array of chamber line configurations |
| experiment.botTemplates | [BotTemplate] | Reusable bot/AI configurations |
Reference
Chamber Line
A condition track comprising an ordered sequence of chambers. Participants are assigned to one chamber line based on the experiment's assignment method.
| Field | Type | Description |
|---|---|---|
| name | String | Display name (e.g., control, treatment-a) |
| chambers | [Chamber] | Ordered array of chamber configurations |
Reference
Chamber
A container for one or more sequential segments that matched participants progress through together. At runtime, becomes a chatroom. See Add Chambers & Segments.
| Field | Type | Description |
|---|---|---|
| chamberId | String | Unique identifier within the experiment |
| name | String | Display name |
| communicationChannel | Enum | text, audio, video |
| segments | [Segment] | Ordered segments |
| participants | [Config] | Slot definitions for humans, AI, and agents |
| maxParticipants | Number | Total participant slots |
| preSurvey | [Survey] | Survey shown before this chamber |
| postSurvey | [Survey] | Survey shown after this chamber |
| visibilityConditions | [Condition] | Property conditions for visibility |
Reference
Segment
A single activity within a chamber timeline. See Add Chambers & Segments and Embed Activities.
| Field | Type | Description |
|---|---|---|
| segmentId | String | Unique identifier within the chamber |
| name | String | Display name |
| type | Enum | chat, selection, ranking, input, slide, instruction, media, timer, task, survey, attention-check |
| order | Number | Position in the chamber timeline |
| config.displayMode | Enum | standalone, embedded |
| timing.duration | Number | Duration in ms (null = unlimited) |
| timing.minDuration | Number | Minimum time before advance |
| timing.warningTime | Number | Warning before auto-advance |
| transition.mode | Enum | auto, manual, sync, host |
| transition.countdown | Number | Countdown before advance (ms) |
| transition.allowEarlyAdvance | Boolean | Whether participants can skip ahead |
| agentOverrides | [Override] | Per-segment agent behaviour overrides |
Reference
Participant
An entity in a chamber. See Add Human Participants and Add an Agent.
| Field | Type | Description |
|---|---|---|
| participantId | String | Unique identifier |
| participantType | Enum | human, agent |
| role | Enum | communicator, mediator, processor |
| displayName | String | Name shown in chat (max 30 chars) |
| avatar | String | Avatar image URL or identifier |
| connectionStatus | Enum | offline, online, disconnected |
| matchingStatus | Enum | not_ready, ready_for_matching, waiting_for_match, matched, in_chatroom, completed |
| status | Enum | active, completed, dropped_out, paused |
| properties | Map | Key-value properties for conditional logic |
Reference
Roles
Functional capabilities assigned to any participant. See Mediator and Processor for role-specific configuration.
| Role | Description | Visibility |
|---|---|---|
| communicator | Primary interactant. Sends and receives messages directly. | All participants |
| mediator | Observes all messages, broadcasts to group, controls chat flow. | All participants |
| processor | Private draft-time assistant for paired communicators. | Paired communicator only |
Reference
Triggers
Condition-response rules for agent automation. See How Triggers Work, Trigger Types, Segment Actions, and Mediator Actions.
| Field | Type | Description |
|---|---|---|
| triggerId | String | Unique identifier (required) |
| enabled | Boolean | Whether active (default: true) |
| condition.type | Enum | Trigger type (see reference) |
| condition.value | Mixed | Type-specific match value |
| response.message | String | Response message (or .messages for random selection) |
| response.delay | Number | Delay before sending (ms) |
| response.probability | Number | Chance of firing (0–1) |
| segmentAction | Object | Segment submission config |
| cooldown | Number | Min ms between firings |
| maxTriggers | Number | Max fire count (null = unlimited) |
| priority | Number | Evaluation order (higher first) |
| chainTrigger | String | triggerId to fire after this one |
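As an illustration, a trigger combining these fields might be sketched as follows (the condition type and all values are hypothetical; field names come from the table):

```python
# Hypothetical trigger definition using the fields in the table above.
trigger = {
    "triggerId": "greet-on-keyword",
    "enabled": True,
    "condition": {"type": "keyword", "value": "hello"},  # type value is illustrative
    "response": {
        "messages": ["Hi there!", "Welcome!"],  # one chosen at random
        "delay": 1500,         # wait 1.5 s before sending
        "probability": 0.8,    # fires 80% of the time the condition matches
    },
    "cooldown": 10_000,        # at least 10 s between firings
    "maxTriggers": None,       # unlimited fire count
    "priority": 5,             # higher-priority triggers are evaluated first
    "chainTrigger": None,      # no follow-up trigger
}
```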
Reference
Run / Session
A single participant's end-to-end journey through an experiment. See Launch Your Experiment.
| Field | Type | Description |
|---|---|---|
| runId | String | Human-readable unique identifier |
| experimentId | ObjectId | Parent experiment |
| participantId | String | Reference to participant |
| assignedChamberLine | String | Which chamber line this run follows |
| currentPhase | Enum | initialization, identity_setup, global_pre_survey, chamber_line_execution, global_post_survey, completed, terminated |
| currentChamberIndex | Number | Position in chamber sequence |
| status | Enum | active, paused, completed, dropped, terminated |
| runPlan | Object | {chamberLineId, chambers: [{chamberId, order, status}]} |
| surveyResponses | Object | {globalPreSurvey, globalPostSurvey, chamberSurveys} |
Reference
Chatroom
Runtime instantiation of a chamber, created when participants are matched. See Monitor Live Sessions.
| Field | Type | Description |
|---|---|---|
| chatroomId | String | Unique identifier |
| experimentId | ObjectId | Parent experiment |
| chamberId | String | Source chamber template |
| status | Enum | waiting, ready, active, paused, completed, closed |
| participants | [Entry] | {participantId, slot, role, joinedAt, isActive} |
| chatHistory | [Message] | All messages with sender info, type, timestamps |
| processorInteractions | [Record] | Review/generate/suggestion records |
| settings | Object | {allowParticipantChat, maxMessageLength, chatDuration, enableReactions} |
Message senderType values: participant, human, system, mediator, agent, mediator_bot.
Message messageType values: text, system, broadcast, bot_response, ai_response, processor_suggestion.
Reference
Matching
The runtime process that groups participants into chatrooms. Runs every 5 seconds. See Property-Based Matching.
| Strategy | Description |
|---|---|
| Simple FIFO | First-in-first-out — matches in queue order |
| Chatroom-based | Matches based on chamber slot requirements (human count, roles) |
| Conditional | Matches based on property conditions on slots |
Participants enter the matching queue once their matchingStatus reaches ready_for_matching. The matching manager creates a chatroom when enough participants are available to fill the chamber's required slots.
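The Simple FIFO strategy can be sketched as follows (the queue, slot count, and function name are hypothetical; the real matching manager runs this kind of check every 5 seconds):

```python
from collections import deque

def match_fifo(queue: deque, slots: int):
    """Pop the first `slots` ready participants into one chatroom group (sketch)."""
    if len(queue) < slots:
        return None  # not enough participants yet; wait for the next cycle
    return [queue.popleft() for _ in range(slots)]

queue = deque(["p1", "p2", "p3", "p4", "p5"])
room = match_fifo(queue, slots=3)
# room == ["p1", "p2", "p3"]; "p4" and "p5" remain queued
```

Chatroom-based and conditional strategies would additionally check slot roles and participant properties before grouping.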
Annotator Documentation
The Annotator is a batch LLM annotation engine for processing text data at scale. Upload a CSV, configure LLM annotators, and download structured results.
Getting Started
What is the Annotator?
The Annotator is a batch LLM annotation engine. Upload a CSV, configure one or more LLM annotators with prompt templates, run the task at scale, and download structured results.
Common use cases include text classification, sentiment analysis, content coding, and replicating published annotation schemes from peer-reviewed research.
Getting Started
Key Concepts
| Concept | Description |
|---|---|
| Task | Top-level container holding CSV data, LLM configs, and processing settings |
| Row | One CSV record, processed independently |
| LLM Config | A provider + model + prompt template combination |
| Repetition | Running each row through each config multiple times for reliability |
| Template | Reusable annotation configuration that can be shared |
| Work Unit | One row × one config × one repetition = one API call |
Getting Started
Your First Annotation Task
Get started in four steps:
1. Upload a CSV containing the text you want annotated. Column names become template variables.
2. Choose a provider and model, then write a prompt template using {{columnName}} syntax to reference your data.
3. Start processing. The engine sends each row through your LLM config and stores the results.
4. Export your annotated data as CSV, Excel, or JSON.
Getting Started
Providing API Keys
The Annotator requires API keys for the LLM providers you use: OpenAI, Anthropic, and/or Google.
User-level keys are set in your account settings and reused across all your tasks. Per-task keys can be provided when creating or editing a task and override user-level keys for that task only.
Task Setup
Upload & Preview CSV Data
Upload a CSV file (max 10 MB). After upload you can preview the headers and the first few rows of data. Column names become {{columnName}} template variables for use in your prompt templates.
Task Setup
Configure LLM Annotators
Add one or more LLM configurations to a task. Each configuration specifies a provider (OpenAI, Anthropic, or Google), a model, and prompt templates. You can add multiple configs to compare models or prompt strategies side by side.
Each config supports temperature and maxTokens settings
to control response variability and length.
Task Setup
Write Prompt Templates
Each LLM config has a system prompt and a user prompt.
Use {{columnName}} syntax to insert values from each CSV row into the prompt.
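The {{columnName}} substitution can be sketched with a simple regex replacement (the function name and sample row are hypothetical; the Annotator's internal rendering may differ):

```python
import re

def render(template: str, row: dict) -> str:
    """Replace each {{columnName}} with the matching CSV value (sketch)."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(row.get(m.group(1), "")), template)

row = {"text": "Great product!", "source": "review"}
prompt = render("Classify the sentiment of this {{source}}: {{text}}", row)
# prompt == "Classify the sentiment of this review: Great product!"
```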
Task Setup
Set Repetitions
Set between 1 and 20 repetitions per row per config. Multiple repetitions let you measure reliability and use majority voting to determine final labels.
The total number of work units (API calls) is:
rows × configs × repetitions.
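For example, with hypothetical numbers:

```python
# 1,000 rows, 2 LLM configs, 3 repetitions each
rows, configs, repetitions = 1000, 2, 3
work_units = rows * configs * repetitions  # one API call per work unit
# work_units == 6000
```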
Processing
Estimate Costs
Before running a full task, use the cost estimator. It runs a sample of up to 10 rows, measures the tokens consumed, and extrapolates to give you an estimated cost for the complete task.
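The extrapolation logic can be sketched as follows (function name, token counts, and pricing are hypothetical; the built-in estimator measures real token usage from the sample):

```python
def estimate_cost(sample_tokens, total_rows, price_per_1k_tokens):
    """Extrapolate total cost from token counts measured on a small sample (sketch)."""
    avg_tokens = sum(sample_tokens) / len(sample_tokens)
    return avg_tokens * total_rows / 1000 * price_per_1k_tokens

# 10 sampled rows averaging ~500 tokens, 2,000 rows total, $0.002 per 1K tokens
cost = estimate_cost([500] * 10, total_rows=2000, price_per_1k_tokens=0.002)
# cost is approximately $2.00 for the full run
```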
Processing
Standard Processing
Standard mode streams results in real time using 1–20 parallel workers. Failed requests are retried automatically with exponential backoff. Processing is crash-safe — results are saved per row, so progress is never lost.
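The retry behaviour described above is typically implemented as exponential backoff with jitter; a minimal sketch (the function and parameters are hypothetical, not the Annotator's actual internals):

```python
import random
import time

def with_retries(call, max_attempts=5, base_delay=1.0):
    """Retry a failing request, doubling the wait after each failure (sketch)."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            # exponential backoff with a little random jitter
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.5))
```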
Processing
Batch Processing
Batch mode uses the OpenAI and Anthropic batch APIs for approximately 50% cost savings with a 24-hour turnaround. Google requests fall back to standard processing automatically.
Processing
Pause, Resume & Cancel
In standard mode, you can pause processing at any time. All completed results are preserved. Resume picks up where you left off. Cancel stops the task permanently but keeps all results that were completed before cancellation.
Templates
Use Research Templates
The Annotator includes 25+ peer-reviewed annotation presets from published research. Select a template to pre-fill your LLM configs with validated prompt designs.
| Authors | Configs | Domain |
|---|---|---|
| Gilardi et al. (2023) | 7 annotators | Text classification |
| Rathje et al. (2024) | 6 annotators | Psychological text analysis |
| Bhatia et al. (2025) | 3 annotators | Choice dilemma annotation |
| Bojic et al. (2025) | 5 annotators | Latent content analysis |
| Kumar et al. (2026) | 4 annotators | Empathic communication evaluation |
Templates
Create Custom Templates
Save any task configuration as a reusable template. Custom templates are private by default and available only to you. They capture the full LLM config including prompts, model settings, and repetition count.
Results
Monitor Progress
A progress bar shows real-time completion status. Each task follows a status lifecycle: pending → processing → completed or cancelled. In standard mode, a paused state is also available.
Results
Download Results
Export results in CSV, Excel, or JSON format. You can download partial results while the task is still running — useful for spot-checking quality before the full run completes.
Results
Understanding Output Format
Results use a flattened format with one row per input record. Columns include all original input data, the rendered prompts, and response columns for each config and repetition combination.
In R, use read.csv(). In Python, use pandas.read_csv(). In Excel, open the Excel export for automatic column formatting.
Response columns follow the naming pattern
[configName]_rep[N].
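For instance, a majority vote across repetition columns can be computed with the standard library (the config name "gpt" and the row values are hypothetical; column names follow the [configName]_rep[N] pattern):

```python
from collections import Counter

def majority_label(row: dict, config: str, reps: int) -> str:
    """Pick the most common label across a config's repetition columns (sketch)."""
    labels = [row[f"{config}_rep{i}"] for i in range(1, reps + 1)]
    return Counter(labels).most_common(1)[0][0]

row = {"text": "Great!", "gpt_rep1": "positive", "gpt_rep2": "positive", "gpt_rep3": "negative"}
label = majority_label(row, "gpt", reps=3)
# label == "positive" (2 of 3 repetitions agree)
```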