bead.deployment¶
Stage 5 of the bead pipeline: jsPsych 8.x batch experiment generation for JATOS.
Distribution Strategies¶
distribution
¶
List distribution configuration and strategies for batch experiments.
This module provides Pydantic models for configuring list distribution strategies in JATOS batch experiments. It supports 8 different distribution strategies for assigning participants to experiment lists.
DistributionStrategyType
¶
Bases: StrEnum
Available distribution strategies for list assignment.
Attributes:

| Name | Type | Description |
|---|---|---|
| RANDOM | str | Random selection from available lists. |
| SEQUENTIAL | str | Round-robin assignment (list 0, 1, 2, ..., N, 0, 1, ...). |
| BALANCED | str | Assign to least-used list (minimizes imbalance). |
| LATIN_SQUARE | str | Latin square counterbalancing for order effects. |
| STRATIFIED | str | Balance across multiple factors (e.g., condition × list). |
| WEIGHTED_RANDOM | str | Random assignment with non-uniform probabilities. |
| QUOTA_BASED | str | Fixed quota per list; stop when reached. |
| METADATA_BASED | str | Intelligent assignment based on list metadata properties. |
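The SEQUENTIAL and BALANCED rules can be sketched as follows. This is an illustrative Python version operating on per-list assignment counts (the kind of state a JATOS batch session would hold); the function names are hypothetical, not part of bead's API.

```python
# Hypothetical selection rules, given per-list assignment counts.

def pick_sequential(counts: list[int]) -> int:
    """Round-robin: the next index after the total number of assignments."""
    return sum(counts) % len(counts)

def pick_balanced(counts: list[int]) -> int:
    """Least-used list; ties broken by lowest index."""
    return min(range(len(counts)), key=lambda i: counts[i])

counts = [3, 1, 2]
print(pick_sequential(counts))  # 6 assignments so far -> 6 % 3 = 0
print(pick_balanced(counts))    # list 1 has the fewest participants -> 1
```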
QuotaConfig
¶
Bases: BeadBaseModel
Configuration for quota-based assignment.
Assigns participants to lists until each list reaches a target quota. When all quotas are filled, either raises an error or allows overflow.
Attributes:

| Name | Type | Description |
|---|---|---|
| participants_per_list | int | Target number of participants per list (must be > 0). |
| allow_overflow | bool | Whether to allow assignment after the quota is reached (default: False). If True, uses balanced assignment after quotas are filled; if False, raises an error when all quotas are reached. |

Examples:
>>> config = QuotaConfig(participants_per_list=25, allow_overflow=False)
>>> config.participants_per_list
25

Raises:

| Type | Description |
|---|---|
| ValueError | If participants_per_list <= 0. |
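The quota logic can be sketched like this. It is an illustrative stand-in, not the shipped implementation: fill lists that still have room, then either fall back to balanced assignment or raise, mirroring allow_overflow.

```python
# Hypothetical quota-based selection over per-list assignment counts.

def pick_quota(counts: list[int], quota: int, allow_overflow: bool) -> int:
    under = [i for i, c in enumerate(counts) if c < quota]
    if under:
        # Balanced assignment among lists that still have room.
        return min(under, key=lambda i: counts[i])
    if allow_overflow:
        # All quotas filled: continue with balanced assignment.
        return min(range(len(counts)), key=lambda i: counts[i])
    raise ValueError("all quotas reached")

print(pick_quota([25, 24, 25], quota=25, allow_overflow=False))  # 1
```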
WeightedRandomConfig
¶
Bases: BeadBaseModel
Configuration for weighted random assignment.
Assigns lists with non-uniform probabilities based on metadata expressions. Useful for oversampling certain lists or adaptive designs.
Attributes:

| Name | Type | Description |
|---|---|---|
| weight_expression | str | JavaScript expression to compute a weight from list metadata. The expression is evaluated with 'list_metadata' in scope. Example: "list_metadata.priority \|\| 1.0" |
| normalize_weights | bool | Whether to normalize weights to sum to 1.0 (default: True). |

Examples:
>>> config = WeightedRandomConfig(
...     weight_expression="list_metadata.priority || 1.0",
...     normalize_weights=True
... )
>>> config.weight_expression
'list_metadata.priority || 1.0'

Raises:

| Type | Description |
|---|---|
| ValueError | If weight_expression is empty. |
validate_weight_expression(v: str) -> str
classmethod
¶
Validate weight expression is non-empty.
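Weighted selection with normalization can be sketched as follows. This assumes each list's weight has already been computed from its metadata (the library evaluates weight_expression in JavaScript at runtime); the helper name is illustrative.

```python
import random

# Hypothetical weighted random selection; normalization mirrors
# normalize_weights=True.

def pick_weighted(weights: list[float], rng: random.Random) -> int:
    total = sum(weights)
    probs = [w / total for w in weights]  # normalize to sum to 1.0
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

rng = random.Random(0)  # seeded for a reproducible demonstration
# A list with weight 3.0 is drawn roughly three times as often as 1.0.
draws = [pick_weighted([3.0, 1.0], rng) for _ in range(1000)]
print(draws.count(0) > draws.count(1))  # True
```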
LatinSquareConfig
¶
Bases: BeadBaseModel
Configuration for Latin square counterbalancing.
Generates a Latin square design for systematic counterbalancing of order effects. Ensures each condition appears at each position across participants.
Attributes:

| Name | Type | Description |
|---|---|---|
| balanced | bool | Use a balanced Latin square vs. a standard one (default: True). Balanced squares use Bradley's (1958) algorithm. |

Examples:
>>> config = LatinSquareConfig(balanced=True)
>>> config.balanced
True
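The balanced construction attributed to Bradley (1958) can be sketched for an even number of lists n: the first row interleaves 0, 1, n-1, 2, n-2, ... and each later row shifts it by one (mod n), so every condition appears at every position and follows every other condition exactly once. The helper below is illustrative, not part of bead's API.

```python
# Hypothetical balanced Latin square generator (even n assumed).

def balanced_latin_square(n: int) -> list[list[int]]:
    seq, lo, hi = [0], 1, n - 1
    take_low = True
    while len(seq) < n:                       # build 0, 1, n-1, 2, n-2, ...
        seq.append(lo if take_low else hi)
        lo, hi = (lo + 1, hi) if take_low else (lo, hi - 1)
        take_low = not take_low
    return [[(x + i) % n for x in seq] for i in range(n)]

for row in balanced_latin_square(4):
    print(row)
# [0, 1, 3, 2]
# [1, 2, 0, 3]
# [2, 3, 1, 0]
# [3, 0, 2, 1]
```

Each column is a permutation of the conditions, and the 12 ordered adjacencies across rows are all distinct.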
MetadataBasedConfig
¶
Bases: BeadBaseModel
Configuration for metadata-based assignment.
Filters and ranks lists based on metadata expressions before assignment. Useful for assignment based on list properties like difficulty or priority.
Attributes:

| Name | Type | Description |
|---|---|---|
| filter_expression | str \| None | JavaScript boolean expression to filter lists (default: None). The expression is evaluated with 'list_metadata' in scope; only lists where it evaluates to true are eligible. Example: "list_metadata.difficulty === 'easy'" |
| rank_expression | str \| None | JavaScript expression to rank/sort lists (default: None). The expression is evaluated with 'list_metadata' in scope; lists are sorted by this value before assignment. Example: "list_metadata.priority \|\| 0" |
| rank_ascending | bool | Sort ascending vs. descending when using rank_expression (default: True). |

Examples:
>>> config = MetadataBasedConfig(
...     filter_expression="list_metadata.difficulty === 'easy'",
...     rank_expression="list_metadata.priority || 0",
...     rank_ascending=False
... )
>>> config.filter_expression
"list_metadata.difficulty === 'easy'"

Raises:

| Type | Description |
|---|---|
| ValueError | If both filter_expression and rank_expression are None. |
validate_at_least_one_expression() -> MetadataBasedConfig
¶
Validate at least one expression is provided.
StratifiedConfig
¶
Bases: BeadBaseModel
Configuration for stratified assignment.
Balances assignment across multiple factors (e.g., list × condition). Ensures even distribution across factor combinations.
Attributes:

| Name | Type | Description |
|---|---|---|
| factors | list[str] | List metadata keys to use as stratification factors (must be non-empty). Lists are grouped by unique combinations of these factor values. Example: ["condition", "verb_type"] groups by condition × verb_type. |

Examples:
>>> config = StratifiedConfig(factors=["condition", "verb_type"])
>>> config.factors
['condition', 'verb_type']

Raises:

| Type | Description |
|---|---|
| ValueError | If factors list is empty. |
validate_factors(v: list[str]) -> list[str]
classmethod
¶
Validate factors list is non-empty and contains no duplicates.
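The grouping step behind stratification can be sketched as follows. This is an illustrative Python version (the real grouping happens in the generated experiment code); the helper name and sample metadata are hypothetical.

```python
from collections import defaultdict

# Group list indices by the unique combination of their factor values.
def group_by_factors(metadata: list[dict], factors: list[str]) -> dict:
    groups: dict[tuple, list[int]] = defaultdict(list)
    for idx, meta in enumerate(metadata):
        key = tuple(meta.get(f) for f in factors)  # e.g. ('A', 'transitive')
        groups[key].append(idx)
    return dict(groups)

lists = [
    {"condition": "A", "verb_type": "transitive"},
    {"condition": "A", "verb_type": "intransitive"},
    {"condition": "B", "verb_type": "transitive"},
]
print(group_by_factors(lists, ["condition", "verb_type"]))
# {('A', 'transitive'): [0], ('A', 'intransitive'): [1], ('B', 'transitive'): [2]}
```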
ListDistributionStrategy
¶
Bases: BeadBaseModel
Configuration for list distribution strategy in batch experiments.
Defines how participants are assigned to experiment lists using JATOS batch sessions for server-side state management.
Attributes:

| Name | Type | Description |
|---|---|---|
| strategy_type | DistributionStrategyType | Type of distribution strategy (required, no default). |
| strategy_config | dict[str, Any] | Strategy-specific configuration parameters (default: empty dict). Required keys depend on strategy_type: quota_based requires 'participants_per_list'; weighted_random requires 'weight_expression'; metadata_based requires 'filter_expression' or 'rank_expression'; stratified requires 'factors'. |
| max_participants | int \| None | Maximum total participants across all lists (default: None, meaning unlimited). |
| error_on_exhaustion | bool | Raise an error when max_participants is reached (default: True). If False, assignment continues and may exceed max_participants. |
| debug_mode | bool | Enable debug mode, which always assigns the same list (default: False). Useful for development testing without batch session state. |
| debug_list_index | int | List index to use in debug mode (default: 0, must be >= 0). |
Examples:
>>> # Balanced assignment
>>> strategy = ListDistributionStrategy(
... strategy_type=DistributionStrategyType.BALANCED,
... max_participants=100
... )
>>> # Quota-based with 25 per list
>>> strategy = ListDistributionStrategy(
... strategy_type=DistributionStrategyType.QUOTA_BASED,
... strategy_config={"participants_per_list": 25, "allow_overflow": False},
... max_participants=400
... )
>>> # Debug mode (always list 0)
>>> strategy = ListDistributionStrategy(
... strategy_type=DistributionStrategyType.RANDOM,
... debug_mode=True,
... debug_list_index=0
... )
Raises:

| Type | Description |
|---|---|
| ValueError | If strategy_config doesn't match requirements for strategy_type. |

validate_strategy_config_matches_type() -> ListDistributionStrategy
¶
Validate strategy_config has required keys for strategy_type.
Raises:

| Type | Description |
|---|---|
| ValueError | If required configuration keys are missing for the strategy type. |
validate_debug_list_index(v: int) -> int
classmethod
¶
Validate debug_list_index is non-negative.
jsPsych Experiment Generation¶
generator
¶
jsPsych batch experiment generator.
Generates complete jsPsych 8.x experiments using JATOS batch sessions for server-side list distribution.
JsPsychExperimentGenerator
¶
Generator for jsPsych 8.x experiments.
This class orchestrates the generation of complete jsPsych experiments, including HTML, CSS, JavaScript, and data files. It converts bead's ExperimentList and Item models into a deployable jsPsych experiment.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config | ExperimentConfig | Experiment configuration. | required |
| output_dir | Path | Output directory for generated files. | required |
| rating_config | RatingScaleConfig \| None | Configuration for rating scale trials (required for rating experiments). Defaults to RatingScaleConfig() if not provided. | None |
| choice_config | ChoiceConfig \| None | Configuration for choice trials (required for choice experiments). Defaults to ChoiceConfig() if not provided. | None |

Attributes:

| Name | Type | Description |
|---|---|---|
| config | ExperimentConfig | Experiment configuration. |
| output_dir | Path | Output directory for generated files. |
| rating_config | RatingScaleConfig | Configuration for rating scale trials. |
| choice_config | ChoiceConfig | Configuration for choice trials. |
| jinja_env | Environment | Jinja2 environment for template rendering. |
Examples:
>>> from pathlib import Path
>>> config = ExperimentConfig(
... experiment_type="likert_rating",
... title="Acceptability Study",
... description="Rate sentences",
... instructions="Rate each sentence from 1 to 7"
... )
>>> generator = JsPsychExperimentGenerator(
... config=config,
... output_dir=Path("/tmp/experiment")
... )
>>> # generator.generate(lists, items)
generate(lists: list[ExperimentList], items: dict[UUID, Item], templates: dict[UUID, ItemTemplate]) -> Path
¶
Generate complete jsPsych batch experiment.
Creates a unified batch experiment that uses JATOS batch sessions for server-side list distribution. All participants are automatically assigned to lists according to the distribution strategy specified in the experiment configuration.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| lists | list[ExperimentList] | Experiment lists for batch distribution (required, must be non-empty). All lists are serialized to lists.jsonl and made available for participant assignment. | required |
| items | dict[UUID, Item] | Dictionary of items keyed by UUID (required, must be non-empty). All items referenced by lists must be present in this dictionary. | required |
| templates | dict[UUID, ItemTemplate] | Dictionary of item templates keyed by UUID (required, must be non-empty). All templates referenced by items must be present in this dictionary. | required |

Returns:

| Type | Description |
|---|---|
| Path | Path to the generated experiment directory, containing index.html; js/experiment.js and js/list_distributor.js; css/experiment.css; and data/config.json, data/lists.jsonl, data/items.jsonl, data/distribution.json. |

Raises:

| Type | Description |
|---|---|
| ValueError | If lists, items, or templates is empty, or if any referenced UUIDs are not found in the provided dictionaries. |
| SerializationError | If writing JSONL files fails. |
Examples:
>>> from pathlib import Path
>>> from bead.deployment.distribution import (
... ListDistributionStrategy, DistributionStrategyType
... )
>>> strategy = ListDistributionStrategy(
... strategy_type=DistributionStrategyType.BALANCED
... )
>>> config = ExperimentConfig(
... experiment_type="forced_choice",
... title="Test",
... description="Test",
... instructions="Test",
... distribution_strategy=strategy
... )
>>> generator = JsPsychExperimentGenerator(
... config=config, output_dir=Path("/tmp/exp")
... )
>>> # output_dir = generator.generate(lists, items, templates)
config
¶
Configuration models for jsPsych experiment generation.
This module provides Pydantic models for configuring jsPsych experiment generation, including experiment types, UI settings, and display options.
SpanDisplayConfig
¶
Bases: BaseModel
Visual configuration for span rendering in experiments.
Attributes:

| Name | Type | Description |
|---|---|---|
| highlight_style | Literal['background', 'underline', 'border'] | How to visually indicate spans. |
| color_palette | list[str] | CSS color values for span highlighting (light backgrounds). |
| dark_color_palette | list[str] | CSS color values for subscript label badges (dark, index-aligned with color_palette). |
| show_labels | bool | Whether to show span labels inline. |
| show_tooltips | bool | Whether to show tooltips on hover. |
| token_delimiter | str | Delimiter between tokens in display. |
| label_position | Literal['inline', 'below', 'tooltip'] | Where to display span labels. |
DemographicsFieldConfig
¶
Bases: BaseModel
Configuration for a single demographics form field.
Used to configure fields in a demographics form that appears before the experiment instructions. Supports various input types including text, number, dropdown, radio buttons, and checkboxes.
Attributes:

| Name | Type | Description |
|---|---|---|
| name | str | Field name (used as the key in collected data). |
| field_type | Literal['text', 'number', 'dropdown', 'radio', 'checkbox'] | Type of form input. |
| label | str | Display label for the field. |
| required | bool | Whether this field is required (default: False). |
| options | list[str] \| None | Options for dropdown/radio fields (default: None). |
| range | Range[int] \| Range[float] \| None | Numeric range constraint for number fields (default: None). |
| placeholder | str \| None | Placeholder text for text/number inputs (default: None). |
| help_text | str \| None | Help text displayed below the field (default: None). |
Examples:
>>> age_field = DemographicsFieldConfig(
... name="age",
... field_type="number",
... label="Your Age",
... required=True,
... range=Range[int](min=18, max=100),
... )
>>> education_field = DemographicsFieldConfig(
... name="education",
... field_type="dropdown",
... label="Highest Education Level",
... required=True,
... options=["High School", "Bachelor's", "Master's", "PhD"],
... )
DemographicsConfig
¶
Bases: BaseModel
Configuration for participant demographics form.
Defines a demographics form that appears before experiment instructions. When enabled, participants must complete this form before proceeding.
Attributes:

| Name | Type | Description |
|---|---|---|
| enabled | bool | Whether to show the demographics form (default: False). |
| title | str | Title displayed at the top of the form (default: "Participant Information"). |
| fields | list[DemographicsFieldConfig] | List of fields to include in the form. |
| submit_button_text | str | Text for the submit button (default: "Continue"). |
Examples:
>>> config = DemographicsConfig(
... enabled=True,
... title="About You",
... fields=[
... DemographicsFieldConfig(
... name="age",
... field_type="number",
... label="Age",
... required=True,
... ),
... ],
... )
>>> config.enabled
True
InstructionPage
¶
Bases: BaseModel
A single instruction page for multi-page instructions.
Attributes:

| Name | Type | Description |
|---|---|---|
| content | str | HTML content for this page. |
| title | str \| None | Optional title for this page (displayed above the content). |
Examples:
>>> page = InstructionPage(
... title="Welcome",
... content="<p>Thank you for participating in this study.</p>",
... )
InstructionsConfig
¶
Bases: BaseModel
Configuration for multi-page experiment instructions.
Allows creating rich, multi-page instructions with navigation controls. Participants can optionally navigate backwards through pages.
Attributes:

| Name | Type | Description |
|---|---|---|
| pages | list[InstructionPage] | List of instruction pages to display. |
| show_page_numbers | bool | Whether to show page numbers (default: True). |
| allow_backwards | bool | Whether to allow navigating to previous pages (default: True). |
| button_label_next | str | Label for the next button (default: "Next"). |
| button_label_finish | str | Label for the final button (default: "Begin Experiment"). |
Examples:
>>> config = InstructionsConfig(
... pages=[
... InstructionPage(title="Welcome", content="<p>Welcome!</p>"),
... InstructionPage(title="Task", content="<p>Your task is...</p>"),
... ],
... allow_backwards=True,
... )
>>> len(config.pages)
2
>>> # Create from plain text (single page)
>>> config = InstructionsConfig.from_text("Please rate each sentence.")
>>> len(config.pages)
1
from_text(text: str) -> InstructionsConfig
classmethod
¶
Create single-page instructions from plain text.
Provides backward compatibility with simple string instructions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| text | str | Plain text or HTML content for a single instruction page. | required |

Returns:

| Type | Description |
|---|---|
| InstructionsConfig | Instructions config with a single page. |

Examples:
>>> config = InstructionsConfig.from_text("Please rate each sentence.")
>>> len(config.pages)
1
ExperimentConfig
¶
Bases: BaseModel
Configuration for jsPsych experiment generation.
Defines all configurable aspects of a jsPsych experiment, including experiment type, UI settings, trial presentation options, and list distribution strategy.
Attributes:

| Name | Type | Description |
|---|---|---|
| experiment_type | ExperimentType | Type of experiment (likert_rating, slider_rating, binary_choice, forced_choice, span_labeling). |
| title | str | Experiment title displayed to participants. |
| description | str | Brief description of the experiment. |
| instructions | str \| InstructionsConfig | Instructions shown to participants before the experiment. Can be a simple string (single page) or an InstructionsConfig for multi-page instructions. |
| demographics | DemographicsConfig \| None | Optional demographics form shown before instructions (default: None). When provided and enabled, participants must complete this form first. |
| distribution_strategy | ListDistributionStrategy | List distribution strategy for batch mode (required, no default). Specifies how participants are assigned to experiment lists using JATOS batch sessions. See bead.deployment.distribution for available strategies. |
| randomize_trial_order | bool | Whether to randomize trial order (default: True). |
| show_progress_bar | bool | Whether to show a progress bar during the experiment (default: True). |
| ui_theme | UITheme | UI theme for the experiment (light, dark, auto; default: light). |
| on_finish_url | str \| None | URL to redirect to after experiment completion (default: None). If prolific_completion_code is set, this is auto-generated. |
| allow_backwards | bool | Whether participants can go back to previous trials (default: False). |
| show_click_target | bool | Whether to show a click target for accuracy tracking (default: False). |
| minimum_duration_ms | int | Minimum trial duration in milliseconds (default: 0). |
| use_jatos | bool | Whether to enable JATOS integration (default: True). |
| prolific_completion_code | str \| None | Prolific completion code for automatic redirect URL generation (default: None). When set, on_finish_url is auto-generated as https://app.prolific.co/submissions/complete?cc= with the completion code appended. |
| slopit | SlopitIntegrationConfig | Slopit behavioral capture integration configuration (default: disabled). When enabled, captures keystroke dynamics, focus patterns, and paste events during experiment trials for AI-assisted response detection. |
| span_display | SpanDisplayConfig \| None | Span display configuration (default: None). Auto-enabled when items contain span annotations. Controls highlight style, colors, and label placement for span rendering. |
Examples:
>>> from bead.deployment.distribution import (
... ListDistributionStrategy,
... DistributionStrategyType
... )
>>> strategy = ListDistributionStrategy(
... strategy_type=DistributionStrategyType.BALANCED,
... max_participants=100
... )
>>> config = ExperimentConfig(
... experiment_type="likert_rating",
... title="Sentence Acceptability Study",
... description="Rate the acceptability of sentences",
... instructions="Please rate each sentence on a scale from 1 to 7.",
... distribution_strategy=strategy
... )
>>> config.randomize_trial_order
True
>>> config.ui_theme
'light'
RatingScaleConfig
¶
Bases: BaseModel
Configuration for rating scale trials.
Attributes:

| Name | Type | Description |
|---|---|---|
| scale | Range[int] | Numeric range for the rating scale with min and max values. Default is Range(min=1, max=7) for a standard 7-point Likert scale. |
| min_label | str | Label for the minimum value (default: "Not at all"). |
| max_label | str | Label for the maximum value (default: "Very much"). |
| step | int | Step size between values (default: 1). |
| show_numeric_labels | bool | Whether to show numeric labels on the scale (default: True). |
| required | bool | Whether a response is required (default: True). |
Examples:
>>> config = RatingScaleConfig()
>>> config.scale.min
1
>>> config.scale.max
7
>>> config.scale.contains(4)
True
>>> # Custom 5-point scale
>>> config = RatingScaleConfig(scale=Range[int](min=1, max=5))
>>> config.scale.max
5
ChoiceConfig
¶
Bases: BaseModel
Configuration for choice trials.
Attributes:

| Name | Type | Description |
|---|---|---|
| button_html | str \| None | Custom HTML for choice buttons (default: None). |
| required | bool | Whether a response is required (default: True). |
| randomize_choice_order | bool | Whether to randomize the order of choices (default: False). |
trials
¶
Trial generators for jsPsych experiments.
This module provides functions to generate jsPsych trial objects from Item models. It supports various trial types including rating scales, forced choice, binary choice, and span labeling trials. Composite tasks (e.g., rating with span highlights) are also supported.
SpanColorMap
dataclass
¶
Light and dark color assignments for spans.
Attributes:

| Name | Type | Description |
|---|---|---|
| light_by_span_id | dict[str, str] | Light (background) colors keyed by span_id. |
| dark_by_span_id | dict[str, str] | Dark (badge) colors keyed by span_id. |
| light_by_label | dict[str, str] | Light (background) colors keyed by label name. |
| dark_by_label | dict[str, str] | Dark (badge) colors keyed by label name. |
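The per-label mappings can be sketched like this: labels receive colors by cycling through index-aligned light/dark palettes, so spans sharing a label share a color. The palette values and helper name below are illustrative, not the library's defaults.

```python
# Hypothetical color assignment, index-aligned light/dark palettes.
LIGHT = ["#fde2e2", "#e2f0fd", "#e6fde2"]   # light background colors
DARK = ["#b91c1c", "#1d4ed8", "#15803d"]    # dark badge colors

def assign_colors(labels: list[str]) -> tuple[dict[str, str], dict[str, str]]:
    light, dark = {}, {}
    for label in labels:
        if label not in light:
            i = len(light) % len(LIGHT)  # cycle if labels exceed the palette
            light[label] = LIGHT[i]
            dark[label] = DARK[i]       # same index keeps the pair aligned
    return light, dark

light, dark = assign_colors(["agent", "patient", "agent"])
print(light)  # {'agent': '#fde2e2', 'patient': '#e2f0fd'}
```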
create_trial(item: Item, template: ItemTemplate, experiment_config: ExperimentConfig, trial_number: int, rating_config: RatingScaleConfig | None = None, choice_config: ChoiceConfig | None = None) -> dict[str, JsonValue]
¶
Create a jsPsych trial object from an Item.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| item | Item | The item to create a trial from. | required |
| template | ItemTemplate | The item template for this item. | required |
| experiment_config | ExperimentConfig | The experiment configuration. | required |
| trial_number | int | The trial number (for tracking). | required |
| rating_config | RatingScaleConfig \| None | Configuration for rating scale trials (required for rating types). | None |
| choice_config | ChoiceConfig \| None | Configuration for choice trials (required for choice types). | None |

Returns:

| Type | Description |
|---|---|
| dict[str, JsonValue] | A jsPsych trial object with item and template metadata. |

Raises:

| Type | Description |
|---|---|
| ValueError | If required configuration is missing for the experiment type. |
Examples:
>>> from uuid import UUID
>>> from bead.items.item_template import TaskSpec, PresentationSpec
>>> item = Item(
... item_template_id=UUID("12345678-1234-5678-1234-567812345678"),
... rendered_elements={"sentence": "The cat broke the vase"}
... )
>>> template = ItemTemplate(
... name="test",
... judgment_type="acceptability",
... task_type="ordinal_scale",
... task_spec=TaskSpec(prompt="Rate this"),
... presentation_spec=PresentationSpec(mode="static")
... )
>>> config = ExperimentConfig(
... experiment_type="likert_rating",
... title="Test",
... description="Test",
... instructions="Test"
... )
>>> rating_config = RatingScaleConfig()
>>> trial = create_trial(item, template, config, 0, rating_config=rating_config)
>>> trial["type"]
'bead-slider-rating'
create_consent_trial(consent_text: str) -> dict[str, JsonValue]
¶
Create a consent trial.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| consent_text | str | The consent text to display. | required |

Returns:

| Type | Description |
|---|---|
| dict[str, JsonValue] | A jsPsych html-button-response trial object. |
create_completion_trial(completion_message: str = 'Thank you for participating!') -> dict[str, JsonValue]
¶
Create a completion trial.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| completion_message | str | The completion message to display. | 'Thank you for participating!' |

Returns:

| Type | Description |
|---|---|
| dict[str, JsonValue] | A jsPsych html-keyboard-response trial object. |
create_demographics_trial(config: DemographicsConfig) -> dict[str, JsonValue]
¶
Create a demographics survey trial.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config | DemographicsConfig | The demographics form configuration. | required |

Returns:

| Type | Description |
|---|---|
| dict[str, JsonValue] | A jsPsych survey trial object. |
Examples:
>>> from bead.deployment.jspsych.config import (
... DemographicsConfig, DemographicsFieldConfig
... )
>>> config = DemographicsConfig(
... enabled=True,
... title="About You",
... fields=[
... DemographicsFieldConfig(
... name="age",
... field_type="number",
... label="Your Age",
... required=True,
... ),
... ],
... )
>>> trial = create_demographics_trial(config)
>>> trial["type"]
'survey'
create_instructions_trial(instructions: str | InstructionsConfig) -> dict[str, JsonValue]
¶
Create an instruction trial supporting both simple strings and rich config.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| instructions | str \| InstructionsConfig | Either a simple instruction string (single page, keyboard response) or an InstructionsConfig for multi-page instructions. | required |

Returns:

| Type | Description |
|---|---|
| dict[str, JsonValue] | A jsPsych trial object. For simple strings, returns html-keyboard-response; for InstructionsConfig, returns an instructions plugin trial. |
Examples:
>>> # Simple string instructions
>>> trial = create_instructions_trial("Rate each sentence from 1-7.")
>>> trial["type"]
'html-keyboard-response'
>>> # Multi-page instructions
>>> from bead.deployment.jspsych.config import InstructionsConfig, InstructionPage
>>> config = InstructionsConfig(
... pages=[
... InstructionPage(title="Welcome", content="<p>Welcome!</p>"),
... InstructionPage(title="Task", content="<p>Rate sentences.</p>"),
... ],
... )
>>> trial = create_instructions_trial(config)
>>> trial["type"]
'instructions'
>>> len(trial["pages"])
2
randomizer
¶
JavaScript randomizer code generator from OrderingConstraints.
This module converts Python OrderingConstraint models into JavaScript code that performs constraint-aware trial randomization at jsPsych runtime. This enables per-participant randomization while satisfying all ordering constraints.
generate_randomizer_function(item_ids: list[UUID], constraints: list[OrderingConstraint], metadata: dict[UUID, dict[str, JsonValue]]) -> str
¶
Generate JavaScript code for constraint-aware trial randomization.
This function converts OrderingConstraints into JavaScript code that can randomize trial order at runtime while satisfying all constraints. The generated code uses seeded randomization for reproducibility and rejection sampling to satisfy constraints.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| item_ids | list[UUID] | List of item IDs included in the experiment. | required |
| constraints | list[OrderingConstraint] | Ordering constraints to enforce. | required |
| metadata | dict[UUID, dict[str, JsonValue]] | Item metadata needed for constraint checking (keyed by item UUID). | required |

Returns:

| Type | Description |
|---|---|
| str | JavaScript code implementing the randomizeTrials() function. |
Examples:
>>> from uuid import UUID
>>> item1 = UUID("12345678-1234-5678-1234-567812345678")
>>> item2 = UUID("87654321-4321-8765-4321-876543218765")
>>> constraint = OrderingConstraint(
... no_adjacent_property="item_metadata.condition"
... )
>>> metadata = {
... item1: {"condition": "A"},
... item2: {"condition": "B"}
... }
>>> js_code = generate_randomizer_function(
... [item1, item2],
... [constraint],
... metadata
... )
>>> "function randomizeTrials" in js_code
True
>>> "checkNoAdjacentConstraints" in js_code
True
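The rejection-sampling idea behind the generated JavaScript can be sketched in Python: shuffle with a seeded RNG, then re-shuffle until no two adjacent trials share the constrained metadata property. The function below is an illustrative model of that logic, not the library's implementation.

```python
import random

# Hypothetical constraint-aware randomizer (no_adjacent_property style).
def randomize_trials(ids, meta, prop, seed=42, max_tries=1000):
    rng = random.Random(seed)  # seeded for reproducibility
    order = list(ids)
    for _ in range(max_tries):
        rng.shuffle(order)
        # Accept only if no adjacent pair shares the constrained property.
        if all(meta[a][prop] != meta[b][prop]
               for a, b in zip(order, order[1:])):
            return order
    raise RuntimeError("no valid ordering found within max_tries")

meta = {"i1": {"condition": "A"}, "i2": {"condition": "A"},
        "i3": {"condition": "B"}}
order = randomize_trials(["i1", "i2", "i3"], meta, "condition")
print(order)  # the lone "B" item always ends up between the two "A" items
```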
JATOS Export¶
exporter
¶
JATOS exporter for jsPsych experiments.
This module provides the JATOSExporter class for creating JATOS study packages (.jzip) from generated jsPsych experiments.
JATOSExporter
¶
Bases: BeadBaseModel
Exports jsPsych experiments as JATOS study packages (.jzip).
A .jzip file is a ZIP archive containing:

- study.json: JATOS metadata
- experiment/: all experiment files (HTML, JS, CSS, data)

Attributes:

| Name | Type | Description |
|---|---|---|
| study_title | str | Title of the JATOS study. |
| study_description | str | Description of the study. |
Examples:
>>> from pathlib import Path
>>> exporter = JATOSExporter("Test Study", "A test study")
>>> # exporter.export(Path("experiment"), Path("study.jzip"))
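The archive layout can be sketched with the standard library. The study.json content below is a placeholder for illustration, not JATOS's actual metadata schema, and the helper name is hypothetical.

```python
import json
import tempfile
import zipfile
from pathlib import Path

# Hypothetical .jzip writer: study.json metadata plus experiment files
# nested under experiment/.
def write_jzip(experiment_dir: Path, output_path: Path, title: str) -> None:
    with zipfile.ZipFile(output_path, "w") as zf:
        zf.writestr("study.json", json.dumps({"title": title}))
        for f in sorted(experiment_dir.rglob("*")):
            if f.is_file():
                zf.write(f, Path("experiment") / f.relative_to(experiment_dir))

tmp = Path(tempfile.mkdtemp())
exp = tmp / "experiment"
exp.mkdir()
(exp / "index.html").write_text("<html></html>")
write_jzip(exp, tmp / "study.jzip", "Test Study")
with zipfile.ZipFile(tmp / "study.jzip") as zf:
    print(zf.namelist())  # ['study.json', 'experiment/index.html']
```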
export(experiment_dir: Path, output_path: Path, component_title: str = 'Main Experiment') -> None
¶
Create JATOS .jzip file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| experiment_dir | Path | Directory containing experiment files (from JsPsychExperimentGenerator). Expected structure: index.html, css/experiment.css, js/experiment.js, data/timeline.json, data/config.json. | required |
| output_path | Path | Output path for the .jzip file. | required |
| component_title | str | Title for the JATOS component. | 'Main Experiment' |

Raises:

| Type | Description |
|---|---|
| ValueError | If experiment_dir does not exist or is missing required files. |
| FileNotFoundError | If required experiment files are not found. |

Examples:
>>> # exporter.export(Path("experiment"), Path("study.jzip"))
api
¶
JATOS REST API client.
This module provides the JATOSClient class for interacting with JATOS servers via the REST API.
JATOSClient
¶
Client for JATOS REST API.
Supports uploading study packages (.jzip), listing studies, deleting studies, and retrieving study results.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| base_url | str | Base URL for the JATOS instance (e.g., "https://jatos.example.com"). | required |
| api_token | str | API token for authentication. | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| base_url | str | Base URL for the JATOS instance (trailing slash removed). |
| api_token | str | API token for authentication. |
| session | Session | HTTP session with authentication headers configured. |
Examples:
>>> client = JATOSClient("https://jatos.example.com", "my-api-token")
>>> # studies = client.list_studies()
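The session setup can be sketched as below. This assumes Bearer-token authentication; the class name is illustrative and the exact header scheme should be checked against your JATOS server's API documentation.

```python
import requests

# Minimal sketch of a JATOS REST client's session configuration.
class MiniJATOSClient:
    def __init__(self, base_url: str, api_token: str) -> None:
        self.base_url = base_url.rstrip("/")  # trailing slash removed
        self.session = requests.Session()
        # Assumed auth scheme: API token sent as a Bearer token.
        self.session.headers["Authorization"] = f"Bearer {api_token}"

    def list_studies(self):
        resp = self.session.get(f"{self.base_url}/api/v1/studies")
        resp.raise_for_status()  # surface HTTPError on failure
        return resp.json()

client = MiniJATOSClient("https://jatos.example.com/", "my-api-token")
print(client.base_url)  # 'https://jatos.example.com'
```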
upload_study(jzip_path: Path) -> dict[str, JsonValue]
¶
Upload study package to JATOS.
POST /api/v1/studies
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| jzip_path | Path | Path to the .jzip file to upload. | required |

Returns:

| Type | Description |
|---|---|
| dict[str, JsonValue] | Response with the study ID, UUID, and URL. |

Raises:

| Type | Description |
|---|---|
| HTTPError | If the upload fails. |
| FileNotFoundError | If the .jzip file does not exist. |

Examples:
>>> # response = client.upload_study(Path("study.jzip"))
list_studies() -> list[dict[str, JsonValue]]
¶
List all studies.
GET /api/v1/studies
Returns:

| Type | Description |
|---|---|
| list[dict[str, JsonValue]] | List of study dictionaries. |

Raises:

| Type | Description |
|---|---|
| HTTPError | If the request fails. |

Examples:
>>> # studies = client.list_studies()
get_study(study_id: int) -> dict[str, JsonValue]
¶
Get study details.
GET /api/v1/studies/{study_id}
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| study_id | int | Study ID. | required |

Returns:

| Type | Description |
|---|---|
| dict[str, JsonValue] | Study details dictionary. |

Raises:

| Type | Description |
|---|---|
| HTTPError | If the request fails. |

Examples:
>>> # study = client.get_study(123)
delete_study(study_id: int) -> None
¶
Delete a study.
get_results(study_id: int) -> list[int]
¶
Get all result IDs for a study.
GET /api/v1/studies/{study_id}/results
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| study_id | int | Study ID. | required |

Returns:

| Type | Description |
|---|---|
| list[int] | List of result IDs. |

Raises:

| Type | Description |
|---|---|
| HTTPError | If the request fails. |

Examples:
>>> # result_ids = client.get_results(123)