job

ApplyWorkflow

Bases: _ApplyWorkflowBaseV1, SQLModel

Represent a workflow run

This table is responsible for storing the state of a workflow execution in the database.

Attributes:

id (Optional[int])
    Primary key.

project_id (Optional[int])
    ID of the project the workflow belongs to, or None if the project was deleted.

input_dataset_id (Optional[int])
    ID of the input dataset, or None if the dataset was deleted.

output_dataset_id (Optional[int])
    ID of the output dataset, or None if the dataset was deleted.

workflow_id (Optional[int])
    ID of the workflow being applied, or None if the workflow was deleted.

workflow_dump (dict[str, Any])
    Copy of the submitted workflow at submission.

input_dataset_dump (dict[str, Any])
    Copy of the input_dataset at submission.

output_dataset_dump (dict[str, Any])
    Copy of the output_dataset at submission.

start_timestamp (datetime)
    Timestamp of when the run began.

end_timestamp (Optional[datetime])
    Timestamp of when the run ended or failed.

status (str)
    Status of the run.

log (Optional[str])
    Forwarded copy of the workflow logs.

user_email (str)
    Email address of the user who submitted the job.

slurm_account (Optional[str])
    Account to be used when submitting the job to SLURM (see the "account" option in the sbatch documentation).

first_task_index (int)
    Index of the first workflow task to execute.

last_task_index (int)
    Index of the last workflow task to execute.
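
Example

A minimal sketch of constructing an ApplyWorkflow record by hand, assuming the import path matches the source file shown below. All values (IDs, email, dump payloads, task indices) are hypothetical placeholders; in fractal-server these fields are normally populated by the API layer when a job is submitted.

from fractal_server.app.models.v1.job import ApplyWorkflow

# Hypothetical placeholder values: the *_dump fields hold JSON snapshots of
# the related project/workflow/dataset rows, taken at submission time.
job = ApplyWorkflow(
    project_id=1,
    workflow_id=1,
    input_dataset_id=1,
    output_dataset_id=2,
    user_email="user@example.org",
    project_dump={"id": 1, "name": "my-project"},
    workflow_dump={"id": 1, "name": "my-workflow"},
    input_dataset_dump={"id": 1, "name": "input-dataset"},
    output_dataset_dump={"id": 2, "name": "output-dataset"},
    first_task_index=0,
    last_task_index=2,
)
# Fields not passed here (e.g. status, start_timestamp) take their
# declared defaults.
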
Source code in fractal_server/app/models/v1/job.py
class ApplyWorkflow(_ApplyWorkflowBaseV1, SQLModel, table=True):
    """
    Represent a workflow run

    This table is responsible for storing the state of a workflow execution in
    the database.

    Attributes:
        id:
            Primary key.
        project_id:
            ID of the project the workflow belongs to, or `None` if the project
            was deleted.
        input_dataset_id:
            ID of the input dataset, or `None` if the dataset was deleted.
        output_dataset_id:
            ID of the output dataset, or `None` if the dataset was deleted.
        workflow_id:
            ID of the workflow being applied, or `None` if the workflow was
            deleted.
        workflow_dump:
            Copy of the submitted workflow at submission.
        input_dataset_dump:
            Copy of the input_dataset at submission.
        output_dataset_dump:
            Copy of the output_dataset at submission.
        start_timestamp:
            Timestamp of when the run began.
        end_timestamp:
            Timestamp of when the run ended or failed.
        status:
            Status of the run.
        log:
            Forwarded copy of the workflow logs.
        user_email:
            Email address of the user who submitted the job.
        slurm_account:
            Account to be used when submitting the job to SLURM (see "account"
            option in [`sbatch`
            documentation](https://slurm.schedmd.com/sbatch.html#SECTION_OPTIONS)).
        first_task_index:
            Index of the first workflow task to execute.
        last_task_index:
            Index of the last workflow task to execute.
    """

    class Config:
        arbitrary_types_allowed = True

    id: Optional[int] = Field(default=None, primary_key=True)

    project_id: Optional[int] = Field(foreign_key="project.id")
    workflow_id: Optional[int] = Field(foreign_key="workflow.id")
    input_dataset_id: Optional[int] = Field(foreign_key="dataset.id")
    output_dataset_id: Optional[int] = Field(foreign_key="dataset.id")

    user_email: str = Field(nullable=False)
    slurm_account: Optional[str]

    input_dataset_dump: dict[str, Any] = Field(
        sa_column=Column(JSON, nullable=False)
    )
    output_dataset_dump: dict[str, Any] = Field(
        sa_column=Column(JSON, nullable=False)
    )
    workflow_dump: dict[str, Any] = Field(
        sa_column=Column(JSON, nullable=False)
    )
    project_dump: dict[str, Any] = Field(
        sa_column=Column(JSON, nullable=False)
    )

    working_dir: Optional[str]
    working_dir_user: Optional[str]
    first_task_index: int
    last_task_index: int

    start_timestamp: datetime = Field(
        default_factory=get_timestamp,
        sa_column=Column(DateTime(timezone=True), nullable=False),
    )
    end_timestamp: Optional[datetime] = Field(
        default=None, sa_column=Column(DateTime(timezone=True))
    )
    status: str = JobStatusTypeV1.SUBMITTED
    log: Optional[str] = None
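
Usage sketch

Continuing from the construction example above (reusing the job instance), the snippet below persists and queries such a record with a plain synchronous SQLModel session. It is only an illustration: the in-memory SQLite engine and the create_all() call are assumptions made for the example, not how fractal-server actually configures its database.

from sqlmodel import Session, SQLModel, create_engine, select

from fractal_server.app.models.v1.job import ApplyWorkflow

# Assumption: an in-memory SQLite database with all registered tables
# created up front (fractal-server uses its own engine/session setup).
engine = create_engine("sqlite://")
SQLModel.metadata.create_all(engine)

with Session(engine) as db:
    db.add(job)  # `job` as constructed in the example above
    db.commit()
    db.refresh(job)

    # Fetch all jobs that currently share this job's status
    statement = select(ApplyWorkflow).where(ApplyWorkflow.status == job.status)
    jobs = db.exec(statement).all()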