Schemas
- class lumigator_schemas.experiments.ExperimentCreate(*, name: str, description: str = '', dataset: ~uuid.UUID, max_samples: int = -1, task_definition: ~lumigator_schemas.tasks.SummarizationTaskDefinition | ~lumigator_schemas.tasks.TranslationTaskDefinition | ~lumigator_schemas.tasks.TextGenerationTaskDefinition = <factory>)
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
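A minimal construction sketch; the dataset UUID is a placeholder for an ID returned by the datasets API, and task_definition is omitted so it falls back to its factory default:

```python
from uuid import UUID

from lumigator_schemas.experiments import ExperimentCreate

# Placeholder dataset ID; use the UUID of an uploaded dataset.
experiment = ExperimentCreate(
    name="summarization-comparison",
    description="Compare candidate models on a held-out subset",
    dataset=UUID("00000000-0000-0000-0000-000000000000"),
    max_samples=100,
)
print(experiment.model_dump_json())
```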
- class lumigator_schemas.experiments.GetExperimentResponse(*, id: str, name: str, description: str, created_at: datetime, max_samples: int = -1, task_definition: SummarizationTaskDefinition | TranslationTaskDefinition | TextGenerationTaskDefinition, dataset: UUID, updated_at: datetime | None = None, workflows: list[WorkflowDetailsResponse] | None = None)
- model_config: ClassVar[ConfigDict] = {'from_attributes': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.workflows.WorkflowCreateRequest(*, name: str, description: str = '', experiment_id: str | None = None, model: str, provider: str, secret_key_name: str | None = None, batch_size: ~typing.Annotated[int, ~annotated_types.Gt(gt=0)] = 1, base_url: str | None = None, system_prompt: str = '', inference_output_field: str = 'predictions', config_template: str | None = None, generation_config: ~lumigator_schemas.jobs.GenerationConfig = <factory>, job_timeout_sec: ~typing.Annotated[int, ~annotated_types.Gt(gt=0)] = 3600, metrics: list[str] | None = None)
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
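A sketch of a workflow creation payload; the experiment ID, system prompt, model, and provider values are illustrative (the model/provider and metrics values mirror defaults used elsewhere in this module):

```python
from lumigator_schemas.workflows import WorkflowCreateRequest

workflow = WorkflowCreateRequest(
    name="bart-summarization",
    experiment_id="placeholder-experiment-id",  # use the ID returned when creating the experiment
    model="facebook/bart-large-cnn",
    provider="hf",
    system_prompt="Summarize the following text.",
    metrics=["rouge", "meteor", "bertscore", "bleu"],
)
print(workflow.model_dump_json(exclude_none=True))
```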
- class lumigator_schemas.workflows.WorkflowDetailsResponse(*, id: str, experiment_id: str, model: str, name: str, description: str, system_prompt: str, status: WorkflowStatus, created_at: datetime, updated_at: datetime | None = None, jobs: list[JobResults] | None = None, metrics: dict | None = None, parameters: dict | None = None, artifacts_download_url: str | None = None)
- model_config: ClassVar[ConfigDict] = {'from_attributes': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.workflows.WorkflowResponse(*, id: str, experiment_id: str, model: str, name: str, description: str, system_prompt: str, status: WorkflowStatus, created_at: datetime, updated_at: datetime | None = None)
- model_config: ClassVar[ConfigDict] = {'from_attributes': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.workflows.WorkflowStatus(value)
- class lumigator_schemas.datasets.DatasetDownloadResponse(*, id: UUID, download_urls: list[str])
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.datasets.DatasetFormat(value)
- class lumigator_schemas.datasets.DatasetResponse(*, id: UUID, filename: str, format: DatasetFormat, size: int, ground_truth: bool, run_id: UUID | None, generated: bool | None, generated_by: str | None, created_at: datetime)
- model_config: ClassVar[ConfigDict] = {'from_attributes': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.jobs.BaseJobConfig(*, secret_key_name: str | None = None)
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.jobs.DeepEvalLocalModelConfig(*, model_name: str, model_base_url: str, model_api_key: str | None = 'ollama')
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.jobs.GenerationConfig(*, max_new_tokens: int = 1024, frequency_penalty: float = 0.0, temperature: float = 0.5, top_p: float = 0.5)
A custom, limited configuration for text generation; roughly a subset of the Hugging Face GenerationConfig (https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationConfig).
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
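Because model_config sets extra='forbid', misspelled or unsupported fields raise a ValidationError instead of being silently dropped. A brief sketch:

```python
from pydantic import ValidationError

from lumigator_schemas.jobs import GenerationConfig

generation = GenerationConfig(max_new_tokens=256, temperature=0.2, top_p=0.9)

try:
    GenerationConfig(max_tokens=256)  # unsupported field name is rejected
except ValidationError as err:
    print(err)
```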
- class lumigator_schemas.jobs.Job(*, type: str | None = None, submission_id: str | None = None, driver_info: str | None = None, status: ~lumigator_schemas.jobs.JobStatus, config: dict | None = <factory>, message: str | None = None, error_type: str | None = None, start_time: ~datetime.datetime | None = None, end_time: ~datetime.datetime | None = None, metadata: dict = <factory>, runtime_env: dict = <factory>, driver_agent_http_address: str | None = None, driver_node_id: str | None = None, driver_exit_code: int | None = None, id: ~uuid.UUID, name: str, description: str, job_type: ~lumigator_schemas.jobs.JobType, created_at: ~datetime.datetime, experiment_id: ~uuid.UUID | None = None, updated_at: ~datetime.datetime | None = None)
Job represents the composition of JobResponse and JobSubmissionResponse.
JobSubmissionResponse was formerly returned from some /health/jobs related endpoints, while JobResponse was used by /jobs related endpoints.
The only conflicting field in the two schemas is 'status', which is consistent in what it intends to represent but uses different types (JobStatus vs. str).
The Job type has both id and submission_id, which contain the same data.
NOTE: Job is intended to reduce breaking changes experienced by the UI and other consumers. It was not conceived as a type that will be around for long, as the API needs to be refactored to better support experiments.
- model_config: ClassVar[ConfigDict] = {'from_attributes': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.jobs.JobAnnotateConfig(*, secret_key_name: str | None = None, job_type: ~typing.Annotated[~typing.Literal[JobType.ANNOTATION], ~pydantic.json_schema.SkipJsonSchema()] = JobType.ANNOTATION, model: str = 'facebook/bart-large-cnn', provider: str = 'hf', task_definition: ~lumigator_schemas.tasks.SummarizationTaskDefinition | ~lumigator_schemas.tasks.TranslationTaskDefinition | ~lumigator_schemas.tasks.TextGenerationTaskDefinition = <factory>, accelerator: str | None = 'auto', revision: str | None = 'main', use_fast: bool = True, trust_remote_code: bool = False, torch_dtype: str = 'auto', base_url: str | None = None, output_field: ~typing.Annotated[str, ~pydantic.json_schema.SkipJsonSchema()] | None = 'ground_truth', generation_config: ~lumigator_schemas.jobs.GenerationConfig = <factory>, store_to_dataset: bool = True, system_prompt: str | None = None)
Job configuration for the annotation job type.
An annotation job is a special type of inference job that is used to annotate a dataset with predictions from a model. The predictions are stored in the dataset as a new field called ground_truth.
JobAnnotateConfig inherits from JobInferenceConfig but fixes the following fields, using the SkipJsonSchema type to prevent them from being included in the JSON schema:
- job_type: Literal[JobType.ANNOTATION]
- output_field: 'ground_truth'
It also sets sensible defaults for the following fields:
- store_to_dataset: True
- model: 'facebook/bart-large-cnn'
- provider: 'hf'
Users can change the model and provider fields but cannot change the job_type or output_field fields.
Note that ground truth generation is currently limited to summarization tasks in the UI; any ground truth generation task can be run from the API.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
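A sketch of building an annotation job request; the dataset UUID is a placeholder. The fixed fields (job_type, output_field) are set by the schema and are not passed in:

```python
from uuid import UUID

from lumigator_schemas.jobs import JobAnnotateConfig, JobAnnotateCreate

config = JobAnnotateConfig()  # defaults: facebook/bart-large-cnn via the 'hf' provider
job = JobAnnotateCreate(
    name="annotate-ground-truth",
    dataset=UUID("00000000-0000-0000-0000-000000000000"),  # placeholder dataset ID
    max_samples=50,
    job_config=config,
)
assert config.output_field == "ground_truth"  # fixed by the schema
assert config.store_to_dataset is True
```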
- class lumigator_schemas.jobs.JobAnnotateCreate(*, name: str, description: str = '', dataset: UUID, max_samples: int = -1, batch_size: Annotated[int, Gt(gt=0)] = 1, job_config: JobAnnotateConfig)
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.jobs.JobConfig(*, job_id: UUID, job_type: JobType, command: str, args: dict[str, Any] | None = None)
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.jobs.JobCreate(*, name: str, description: str = '', dataset: UUID, max_samples: int = -1, batch_size: Annotated[int, Gt(gt=0)] = 1, job_config: JobEvalConfig | JobInferenceConfig | JobAnnotateConfig)
Job configuration dealing exclusively with backend job handling.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
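A sketch showing JobCreate with one member of the job_config union (here JobInferenceConfig); model and provider are required for inference, and the dataset UUID is a placeholder:

```python
from uuid import UUID

from lumigator_schemas.jobs import JobCreate, JobInferenceConfig

inference_job = JobCreate(
    name="bart-inference",
    dataset=UUID("00000000-0000-0000-0000-000000000000"),  # placeholder dataset ID
    job_config=JobInferenceConfig(
        model="facebook/bart-large-cnn",
        provider="hf",
    ),
)
```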
- class lumigator_schemas.jobs.JobEvalConfig(*, secret_key_name: str | None = None, job_type: Literal[JobType.EVALUATION] = JobType.EVALUATION, metrics: list[str] = ['rouge', 'meteor', 'bertscore', 'bleu'], llm_as_judge: DeepEvalLocalModelConfig | None = None)
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
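A sketch of an evaluation configuration with an optional LLM-as-judge section; the model name and base URL are illustrative assumptions (the model_api_key default of 'ollama' suggests an Ollama-style endpoint, but any compatible server may be used):

```python
from lumigator_schemas.jobs import DeepEvalLocalModelConfig, JobEvalConfig

eval_config = JobEvalConfig(
    metrics=["rouge", "bleu"],
    llm_as_judge=DeepEvalLocalModelConfig(
        model_name="llama3",                      # illustrative local model name
        model_base_url="http://localhost:11434",  # illustrative Ollama-style base URL
    ),
)
```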
- class lumigator_schemas.jobs.JobEvalCreate(*, name: str, description: str = '', dataset: UUID, max_samples: int = -1, batch_size: Annotated[int, Gt(gt=0)] = 1, job_config: JobEvalConfig)
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.jobs.JobEvent(*, job_id: UUID, job_type: JobType, status: JobStatus, detail: str | None = None)
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.jobs.JobInferenceConfig(*, secret_key_name: str | None = None, job_type: ~typing.Literal[JobType.INFERENCE] = JobType.INFERENCE, model: str, provider: str, task_definition: ~lumigator_schemas.tasks.SummarizationTaskDefinition | ~lumigator_schemas.tasks.TranslationTaskDefinition | ~lumigator_schemas.tasks.TextGenerationTaskDefinition = <factory>, accelerator: str | None = 'auto', revision: str | None = 'main', use_fast: bool = True, trust_remote_code: bool = False, torch_dtype: str = 'auto', base_url: str | None = None, output_field: str | None = 'predictions', generation_config: ~lumigator_schemas.jobs.GenerationConfig = <factory>, store_to_dataset: bool = False, system_prompt: str | None = None)
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.jobs.JobInferenceCreate(*, name: str, description: str = '', dataset: UUID, max_samples: int = -1, batch_size: Annotated[int, Gt(gt=0)] = 1, job_config: JobInferenceConfig)
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.jobs.JobLogsResponse(*, logs: str | None = None)
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.jobs.JobResponse(*, id: UUID, name: str, description: str, status: JobStatus, job_type: JobType, created_at: datetime, experiment_id: UUID | None = None, updated_at: datetime | None = None)
- model_config: ClassVar[ConfigDict] = {'from_attributes': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.jobs.JobResultDownloadResponse(*, id: UUID, download_url: str)
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.jobs.JobResultObject(*, metrics: dict = {}, parameters: dict = {}, artifacts: dict = {})
This is a loose definition of the data that should be stored in the job results output file (settings.S3_JOB_RESULTS_FILENAME). As long as a job result file contains only the fields defined here, it should be accepted by the backend.
- model_config: ClassVar[ConfigDict] = {'extra': 'ignore'}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
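A sketch of loading a job result payload; the metric names and values are illustrative. Because model_config sets extra='ignore', unknown top-level keys are dropped rather than rejected:

```python
import json

from lumigator_schemas.jobs import JobResultObject

raw = json.loads(
    '{"metrics": {"rouge": 0.41}, "parameters": {"max_samples": 100}, "artifacts": {}}'
)
result = JobResultObject.model_validate(raw)
print(result.metrics)
```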
- class lumigator_schemas.jobs.JobResultResponse(*, id: UUID, job_id: UUID)
- model_config: ClassVar[ConfigDict] = {'from_attributes': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.jobs.JobResults(*, id: UUID, metrics: list[dict[str, Any]] | None = None, parameters: list[dict[str, Any]] | None = None, metric_url: str, artifact_url: str)
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- lumigator_schemas.jobs.JobSpecificConfig = lumigator_schemas.jobs.JobEvalConfig | lumigator_schemas.jobs.JobInferenceConfig | lumigator_schemas.jobs.JobAnnotateConfig
Job configuration dealing exclusively with the Ray jobs.
- class lumigator_schemas.jobs.JobStatus(value)
- class lumigator_schemas.jobs.JobSubmissionResponse(*, type: str | None = None, submission_id: str | None = None, driver_info: str | None = None, status: str | None = None, config: dict | None = <factory>, message: str | None = None, error_type: str | None = None, start_time: ~datetime.datetime | None = None, end_time: ~datetime.datetime | None = None, metadata: dict = <factory>, runtime_env: dict = <factory>, driver_agent_http_address: str | None = None, driver_node_id: str | None = None, driver_exit_code: int | None = None)
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- classmethod transform(values: dict[str, Any]) → dict[str, Any]
Pre-processes and validates the 'entrypoint' configuration before model validation.
This method uses Pydantic's model_validator hook to parse the 'entrypoint' configuration and, where appropriate, redact sensitive information. It then assigns the processed configuration to the config field of the model (JobSubmissionResponse) before model validation occurs.
- Parameters:
values – The dictionary of field values to be processed. It contains the model data, including the 'entrypoint' configuration.
- Returns:
The updated values dictionary, with the processed and potentially redacted 'entrypoint' configuration assigned to the config field.
- class lumigator_schemas.jobs.JobType(value)
- class lumigator_schemas.jobs.LowercaseEnum(value)
Can be used to ensure that values for enums are returned in lowercase.
- class lumigator_schemas.tasks.SummarizationTaskDefinition(*, task: Literal[TaskType.SUMMARIZATION] = TaskType.SUMMARIZATION)
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.tasks.TaskType(value)
TaskType refers to the different use cases supported. We use the same terminology as the Hugging Face library and support a subset of its tasks (see https://huggingface.co/tasks). When using HF models, the task type determines which pipeline to use.
- class lumigator_schemas.tasks.TextGenerationTaskDefinition(*, task: Literal[TaskType.TEXT_GENERATION] = TaskType.TEXT_GENERATION)
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class lumigator_schemas.tasks.TranslationTaskDefinition(*, task: Literal[TaskType.TRANSLATION] = TaskType.TRANSLATION, source_language: str, target_language: str)
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
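A sketch constructing task definitions; translation is the only task that requires extra fields, and the language values shown are illustrative (the schema does not constrain their format here):

```python
from lumigator_schemas.tasks import (
    SummarizationTaskDefinition,
    TranslationTaskDefinition,
)

summarization = SummarizationTaskDefinition()
translation = TranslationTaskDefinition(source_language="en", target_language="de")
```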
- class lumigator_schemas.extras.DeploymentType(value)
- class lumigator_schemas.extras.HealthResponse(*, status: str, deployment_type: DeploymentType)
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].