# Collect jobs info from orchestrator
Elementary can collect metadata about your jobs from the orchestrator you are using, and enrich the Elementary UI with this information.
The goal is to provide context that is useful for triaging and resolving data issues, such as:
- Is my freshness / volume issue related to a job that didn't complete? Which job?
- Which tables were built as part of the job that loaded data with issues?
- Which job should I rerun to resolve?
Elementary collects the following job details:
- Orchestrator name: `orchestrator`
- Job name: `job_name`
- Job ID: `job_id`
- Job results URL: `job_url`
- The ID of a specific run execution: `job_run_id`
- Job run results URL: `job_run_url`
## How Elementary collects jobs metadata

### Environment variables
Elementary collects jobs metadata at run time from environment variables (`env_vars`).

Orchestration tools usually have default environment variables, so this might happen automatically. The list of supported orchestrators and their default env vars is in the following section.
These are the env vars that are collected: `ORCHESTRATOR`, `JOB_NAME`, `JOB_ID`, `JOB_URL`, `JOB_RUN_ID`, `JOB_RUN_URL`.
To configure env vars for your orchestrator, refer to your orchestrator's docs.
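For example, if your orchestrator does not set these variables by default, a minimal sketch is to export them in the task that runs dbt. The variable names below are the ones Elementary reads; the values and URLs are placeholders for illustration:

```bash
# Placeholder values - populate these from your orchestrator's own run metadata
export ORCHESTRATOR="airflow"
export JOB_NAME="daily_marts"
export JOB_ID="1234"
export JOB_URL="https://my-airflow.example.com/dags/daily_marts"
export JOB_RUN_ID="5678"
export JOB_RUN_URL="https://my-airflow.example.com/dags/daily_marts/runs/5678"

dbt run
```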
### dbt vars
Elementary also supports passing job metadata as dbt vars. If both an `env_var` and a `var` exist, the `var` will be prioritized.

To pass job data to Elementary using `var`, use the `--vars` flag in your invocations:
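A minimal sketch of such an invocation; the keys match the fields listed above, while the names, IDs, and URLs are placeholder values:

```bash
dbt run --vars '{
  "orchestrator": "airflow",
  "job_name": "daily_marts",
  "job_id": "1234",
  "job_run_id": "5678",
  "job_url": "https://my-airflow.example.com/dags/daily_marts",
  "job_run_url": "https://my-airflow.example.com/dags/daily_marts/runs/5678"
}'
```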
### Variables supported format

| var / env_var | Format |
|---|---|
| `orchestrator` | One of: `airflow`, `dbt_cloud`, `github_actions`, `prefect`, `dagster` |
| `job_name`, `job_id`, `job_run_id` | String |
| `job_url`, `job_run_url` | Valid HTTP URL |
## Which orchestrators are supported?

You can pass job info to Elementary from any orchestration tool, as long as you configure the `env_vars` / `vars`.
The following default environment variables are supported out of the box:

| Orchestrator | Env vars |
|---|---|
| dbt Cloud | `orchestrator`<br />`job_id`: `DBT_CLOUD_JOB_ID`<br />`job_run_id`: `DBT_CLOUD_RUN_ID`<br />`job_url`: generated from `DBT_ACCOUNT_ID`, `DBT_CLOUD_PROJECT_ID`, `DBT_CLOUD_JOB_ID`<br />`job_run_url`: generated from `ACCOUNT_ID`, `DBT_CLOUD_PROJECT_ID`, `DBT_CLOUD_RUN_ID` |
| GitHub Actions | `orchestrator`<br />`job_run_id`: `GITHUB_RUN_ID`<br />`job_url`: generated from `GITHUB_SERVER_URL`, `GITHUB_REPOSITORY`, `GITHUB_RUN_ID` |
| Airflow | `orchestrator` |
### What if I use dbt Cloud + orchestrator?

By default, Elementary will collect the dbt Cloud job info.
If you wish to override that, change your dbt Cloud invocations to pass the orchestrator job info using `--vars`:
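For example, a dbt Cloud job triggered by an external orchestrator could pass that orchestrator's details explicitly. This is a sketch with placeholder Airflow values:

```bash
dbt run --vars '{
  "orchestrator": "airflow",
  "job_name": "nightly_dag",
  "job_id": "42",
  "job_run_id": "9012",
  "job_run_url": "https://my-airflow.example.com/dags/nightly_dag/runs/9012"
}'
```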
## Where can I see my job info?

- In your Elementary schema, the raw fields are stored in the `dbt_invocations` table. You can also use the `job_run_results` view, which groups invocations by job (see the example query below).
- In the Elementary UI, if the info was collected successfully, you can filter the lineage by job and see the details in the node info.
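As a quick sanity check from the command line, you can preview the view with dbt. This sketch assumes `job_run_results` is materialized as a model by the `elementary` package installed in your project, so it can be referenced directly:

```bash
dbt show --inline "select * from {{ ref('elementary', 'job_run_results') }}" --limit 5
```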
## Can't find your orchestrator? Missing info?
We would love to support more orchestrators and collect more useful info! Please open an issue and tell us what we should add.