Elementary can collect metadata about your jobs from the orchestrator you are using, and enrich the Elementary UI with this information.

The goal is to provide context that is useful for triaging and resolving data issues, such as:

  • Is my freshness / volume issue related to a job that didn't complete? Which job?
  • Which tables were built as part of the job that loaded data with issues?
  • Which job should I rerun to resolve the issue?

Elementary collects the following job details:

  • Orchestrator name: orchestrator
  • Job name: job_name
  • Job ID: job_id
  • Job results URL: job_url
  • The ID of a specific run execution: job_run_id
  • Job run results URL: job_run_url

How does Elementary collect job metadata?

Environment variables

Elementary collects job metadata at runtime from env_vars. Orchestration tools usually set default environment variables, so this may happen automatically. The list of supported orchestrators and their default env vars is in the following section.

These are the env vars that are collected: ORCHESTRATOR, JOB_NAME, JOB_ID, JOB_URL, JOB_RUN_ID, JOB_RUN_URL

To configure env_var for your orchestrator, refer to your orchestrator's docs.
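
If your orchestrator does not set these variables by default, you can export them yourself in the step that invokes dbt. A minimal shell sketch, where all values are placeholders for your own job details:

# Placeholder values; Elementary reads these env vars at runtime
export ORCHESTRATOR="airflow"
export JOB_NAME="dbt_marketing_night_load"
export JOB_ID="42"
export JOB_RUN_ID="1337"
export JOB_URL="https://my-orchestrator.example.com/jobs/42"
export JOB_RUN_URL="https://my-orchestrator.example.com/jobs/42/runs/1337"

dbt run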

dbt vars

Elementary also supports passing job metadata as dbt vars. If both an env_var and a var are provided, the var takes precedence.

To pass job data to Elementary using vars, use the --vars flag in your invocations:

dbt run --vars '{"orchestrator": "Airflow", "job_name": "dbt_marketing_night_load"}'

Supported variable formats

var / env_var                 Format
orchestrator                  One of: airflow, dbt_cloud, github_actions, prefect, dagster
job_name, job_id, job_run_id  String
job_url, job_run_url          Valid HTTP URL
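
For example, a single invocation passing all six variables in these formats might look like this (the job name, IDs, and URLs are placeholder values):

dbt run --vars '{"orchestrator": "airflow", "job_name": "dbt_marketing_night_load", "job_id": "42", "job_run_id": "1337", "job_url": "https://my-orchestrator.example.com/jobs/42", "job_run_url": "https://my-orchestrator.example.com/jobs/42/runs/1337"}'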

Which orchestrators are supported?

You can pass job info to Elementary from any orchestration tool as long as you configure env_vars / vars.

The following default environment variables are supported out of the box:

Orchestrator     Env vars
dbt Cloud        orchestrator
                 job_id: DBT_CLOUD_JOB_ID
                 job_run_id: DBT_CLOUD_RUN_ID
                 job_url: generated from DBT_ACCOUNT_ID, DBT_CLOUD_PROJECT_ID, DBT_CLOUD_JOB_ID
                 job_run_url: generated from DBT_ACCOUNT_ID, DBT_CLOUD_PROJECT_ID, DBT_CLOUD_RUN_ID
GitHub Actions   orchestrator
                 job_run_id: GITHUB_RUN_ID
                 job_url: generated from GITHUB_SERVER_URL, GITHUB_REPOSITORY, GITHUB_RUN_ID
Airflow          orchestrator
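
Airflow, for instance, only sets the orchestrator field out of the box, so the remaining fields need to be passed explicitly. A sketch of the dbt command in an Airflow task, assuming Jinja templating is available (as in a BashOperator command; dag.dag_id and run_id are standard Airflow template variables):

dbt run --vars '{"orchestrator": "airflow", "job_name": "{{ dag.dag_id }}", "job_run_id": "{{ run_id }}"}'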

What if I use dbt Cloud + another orchestrator?

By default, Elementary will collect the dbt Cloud job info. If you wish to override that, change your dbt Cloud invocations to pass the orchestrator's job info using --vars:

dbt run --vars '{"orchestrator": "Airflow", "job_name": "dbt_marketing_night_load"}'

Where can I see my job info?

  • In your Elementary schema, the raw fields are stored in the table dbt_invocations. You could also use the view job_run_results, which groups invocations by job (see the query sketch after this list).
  • In the Elementary UI, if the info was collected successfully, you can filter the lineage by job and see the details in the node info.
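
To peek at the grouped results from the command line, something like the following should work, assuming job_run_results is exposed as a model in the elementary package and you are on dbt 1.5+ (for the show command):

dbt show --inline "select * from {{ ref('elementary', 'job_run_results') }}" --limit 10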

Can't find your orchestrator? Missing info?

We would love to support more orchestrators and collect more useful info! Please open an issue and tell us what we should add.