FAQ
This section collects common questions from users and provides documented answers.
You can use the disablement vars, or disable the entire package, in your dbt_project.yml. Either can be configured with a condition, such as disabling for specific environments.
Here are examples:
Disable specific hooks (the recommended method) -
vars:
  disable_run_results: "{{ target.name not in ['prod','analytics'] }}"
  disable_tests_results: "{{ target.name != 'prod' }}"
  disable_dbt_artifacts_autoupload: "{{ target.name != 'prod' }}"
  disable_dbt_invocation_autoupload: "{{ target.name != 'prod' }}"
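If you only want to disable collection for a single invocation instead of per target, the same vars should also work when passed on the command line (dbt gives precedence to vars passed with --vars over values in dbt_project.yml), for example:
dbt run --vars '{"disable_dbt_artifacts_autoupload": true, "disable_run_results": true}'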
Disable the whole package (Elementary tests won't work) -
models:
  elementary:
    +schema: "elementary"
    +enabled: "{{ target.name in ['prod','analytics'] }}"
To run only Elementary tests, use the 'elementary-tests' tag in your dbt test command. Here's an example:
dbt test --select tag:elementary-tests
You only need to run the Elementary models once after you install the package, and on upgrades of minor versions (like 0.7.X -> 0.8.X). On such upgrades we make schema changes, so we need you to rebuild the tables.
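For example, after installing or upgrading the package:
dbt run --select elementary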
To exclude the Elementary models from your regular runs, we suggest 2 options:
- Use the selector --exclude elementary when you run dbt run (see the example right after this list).
- Set a var that disables the models by default, and pass it as true only on version upgrades.
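For option 1, the run command would look like this:
dbt run --exclude elementary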
Here is how you implement option 2:
- In your dbt_project.yml, add:
models:
  elementary:
    +schema: elementary
    edr:
      +enabled: "{{ var('enable_elementary_models', false) }}"
- When you upgrade Elementary, run:
dbt run --select elementary --vars '{"enable_elementary_models": true}'
We recommend that the Elementary models have their own schema, but it is not mandatory.
You can change the schema name using dbt's custom schema configuration in your dbt_project.yml.
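For example, the following config would place the Elementary models in a custom schema (the name elementary_monitoring is just an illustration):
models:
  elementary:
    +schema: elementary_monitoring
With dbt's default behaviour this actually creates a schema named <target_schema>_elementary_monitoring, as explained next.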
In short, the default dbt generate_schema_name macro concatenates the value provided in the schema configuration key to the target schema, as in: target_schema_custom_schema. For example, if your target schema is analytics and you set +schema: elementary, the models are created in analytics_elementary.
If you want a different behaviour, like configuring a full name for the Elementary schema, you can override the default generate_schema_name macro with your own logic.
Before you do that, make sure there isn't already a macro named generate_schema_name.sql in your project.
Here is a macro you can use that searches for a config under meta named schema_name. If it exists, that value is used as the schema name; if not, the original dbt logic is followed:
{% macro generate_schema_name(custom_schema_name, node) -%}
    {%- set default_schema = target.schema -%}
    {% set config_meta = node.config.get('meta') %}
    {# If the node defines schema_name under meta, use it as the full schema name #}
    {% if config_meta and config_meta is mapping %}
        {% set schema_name = config_meta.get('schema_name') %}
        {% if schema_name and schema_name is string %}
            {{ return(schema_name) }}
        {% endif %}
    {% endif %}
    {# Otherwise, follow dbt's original logic #}
    {%- if custom_schema_name is none -%}
        {{ default_schema }}
    {%- else -%}
        {{ default_schema }}_{{ custom_schema_name | trim }}
    {%- endif -%}
{%- endmacro %}
If you implement this macro and want to name the Elementary schema elementary_data_observability:
models:
  elementary:
    +meta:
      schema_name: "elementary_data_observability"
The Elementary package creates various models to store information about collected dbt artifacts and test results.
To avoid mixing with your existing models, we recommend configuring a dedicated schema for the Elementary models using the dbt custom schema option. Here is an example configuration that creates a schema with the suffix '_elementary' for the Elementary models:
models:
  elementary:
    +schema: elementary
Elementary supports all your dbt tests - the built-in dbt tests, Elementary tests, custom tests, and any other package tests (such as dbt_utils or dbt_expectations).
Custom / singular tests are supported by Elementary:
- Alerts: full support.
- Report: only tests that ref exactly one model are presented in the report, under the relevant model.
You can add configuration to your custom tests with a config block:
{{ config(
    tags=["Tag1", "Tag2"],
    meta={
        "description": "This is a description",
        "owner": "Maayan Salom"
    }
) }}

<test query here>
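For context, a singular test is just a SQL file under your tests/ directory that returns the rows that should fail. A hypothetical example using such a config block (the file, model, and column names are illustrative):
-- tests/orders_amount_is_positive.sql
{{ config(
    tags=["finance"],
    meta={
        "description": "Orders should never have a negative amount",
        "owner": "Maayan Salom"
    }
) }}

select *
from {{ ref('orders') }}
where amount < 0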
Elementary's incremental models aren't truncated by the standard --full-refresh flag, because generally you wouldn't want those models to be truncated.
To run a full refresh of the Elementary models, use the elementary_full_refresh var like this:
dbt run --select elementary --vars '{"elementary_full_refresh": "true"}'
The CLI needs permissions to access the profiles.yml file with the relevant profile, to write files to disk, and network access to the data warehouse.
Also, the credentials in the elementary profile should have permissions to read and write the elementary schema and to execute queries.
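For reference, the elementary profile is a standard dbt profile that points at the Elementary schema. A minimal sketch for a Snowflake connection might look like this (all values are placeholders; use the connection fields required by your own warehouse type):
elementary:
  target: default
  outputs:
    default:
      type: snowflake
      account: <account>
      user: <user>
      password: <password>
      role: <role>
      warehouse: <warehouse>
      database: <database>
      schema: <elementary_schema>
      threads: 4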
All Elementary functionality is available and supported for dbt Cloud users as well.
Elementary saves samples of failed test rows in the test_result_rows table, and displays them in the Results tab of the report.
By default, Elementary saves 5 rows per test, but you can change this number by setting the variable test_sample_row_count to the number of rows you want to save. For example, to save 10 rows per test, add the following to your dbt_project.yml file:
vars:
  test_sample_row_count: 10
Or use the --vars flag when you run dbt test:
dbt test --vars '{"test_sample_row_count": 10}'
NOTE: The larger the number of rows you save, the more data you will store in your database. This can affect the performance and cost, depending on your database.
The Elementary dbt package and CLI are free. Everything that is open source is 100% free and will remain free! Elementary Cloud is a paid SaaS offering with premium features and integrations. However, we are committed to building a great OSS product first.
Check out Elementary Cloud pricing.
You can join our Slack and search our #support channel, and of course ask us - we are very responsive!
You can also open a GitHub issue using the Documentation gap template, and we will add the missing question (and answer) to the docs.