Test executions represent the results of running a test. Send test execution results to track test runs, failures, and performance metrics.

TestExecution Object

from datetime import datetime, timezone

from elementary_python_sdk.core.types.test import (
    TestExecution,
    TestExecutionStatus,
    QualityDimension
)

test_execution = TestExecution(
    id="string",                                      # Required: Unique identifier
    test_id="string",                                 # Required: ID of the test
    test_sub_unique_id="string",                      # Required: Sub-test identifier
    sub_type="string",                                # Required: Sub-type of the test
    status=TestExecutionStatus.PASS,                  # Required: Execution status
    start_time=datetime.now(timezone.utc),            # Required: Test execution start time (UTC)
    quality_dimension=QualityDimension.COMPLETENESS,  # Optional: Quality dimension
    failure_count=0,                                  # Optional: Number of failures
    description="string",                             # Optional: Execution description
    code="string",                                    # Optional: Test code/query
    duration_seconds=0.0,                             # Optional: Execution duration (seconds)
    exception="string",                               # Optional: Exception message
    traceback="string",                               # Optional: Exception traceback
    column_name="string"                              # Optional: Column name for column-level tests
)

Required Fields

Field | Type | Description
id | string | Unique identifier for this test execution
test_id | string | ID of the test that was executed (must match a Test id)
test_sub_unique_id | string | Sub-test identifier (typically same as test_id for simple tests)
sub_type | string | Sub-type of the test (e.g., "row_count", "null_rate", "uniqueness")
status | TestExecutionStatus | Execution status (see below)
start_time | datetime | When the test execution started (UTC timezone)
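A minimal construction that sets only the required fields might look like the sketch below; the identifiers and values are illustrative.

from datetime import datetime, timezone

from elementary_python_sdk.core.types.test import TestExecution, TestExecutionStatus

# Minimal test execution: only the required fields are set.
minimal_execution = TestExecution(
    id="orders_not_null_test_exec_20240101_001",   # unique per execution
    test_id="orders_not_null_test",                # must match an existing Test id
    test_sub_unique_id="orders_not_null_test",     # same as test_id for a simple test
    sub_type="null_rate",                          # kind of check this execution ran
    status=TestExecutionStatus.PASS,
    start_time=datetime.now(timezone.utc)          # UTC, per the field description
)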

Optional Fields

Field | Type | Description
quality_dimension | QualityDimension | Quality dimension being tested (see below)
failure_count | int | Number of rows or records that failed the test
description | string | Human-readable description of the execution
code | string | Test code or SQL query that was executed
duration_seconds | float | How long the test took to execute (in seconds)
exception | string | Exception message if the test failed with an error
traceback | string | Full exception traceback if the test failed
column_name | string | Column name for column-level test executions
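The optional fields can be combined freely. The sketch below shows a column-level execution that uses column_name alongside a few other optional fields; all values are illustrative.

from datetime import datetime, timezone

from elementary_python_sdk.core.types.test import (
    TestExecution,
    TestExecutionStatus,
    QualityDimension
)

# Column-level execution: column_name scopes the result to a single column.
column_execution = TestExecution(
    id="users_email_null_rate_exec_20240101_001",
    test_id="users_email_null_rate_test",
    test_sub_unique_id="users_email_null_rate_test",
    sub_type="null_rate",
    status=TestExecutionStatus.WARN,
    start_time=datetime.now(timezone.utc),
    column_name="email",                              # column-level test execution
    quality_dimension=QualityDimension.COMPLETENESS,
    failure_count=12,                                 # rows that failed the check
    duration_seconds=0.8,
    description="12 of 10,000 rows have a NULL email"
)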

Test Execution Status

Test executions can have the following statuses:
  • TestExecutionStatus.PASS - Test passed successfully
  • TestExecutionStatus.WARN - Test passed with warnings
  • TestExecutionStatus.FAIL - Test failed
  • TestExecutionStatus.ERROR - Test encountered an error
  • TestExecutionStatus.SKIPPED - Test was skipped
  • TestExecutionStatus.NO_DATA - Test had no data to check
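One way to pick a status is to derive it from the measured failure count and your own thresholds. The helper and threshold values below are an illustrative sketch, not part of the SDK:

from typing import Optional

from elementary_python_sdk.core.types.test import TestExecutionStatus

def resolve_status(
    failure_count: Optional[int],
    warn_threshold: int = 1,
    fail_threshold: int = 10,
) -> TestExecutionStatus:
    """Map a failure count to a status. Thresholds are example values, not SDK defaults."""
    if failure_count is None:
        return TestExecutionStatus.NO_DATA   # nothing was checked
    if failure_count >= fail_threshold:
        return TestExecutionStatus.FAIL
    if failure_count >= warn_threshold:
        return TestExecutionStatus.WARN
    return TestExecutionStatus.PASS

status = resolve_status(failure_count=5)     # -> TestExecutionStatus.WARN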

Quality Dimensions

Quality dimensions categorize the type of data quality being tested:
  • QualityDimension.COMPLETENESS - Data completeness (nulls, missing values)
  • QualityDimension.UNIQUENESS - Data uniqueness (duplicates)
  • QualityDimension.FRESHNESS - Data freshness (timeliness)
  • QualityDimension.VALIDITY - Data validity (format, constraints)
  • QualityDimension.ACCURACY - Data accuracy (correctness)
  • QualityDimension.CONSISTENCY - Data consistency (across sources)
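If your test runner only knows a test's sub_type, a small lookup can assign a dimension. The mapping below is one possible convention, not something the SDK defines:

from elementary_python_sdk.core.types.test import QualityDimension

# Illustrative mapping from sub_type to the dimension it usually measures.
SUB_TYPE_DIMENSIONS = {
    "null_rate": QualityDimension.COMPLETENESS,
    "row_count": QualityDimension.COMPLETENESS,
    "uniqueness": QualityDimension.UNIQUENESS,
    "freshness": QualityDimension.FRESHNESS,
    "accepted_values": QualityDimension.VALIDITY,
}

dimension = SUB_TYPE_DIMENSIONS.get("uniqueness")  # -> QualityDimension.UNIQUENESS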

Example

from elementary_python_sdk.core.types.test import (
    TestExecution,
    TestExecutionStatus,
    QualityDimension
)
from datetime import datetime, timezone

# Successful test execution
success_execution = TestExecution(
    id="users_row_count_test_exec_20240101_001",
    test_id="users_row_count_test",
    test_sub_unique_id="users_row_count_test",
    sub_type="row_count",
    status=TestExecutionStatus.PASS,
    start_time=datetime.now(timezone.utc),
    quality_dimension=QualityDimension.COMPLETENESS,
    failure_count=0,
    duration_seconds=1.5,
    description="Row count check passed: 10,000 rows found"
)

# Failed test execution
failed_execution = TestExecution(
    id="users_email_uniqueness_test_exec_20240101_001",
    test_id="users_email_uniqueness_test",
    test_sub_unique_id="users_email_uniqueness_test",
    sub_type="uniqueness",
    status=TestExecutionStatus.FAIL,
    start_time=datetime.now(timezone.utc),
    quality_dimension=QualityDimension.UNIQUENESS,
    failure_count=5,
    duration_seconds=2.3,
    description="Found 5 duplicate email addresses",
    code="SELECT email, COUNT(*) FROM users GROUP BY email HAVING COUNT(*) > 1"
)

# Error test execution
error_execution = TestExecution(
    id="users_freshness_test_exec_20240101_001",
    test_id="users_freshness_test",
    test_sub_unique_id="users_freshness_test",
    sub_type="freshness",
    status=TestExecutionStatus.ERROR,
    start_time=datetime.now(timezone.utc),
    quality_dimension=QualityDimension.FRESHNESS,
    duration_seconds=0.1,
    exception="Connection timeout",
    traceback="Traceback (most recent call last):\n  ..."
)

Best Practices

  1. Use unique execution IDs - Generate unique IDs for each test execution (e.g., include timestamp)
  2. Link to tests - Always ensure test_id matches an existing Test id
  3. Include timing information - Set start_time and duration_seconds for performance monitoring
  4. Report failures accurately - Set failure_count to the actual number of failed rows/records
  5. Include error details - For failed tests, include exception and traceback for debugging
  6. Set quality dimensions - Assign appropriate quality dimensions to enable filtering and reporting

Test executions are upserted, so you can send the same execution multiple times and it will be updated.
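Because executions are upserted, re-sending an execution with the same id updates the earlier record, while a timestamped id records a new run. The sketch below combines both ideas; the client and its send_test_executions call are hypothetical placeholders for whatever ingestion call your integration exposes, not a documented SDK API.

from datetime import datetime, timezone

from elementary_python_sdk.core.types.test import TestExecution, TestExecutionStatus

def build_execution_id(test_id: str, started_at: datetime) -> str:
    # Embed the start time so each run gets its own execution id.
    return f"{test_id}_exec_{started_at.strftime('%Y%m%d_%H%M%S')}"

started_at = datetime.now(timezone.utc)
execution = TestExecution(
    id=build_execution_id("users_row_count_test", started_at),
    test_id="users_row_count_test",
    test_sub_unique_id="users_row_count_test",
    sub_type="row_count",
    status=TestExecutionStatus.PASS,
    start_time=started_at
)

# Hypothetical: replace client.send_test_executions with the ingestion call
# your setup actually exposes; the payload shape is the point here.
# client.send_test_executions([execution])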