Examining the Tests screen
While the Workspace screen supports test creation and execution for individual database objects, the Tests screen is purpose-built to simplify and streamline test management and execution across the entire project. It offers dedicated areas for working with different types of database objects, test definitions, and test execution results.
This page provides a guided overview of the layout and features available on the Tests screen. Understanding its interface and tools will help you work more efficiently as we move forward with creating and analyzing tests.
Objects tree
Located on the left side of the screen, the objects tree displays all target database objects grouped by type:
Tables
Views
Procedures
Functions

Each group in the tree is expandable, indicated by an arrow symbol before the group name (e.g., > Tables). Once tests are created and executed, a test result summary appears next to the group name (e.g., > Procedures ✔10 ✘1 ⚠3). A green checkmark (✔) indicates the number of passing tests. A red cross (✘) shows how many tests have failed. A warning icon (⚠) represents untested objects.
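The per-group summary is simply a tally of test outcomes for the objects in that group. As an illustrative sketch (not SQL Tran's actual code), a roll-up like ✔10 ✘1 ⚠3 could be produced from a list of hypothetical per-object results:

```python
from collections import Counter

def summarize(statuses):
    """Tally results into the ✔/✘/⚠ summary shown next to a group name."""
    counts = Counter(statuses)
    return "✔{} ✘{} ⚠{}".format(
        counts["passed"], counts["failed"], counts["untested"]
    )

# Hypothetical results for one group (e.g., Procedures).
statuses = ["passed"] * 10 + ["failed"] + ["untested"] * 3
print(summarize(statuses))  # ✔10 ✘1 ⚠3
```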

Right-clicking a group in the objects tree reveals context menu actions that apply to all objects within that group:
Generate tests for all tables/views – Attempts to create a test for every object in the selected group. This option is available only for tables and views. If test generation is not possible, SQL Tran displays a clear error message explaining the reason.
Run tests – Executes all existing tests for objects within the group. This option is enabled only when tests have already been created.

When a target database object is selected in the objects tree, its test-related information is loaded into the central test details pane.
Right-clicking an individual object opens the context menu with the following options:
Generate test – Creates a single test for the selected object. If test generation is not possible, SQL Tran displays a descriptive message explaining the reason.
Generate multiple tests – Attempts to generate several test cases for the object based on available data and static analysis. As with single test generation, an error message is shown if the operation is not possible.
Run tests – Executes all tests currently associated with the selected object.

At the top of the objects tree, the toolbar provides buttons for view filtering and test execution.
On the left side of the toolbar are three view filter buttons:
All – Displays all database objects, regardless of test status.
Passing – Shows only objects with tests that have successfully passed.
Failed – Displays only objects with one or more failing tests.
These filters help users quickly narrow the view, making it easier to identify problematic objects and focus attention where remediation is needed.
On the right side of the toolbar, the Run all button initiates execution of all tests that have been created across the project. Clicking this button opens a confirmation dialog titled Run all tests, allowing you to confirm or cancel the operation. To proceed, click Run All in the dialog. To cancel, click Cancel instead.
This feature is especially useful when running full regression tests or verifying test outcomes after making broader changes to the environment.

Test details pane
The central section of the Tests screen is the test details pane, activated when a target database object is selected in the Objects pane. This area displays all associated tests for the selected object, along with their current statuses and additional details.
In the header area at the top of the pane, the following elements are displayed:
Object type icon and name – Indicates the type of the selected database object (e.g., table, procedure) with its name shown as a clickable link.
+ Test button – Adds a new test for the selected object.
▶ Run tests button – Executes all tests that have been created for this object.

Beneath the header area is a list displaying all existing tests for the selected object. This list includes a single column labeled Name, where each test is presented with the following elements:
Status icon – A visual indicator reflecting the test result (e.g., ✔, ✘, ⚠).
Test name – Name of the test.
Status label – A textual description of the test result: Success, Failed, Not tested, or Ignored.
Right-clicking any test in the list opens a context menu with the following options:
Run test – Executes the selected test individually.
Ignore test – Marks the test as ignored and excludes it from test runs. Its status changes to Ignored.
Unignore test – Resets the test status to Not tested. Available for ignored tests only.
Archive – Removes the test from the active list and archives it.
Override result → Mark as passing – Manually overrides the result and sets the test status to Success (manual override). This option is disabled if the test has already been overridden.
Override result → Remove override – Cancels the manual override and reverts the status to Not tested. This option is only enabled when a test is currently overridden.

When a test is selected in the list, its details are shown in three sections below the list: Parameters, Results, and Performance info.
The Parameters section displays any parameters used in the test. Parameters apply to procedures and functions, but not to tables or views, which have no input values.
Each parameter appears as a row with the following details:
Parameter name (e.g., @FirstName)
Data type (e.g., varchar)
Value (e.g., John)

The Results section shows the result of comparing source and target outputs:
Result icon – Reflects the match outcome visually.
Object type icon and clickable name – Identifies the tested object.
Textual label – Indicates whether the data comparison was successful (Matched) or not (Not Matched).
Below this is a summary of the record comparison. If mismatches are detected, a red Differences label appears. Clicking it opens a dialog showing detailed row-by-row differences between source and target data.
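Conceptually, the Differences dialog presents the outcome of a positional, row-by-row comparison of the two result sets. The sketch below illustrates one way such a comparison could work; the function, data, and matching logic are illustrative assumptions, not SQL Tran's documented implementation:

```python
from itertools import zip_longest

def row_differences(source_rows, target_rows):
    """Compare two result sets row by row and collect mismatches.

    Returns a list of (row_number, source_row, target_row) tuples for
    every position where the rows differ, including rows present on
    only one side (the missing side is reported as None).
    """
    diffs = []
    for i, (src, tgt) in enumerate(zip_longest(source_rows, target_rows), start=1):
        if src != tgt:
            diffs.append((i, src, tgt))
    return diffs

# Hypothetical data: one mismatched value and one extra target row.
source = [("John", "Smith"), ("Ana", "Lee")]
target = [("John", "Smith"), ("Anna", "Lee"), ("Extra", "Row")]
print(row_differences(source, target))
```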

The Performance info section displays how long it took SQL Tran to fetch data during the test execution from each staging database:
Green bar – Time to retrieve results from the source staging database (e.g., 3 ms).
Red bar – Time to retrieve results from the target staging database (e.g., 5082 ms).
The height of each bar reflects the relative duration of the query on the respective side, making it easy to visually compare their performance.
Below the bars, a textual summary highlights the performance comparison (e.g., ↓ Target is 15.00x slower).

Overview pane
The right side of the screen features a testing overview pane that provides a concise summary of the current testing status across the entire project. It presents a breakdown of passing, failing, and untested tests for each object type, helping users quickly understand overall test coverage.
Every object type is displayed with a distinct icon and label (e.g., ▦ Tables), followed by test result statistics grouped into three categories: passing, failing, and not tested. Each result includes a recognizable status icon, the number of tests in that state, and a descriptive label.
For example (icons are illustrative and may not match the interface exactly):
▦ Tables
✔ 42 passing
✘ 1 failing
⚠ 2 not tested
This overview acts as a high-level dashboard for quickly identifying which object types are fully tested, where issues may exist, and where additional tests might be required. It allows users to assess overall testing progress at a glance and helps prioritize areas that may need further attention.

Tests screen summary
Testing is a critical final phase in the SQL Tran workflow. While we previously touched on how tests can be created and managed from the Workspace screen, our focus is now shifting fully to testing. As such, becoming familiar with the dedicated Tests screen and its key components is essential.
In this section, we explored the layout of the Tests screen, common test-related operations, and navigation behaviors that support efficient test management. With this foundation in place, we are now ready to proceed with test creation, execution, and analysis.