
SQL Scope

The SQL editor can consume the current Analyze context instead of making you rebuild it manually.

That means you can:

  • brush dies in Analyze
  • stage what-if limits
  • switch to SQL
  • query the same scoped population directly

This is a one-way flow: Analyze -> SQL. Selections and limits do not push back from SQL to Analyze.

Large result sets are paginated automatically. Use SQL LIMIT and OFFSET for manual control.
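For manual paging, a minimal sketch against the current_scope_test_results helper (the page size of 1,000 mirrors the endpoint's default; the offset is illustrative):

```sql
-- Fetch the third page of 1,000 rows; a stable ORDER BY keeps
-- pages consistent from one request to the next.
SELECT *
FROM current_scope_test_results
ORDER BY test_number
LIMIT 1000 OFFSET 2000;
```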

The public SQL endpoint is intentionally constrained:

  • exactly one statement per request
  • only SELECT and EXPLAIN
  • read-only validation before execution
  • blocked filesystem, network, export, and system helpers
  • 30s timeout
  • hard cap of 100,000 rows
  • default page size of 1,000 rows when limit is omitted

So the SQL surface is powerful, but it is not an unrestricted DuckDB shell.
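Within those limits, each request is a single read-only statement; EXPLAIN is handy for checking a plan before running it. A sketch (the blocked examples in the comments are illustrative of the restrictions above, not an exhaustive list):

```sql
-- Allowed: exactly one SELECT or EXPLAIN per request.
EXPLAIN SELECT COUNT(*) FROM current_scope_test_results;

-- Not allowed: multiple statements in one request, writes such as
-- INSERT or UPDATE, and export helpers such as COPY ... TO 'out.csv'.
```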

When Analyze has an active lot, the SQL editor receives:

  • lot id
  • current notebook snapshot id
  • current die selection
  • active what-if limit overrides
  • current test and wafer context

That scope is exposed through helper tables and views.

current_scope is a one-row summary of the propagated context.

current_scope_limits holds the notebook what-if limits for the current scope.

Columns:

  • test_number
  • test_name
  • lsl
  • usl
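One way to sanity-check staged limits before querying results: surface one-sided specs. A sketch, assuming an unset bound appears as NULL in the lsl/usl columns:

```sql
-- List what-if specs that have only a lower or only an upper limit.
SELECT test_number, test_name, lsl, usl
FROM current_scope_limits
WHERE lsl IS NULL OR usl IS NULL
ORDER BY test_number;
```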

The exact brushed die identities from Analyze.

Columns:

  • wafer_number
  • x
  • y

The die view filtered to the current selection.

If no selection is active, it resolves to the whole current lot.

If Analyze is pinned to a historical snapshot, this helper follows that snapshot.

current_scope_test_results is the test_results view filtered to the current selection and enriched with the active notebook what-if limits.

Important columns:

  • has_scope_override
  • scope_limit_source
  • scope_low_limit
  • scope_high_limit
  • raw_pass
  • scope_pass

has_scope_override and scope_limit_source tell you whether a row is using baseline limits or notebook what-if limits.

scope_pass uses the current notebook what-if limits when they exist.

If Analyze is pinned to a historical snapshot, this helper follows that snapshot too.
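Those columns make it easy to see where the what-if limits change the verdict. A sketch using DuckDB's COUNT(*) FILTER syntax (column names come from the list above; the "newly failing/passing" framing is illustrative):

```sql
-- Per test, count rows whose pass/fail verdict flips under the
-- notebook what-if limits relative to the baseline limits.
SELECT
  test_number,
  test_name,
  COUNT(*) FILTER (WHERE raw_pass AND NOT scope_pass) AS newly_failing,
  COUNT(*) FILTER (WHERE NOT raw_pass AND scope_pass) AS newly_passing
FROM current_scope_test_results
WHERE has_scope_override
GROUP BY test_number, test_name
ORDER BY newly_failing DESC;
```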

current_scope_active_test_results is the scoped results further filtered to the active Analyze test, when one exists.

Use this when you want SQL to stay anchored to the test you were just inspecting in scatter or histogram.

SELECT * FROM current_scope;
SELECT * FROM current_scope_limits ORDER BY test_number;
SELECT * FROM current_scope_active_test_results LIMIT 200;
SELECT
  test_number,
  test_name,
  COUNT(*) AS n,
  AVG(result) AS mean_result,
  SUM(CASE WHEN scope_pass THEN 1 ELSE 0 END) AS pass_count,
  SUM(CASE WHEN NOT scope_pass THEN 1 ELSE 0 END) AS fail_count
FROM current_scope_test_results
GROUP BY test_number, test_name
ORDER BY fail_count DESC, test_number
LIMIT 20;