Analyzing Oracle PL/SQL Dependencies Before Your Snowflake Migration
Oracle-to-Snowflake is one of the most common database migration paths in 2026. Every major cloud consulting firm has a practice dedicated to it, and the tooling ecosystem has matured significantly: SnowConvert AI (now free), AWS Schema Conversion Tool, and Ora2pg all translate PL/SQL syntax (SnowConvert and SCT into Snowflake SQL and Snowflake Scripting; Ora2pg into PostgreSQL).
But here is the problem: these tools focus on converting SQL syntax. They skip the critical first step of analyzing what you actually have. If you have 2,000 stored procedures in your Oracle environment, which ones call which? Which columns flow through which procedures? Which procedures contain vendor-specific constructs that no tool can auto-convert? Without answering these questions first, you are converting blind.
Why Pre-Migration Dependency Analysis Matters
Oracle PL/SQL codebases are not flat lists of independent procedures. They are interconnected systems with hidden dependencies that conversion tools do not surface:
- Call graphs: Procedure A calls Procedure B, which calls Package C’s function. If you convert A but not B, A breaks in Snowflake.
- Cross-procedure column flows: Data flows from a source table through a chain of procedures, each transforming specific columns. Conversion tools translate each procedure in isolation — they do not show you the end-to-end column lineage.
- Package-level coupling: Oracle packages group related procedures and functions with shared state (package-level variables, cursors). Snowflake has no direct package equivalent, so each package must be decomposed — but you need to know the internal dependency structure first.
- Dynamic SQL: Procedures that build and execute SQL strings at runtime. These create dependencies that are invisible to static conversion tools.
What Conversion Tools Actually Produce
To be fair, conversion tools are good at what they do — translating syntax. But it helps to understand their limitations for complex PL/SQL:
SnowConvert AI (Mobilize.Net, now free): Handles straightforward PL/SQL well. For complex constructs like CONNECT BY hierarchical queries or the MODEL clause, it generates what the documentation calls “compilable but non-functional objects” — the output compiles in Snowflake without errors, but it does not produce the same results as the Oracle original. You need to know which procedures contain these constructs before conversion so you can plan manual rework.
AWS Schema Conversion Tool (SCT): Wraps untranslatable logic in comments with action items. This is honest and helpful, but with hundreds of procedures, you end up with a long list of flagged items and no way to prioritize them by dependency or business impact.
Ora2pg: The open-source workhorse for Oracle-to-PostgreSQL migrations. Since Snowflake is not wire-compatible with Postgres, teams that use it for Snowflake typically treat the PostgreSQL output as an intermediate staging step. It handles most DML well but struggles with PL/SQL-specific patterns like BULK COLLECT + FORALL batch processing, collection types, and pipelined table functions.
None of these tools answer the fundamental pre-migration questions: What depends on what? What is the migration order? What is the blast radius if this table changes?
PL/SQL Patterns That Migration Tools Struggle With
Let us look at concrete PL/SQL constructs that require pre-migration analysis because no tool auto-converts them reliably.
1. CONNECT BY Hierarchical Queries
Oracle’s CONNECT BY syntax for hierarchical/tree queries has no direct Snowflake equivalent. You must rewrite these as recursive CTEs:
```sql
-- Oracle: Hierarchical query with CONNECT BY
CREATE OR REPLACE PROCEDURE get_org_hierarchy(p_root_id NUMBER) AS
    CURSOR c_hierarchy IS
        SELECT employee_id, manager_id, first_name, last_name,
               LEVEL AS depth,
               SYS_CONNECT_BY_PATH(last_name, '/') AS path,
               CONNECT_BY_ISLEAF AS is_leaf
        FROM employees
        START WITH employee_id = p_root_id
        CONNECT BY PRIOR employee_id = manager_id
        ORDER SIBLINGS BY last_name;
BEGIN
    FOR rec IN c_hierarchy LOOP
        DBMS_OUTPUT.PUT_LINE(RPAD(' ', rec.depth * 2) || rec.path);
    END LOOP;
END;
```
This procedure uses LEVEL, SYS_CONNECT_BY_PATH, CONNECT_BY_ISLEAF, and ORDER SIBLINGS BY — all Oracle-specific. Before converting, you need to know: what other procedures call get_org_hierarchy? What tables does it read from? Are there other procedures that also query the employees table hierarchically?
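For orientation, here is one way the same hierarchy could look as a Snowflake recursive CTE. This is a hand-written sketch, not output from any conversion tool: `is_leaf` is approximated with a join against the distinct manager list, and `ORDER BY path` only roughly stands in for `ORDER SIBLINGS BY`.

```sql
-- Snowflake sketch: recursive CTE replacing CONNECT BY (illustrative only)
WITH RECURSIVE hierarchy AS (
    -- anchor member: the START WITH row
    SELECT employee_id, manager_id, first_name, last_name,
           1 AS depth,
           '/' || last_name AS path
    FROM employees
    WHERE employee_id = :p_root_id          -- parameter placeholder
    UNION ALL
    -- recursive member: CONNECT BY PRIOR employee_id = manager_id
    SELECT e.employee_id, e.manager_id, e.first_name, e.last_name,
           h.depth + 1,
           h.path || '/' || e.last_name
    FROM employees e
    JOIN hierarchy h ON e.manager_id = h.employee_id
)
SELECT h.*,
       (m.manager_id IS NULL) AS is_leaf    -- approximates CONNECT_BY_ISLEAF
FROM hierarchy h
LEFT JOIN (SELECT DISTINCT manager_id FROM employees) m
       ON m.manager_id = h.employee_id
ORDER BY h.path;                            -- rough stand-in for ORDER SIBLINGS BY
```

Note that the `DBMS_OUTPUT` loop body would also need rethinking, since Snowflake has no `DBMS_OUTPUT`; here the hierarchy is simply returned as a result set.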
2. MODEL Clause for Spreadsheet-Style Calculations
Oracle’s MODEL clause allows spreadsheet-like inter-row calculations directly in SQL. There is no Snowflake equivalent:
```sql
-- Oracle: MODEL clause for financial projections
CREATE OR REPLACE VIEW revenue_forecast AS
SELECT country, product, year, amount
FROM quarterly_revenue
MODEL
    PARTITION BY (country)
    DIMENSION BY (product, year)
    MEASURES (revenue AS amount)
    RULES (
        amount['Electronics', 2027] =
            amount['Electronics', 2026] * 1.12,
        amount['Software', 2027] =
            amount['Software', 2026] * 1.25
          + amount['Services', 2026] * 0.05
    );
```
The MODEL clause creates column-level dependencies that span across dimension values. In this example, the 2027 revenue for Software depends on both the 2026 Software and 2026 Services rows. Understanding these cross-row data flows is essential before attempting a manual rewrite for Snowflake.
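As a rough illustration of what a manual rewrite involves, the two rules above can be expressed as ordinary SQL that derives the 2027 rows and appends them to the base data. This is a sketch under the assumption that `quarterly_revenue` has one row per (country, product, year):

```sql
-- Snowflake sketch: replacing the MODEL rules with explicit joins (illustrative)
WITH base AS (
    SELECT country, product, year, revenue AS amount
    FROM quarterly_revenue
),
y2026 AS (
    SELECT country, product, amount FROM base WHERE year = 2026
),
projections AS (
    -- amount['Electronics', 2027] = amount['Electronics', 2026] * 1.12
    SELECT country, 'Electronics' AS product, 2027 AS year,
           amount * 1.12 AS amount
    FROM y2026
    WHERE product = 'Electronics'
    UNION ALL
    -- amount['Software', 2027] = Software 2026 * 1.25 + Services 2026 * 0.05
    SELECT s.country, 'Software', 2027,
           s.amount * 1.25 + sv.amount * 0.05
    FROM y2026 s
    JOIN y2026 sv
      ON sv.country = s.country AND sv.product = 'Services'
    WHERE s.product = 'Software'
)
SELECT * FROM base
UNION ALL
SELECT * FROM projections;
```

Every MODEL rule becomes its own join or window expression, which is why the rewrite effort scales with the number of rules rather than the volume of data.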
3. BULK COLLECT + FORALL Batch Processing
High-performance PL/SQL uses BULK COLLECT and FORALL for batch operations. This pattern has no Snowflake equivalent — Snowflake Scripting processes rows differently:
```sql
-- Oracle: Batch processing with BULK COLLECT + FORALL
CREATE OR REPLACE PROCEDURE sync_customer_scores AS
    TYPE t_customer_ids IS TABLE OF customers.customer_id%TYPE;
    TYPE t_scores IS TABLE OF NUMBER;
    l_ids    t_customer_ids;
    l_scores t_scores;
BEGIN
    SELECT customer_id, calculate_score(customer_id)
    BULK COLLECT INTO l_ids, l_scores
    FROM customers
    WHERE last_updated < SYSDATE - 30;

    FORALL i IN 1..l_ids.COUNT
        UPDATE customer_scores
        SET score = l_scores(i),
            updated_at = SYSDATE
        WHERE customer_id = l_ids(i);

    COMMIT;
END;
```
This procedure references the customers table, the customer_scores table, and calls the calculate_score function. A conversion tool might translate the SQL syntax, but it will not tell you that calculate_score itself calls three other functions and reads from two additional tables. You need the full call graph to plan the migration.
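Once those dependencies are known, the rewrite itself is often the easy part. Assuming `calculate_score` has already been ported as a Snowflake UDF, the entire BULK COLLECT + FORALL loop collapses into one set-based statement. A sketch:

```sql
-- Snowflake sketch: set-based replacement for the BULK COLLECT + FORALL loop
-- (assumes calculate_score was already migrated as a Snowflake UDF)
UPDATE customer_scores cs
SET score      = calculate_score(c.customer_id),
    updated_at = CURRENT_TIMESTAMP()                  -- SYSDATE equivalent
FROM customers c
WHERE cs.customer_id = c.customer_id
  AND c.last_updated < DATEADD(day, -30, CURRENT_TIMESTAMP());
```

No explicit `COMMIT` is needed here: by default, each Snowflake statement auto-commits.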
4. Package-Level Dependencies
Oracle packages bundle related procedures, functions, and shared state. Snowflake has no package concept — you must decompose them:
```sql
-- Oracle: Package with internal dependencies
CREATE OR REPLACE PACKAGE customer_mgmt AS
    PROCEDURE onboard_customer(p_name VARCHAR2, p_email VARCHAR2);
    FUNCTION get_risk_score(p_customer_id NUMBER) RETURN NUMBER;
    PROCEDURE update_credit_limit(p_customer_id NUMBER);
END customer_mgmt;
/

CREATE OR REPLACE PACKAGE BODY customer_mgmt AS
    -- Private package variable (shared state)
    g_default_credit_limit NUMBER := 5000;

    FUNCTION get_risk_score(p_customer_id NUMBER) RETURN NUMBER IS
        l_score NUMBER;
    BEGIN
        SELECT risk_rating INTO l_score
        FROM customer_risk_profiles
        WHERE customer_id = p_customer_id;
        RETURN l_score;
    END;

    PROCEDURE update_credit_limit(p_customer_id NUMBER) IS
        l_risk NUMBER;
    BEGIN
        l_risk := get_risk_score(p_customer_id);  -- internal call
        UPDATE customers
        SET credit_limit = g_default_credit_limit * (1 - l_risk / 100)
        WHERE customer_id = p_customer_id;
    END;

    PROCEDURE onboard_customer(p_name VARCHAR2, p_email VARCHAR2) IS
        l_id NUMBER;
    BEGIN
        INSERT INTO customers(name, email, credit_limit)
        VALUES (p_name, p_email, g_default_credit_limit)
        RETURNING customer_id INTO l_id;
        -- Cross-package call
        audit_pkg.log_event('CUSTOMER_ONBOARD', l_id);
        update_credit_limit(l_id);  -- internal call
    END;
END customer_mgmt;
```
This package has internal function calls (update_credit_limit calls get_risk_score), cross-package calls (onboard_customer calls audit_pkg.log_event), shared state (g_default_credit_limit), and touches multiple tables (customers, customer_risk_profiles). Decomposing this for Snowflake requires understanding all these relationships first.
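One plausible decomposition is sketched below: the package variable moves into a configuration table, the private function becomes a SQL UDF, and the procedures become standalone Snowflake Scripting procedures. The `app_config` table and the object names are illustrative, not output from any tool:

```sql
-- Snowflake sketch: decomposing customer_mgmt (illustrative names)

-- Package state g_default_credit_limit becomes configuration data
CREATE TABLE IF NOT EXISTS app_config (name STRING, value NUMBER);
INSERT INTO app_config VALUES ('default_credit_limit', 5000);

-- get_risk_score becomes a standalone SQL UDF
CREATE OR REPLACE FUNCTION get_risk_score(p_customer_id NUMBER)
RETURNS NUMBER
AS
$$
    SELECT risk_rating
    FROM customer_risk_profiles
    WHERE customer_id = p_customer_id
$$;

-- update_credit_limit becomes a Snowflake Scripting procedure
CREATE OR REPLACE PROCEDURE update_credit_limit(p_customer_id NUMBER)
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
BEGIN
    UPDATE customers
    SET credit_limit = (SELECT value FROM app_config
                        WHERE name = 'default_credit_limit')
                       * (1 - get_risk_score(:p_customer_id) / 100)
    WHERE customer_id = :p_customer_id;
    RETURN 'OK';
END;
$$;
```

`onboard_customer` would follow the same pattern, calling the migrated audit procedure and `update_credit_limit` explicitly. The point is that every internal and cross-package call becomes a hard dependency that must exist before its caller can be deployed.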
Using GSP to Analyze PL/SQL Before Migration
General SQL Parser (GSP) is a SQL parsing library that handles Oracle PL/SQL natively, including all the constructs shown above. Here is how it supports pre-migration analysis.
Extracting Table and Column References
GSP parses each stored procedure and extracts every table and column reference, including those inside complex constructs like CONNECT BY, MODEL, and BULK COLLECT:
- Direct table references: Which tables does each procedure SELECT from, INSERT into, UPDATE, or DELETE from?
- Column-level detail: Which specific columns are read and which are written, even through aliases, subqueries, and CTEs?
- Vendor-specific constructs: GSP flags Oracle-specific syntax (`CONNECT BY`, `MODEL`, `MERGE` with error logging, `FORALL`) so you can identify procedures that need manual rework.
For the sync_customer_scores procedure above, GSP identifies:
- Reads from: `customers.customer_id`, `customers.last_updated`
- Writes to: `customer_scores.score`, `customer_scores.updated_at`
- Calls: the `calculate_score()` function
- Vendor-specific: `BULK COLLECT INTO`, `FORALL`, `SYSDATE`, the `%TYPE` attribute
Building the Call Graph
By parsing all procedures in your Oracle schema, GSP builds a complete call graph showing which procedures call which other procedures. For the customer_mgmt package example:
```
customer_mgmt.onboard_customer
├── audit_pkg.log_event (cross-package)
└── customer_mgmt.update_credit_limit
    └── customer_mgmt.get_risk_score

Tables affected:
customers              ← INSERT (onboard_customer), UPDATE (update_credit_limit)
customer_risk_profiles ← SELECT (get_risk_score)
audit_log              ← INSERT (audit_pkg.log_event)
```
This call graph tells you the migration order: get_risk_score has no procedure dependencies (migrate first), update_credit_limit depends on get_risk_score (migrate second), and onboard_customer depends on both plus the audit_pkg (migrate last, and audit_pkg must also be ready).
Identifying Vendor-Specific Syntax
GSP categorizes Oracle-specific constructs by migration complexity so you can estimate effort:
| Oracle Construct | Snowflake Equivalent | Migration Effort |
|---|---|---|
| `CONNECT BY` / `START WITH` | Recursive CTE | Medium — mechanical rewrite but must verify edge cases (cycles, `NOCYCLE`) |
| `MODEL` clause | None — must redesign as window functions, CTEs, or application logic | High — requires understanding business logic intent |
| `BULK COLLECT` + `FORALL` | Set-based SQL or Snowflake Scripting loops | Medium — often simplifies to a single SQL statement |
| Package spec + body | Individual stored procedures + UDFs | High — shared state and internal dependencies must be untangled |
| `DBMS_OUTPUT`, `UTL_FILE` | Snowflake logging, stages | Medium — different patterns but well-documented |
| Pipelined table functions | UDTFs or JavaScript UDFs | High — fundamental paradigm shift |
When GSP identifies that 15% of your procedures use CONNECT BY and 3% use the MODEL clause, you can estimate the manual rework effort before committing to a migration timeline.
Visualizing Lineage with SQLFlow
SQLFlow takes the parsing results from GSP and renders them as interactive lineage diagrams. For pre-migration analysis, this provides two capabilities that spreadsheets and documentation cannot match.
Column-Level Lineage Visualization
Upload your PL/SQL to SQLFlow and see column-level data flow across procedures. For the examples above, SQLFlow shows:
- `customers.customer_id` flows into `customer_scores.customer_id` via `sync_customer_scores`
- `customer_risk_profiles.risk_rating` flows through `get_risk_score` into `customers.credit_limit` via `update_credit_limit`
- `employees.employee_id` and `employees.manager_id` form a self-referential hierarchy in `get_org_hierarchy`
This visual lineage map helps migration teams understand data flow before they touch any code.
Impact Analysis: What Breaks When a Table Changes?
The most practical use of SQLFlow during migration planning is impact analysis. Select any Oracle table — say, customers — and SQLFlow shows every procedure, view, and downstream table that depends on it. If you are migrating the customers table first, you can see that onboard_customer, update_credit_limit, and sync_customer_scores all need to be migrated or adapted at the same time.
This answers the question that migration project managers ask most often: what is the minimum set of objects I need to migrate together for a working increment?
For VS Code Users: SQL Omni
If your team works in VS Code and wants to run this analysis locally without uploading SQL to any cloud service, SQL Omni brings GSP’s parsing and lineage capabilities into VS Code as an extension. It supports Oracle PL/SQL natively and runs entirely offline — useful for organizations where SQL code cannot leave the local environment.
SQL Omni provides the same dependency analysis and column-level lineage visualization described above, directly in your editor. You can install it from the VS Code Marketplace.
Putting It Together: Analyze Before You Convert
Oracle-to-Snowflake migration tools have come a long way. SnowConvert AI being free removes a major cost barrier, AWS SCT is free to use, and Ora2pg is fully open source. But all of them work best when you know what you are feeding them.
A practical pre-migration workflow looks like this:
- Inventory: Use GSP to parse all PL/SQL objects and extract table/column references, call graphs, and vendor-specific syntax.
- Visualize: Use SQLFlow to render the dependency graph and column-level lineage. Identify clusters of tightly-coupled objects.
- Assess: Categorize each procedure by migration complexity based on which Oracle-specific constructs it uses.
- Prioritize: Use the dependency graph to determine migration order — start with leaf objects that have no downstream dependencies.
- Convert: Run SnowConvert AI, AWS SCT, or Ora2pg on each batch, knowing what to expect and where manual rework is needed.
- Validate: After conversion, use GSP to parse the Snowflake output and verify that the lineage is preserved.
The tools for converting Oracle SQL to Snowflake are good and getting better. The missing piece for most teams is understanding what they have before they start converting. GSP and SQLFlow fill that gap.
Try It With Your Own PL/SQL
- SQLFlow online: Visit gudusoft.com, select Oracle as the dialect, and paste a stored procedure to see its column-level lineage.
- GSP: Visit sqlparser.com for the parsing library that powers the analysis.
- SQL Omni for VS Code: Install from the VS Code Marketplace for offline analysis in your editor.
Planning an Oracle-to-Snowflake migration? Start with analysis, not conversion. Your migration timeline will thank you.

