February 07, 2025

Snowflake vs dbt



Data Vault Related Questions


Questions & Answers
What is Data Vault Modeling? Data Vault is a data modeling methodology optimized for historical tracking, auditability, and scalability. It separates raw data ingestion from business logic using three core components: Hubs, Links, and Satellites.
How does Data Vault handle historical data? Data Vault captures history by appending new rows to Satellites whenever attributes change; each row carries a load timestamp and record source, so prior states are preserved rather than overwritten, while Hubs and Links anchor the keys and relationships.
What are Hubs, Links, and Satellites?
  • Hubs: Represent core business entities (e.g., Customer, Product). Store unique business keys.
  • Links: Capture relationships between Hubs (e.g., Customer-Order).
  • Satellites: Store descriptive attributes and track historical changes over time.
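A minimal Snowflake-SQL sketch of the three structures, assuming a Customer hub, a Customer-Order link, and a customer-details satellite (all table and column names are illustrative, not a prescribed standard):

  -- Hub: unique business keys for the Customer entity
  CREATE TABLE hub_customer (
      hub_customer_hk  BINARY(20)    NOT NULL,  -- surrogate hash key
      customer_bk      VARCHAR       NOT NULL,  -- business key
      load_dts         TIMESTAMP_NTZ NOT NULL,
      record_source    VARCHAR       NOT NULL,
      PRIMARY KEY (hub_customer_hk)
  );

  -- Link: relationship between the Customer and Order hubs
  CREATE TABLE link_customer_order (
      link_customer_order_hk BINARY(20)    NOT NULL,
      hub_customer_hk        BINARY(20)    NOT NULL,
      hub_order_hk           BINARY(20)    NOT NULL,
      load_dts               TIMESTAMP_NTZ NOT NULL,
      record_source          VARCHAR       NOT NULL,
      PRIMARY KEY (link_customer_order_hk)
  );

  -- Satellite: descriptive Customer attributes, appended over time
  CREATE TABLE sat_customer_details (
      hub_customer_hk BINARY(20)    NOT NULL,
      load_dts        TIMESTAMP_NTZ NOT NULL,
      name            VARCHAR,
      email           VARCHAR,
      hashdiff        BINARY(20),              -- change-detection hash
      record_source   VARCHAR       NOT NULL,
      PRIMARY KEY (hub_customer_hk, load_dts)
  );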
Why is Data Vault considered audit-friendly? Data Vault is insert-only: records are never updated or deleted, only appended, and every row carries a load timestamp and record source. This provides clear, traceable lineage and a complete historical trail of all data changes, making it well suited to compliance and forensic analysis.
How does Data Vault differ from Dimensional Modeling? Data Vault focuses on providing a flexible and scalable architecture for data warehousing, while Dimensional Modeling is more rigid and optimized for query performance.
Feature | Data Vault | Dimensional Modeling
Flexibility | High; schema evolution is easy | Rigid; changes often require redesign
Historical Tracking | Built in; designed around historical tracking | Often limited; focuses on current data
Normalization | Highly normalized | Denormalized (star/snowflake schemas)
Use Case | Raw data ingestion and business logic layers | BI/reporting layer, optimized for query performance
What is the role of surrogate keys in Data Vault? Each Hub, Link, and Satellite uses a surrogate key (often a hash or sequence) to uniquely identify records. This ensures consistency and supports parallel processing.
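As an illustration of hash-based surrogate keys, the snippet below derives a hub hash key from the business key with SHA1; the stg_customers table and column names are assumptions:

  SELECT
      SHA1_BINARY(UPPER(TRIM(customer_id))) AS hub_customer_hk,  -- deterministic surrogate key
      customer_id                           AS customer_bk,
      CURRENT_TIMESTAMP()                   AS load_dts,
      'crm_extract'                         AS record_source
  FROM stg_customers;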
How do you handle schema changes in Data Vault? Data Vault is designed to absorb schema changes: new Hubs, Links, Satellites, and attributes can be added without modifying existing structures. This flexibility lets organizations adapt to changing business requirements without significant rework.
Can Data Vault support real-time or big data environments? Yes. Data Vault is highly scalable and works well with streaming ingestion, cloud platforms, and massive datasets. It’s often paired with tools like Snowflake, dbt, or Spark.
What is a PIT (Point-in-Time) table and Bridge table in Data Vault?
  • PIT Table: Captures snapshot views across multiple Satellites for fast querying.
  • Bridge Table: Spans Hubs and Links to flatten complex (often many-to-many) relationship chains, simplifying and speeding up joins across related data.
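A rough sketch of how a PIT table might be built for the Customer hub, assuming a date-spine table named snapshot_dates and the illustrative tables above; it records, per hub key and snapshot date, the latest applicable satellite load timestamp:

  CREATE TABLE pit_customer AS
  SELECT
      h.hub_customer_hk,
      d.snapshot_date,
      MAX(sd.load_dts) AS sat_customer_details_load_dts  -- latest satellite row as of the snapshot
  FROM hub_customer h
  CROSS JOIN snapshot_dates d
  LEFT JOIN sat_customer_details sd
         ON sd.hub_customer_hk = h.hub_customer_hk
        AND sd.load_dts <= d.snapshot_date
  GROUP BY h.hub_customer_hk, d.snapshot_date;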
How do you load data into a Data Vault model? Data is loaded into a Data Vault model using an ETL (Extract, Transform, Load) process. The raw data is ingested into Staging tables, then processed and loaded into Hubs, Links, and Satellites according to the Data Vault architecture.
  • Load raw data into Hubs (business keys)
  • Create Links (relationships)
  • Populate Satellites (descriptive data + timestamps)
Tools like Airflow, dbt, or Snowflake Tasks can orchestrate this.
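For example, a minimal hub-load step (using the illustrative staging and hub tables from earlier) inserts only business keys that are not already present:

  INSERT INTO hub_customer (hub_customer_hk, customer_bk, load_dts, record_source)
  SELECT DISTINCT
      SHA1_BINARY(UPPER(TRIM(s.customer_id))),
      s.customer_id,
      CURRENT_TIMESTAMP(),
      'crm_extract'
  FROM stg_customers s
  WHERE NOT EXISTS (
      SELECT 1 FROM hub_customer h
      WHERE h.customer_bk = s.customer_id
  );

Link and Satellite loads follow the same insert-only pattern, which is why the three structure types can be loaded in parallel.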
What are the main benefits of using Data Vault?
  • Scalability: Handles growing, petabyte-scale data volumes.
  • Flexibility: Adapts to schema drift, new sources, and changing business requirements.
  • Auditability: Complete historical tracking of data changes.
  • Separation of Concerns: Isolates raw data from business logic.
  • Parallel Loading: Hubs, Links, and Satellites can be loaded independently for high-performance ingestion.
  • Modularity: Easy to extend and refactor.


dbt Related Questions


Questions & Answers
What is dbt and how does it fit into the data engineering workflow? dbt is an open-source tool that focuses on the Transform step in ELT. It lets you write modular SQL models, test them, document them, and orchestrate them—all within your data warehouse.
How is dbt different from traditional ETL tools? Unlike ETL tools that transform data outside the warehouse, dbt performs transformations inside the warehouse using SQL. It’s lightweight, version-controlled, and designed for analytics engineering.
What are dbt models and how do you create them? A dbt model is simply a .sql file containing a SELECT statement. dbt compiles these into tables or views based on your materialization strategy (table, view, incremental, etc.).
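A minimal example pair, assuming a source named raw with an orders table has been declared in a .yml file (model and column names are illustrative):

  -- models/staging/stg_orders.sql
  {{ config(materialized='view') }}

  select
      order_id,
      customer_id,
      order_date,
      amount
  from {{ source('raw', 'orders') }}

  -- models/marts/fct_orders.sql
  {{ config(materialized='table') }}

  select
      order_id,
      customer_id,
      amount
  from {{ ref('stg_orders') }}
  where amount is not null

The ref() call is what lets dbt infer the dependency graph and run models in the right order.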
What is materialization in dbt? Materialization defines how dbt stores the result of a model:
  • Table: Creates a physical table.
  • View: Creates a view.
  • Incremental: Updates / appends new or changed data.
  • Ephemeral: Not materialized in the warehouse; compiled into dependent models as a CTE, used for intermediate logic.
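As a sketch of the incremental strategy, the hypothetical model below only processes rows newer than what is already loaded on incremental runs (the source, columns, and unique_key are assumptions):

  {{ config(materialized='incremental', unique_key='event_id') }}

  select
      event_id,
      user_id,
      event_ts,
      payload
  from {{ source('raw', 'events') }}

  {% if is_incremental() %}
  where event_ts > (select max(event_ts) from {{ this }})
  {% endif %}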
How does dbt handle testing and validation? dbt supports schema tests (e.g., not_null, unique, accepted_values) and custom tests using SQL. These ensure data quality and catch issues early in the pipeline.
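Schema tests are declared in .yml files; a custom (singular) test is just a SQL file whose returned rows count as failures. A hypothetical example against the fct_orders model used above:

  -- tests/assert_no_negative_amounts.sql
  select *
  from {{ ref('fct_orders') }}
  where amount < 0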
What is the role of dbt_project.yml? This config file defines your project's structure, model paths, default materializations, and metadata. Put simply, it's the central configuration of your dbt project.
How does dbt support documentation and lineage? dbt auto-generates documentation and a DAG (Directed Acyclic Graph) showing model dependencies. You can annotate models and columns for stakeholder clarity.
Can dbt be used with version control and CI/CD? If so, how? Yes. dbt projects are just code, so they integrate seamlessly with Git for version control. You can set up CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins to automate testing and deployment.
What are sources and seeds in dbt?
  • Sources: Reference raw tables already in your warehouse.
  • Seeds: Load static CSV files into your warehouse for use in models (e.g., lookup tables).
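A hypothetical model combining both: it reads a declared source and joins a seeded lookup CSV (here a country_codes seed), with all names assumed:

  select
      o.order_id,
      o.country_code,
      c.country_name
  from {{ source('raw', 'orders') }} o
  left join {{ ref('country_codes') }} c  -- seeds are referenced with ref(), like models
    on o.country_code = c.country_code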
How do you orchestrate dbt workflows? You can run dbt models using:
  • dbt run: executes transformations.
  • dbt test: runs validations.
  • dbt docs generate: builds documentation.
  • External orchestration via Airflow, Dagster, or dbt Cloud jobs.


Snowflake Related Questions


Questions & Answers
What is Snowflake? A cloud-based data warehousing service.
How does Snowflake work? Snowflake uses a unique architecture that separates storage and compute resources.
How is Snowflake different from traditional databases? Snowflake is built for the cloud and offers features like automatic scaling and data sharing.
What is Snowflake's architecture? Snowflake uses a multi-cluster, shared data architecture with three layers: cloud services, compute (virtual warehouses), and storage. This allows concurrent workloads to scale independently.
What is Snowflake and how is it different from traditional data warehouses? Snowflake is a cloud-native data platform that separates compute and storage, supports multi-cloud, and offers near-zero maintenance. Unlike legacy systems, it’s built for elasticity, scalability, and secure data sharing.
What is Time Travel in Snowflake? Time Travel lets you query, clone, or restore historical data (from 1 day by default up to 90 days, depending on edition and retention settings) even after it has been updated, deleted, or dropped. Useful for data recovery, auditing, and rollback operations.
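Illustrative Time Travel statements (the orders table is an assumption; available retention depends on your settings):

  SELECT * FROM orders AT (OFFSET => -3600);  -- table state as of one hour ago
  UNDROP TABLE orders;                        -- restore a recently dropped table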
What is Snowpipe and how does it work? Snowpipe is Snowflake's continuous data ingestion service. It loads data from cloud storage (such as S3) automatically using event notifications or REST API calls.
What are micro-partitions in Snowflake? Snowflake stores data in compressed, columnar micro-partitions (roughly 50-500 MB of uncompressed data each). These enable partition pruning, efficient storage, and optimized query performance.
What is Zero-Copy Cloning? You can clone databases, schemas, or tables instantly without duplicating the underlying data. Great for testing, sandboxing, or branching environments.
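For instance (object names assumed), cloning a database for a dev sandbox or a table for testing:

  CREATE DATABASE analytics_dev CLONE analytics;
  CREATE TABLE orders_test CLONE orders;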
How does Snowflake handle semi-structured data (JSON, XML, Avro)? Snowflake supports native ingestion and querying of semi-structured data using the VARIANT data type. You can use FLATTEN, LATERAL joins, and colon/dot path notation to extract nested fields.
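A sketch of querying nested JSON stored in a VARIANT column named payload (the table, paths, and fields are assumptions):

  SELECT
      e.payload:user.id::string AS user_id,
      f.value:sku::string       AS sku,
      f.value:qty::int          AS qty
  FROM raw_events e,
       LATERAL FLATTEN(input => e.payload:items) f;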
What is a Task in Snowflake and how is it used? Tasks automate SQL or stored procedure execution on a schedule or trigger. You can chain tasks using AFTER to build DAG-like orchestration inside Snowflake.
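A minimal two-task chain, assuming a transform_wh warehouse and illustrative staging/fact tables; the child task runs whenever the root task completes:

  CREATE TASK refresh_stg
    WAREHOUSE = transform_wh
    SCHEDULE  = 'USING CRON 0 * * * * UTC'
  AS
    INSERT INTO stg_orders SELECT * FROM raw_orders;

  CREATE TASK refresh_fct
    WAREHOUSE = transform_wh
    AFTER refresh_stg
  AS
    INSERT INTO fct_orders SELECT * FROM stg_orders;

  ALTER TASK refresh_fct RESUME;  -- resume children before the root
  ALTER TASK refresh_stg RESUME;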
How does Snowflake ensure security and compliance? Role-based access control (RBAC), data encryption (at rest and in transit), masking policies, and support for HIPAA, GDPR, SOC 2, etc.
What is the difference between Snowflake and Redshift/BigQuery? Snowflake separates compute/storage, supports multi-cloud, and offers features like Time Travel, cloning, and native semi-structured support. Redshift is AWS-only and less flexible; BigQuery is serverless but tied to Google Cloud.
How do you load data from S3 into Snowflake?
  • Create an external stage pointing to your S3 bucket using CREATE STAGE.
  • Define a file format (e.g., CSV, JSON) with CREATE FILE FORMAT.
  • Use the COPY INTO command to load data from the stage into Snowflake tables.
Optionally automate with SnowPipe or orchestrate with Tasks.
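A condensed sketch of those steps, assuming a bucket path, an existing storage integration named s3_int, and an orders target table:

  CREATE FILE FORMAT csv_fmt
    TYPE = CSV SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"';

  CREATE STAGE orders_stage
    URL = 's3://my-bucket/orders/'
    STORAGE_INTEGRATION = s3_int
    FILE_FORMAT = csv_fmt;

  COPY INTO orders
    FROM @orders_stage
    FILE_FORMAT = (FORMAT_NAME = 'csv_fmt')
    ON_ERROR = 'SKIP_FILE';

  -- Optional: continuous loading with Snowpipe instead of manual COPY
  CREATE PIPE orders_pipe AUTO_INGEST = TRUE AS
    COPY INTO orders FROM @orders_stage FILE_FORMAT = (FORMAT_NAME = 'csv_fmt');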


dbt vs. Snowflake Tasks for Data Orchestration


Feature | dbt (Data Build Tool) | Snowflake Tasks
Primary Purpose | Data transformation and modeling within the data warehouse | Automating and scheduling SQL or stored procedure execution within Snowflake
Language | SQL with Jinja templating for modularity and reusability | Standard SQL or JavaScript (stored procedures) for procedural logic
Execution Model | SQL compiled into models, run via dbt run or a scheduler; dependencies form a DAG via ref() | SQL or procedure runs on a schedule or trigger
Orchestration | Requires external orchestration tools (e.g., Airflow, Dagster, dbt Cloud) | Built-in scheduling and chaining of tasks using AFTER clauses
Materialization | table, view, incremental, ephemeral | Executes logic; no materialization concept
Documentation | Auto-generated docs and lineage graph | No native docs; can be documented via metadata and comments
Environment Support | Git-based versioning, CI/CD pipelines | Manual or scripted deployment across environments
Monitoring | dbt Cloud UI, logs, CI alerts | Snowsight UI, task history, query logs
Flexibility | Great for modular SQL transformations | Great for procedural logic and automation
Integration | Works with Airflow, Dagster, Prefect, dbt Cloud | Can be triggered by Airflow, ADF, or the REST API

When to Use Each:
Scenario | dbt | Snowflake Tasks
Transforming raw → curated data | ✓ |
Running stored procedures or admin tasks | | ✓
Testing data quality | ✓ |
Refreshing KPIs or snapshots | ✓ | ✓
Building reusable SQL logic with lineage | ✓ |
Automating multi-step orchestration inside Snowflake | | ✓

Hybrid Strategy (Best of Both Worlds):
  • Use dbt for complex data transformations, modeling, testing, and documentation.
  • Use Snowflake Tasks for scheduling, automating administrative tasks, and orchestrating simple SQL workflows within Snowflake.
  • Combine both by triggering dbt runs from Snowflake Tasks or by using an external orchestrator (e.g., Airflow or dbt Cloud jobs) to manage the end-to-end workflow.