
ETL Developer with Python/PySpark & Snowflake experience

Job Type

Full-time or contract (hybrid)

Experience

8 years

Location

Must be based in New York, Toronto, or Montreal from day one.

Job Description

We are looking for a Data Warehouse Engineer with strong Python, PySpark, SQL, ETL, and data warehousing skills, ideally with some Snowflake experience and a financial-services background. The role is central to a cloud-based data warehousing platform used to manage and analyze financial data.

Key Responsibilities

Top technical skills required:

  • Python/ PySpark

  • SQL

  • Data Warehousing

  • Data integration/ETL

  • Azure

  • Snowflake

Primary Responsibilities:

  • Architecture, design, implementation, and operationalization of large-scale data, analytics, and warehousing solutions on the Snowflake Cloud Data Warehouse in Azure.

  • Hands-on development experience, in Python and SQL, with Snowflake features such as SnowSQL, Tasks, Streams, Time Travel, Zero-Copy Cloning, Snowpipe, the query optimizer, metadata management, data sharing, and stored procedures.

  • Experience in data warehousing: OLTP, OLAP, dimensions, facts, and data modeling.

  • Working knowledge of MS Azure configuration as it relates to Snowflake.

  • Developing ETL pipelines into and out of the data warehouse using a combination of Databricks, Python, and SQL/SnowSQL (a brief sketch follows this list).

  • Developing scripts (UNIX shell, Python, etc.) to extract, load, and transform data, as well as other utility functions.

  • Providing production support for data warehouse issues such as data load failures and transformation/translation problems.

  • Translating mapping specifications into data transformation designs and code, incorporating standards and best practices for optimal execution.

  • Understanding data pipelines and modern approaches to automating them with cloud-based testing, and clearly documenting implementations so others can easily understand the requirements, implementation, and test conditions.

  • Performing code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards.
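
Illustrative example: the ETL responsibility above might look roughly like the following PySpark job, a minimal sketch that extracts raw data from Azure storage, applies a simple transformation, and loads the result into Snowflake via the Snowflake Spark connector. All paths, table names, column names, and connection options here are hypothetical placeholders, not part of the actual platform.

# Minimal sketch of a PySpark ETL job loading into Snowflake.
# Paths, tables, and credentials below are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("trades_etl").getOrCreate()

# Extract: read raw source files from cloud storage (hypothetical path).
raw = spark.read.parquet("abfss://landing@example.dfs.core.windows.net/trades/")

# Transform: basic cleansing and derivation of a business-date column.
trades = (
    raw.dropDuplicates(["trade_id"])
       .filter(F.col("notional") > 0)
       .withColumn("trade_date", F.to_date("trade_ts"))
)

# Load: write into Snowflake via the Spark connector; in practice the
# connection options would come from a secrets manager, not literals.
sf_options = {
    "sfURL": "example.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",
    "sfDatabase": "FINANCE",
    "sfSchema": "CURATED",
    "sfWarehouse": "ETL_WH",
}
(
    trades.write.format("net.snowflake.spark.snowflake")
          .options(**sf_options)
          .option("dbtable", "TRADES")
          .mode("append")
          .save()
)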

Qualifications

  • Minimum 8 years of hands-on experience with productionized data ingestion and processing pipelines using Python, PySpark, Databricks, and SQL/SnowSQL.

  • Experience designing and implementing operational, production-grade, large-scale data solutions.

  • Experience with Snowflake Data Warehouse on Microsoft Azure highly desired.

  • Understanding of Snowflake internals and the integration of Snowflake with other data processing and reporting technologies desired.

  • Excellent presentation and communication skills, both written and verbal, and the ability to problem-solve and design in an environment with unclear requirements.
