ETL Data Engineer
Summary
Title: ETL Data Engineer
ID: 3363
Job Type: 6-Month Contract
Location: Lewis Center, OH
Description

About:

  • Location: Lewis Center, OH

  • Contract: 6 months

The Role:

  • Strong experience in Databricks.

  • Expertise implementing batch and real-time data processing solutions using Azure Data Lake Storage, Azure Data Factory, and Databricks.

  • Experience building ETL pipelines for ingesting, transforming, and loading data from multiple sources into cloud data warehouses (a minimal PySpark sketch follows this list).

  • Proficiency with Docker for containerization, using REST APIs in Python for seamless system integration, and applying containerization concepts to improve deployment efficiency and scalability (see the REST ingestion sketch below).

  • Experience in data extraction, data acquisition, transformation, data manipulation, performance tuning, and data analysis.

  • Experience with Python libraries for building efficient data processing workflows and streamlining ETL operations across large datasets and distributed systems.

  • Expertise in automating data quality checks, reducing data errors by 40% and ensuring more reliable reporting and analytics with data marts (see the quality-gate sketch below).

  • Expertise in data orchestration and automation tools such as Apache Airflow, Python, and PySpark, supporting end-to-end ETL workflows (see the Airflow sketch below).

  • Experience in deployment activities.
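
For illustration only, here is a minimal PySpark sketch of the kind of batch ETL step described above: reading raw files from Azure Data Lake Storage on Databricks, applying a simple transformation, and loading the result into a Delta table. The storage account, container, paths, table, and column names are hypothetical placeholders.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Ingest: read raw CSV files landed in ADLS Gen2 (abfss:// is the
    # URI scheme for ADLS Gen2; the account and container are made up).
    raw = (spark.read
           .option("header", "true")
           .csv("abfss://landing@examplestorage.dfs.core.windows.net/orders/"))

    # Transform: normalize types, drop malformed rows, stamp a load date.
    clean = (raw
             .withColumn("order_total", F.col("order_total").cast("double"))
             .filter(F.col("order_id").isNotNull())
             .withColumn("load_date", F.current_date()))

    # Load: append into a Delta table that downstream data marts query.
    clean.write.format("delta").mode("append").saveAsTable("analytics.orders")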
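
Along the same lines, a hedged sketch of REST-based ingestion in Python using the requests library; the endpoint, token, and query parameter are hypothetical.

    import requests

    # Pull incremental records from a source system's REST API.
    resp = requests.get(
        "https://api.example.com/v1/orders",
        headers={"Authorization": "Bearer <token>"},
        params={"updated_since": "2024-01-01"},
        timeout=30,
    )
    resp.raise_for_status()
    records = resp.json()  # rows to feed into the ETL pipeline above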
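
The quality-gate bullet could look like the following in practice; this sketch uses plain PySpark rather than a dedicated framework, and the rules, table, and column names are assumptions.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_dq").getOrCreate()
    df = spark.table("analytics.orders")  # hypothetical table name

    # Count violations of each rule; zero means the rule passes.
    checks = {
        "null_order_id": df.filter(F.col("order_id").isNull()).count(),
        "negative_total": df.filter(F.col("order_total") < 0).count(),
        "duplicate_order_id": df.count() - df.dropDuplicates(["order_id"]).count(),
    }

    failures = {name: n for name, n in checks.items() if n > 0}
    if failures:
        # Fail fast so bad rows never reach reporting or the data marts.
        raise ValueError(f"Data quality checks failed: {failures}")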
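
Finally, a minimal Apache Airflow sketch (Airflow 2.x style) of end-to-end orchestration: the ETL step runs daily and the quality gate runs only after it succeeds. The DAG id, schedule, and task callables are placeholders.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def run_etl():
        ...  # e.g. trigger the Databricks/PySpark job sketched above

    def run_quality_checks():
        ...  # e.g. run the quality gate sketched above

    with DAG(
        dag_id="orders_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        etl = PythonOperator(task_id="etl", python_callable=run_etl)
        dq = PythonOperator(task_id="quality_checks",
                            python_callable=run_quality_checks)
        etl >> dq  # run quality checks only after the ETL step succeeds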

Must Have:

  • Bachelor’s degree and 10+ years of relevant experience required.

  • All of the skills and experience listed under The Role above.
