
Spark with Scala or Python

Job Type: Contract to Hire (3 months)

Experience: 3 to 8 years

Location: India

Job Description

This is a remote contract Data Engineer (Spark with Python and Scala) role at Fanisko. The Data Engineer will be responsible for implementing and maintaining data pipelines using Spark with Python/Scala, building and optimizing data models, designing and implementing data warehousing solutions, and performing data analytics.

Key Responsibilities

  • Collaborate with data engineers, data scientists, and other stakeholders to understand business requirements and translate them into Spark-based solutions.

  • Design, develop, and optimize Spark applications to process and analyze large volumes of data efficiently.

  • Implement data processing pipelines using Spark with a focus on scalability, reliability, and performance.

  • Troubleshoot and resolve issues related to data processing jobs and performance bottlenecks.

  • Work closely with cross-functional teams to integrate Spark applications with other components of the data architecture.

  • Stay current with emerging trends and technologies in the big data and analytics space.

Qualifications

  • Data Engineering, Data Modeling, and Extract, Transform, Load (ETL) skills

  • Proven experience (3 to 8 years) as a Data Engineer, with a focus on Spark, Scala, and Python

  • Data Warehousing and Data Analytics skills

  • Experience working with large-scale distributed systems

  • Experience using Spark and Scala/Python for data processing

  • Experience with cloud computing platforms like AWS, Azure, or Google Cloud Platform

  • Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field

  • Proficiency in SQL and experience with database systems such as MySQL, PostgreSQL, or similar

  • Excellent problem-solving and analytical skills

  • Strong communication and collaboration skills
