PySpark Data Engineer
DATAECONOMY
Hyderabad - Secunderabad
Not Disclosed
5 - 8 Years
Full Time - Permanent
Posted on 21 Aug, 2025
In Office
Job Description | Responsibilities
- Design, develop, and maintain large-scale ETL pipelines using PySpark and AWS Glue.
- Orchestrate and schedule data workflows using Apache Airflow.
- Optimize data processing jobs for performance and cost-efficiency.
- Work with large datasets from various sources, ensuring data quality and consistency.
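The responsibilities above revolve around the extract-transform-load pattern. As a minimal, hypothetical sketch of that pattern in plain Python (standing in for the PySpark and AWS Glue APIs a real pipeline would use; all function names and the record schema are illustrative assumptions):

```python
# Minimal ETL sketch: a plain-Python stand-in for a PySpark/Glue pipeline.
# The three stages mirror the pipeline stages named in the job description.

def extract(rows):
    """Read raw records from a source (here, an in-memory list)."""
    return list(rows)

def transform(records):
    """Enforce data quality: drop rows missing an 'id', normalise names."""
    return [
        {**r, "name": r["name"].strip().title()}
        for r in records
        if r.get("id") is not None
    ]

def load(records, sink):
    """Write transformed records to a sink (here, a list acting as a table)."""
    sink.extend(records)
    return len(records)

raw = [
    {"id": 1, "name": "  alice  "},
    {"id": None, "name": "bad row"},   # fails the quality check
    {"id": 2, "name": "BOB"},
]
table = []
loaded = load(transform(extract(raw)), table)
```

In a real PySpark job the same stages would typically read from a source with `spark.read`, apply `filter`/`withColumn` transformations, and write out with `df.write`, with Airflow scheduling the job end to end.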
Overview
- Industry - IT - INFORMATION TECHNOLOGY
- Functional Area - IT Software Programming / Analysis / Quality / Testing / Training
- Job Role - Data Quality Engineer
- Employment type - Full Time - Permanent
- Work Mode - In Office
Qualifications
- Any Graduate - Any Specialization
- Any Post Graduate - Any Specialization
- Any Doctorate - Any Specialization
Job Related Keywords
Microsoft MDS
SQL
Data Pipelines
Data Warehousing
ETL
Azure Cloud
Data Modeling
PySpark Data Engineer