– A Bachelor’s degree in Computer Science, Software Engineering, or a related field
– Minimum 5 years of hands-on experience in data engineering and SQL/ETL programming on the Azure stack, i.e. Azure Data Factory, Azure Databricks, Azure DevOps, Azure Data Lake Storage (Gen1/Gen2), and Azure Synapse/SQL
– Expertise in designing and deploying data pipelines on Azure, covering data crawling, ETL, data warehousing, and data applications
– Proficiency in programming languages such as Python
– Experience with tools such as the Rational Suite, Enterprise Architect, and Eclipse, and with source code version control systems such as Git
– Experience with different development methodologies (RUP, Scrum, XP)
Roles and Responsibilities
– Build systems or interfaces that gather data from various sources, such as flat files, data extracts, and incoming feeds, as well as by directly interfacing with customer applications
– Transform the raw data into usable information as per the defined requirements
– Load the data into the respective data store in the defined structured format(s)
– Implement the entire data pipeline: data crawling, ETL, fact-table creation, data quality management, etc.
– Collaborate with operations SMEs and MIS / Reporting teams to ensure that solutions are delivered on time, within budget, and to a high quality
– Evaluate the existing data-file landscape and design an implementation path toward the desired (to-be) data solution(s)
– Collaborate, validate, and provide frequent updates to internal stakeholders.