Job Type: Contract and permanent positions available
Job Description:
Key Responsibilities:
- AWS Data Engineering: Design and implement robust data solutions on AWS using S3, EMR/HDFS, Redshift, and Snowflake, together with the Apache Spark framework.
- Data Orchestration: Develop and manage data workflows using AWS Glue and Airflow DAGs on Amazon MWAA (Managed Workflows for Apache Airflow); see the DAG sketch after this list.
- Metadata Management: Use the AWS Glue Data Catalog and Apache Iceberg for metadata management and data cataloging.
- Data Processing: Leverage Apache Spark for large-scale data processing tasks; see the Spark sketch after this list.
- Database Management: Manage and optimize data storage and retrieval with Amazon Redshift and Snowflake.
- AWS Security: Secure data solutions by implementing IAM policies and tag-based access controls.
- Industry Expertise: Apply knowledge and experience within the banking/finance industry to align data engineering solutions with sector-specific requirements.
- Transition Management: Manage the transition of data engineering projects from development to L2/L3 operational support.
- Development Tools: Use development tools such as Ab Initio or other graph-based IDEs for data processing tasks.
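
For context on the orchestration responsibility above, here is a minimal sketch of an Airflow DAG as it might run on Amazon MWAA. It assumes Airflow 2.x with the Amazon provider package installed; the DAG id (daily_etl) and Glue job name (sample-glue-job) are hypothetical placeholders, not part of this role's actual stack.

```python
# Minimal Airflow DAG sketch for Amazon MWAA (Airflow 2.x).
# The DAG id, schedule, and Glue job name are hypothetical examples.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="daily_etl",                  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Trigger an existing AWS Glue job; the MWAA execution role must be
    # allowed to call glue:StartJobRun on this job.
    run_glue_job = GlueJobOperator(
        task_id="run_glue_job",
        job_name="sample-glue-job",      # hypothetical Glue job name
        wait_for_completion=True,
    )
```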
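Similarly, the Spark processing responsibility might look like the following PySpark batch job. This is a sketch only: the S3 paths and column names (account_id, amount) are invented for illustration.

```python
# Minimal PySpark sketch for an EMR/Glue-style batch job.
# Bucket names and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sample-batch-job").getOrCreate()

# Read raw CSV data from S3 (path is a placeholder).
raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/input/")

# Example transformation: cast, filter, and aggregate by a hypothetical key.
daily_totals = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .groupBy("account_id")
       .agg(F.sum("amount").alias("total_amount"))
)

# Write results back to S3 as Parquet for downstream Redshift/Snowflake loads.
daily_totals.write.mode("overwrite").parquet("s3://example-curated-bucket/output/")
```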
Skills and Qualifications:
- AWS Proficiency: Strong experience with AWS data services (S3, EMR/HDFS, Redshift, AWS Glue) as well as Snowflake and Apache Iceberg.
- Data Processing: Expertise in the Apache Spark framework and knowledge of data orchestration and workflow management.
- Security Practices: Solid understanding of AWS security practices, including IAM and tagging strategies (a tag-based policy sketch follows at the end of this section).
- Industry Experience: Preference for candidates with experience in the banking or finance industry.
- Project Transition: Ability to manage the transition from project mode to business-as-usual (BAU) L2/L3 support.
- Tool Expertise: Experience with data engineering and ETL tools, particularly Ab Initio or similar graph-based IDEs.
- Problem-Solving: Strong analytical and problem-solving skills.
- Communication: Excellent communication skills for engaging both technical and non-technical stakeholders.
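
As an illustration of the tag-based security practices mentioned above, the sketch below creates an attribute-based access control (ABAC) IAM policy with boto3. The policy name, bucket ARN, and team tag key are hypothetical; the condition grants s3:GetObject only when the caller's team principal tag matches the object's team tag.

```python
# Sketch of a tag-based (ABAC) IAM policy, created with boto3.
# Policy name, bucket, and tag key are hypothetical examples.
import json

import boto3

# Allow reads only when the caller's "team" principal tag matches the
# "team" tag on the S3 object being accessed (ABAC-style control).
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-data-bucket/*",
            "Condition": {
                "StringEquals": {
                    "s3:ExistingObjectTag/team": "${aws:PrincipalTag/team}"
                }
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="example-team-abac-policy",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
```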