Title: AWS Solution Architect – Big Data & Data Modernization
Location: Frisco, TX – Hybrid (3 days onsite)
Key Responsibilities:
- Architecture Design: Design and implement scalable, secure architecture solutions for big data, data pipelines, and data modernization using AWS services (e.g., Amazon Redshift, AWS Glue, Amazon EMR, AWS Data Pipeline, Amazon Kinesis).
- Big Data Solutions: Develop and deploy big data solutions that manage, process, and analyze large volumes of data. Optimize performance and cost-efficiency of data processing and storage.
- Data Pipeline Development: Create and manage data pipelines for ETL/ELT processes to ensure timely, accurate, and reliable data availability for analysis and reporting.
- Data Modernization: Lead initiatives to modernize legacy data systems and practices, incorporating modern data architectures and technologies to enhance data accessibility and insights.
- Technical Leadership: Provide technical guidance and support to development teams, ensuring best practices in data management, security, and compliance.
- Stakeholder Collaboration: Work closely with business stakeholders to understand their data needs and translate them into technical solutions. Provide recommendations and insights that improve data strategies and outcomes.
- Documentation and Training: Develop comprehensive documentation for data architectures, processes, and solutions. Conduct training sessions and knowledge transfer to internal teams.
- Continuous Improvement: Stay updated with emerging AWS technologies and industry trends. Evaluate and integrate new tools and technologies to continuously improve data infrastructure and processes.
Qualifications:
- Education: Bachelor’s degree in Computer Science, Engineering, or a related field. Advanced degrees or certifications (e.g., AWS Certified Solutions Architect – Professional) are a plus.
- Experience: Minimum of 12 years of experience as an AWS Solution Architect or in a similar role with a focus on big data, data pipelines, and data modernization.
- Technical Skills:
  - Proficiency with AWS services relevant to big data and data pipelines (e.g., Amazon Redshift, AWS Glue, Amazon EMR, Amazon Kinesis, AWS Lambda).
  - Experience with data modeling, ETL/ELT processes, and data warehousing.
  - Familiarity with big data technologies (e.g., Apache Hadoop, Apache Spark) and database systems (SQL and NoSQL).
  - Strong understanding of data security, compliance, and best practices.
- Soft Skills:
  - Excellent problem-solving and analytical skills.
  - Strong communication and interpersonal skills.
  - Ability to work independently and as part of a team in a fast-paced environment.