Description
Are you ready to make a significant impact at a leading global media and technology company? We're looking for a Senior Data Engineer with Neo4j experience to become a key member of our innovative team. As part of our organization, you'll be at the forefront of technology, helping to shape the future of customer experiences and connectivity.
Our Technology, Product, and Experience (TPX) team is dedicated to transforming media and technology through cutting-edge products and next-generation technologies. In this role, you’ll lead the development and maintenance of large-scale data systems and influence coding practices, architecture, and team growth.
Key Responsibilities
- Design, develop, test, and maintain large-scale ETL processes.
- Present and defend architectural and technical decisions to internal stakeholders.
- Collaborate with leadership and architects on design processes, focusing on scalability and sustainability.
- Manage and maintain extensive Graph Data platforms.
- Handle large data sets and provide actionable insights through data analytics.
- Develop software systems in an Agile environment using Python, Spark, Scala, Databricks, Neo4j, and AWS.
- Contribute to application metrics and work to improve them.
- Create automated processes for pattern recognition and decision-making.
- Advocate for improvements in data quality, security, and platform performance.
- Mentor junior and senior software engineers.
- Collaborate with data scientists and architects to develop cross-functional solutions.
- Apply strong software design skills to build new systems and redesign existing ones.
Required Skills and Experience
- Proficiency in Python, including concurrent processing.
- Hands-on experience with complex ETL applications using Spark and SQL.
- Familiarity with data engineering systems such as data lakes, file formats, and batch/stream processing (e.g., Kafka); Databricks experience is essential.
- Experience with Docker and Kubernetes, especially at scale, is a plus.
- Knowledge of CI/CD tools like Git, Jenkins, and Concourse.
- Experience with Graph Databases such as Neptune or Neo4j.
- Proficiency with AWS services (S3, Lambda, EC2, EKS, CloudWatch, Kinesis, SNS, SQS).
- Strong communication and debugging skills.
- Experience with observability tools (ELK, Prometheus, Grafana, OpenTelemetry).
- Proficiency in unit/integration testing frameworks such as pytest and Cucumber.
- Knowledge of GraphQL, Cypher, and Django is a bonus.
Qualifications
- Bachelor's or Master's degree in Computer Science or a related field.
- Extensive experience in data engineering, with at least 7 years in software development.
- At least 3 years of experience with data applications and tools (e.g., Snowflake, Databricks, Spark).
- Proven experience in designing and building scalable ETL pipelines.
- Hands-on experience with AWS services and CI/CD tools.
- Excellent problem-solving skills and communication abilities.
What We Offer
- Competitive salary and comprehensive benefits package, including medical and dental coverage, a 401(k) savings plan, and generous paid time off.
- Support for life milestones such as adoption assistance and childcare resources.
- Free digital TV and internet services in eligible areas.
- Discount tickets for theme parks and hotel stays.
Commitment to Inclusion
We are committed to providing reasonable accommodations for individuals with disabilities during the application and interview process. Please contact us if you need assistance.
We value diversity and are an equal opportunity employer. All qualified applicants will be considered for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other protected status.
Employment Type: Full-Time