Are you looking for a role where you can use your knowledge of big data pipelines and Python scripting to make a difference in the automotive industry? Would you like to work for a company that provides an innovative work environment, flexible schedules, and ongoing professional development?
Who we are:
LER TechForce is an industry leader in embedded controls, software, functional safety, and engineering IT talent. For over 20 years, LER has been working with customers across North America to meet their engineering resource challenges. We have a position for an experienced data engineer with strong scripting skills to work on Big Data pipelines (Ingest-Transform-Deliver).
What you will be doing:
- Troubleshoot Spark applications and resolve data pipeline issues.
- Collaborate with stakeholders to understand their business requirements and translate them into technical specifications.
- Provide technical support and guidance to junior team members, ensuring they are equipped with the necessary information and knowledge to deliver high-quality work.
- Lead and motivate a team of data engineers and software developers, creating a positive and collaborative work environment.
- Support ongoing Big Data initiatives:
  - Big Data pipeline setup guidelines
  - Source data identification and analysis
  - Data quality research and mitigation
  - Design and development
  - Data ingestion into the data lake
  - Data curation and aggregation
- Coordinate code promotion with the appropriate support groups
- Onboard and train new team members
- Work with the Advanced Analytics Data Engineering team to document the process for business users to consume data from the big data system, and provide training as needed.
- Agile story planning:
  - Identify business requirements
  - Define business process flows and benefits
  - Prioritize functional and non-functional requirements
  - High-level source system analysis
- POC and production solution implementation for key data and analytics needs:
  - Technical/system requirements
  - High-level architecture
  - Architecture recommendation
  - Conceptual data model
  - Data ingestion and curation flow
  - Reporting and analytics design
  - Process to pull data into the lake
  - Data interface requirements
  - Data quality requirements
- Solution implementation and development
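To give a flavor of the Ingest-Transform-Deliver pattern the responsibilities above describe, here is a minimal sketch in plain Python. In practice each stage would be a Spark/Databricks job over a data lake; the record layout and field names (`vehicle_id`, `reading`) are hypothetical, chosen only for illustration.

```python
# Minimal Ingest-Transform-Deliver sketch in plain Python.
# In a real pipeline each stage would be a Spark/Databricks job;
# all field names here are hypothetical.
import json

def ingest(raw_lines):
    """Ingest: parse raw source records, dropping lines that fail to parse."""
    records = []
    for line in raw_lines:
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # data quality mitigation: skip malformed input
    return records

def transform(records):
    """Transform: curate and aggregate, e.g. total readings per vehicle."""
    totals = {}
    for rec in records:
        vid = rec.get("vehicle_id")
        if vid is None:
            continue
        totals[vid] = totals.get(vid, 0) + rec.get("reading", 0)
    return totals

def deliver(totals):
    """Deliver: emit curated rows for downstream consumers of the lake."""
    return [{"vehicle_id": k, "total_reading": v}
            for k, v in sorted(totals.items())]

raw = [
    '{"vehicle_id": "A1", "reading": 10}',
    '{"vehicle_id": "A1", "reading": 5}',
    'not json',                          # malformed record, filtered at ingest
    '{"vehicle_id": "B2", "reading": 7}',
]
rows = deliver(transform(ingest(raw)))
print(rows)
# → [{'vehicle_id': 'A1', 'total_reading': 15}, {'vehicle_id': 'B2', 'total_reading': 7}]
```

The same three-stage shape carries over directly to PySpark, where ingest becomes a read from the lake, transform a DataFrame aggregation, and deliver a write to a curated table.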
The ideal candidate will be knowledgeable in the following areas:
- Big Data Pipelines
- Azure Data Services
- Azure Databricks
- Azure API
- Programming languages and frameworks including Scala, Python, PySpark, Node.js, and Express
What you'll get:
- Full benefits: medical, dental, and 401(k) match
- Ongoing professional development opportunities
- Flexible hybrid schedule
- The opportunity to work on industry-leading projects
What you'll need to be successful:
- A Bachelor's degree (or equivalent) from a college or university in Engineering or another relevant technical discipline. Degree programs considered: Bachelor's, Master's, PhD.
- At least 2 years of experience in both software and data engineering, with a strong background in cloud technologies and the Scala/Python programming languages.
- Experience processing and transforming unstructured data programmatically.
- Hands-on experience building data pipelines using Scala/Python.
- Experience with big data technologies such as Apache Spark, Structured Streaming, SQL, and Databricks Delta Lake.
- Strong analytical and problem-solving skills, with the ability to troubleshoot Spark applications and resolve data pipeline issues.
- Familiarity with version control systems like Git and CI/CD pipelines using Jenkins.
- Humble, teachable, and able to solve problems independently
- Effective and collaborative team player
- Good communicator, both written and verbal
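On the troubleshooting skills listed above: a common first move when a pipeline's output row count doesn't match its input is to profile where records fail validation. A minimal sketch in plain Python (the field names `vin` and `speed_kph` and the validation rules are hypothetical; in a real pipeline this logic would run over a Spark DataFrame):

```python
# Tally how many records are dropped at each validation step, so the
# noisiest data quality issue can be fixed first.
# Field names ("vin", "speed_kph") and rules are hypothetical.
from collections import Counter

def validate(record):
    """Return the first failure reason for a record, or None if valid."""
    if not record.get("vin"):
        return "missing_vin"
    if not isinstance(record.get("speed_kph"), (int, float)):
        return "bad_speed_type"
    if record["speed_kph"] < 0:
        return "negative_speed"
    return None

def profile(records):
    """Count records per validation outcome ("ok" or a failure reason)."""
    return dict(Counter(validate(r) or "ok" for r in records))

sample = [
    {"vin": "V1", "speed_kph": 88},
    {"vin": "", "speed_kph": 40},        # dropped: missing_vin
    {"vin": "V2", "speed_kph": "fast"},  # dropped: bad_speed_type
    {"vin": "V3", "speed_kph": -5},      # dropped: negative_speed
]
print(profile(sample))
# → {'ok': 1, 'missing_vin': 1, 'bad_speed_type': 1, 'negative_speed': 1}
```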
Click the Easy Apply button to learn more.