Schedule
Monday-Friday, 8:00 AM-5:00 PM (40 hours)
What You’ll Do
Essential Functions:
Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
- As a Data Engineer, you will be part of a team responsible for designing, developing, optimizing, and maintaining the Extract, Transform, and Load (ETL) processes for data used in operational and analytical applications
- Assembling large, complex sets of data that meet non-functional and functional business requirements
- Partnering with Solution Architects to perform source-system analysis, identify key data issues, profile data, and develop normalized and star/snowflake physical schemas
- Identifying, designing, and implementing internal process improvements, including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes
- Building the infrastructure required for optimal extraction, transformation, and loading of data from a variety of data sources and technologies
- Creating and implementing scheduling strategies for data management processes
- Developing and executing test cases and test plans to ensure high-quality releases, applying test-driven development and industry best practices such as Gherkin
- Working with stakeholders, including data, design, product, and executive teams, and assisting them with data-related technical issues
- Collaboratively drafting detailed documentation (e.g., design specs, test plans) that follows agreed standards
Perform other job-related duties as assigned by manager(s).
What You’ll Need
- Bachelor's Degree in Information Systems, Computer Science, or related field
- Minimum of 2-4 years of experience as a Data Engineer
- Minimum of 1 year of experience in financial services, preferably banking
- Proficient in writing complex SQL statements and interpreting results for accuracy and performance
- Documented experience in ETL design and development using tools such as SnapLogic, Step Functions, or MuleSoft
- Experience in Data Integration, MDM, Business and Data Analysis and Modeling
- Experience with AWS Redshift and S3 (preferred) or other modern cloud-based data platforms such as Azure, GCP, or Snowflake
- Practical development experience in optimization and performance tuning of ETL workloads and understanding of database internals
- Knowledge of data management industry (DW, BI, BD, Cloud, MDM, DG, DQ, Data Modeling)
- Hands-on experience with a programming language such as Python (preferred), Java, or Scala
- Experience with CI/CD and version control frameworks such as GitLab or Git preferred
- Working knowledge of Agile Methodology
- Out-of-the-box thinking and a team-player approach, with a passion for solving complex problems
- Able to work in a fast-paced, continuously evolving environment and ready to take on tough challenges
What We Do
DCU is the largest credit union headquartered in New England – serving more than one million members in all 50 states. With over 1,900 team members, we strive to make DCU a great place to work with an excellent work-life balance and a community that cares.
DCU is an equal-opportunity employer, and we value diversity, inclusion, and equity at our company. We evaluate qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, and other legally protected characteristics.
If you’re applying for a job and need a reasonable accommodation for any part of the employment process, please send an email to career@dcu.org and let us know the nature of your request and contact information. Please note that only those inquiries concerning a request for reasonable accommodation will be responded to from this email address.
#INDMI