GOBankingRates.com is a leading destination for personal finance news and information.
Launched by Jeff Bartlett and Brett Rossmann in 2004, GOBankingRates (formerly ConsumerTrack) connects people with products and services that help them lead a richer life.
We are a multimedia content platform covering all aspects of your personal finance journey. Our curated editorial content, videos and podcasts use expert insight to teach readers the six key principles of financial literacy: how to earn, save, invest, spend, borrow and protect money. We strive to give readers the confidence they need to feel empowered about making smart financial decisions today, tomorrow and into the future.
Start your adventure with GOBANKINGRATES
GOBankingRates is experiencing significant growth and is actively seeking an experienced Senior Data Engineer with expertise in Cloud Platforms, Modern Data Architecture, and Data Engineering.
As a Senior Data Engineer, you will play a vital role on our Data Engineering team, responsible for the development and maintenance of our Enterprise Data Platform, which encompasses a Serverless Data Lake, a Cloud Data Warehouse on AWS, and core data/application pipelines.
If you are a rising star in the field of Data Engineering, ready to challenge norms and brimming with innovative ideas for managing technology and processes, you will broaden your experience and confront diverse technical challenges that accelerate your career growth.
With this experience, you will become integral to the development of our next-generation data platform, emphasizing data quality, speed of execution, and cost optimization to make an impact on our business.
INTERESTING CHALLENGES YOU'LL GET TO CONQUER
- Design and implement a scalable, fault-tolerant data architecture that can handle large volumes of real-time data without compromising performance or accuracy.
- Build robust data validation / observability frameworks to automatically detect and correct anomalies or inconsistencies as data flows through various transformations and aggregations.
- Reduce the total runtime of our pipelines by supercharging query performance in Snowflake, enabling lightning-fast, complex analytics using advanced optimization techniques while ensuring our data warehouse remains efficient and cost-effective.
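To illustrate the kind of data validation framework described above, here is a minimal, hypothetical Python sketch. All names (`Check`, `run_checks`, the column names) are illustrative assumptions, not GOBankingRates code; a production framework would run checks like these automatically at each pipeline stage.

```python
from dataclasses import dataclass
from typing import Callable

# A check pairs a human-readable name with a predicate over a batch of rows.
@dataclass
class Check:
    name: str
    predicate: Callable[[list[dict]], bool]

def run_checks(rows: list[dict], checks: list[Check]) -> list[str]:
    """Return the names of checks that failed for this batch."""
    return [c.name for c in checks if not c.predicate(rows)]

# Example checks: no null user_id, and revenue is never negative.
checks = [
    Check("user_id_not_null",
          lambda rows: all(r.get("user_id") is not None for r in rows)),
    Check("revenue_non_negative",
          lambda rows: all(r.get("revenue", 0) >= 0 for r in rows)),
]

# Both rows below violate a check, so both check names are reported.
batch = [{"user_id": 1, "revenue": 9.5}, {"user_id": None, "revenue": -2.0}]
failures = run_checks(batch, checks)  # → ["user_id_not_null", "revenue_non_negative"]
```

In practice a tool such as Great Expectations or Monte Carlo would supply the check library and alerting; the sketch only shows the detect-and-report pattern.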
HOW YOU'LL MAKE AN IMPACT
- Drive data modernization efforts by building sophisticated data pipelines using dbt, Airflow, and Snowflake, with special emphasis on performance optimization and data observability.
- Build, scale, and maintain large-scale data pipelines from internal and external data sources into our data lake/data warehouse. You will build the data ingestion and transformation layers, including bronze, silver, and gold tables.
- Build a modular set of data services using Python, SQL, dbt, Airflow, Monte Carlo (or similar observability tools), AWS services, and other tools.
- Work cross-functionally with our data science and product management teams to design, rapidly prototype, and productize new data product ideas and capabilities.
- Conquer complex problems by finding simple, efficient approaches, with a focus on the reliability, scalability, quality, and cost of our platforms.
- Build processes supporting data transformation, data structures, metadata, and workload management.
- Collaborate with the team to perform root cause analysis and audit internal and external data and processes to help answer specific business questions.
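The bronze/silver/gold tables mentioned above follow the common "medallion" layering pattern. As a hypothetical sketch (in production these would be dbt models over Snowflake tables, not in-memory Python), the layers might look like:

```python
# Bronze: raw ingested records, kept as-is (including duplicates and bad rows).
bronze = [
    {"user_id": "1", "amount": "10.0"},
    {"user_id": "1", "amount": "10.0"},   # exact duplicate
    {"user_id": "2", "amount": "bad"},    # unparseable amount
    {"user_id": "2", "amount": "5.5"},
]

def to_silver(rows: list[dict]) -> list[dict]:
    """Silver: deduplicate and coerce types, dropping rows that fail parsing."""
    seen, out = set(), []
    for r in rows:
        key = (r["user_id"], r["amount"])
        if key in seen:
            continue
        seen.add(key)
        try:
            out.append({"user_id": int(r["user_id"]), "amount": float(r["amount"])})
        except ValueError:
            continue  # quarantine/log in a real pipeline
    return out

def to_gold(rows: list[dict]) -> dict:
    """Gold: business-level aggregate, e.g. total amount per user."""
    totals: dict = {}
    for r in rows:
        totals[r["user_id"]] = totals.get(r["user_id"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)  # two clean rows survive
gold = to_gold(silver)      # → {1: 10.0, 2: 5.5}
```

The design point is that each layer is derived from the one below it, so raw data can always be replayed through corrected transformation logic.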
WHAT WILL YOU BRING TO US?
- Master's degree (or a B.S. degree with relevant industry experience) in math, statistics, computer science, or an equivalent technical field
- 5+ years of professional experience in dimensional data warehousing, data modeling, and "big data" systems
- 5+ years in a pivotal Software/Data Engineering role, with deep exposure to modern data stacks, particularly Snowflake, Airflow, dbt, and AWS data services
- Expertise in applying Python and SQL to execute complex data operations, customize ETL/ELT processes, and perform advanced data transformations across the platform
- Expertise in establishing data quality assurance frameworks particularly using Great Expectations
- Experience working directly with data analytics to bridge business requirements with data engineering.
- Experience with AWS infrastructure
- Excellent troubleshooting and problem-solving skills
- Ability to operate in an agile, entrepreneurial start-up environment, and prioritize
- Excellent communication and teamwork, and a passion for learning
- Curiosity and passion for data, visualization, and solving problems
- Willingness to question the validity, accuracy of data and assumptions
HOW YOU'LL LEVEL UP
- Experience with Redshift, Snowflake, or other MPP databases is a plus.
- Knowledge of ETL/ELT tools like Informatica, Matillion, IBM DataStage, or SaaS ETL tools is a plus.
- Experience with Tableau or other reporting tools is a plus.
- Experience with CI/CD using tools like CircleCI or Harness is a plus.
The salary range for this role is $200,000-$220,000. Pay offered may vary within the posted range based on a number of factors including but not limited to job-related knowledge, skills, experience, and location.
Benefits
- 100% Remote
- Competitive salary with excellent growth opportunity; we pride ourselves on having a team that exudes leadership, high initiative, creativity and passion.
- Awesome medical, dental and vision plans with heavy employer contribution
- Paid maternity leave and paternity leave programs
- Paid vacation, sick days and holidays
- Company funding for outside classes and conferences to help you improve your skills
- Contribution to student loan debt payments after the first year of employment
- 401(k) -- employees can start contributing immediately. After the first year, GOBankingRates matches your contribution up to 4% of your salary
We are an equal-opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.