Plutus 2 months ago
Tags: backend, data engineer, front end

We are seeking a skilled Data Warehouse Engineer to join our data team. In this role, you will be responsible for designing, developing, and maintaining scalable data solutions using dbt, Fivetran, and Braze. You will play a key role in building and optimising data pipelines, ensuring data quality, and supporting our business intelligence and marketing efforts through seamless data integration and automation.

As a Data Warehouse Engineer, you will collaborate with data analysts, engineers, and marketing teams to ensure that data flows smoothly between different systems, creating a robust infrastructure that drives business insights and customer engagement.

Key Responsibilities:
  • Data Pipeline Development: Design, build, and maintain ETL/ELT data pipelines using Fivetran to integrate various data sources into the data warehouse.
  • Data Modeling with dbt: Develop and maintain data models and transformations using dbt (Data Build Tool) to optimise the structure of the data warehouse for analytics and reporting.
  • Braze Integration: Work closely with the marketing team to integrate Braze for personalised customer engagement, ensuring smooth data flow between the warehouse and the platform.
  • Data Warehouse Management: Maintain and optimise the performance of the data warehouse (e.g., Snowflake, BigQuery, Redshift) by managing schema design, partitioning, and indexing.
  • Data Quality and Monitoring: Implement data quality checks, conduct audits, and monitor pipeline health to ensure reliable and accurate data delivery.
  • Collaboration: Work closely with data analysts, BI teams, and marketing to understand data needs, improve data availability, and deliver actionable insights.
  • Automation & Optimisation: Implement automation for data ingestion, transformation, and orchestration to improve operational efficiency and reduce manual intervention.
  • Documentation & Best Practices: Create and maintain comprehensive documentation of data architecture, pipeline processes, and best practices for future reference and onboarding.
  • Troubleshooting & Support: Identify, investigate, and resolve data-related issues in a timely manner.
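To illustrate the kind of data-quality checks described above, here is a minimal sketch in Python. The `check_rows` helper and the sample data are hypothetical (not part of dbt, Fivetran, or Braze); in practice such checks would typically run as dbt tests or pipeline hooks, but the underlying logic looks something like this:

```python
# Hypothetical pre-load data-quality check: flag missing required
# fields and duplicate keys before rows reach the warehouse.

def check_rows(rows, required_fields, key_field):
    """Return a list of human-readable issues found in `rows`.

    - required_fields: columns that must be present and non-null
    - key_field: column expected to be unique across rows
    """
    issues = []
    seen_keys = set()
    for i, row in enumerate(rows):
        # Null/empty checks on mandatory columns.
        for field in required_fields:
            if row.get(field) in (None, ""):
                issues.append(f"row {i}: missing required field '{field}'")
        # Uniqueness check on the key column.
        key = row.get(key_field)
        if key in seen_keys:
            issues.append(f"row {i}: duplicate {key_field} '{key}'")
        seen_keys.add(key)
    return issues

sample = [
    {"id": 1, "email": "a@example.com"},
    {"id": 1, "email": "b@example.com"},  # duplicate id
    {"id": 2, "email": None},             # missing email
]
print(check_rows(sample, required_fields=["id", "email"], key_field="id"))
```

A real pipeline would route these issues to monitoring and alerting rather than printing them, but the shape of the check is the same.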
Please mention the word RECOVER when applying to show you read the job post completely (#RMzQuMzAuMTUwLjE0OA==). This is a feature to avoid fake spam applicants. Companies can search these words to find applicants that read this and instantly see they're human.

Salary and compensation

$50,000/year