ActivTrak · posted over 2 years ago
data · 🇺🇸 USA only

ActivTrak is a cloud-based platform that provides productivity insights into how teams work, improving employee and customer experience, while also enabling better business outcomes. We are a fast-growing, agile company with a forward-thinking, inclusive culture. Our teams are encouraged to collaborate daily to solve challenges, create and champion new ideas, and execute initiatives that help global customers and their modern workforces succeed by working better together.

Requirements

As a Senior Engineer on our Data Science team, you will build and manage the data and feature pipelines behind our data science and ML initiatives. This means sourcing data, converting it into features, and managing those features and the models that consume them as part of our feature store infrastructure (feature engineering). You should have a background in building data pipelines that collect, cleanse, and transform data for initiatives that deliver unique insights to our users.

You will work on a close-knit team whose code must scale to hundreds of millions of events per day, leverage our petabyte-plus of existing data, and support our users as we disrupt the productivity analytics industry. You will collaborate with teams across engineering and the business to help answer our customers' most pressing questions.
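To give a flavor of the work described above, here is a minimal sketch of a feature-engineering step: aggregating raw activity events into per-user daily features with pandas. The column names (`user_id`, `timestamp`, `app`) and the specific features are illustrative assumptions, not ActivTrak's actual schema.

```python
import pandas as pd

def build_daily_features(events: pd.DataFrame) -> pd.DataFrame:
    """Aggregate raw activity events into per-user, per-day features.

    Expects columns: user_id, timestamp, app (hypothetical schema).
    Produces: event_count and distinct_apps per (user_id, day).
    """
    events = events.copy()
    # Normalize timestamps to a calendar day for grouping
    events["day"] = pd.to_datetime(events["timestamp"]).dt.date
    features = (
        events.groupby(["user_id", "day"])
        .agg(
            event_count=("app", "size"),      # total events that day
            distinct_apps=("app", "nunique"), # breadth of app usage
        )
        .reset_index()
    )
    return features

# Example usage with toy data
raw = pd.DataFrame({
    "user_id": [1, 1, 2],
    "timestamp": ["2024-01-01 09:00", "2024-01-01 10:00", "2024-01-01 09:30"],
    "app": ["editor", "browser", "editor"],
})
print(build_daily_features(raw))
```

In a production feature store, each such transformation would be versioned and registered so that the same feature definition serves both model training and online inference.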


  • Experience building ETL pipelines in Python
  • Senior Python development skills
  • Professional experience in cloud environments (Google Cloud Platform, AWS, Azure)
  • Experience with horizontally scalable deployments
  • Experience with containers (Docker) and Kubernetes
  • Experience with batch and stream data processing
  • Data warehousing (BigQuery or Snowflake)
  • Data modeling (Relational and Dimensional)
  • Strong data fundamentals (SQL and Pandas)
  • Parallel dataframes at scale with Dask or Spark
  • Values software craftsmanship, quality, and the application of SDLC best practices
  • Experience with feature engineering, standardization, versioning and storage
  • API design/implementation (REST for microservice architectures)
  • Experience working with CI/CD systems and software source control systems, such as Git
  • Adopts a test-driven software development philosophy

Benefits

Work environment:

  • The position is remote within the US
  • Minimal travel
  • Limited physical demands


This is an incredible opportunity to embark on an exciting journey with a dynamic, VC-backed company. If you have a positive attitude towards the urgency, risk, and challenges that come with working in a startup environment, then you will be a great fit! ActivTrak is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. ActivTrak does not discriminate in employment on the basis of race, color, religion, sex, national origin, political affiliation, sexual orientation, marital status, disability, age, protected veteran status, gender identity, or any other factor protected by applicable federal, state or local laws. #LI-REMOTE